• flicker@lemmy.world · 2 months ago

      I figure they’re those “early adopters” who buy the New Thing! as soon as it comes out, whether they need it or not, whether it’s garbage or not, because they want to be seen as on the cutting edge of technology.

  • Buelldozer@lemmy.today · 2 months ago

    I’m fine with NPUs / TPUs (AI-enhancing hardware) being included with systems because it’s useful for more than just OS shenanigans and commercial generative AI. Do I want Microsoft Copilot Recall running on that hardware? No.

    However, I’ve bought TPUs for things like Frigate servers and various ML projects. For gamers, there are some really cool use cases out there for using local LLMs to generate NPC responses in RPGs. For “Smart Home” enthusiasts, things like Home Assistant will be rolling out support for local LLMs later this year to make voice commands more context-aware.

    So do I want that hardware in there so I can use it MYSELF for other things? Yes, yes I do. You probably will eventually too.
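
    For illustration, a minimal sketch of the NPC idea (hedged: this assumes an Ollama-style local HTTP API on its default port, and the model name and prompt are placeholders, not anything a game actually ships):

    ```python
    import requests

    # Ask a locally hosted model (via Ollama's HTTP API) for an NPC line.
    # Everything runs on-device; nothing leaves the machine.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # placeholder: any locally installed model
            "prompt": (
                "You are a grumpy blacksmith NPC. A player asks: "
                "'Can you repair my sword?' Reply in one sentence."
            ),
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=60,
    )
    print(response.json()["response"])
    ```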

    • Codilingus@sh.itjust.works · 2 months ago

      I wish someone would make software that utilizes things like an M.2 Coral TPU to enhance gameplay, like frame generation or upscaling for games and videos. Some GPUs are even starting to put M.2 slots on the GPU itself, in case the latency from a motherboard M.2 slot to the PCIe GPU would be too high.
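
      Today the closest you can get is running a model on the Coral yourself; a sketch with the pycoral library, where the upscaler model file is entirely made up since nothing like this ships for games yet:

      ```python
      import numpy as np
      from pycoral.utils.edgetpu import make_interpreter

      # Load a hypothetical upscaling model compiled for the Edge TPU.
      interpreter = make_interpreter("upscaler_edgetpu.tflite")  # placeholder file
      interpreter.allocate_tensors()

      # Feed one low-res frame and read back the upscaled result.
      inp = interpreter.get_input_details()[0]
      frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a real frame
      interpreter.set_tensor(inp["index"], frame)
      interpreter.invoke()
      out = interpreter.get_output_details()[0]
      upscaled = interpreter.get_tensor(out["index"])
      ```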

  • Poutinetown@lemmy.ca · 2 months ago

    Tbh this is probably for things like DLSS, captions, etc. Not necessarily for chatbots or generative art.

  • NounsAndWords@lemmy.world · 2 months ago

    I would pay for AI-enhanced hardware…but I haven’t yet seen anything that AI is enhancing, just an emerging product being tacked on to everything they can for an added premium.

    • aname@lemmy.one · 2 months ago

      My Samsung A71 has had devil AI since day one. You know that feature where you can mostly use fingerprint unlock, but then once a day or so it asks for the actual passcode for added security? My A71’s AI has a 100% success rate of picking the most inconvenient time to ask for the passcode instead of letting me do my thing.

        • lmaydev@lemmy.world · 2 months ago

          I’m a programmer, so when learning a new framework or library I use it as interactive docs that allow follow-up questions.

          I also use it to generate things like regexes and SQL queries.

          It’s also really good at refactoring code and other repetitive tasks like that.

          • Nachorella@lemmy.sdf.org · 2 months ago

            It does seem like a good translator for the less human-readable stuff like regex and such. I’ve dabbled with it a bit, but I’m a technical artist and haven’t found much use for it in the things I do.

        • lmaydev@lemmy.world · 2 months ago

          An NPU, or Neural Processing Unit, is a dedicated processor or processing unit on a larger SoC designed specifically for accelerating neural network operations and AI tasks.

          Exactly what we are talking about.

            • lmaydev@lemmy.world · 2 months ago

              It’s hardware specifically designed for running AI tasks. Like neural networks.

              An NPU, or Neural Processing Unit, is a dedicated processor or processing unit on a larger SoC designed specifically for accelerating neural network operations and AI tasks. Unlike general-purpose CPUs and GPUs, NPUs are optimized for data-driven parallel computing, making them highly efficient at processing massive multimedia data like videos and images and at processing data for neural networks.
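
              In practice, targeting an NPU from application code mostly means picking an accelerated backend. A sketch with ONNX Runtime (hedged: provider names vary by vendor, QNNExecutionProvider is Qualcomm’s NPU backend, and model.onnx is a placeholder):

              ```python
              import numpy as np
              import onnxruntime as ort

              # Prefer the NPU-backed provider, falling back to CPU if unavailable.
              session = ort.InferenceSession(
                  "model.onnx",  # placeholder model file
                  providers=["QNNExecutionProvider", "CPUExecutionProvider"],
              )

              inp = session.get_inputs()[0]
              # Symbolic dims (e.g. batch size) are replaced with 1 for the dummy input.
              shape = [d if isinstance(d, int) else 1 for d in inp.shape]
              outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
              ```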

    • Fermion@feddit.nl · 2 months ago

      It’s like RGB all over again.

      At least RGB didn’t make a giant stock market bubble…

    • DerisionConsulting@lemmy.ca · 2 months ago

      In the 2010s, it was cramming a phone app and wifi into things to try to justify the higher price, while also spying on users in new ways. The device may even have a screen for basically no reason.
      In the 2020s, it’s those same useless features, now with a bit of software with a flashy name that removes even more control from the user and lets the manufacturer spy on them even further.

    • PriorityMotif@lemmy.world · 2 months ago

      Already had that Google thingy for years now. The USB/NVMe device for image recognition. Can’t remember what it’s called now. Cost like $30.

      Edit: Google Coral TPU

  • qaz@lemmy.world · 2 months ago

    I would pay extra to be able to run open LLMs locally on Linux. I wouldn’t pay for Microsoft’s Copilot stuff that’s shoehorned into every interface imaginable while also causing privacy and security issues. The context matters.
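
    This already works today without any NPU at all; a minimal sketch with llama-cpp-python, where the GGUF file is a placeholder for whatever open model you’ve downloaded:

    ```python
    from llama_cpp import Llama

    # Load an open-weights model from a local file; nothing leaves the machine.
    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")  # placeholder path
    result = llm("Explain what an NPU is in one sentence.", max_tokens=64)
    print(result["choices"][0]["text"])
    ```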

    • Blue_Morpho@lemmy.world · 2 months ago

      That’s why NPUs are actually a good thing. The ability to run LLMs locally instead of sending everything to Microsoft/OpenAI for data mining will be great.

      • schizo@forum.uncomfortable.business · 2 months ago

        I hate to be that guy, but do you REALLY think that on-device AI is going to prevent all your shit being sent to anyone who wants it, in the form of “diagnostic data” or “usage telemetry” or whatever weasel-worded bullshit is in the terms of service?

        They’ll just send the results for “quality assurance” instead of doing the math themselves and save a bundle on server hosting.

        • alessandro@lemmy.ca · 2 months ago

          All your unattended data will be taken (and some of the attended data too). That doesn’t mean you should stop attending to your data. Even if you’re somehow forced to use Windows instead of an open alternative, it doesn’t mean you can’t dual-boot or use other privacy-conscious devices when dealing with your sensitive data.

          Closed/proprietary OSes and hardware drivers can’t be considered safe by design.

        • chicken@lemmy.dbzer0.com · 2 months ago

          > but do you REALLY think that on-device AI is going to prevent all your shit being sent to anyone who wants it

          Yes, obviously, especially if you are running all open source software.

  • cygnus@lemmy.ca · 2 months ago

    The biggest surprise here is that as many as 16% are willing to pay more…

    • ShinkanTrain@lemmy.ml · 2 months ago

      I mean, if framegen and supersampling solutions become so good on those chips that the regular versions can’t compare, I guess I would get the AI version. I wouldn’t pay extra compared to current pricing, though.

  • BlackLaZoR@kbin.run · 2 months ago

    Unless you’re doing music or graphic design, there’s no use case. And if you do, you probably have a high-end GPU anyway.

    • DarkThoughts@fedia.io · 2 months ago

      I could see a use for local text gen, but that apparently eats quite a bit more than what desktop PCs can offer if you want actually good results and speed. Generally, though, I’d rather have separate expansion cards for this. Making it part of other processors is just going to increase their price, even for those who have no use for it.

      • BlackLaZoR@kbin.run · 2 months ago

        There are local models for text gen. They’re not as good as ChatGPT, but at the same time they’re uncensored, so they may or may not be useful.

        • DarkThoughts@fedia.io · 2 months ago

          Yes, I know, and that’s my point. But you need the necessary hardware to run those models in a performant way. Waiting a minute for some vaguely relevant gibberish is not going to be of much use. You could also use generative text for other applications, such as video game NPCs; especially all those otherwise useless drones you see in a lot of open-world titles could gain a lot of depth.

  • rtxn@lemmy.world · 2 months ago

    The dedicated TPM chip is already the target of side-channel attacks. A new processor running arbitrary code would be a black hat’s wet dream.

    • MajorHavoc@programming.dev · 2 months ago

      It will be.

      IoT devices are already getting owned at staggering rates. Adding a learning model that currently cannot be secured is absolutely going to happen, and it’s going to cause a whole new batch of breaches.

      • rtxn@lemmy.world · 2 months ago

        TPM-FAIL from 2019. It affects Intel fTPM and some dedicated TPM chips: link

        The latest (at the moment) UEFI vulnerability, UEFIcanhazbufferoverflow, is also related to, but not directly caused by, TPM on Intel systems: link

        • barsquid@lemmy.world · 2 months ago

          That’s insane. How can they be doing security hardware and leave a timing attack in there?

          Thank you for those links, really interesting stuff.

  • ZILtoid1991@lemmy.world · 2 months ago

    A big letdown for me is that, except in some rare cases, those extra AI features are useless outside of AI. Some NPUs are straight-up DSPs that could easily run OpenCL code; others are designed to handle only the floating-point formats meant for machine learning, not normal ones; and some are CPU extensions that are just even bigger vector multipliers for select datatypes (AMX).

  • Xenny@lemmy.world · 2 months ago

    As with any proprietary hardware on a GPU, it all comes down to third-party software support, and classically, if the market isn’t there, it’s not supported.

    • Appoxo@lemmy.dbzer0.com · 2 months ago

      Assuming there’s no catch-on after 3-4 cycles, I’d say the tech is either not mature enough, too expensive for too little result, or (as you said) there’s generally no interest in it.

      Maybe it needs a bit of maturing and a re-introduction at a later point.

    • Appoxo@lemmy.dbzer0.com · 2 months ago

      Raytracing is something I’d pay for even if unasked, assuming it meaningfully impacts the quality and doesn’t demand outlandish prices.
      And they’d need to put it in unasked and cooperate with devs, or else it won’t catch on quickly enough.
      Remember Nvidia Ansel?

  • Nora@lemmy.ml · 2 months ago

    I was recently looking for a new laptop and I actively avoided laptops with AI features.

    • cheee@lemmings.world · 2 months ago

      Look, me too, but the average punter on the street just looks at new AI features and goes “OK, sure, give it to me.” Tell them about the dodgy shit that goes with AI and you’ll probably get a shrug at most.