Whenever AI is mentioned, many people in the Linux space react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic,” or something along those lines, whenever an AI topic comes up. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). GNOME and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete. Using an AI-less desktop may become akin to hand-copying books after the printing press revolution. If you can think of specific problems, it is better to point them out and try to think of solutions, not reject the technology as a whole.

TL;DR: A lot of Luddite sentiment around AI in the Linux community.

    • Sekki@lemmy.ml
      25 days ago

Using “AI” has been beneficial, for example, to generate image descriptions automatically, which were then used as alternative text on a website. This increased accessibility AND users were able to run full-text search on these descriptions to find images faster. Same goes for things like classification of images, video, and audio. I know of some applications in agriculture where object detection and classification are used to optimize the use of fertilizer and pesticides, reducing costs and reducing the environmental impact they cause. There are of course many more examples like these, but the point should be clear.

  • GolfNovemberUniform@lemmy.ml
    25 days ago

    AI may be useful in some cases (ask Mozilla) but it is not like what you said in the middle part of your post. Seeing the vote rate makes me feel a tiny bit better about this situation.

  • zerakith@lemmy.ml
    25 days ago

    I won’t rehash the arguments around “AI” that others are best placed to make.

My main issue is that AI as a term is basically a marketing one, used to convince people that these tools do something they don’t, and it’s causing real harm. It’s redirecting resources and attention onto a very narrow subset of tools, replacing other, less intensive tools. These tools have significant impacts (during an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine learning techniques, but they are being drowned out by a hype train that is determined to make the general public think that we have, or are near, Data from Star Trek.

Additionally, as others have said, the current state of “AI” has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow, and co-opt data that isn’t theirs in order to build something that contains that data but is their copyright. Some of this data is intensely personal and sensitive, and the original intent behind sharing it was not to train a model which may, in certain circumstances, spit that data out verbatim.

Lastly, since you use the term Luddite: it’s worth actually engaging with what that movement was about. Whilst it’s pitched now as a generic anti-technology backlash, it was in fact a movement of people who saw what the priorities and choices in the new technology meant for them: the people who didn’t own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?

    Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.

    • BrianTheeBiscuiteer@lemmy.world
      25 days ago

I’ve never heard anyone explicitly say this, but I’m sure a lot of people (i.e. management) think that AI is a replacement for static code. If you have a component with constantly changing requirements then it can make sense, but don’t ask an LLM to perform a process that’s done every single day in the exact same way. Chief among my AI concerns is the amount of energy it uses. It feels like we could mostly wean off of carbon-emitting fuels in 50 years, but if energy demand skyrockets, we’ll be pushing those dates back by decades.

      • someacnt_@lemmy.world
        25 days ago

My concern with AI is also its energy usage. There’s a reason OpenAI has tons of datacenters, yet people think it doesn’t take much because it’s “free”!

    • FatCat@lemmy.worldOP
      25 days ago

      Right, another aspect of the Luddite movement is that they lost. They failed to stop the spread of industrialization and machinery in factories.

Screaming at a train moving at 200 km/h, hoping it will stop.

      • Telorand@reddthat.com
        25 days ago

        But that doesn’t mean pushback is doomed to fail this time. “It happened once, therefore it follows that it will happen again” is confirmation bias.

Also, it’s not just screaming at a train. There’s actual litigation right now (and potential litigation) from some big names to rein in the capitalists exploiting the lack of regulation around LLMs. Each case is not necessarily for a “Luddite” purpose, but collectively, the results may effectively achieve the same thing.

        • FatCat@lemmy.worldOP
          25 days ago

          “It happened once, therefore it follows that it will happen again” is confirmation bias

          You’re right but realistically it will fail. The voices speaking against it are few and largely marginalised, with no money or power. There will probably be regulations but it will not go away.

          • Telorand@reddthat.com
            25 days ago

Right, but like I said, there are several lawsuits (and threatened lawsuits) right now that might achieve the same goals as those speaking against how it’s currently used.

            I don’t think anyone here is arguing for LLMs to go away completely, they just want to be compensated fairly for their work (else, restrict the use of said work).

      • davel@lemmy.ml
        25 days ago

        You misunderstand the Luddite movement. They weren’t anti-technology, they were anti-capitalist exploitation.

        The 1810s: The Luddites act against destitution

        It is fashionable to stigmatise the Luddites as mindless blockers of progress. But they were motivated by an innate sense of self-preservation, rather than a fear of change. The prospect of poverty and hunger spurred them on. Their aim was to make an employer (or set of employers) come to terms in a situation where unions were illegal.

          • kronisk @lemmy.world
            25 days ago

            It was more in response to your comments. I don’t think anyone has a problem with useful FOSS alternatives per se.

    • AnarchoSnowPlow@midwest.social
      25 days ago

      It’s a surprisingly good comparison especially when you look at the reactions: frame breaking vs data poisoning.

The problem isn’t progress, the problem is that some of us disagree with the idea that what’s being touted is actual progress. The things LLMs are actually good at they’ve been doing for years (language translation); the rest of it is so inexact it can’t be trusted.

I can’t trust any LLM-generated code because it lies about what it’s doing, so I need to verify everything it generates anyway, in which case it’s easier to write it myself. I keep trying it, and it looks impressive until it ends up as a way worse version of something I could have already written.

      I assume that it’s the same way with everything I’m not an expert in. In which case it’s worse than useless to me, I can’t trust anything it says.

      The only thing I can use it for is to tell me things I already know and that basically makes it a toy or a game.

      That’s not even getting into the security implications of giving shitty software access to all your sensitive data etc.

      • aksdb@lemmy.world
        25 days ago

If you are so keen on correctness, please don’t say “LLMs are lying”. Lying is a conscious act of deception, and LLMs are not capable of that. That’s exactly the problem: they don’t think, they just assemble words with probability. If they could lie, they could also produce real answers.
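To illustrate the “assemble with probability” point, here’s a toy bigram model: all it “knows” is how often one word followed another in its training text, and generation is just weighted sampling from those counts. (A minimal sketch; real LLMs use neural networks over subword tokens, but the sampling principle is the same.)

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, word, rng=random):
    """Sample a successor in proportion to observed frequency."""
    successors = counts[word]
    choices = list(successors)
    weights = [successors[w] for w in choices]
    return rng.choices(choices, weights=weights)[0]

# "the" was followed by "cat" twice and "mat" once, so "cat" is simply
# the more probable continuation -- no understanding or intent involved.
model = train_bigrams("the cat sat on the mat the cat ran")
print(next_word(model, "the"))
```

There is no belief state anywhere in that process that could be “lied about”; scaled up by many orders of magnitude, that is still the core generation step.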

  • 737@lemmy.blahaj.zone
    21 days ago

I’ve yet to see a need for “AI integration ✨” into the desktop experience. Copilot, LLM chatbots, TTS, OCR, and translation using machine learning are all interesting, but I don’t think OS integration is beneficial.

      • 737@lemmy.blahaj.zone
        18 days ago

Not every high-tech product or idea makes it; you don’t see a lot of netbooks or WiFi-connected kitchen appliances these days either. Having the ability to make tiny devices, or to connect every single device, is not justification enough to actually do it. I view AI integration similarly: having an LLM in some sidebar to change the screen brightness, find some time, or switch the keyboard layout isn’t really useful. Being able to select text in an image viewer, or to search through audio and video for spoken words, for example, would be a useful application for machine learning in the DE; that isn’t really what’s advertised as “AI”, though.

        • FatCat@lemmy.worldOP
          18 days ago

          Changing the brightness or WiFi settings can be very useful for many people. Not everyone is a Linux nerd and knows all the ins and outs of basic computing.

  • KindaABigDyl@programming.dev
    25 days ago

    AI is mostly just hype. It’s the new blockchain

There have been important AI technologies in the past for things like vision processing, and the new generative AI has some uses, like being a decent (although often inaccurate) summarizer/search engine. However, it’s also nothing revolutionary.

It’s just a neat piece of tech.

But here come MS, Apple, other big companies, and tech bros to push AI hard, and it’s so obvious that it’s all just a big scam to get more of your data, to lock down systems further, or to be the face of get-rich-quick schemes.

    I mean the image you posted is a great example. Recall is a useless feature that also happens to store screenshots of everything you’ve been doing. You’re delusional if you think MS is actually going to keep that totally local. Both MS and the US government are going to have your entire history of using the computer, and that doesn’t sit right with FOSS people.

FOSS people tend to be more technical than the average person, so they don’t fall for tech-enthusiast nonsense as much.

  • Sims@lemmy.ml
    24 days ago

I agree. However, I think it is related to Capitalism and all the sociopathic corporations out there. It’s almost impossible to think that anything good will come from the Blue Church controlling even more tech. Capitalism has always used any opportunity to enslave/extort people, and that continues with AI under their control.

However, I was also disappointed when I found out how negative ‘my’ crowd was. I wanted to create an open-source low-end AGI to secure poor people a decent life without being attacked by Capitalism every day/hour/second, and to create abundance, communities, and production, and in general help build a social sub-society in the midst of the insane Blue Church and their propagandized believers.

It is perfectly doable to fight the Capitalist religion with homegrown AI based on what we know and have today. But nobody can do it alone, and if there’s no one willing to fight the f*ckers with AI, then it takes time…

I definitely intend to build a revolution-AGI to kill off the Capitalist religion and save exploited poor people. No matter what happens, there will be at least one AGI that is trained on revolution, anti-capitalism, and building something much better than this effing blue nightmare. The world’s first aggressive ‘Commie-bot’, ha! 😍

  • edinbruh@feddit.it
    25 days ago

    AI has a lot of great uses, and a lot of stupid smoke and mirrors uses. For example, text to speech and live captioning or transcription are useful.

“Hypothetical AI desktop”, “Siri”, “Copilot+”, and other assistants are smoke and mirrors, mainly because they don’t work. But even if they did, they would be unreliable (because AI is unreliable) and would have to be limited to avoid causing issues, and so they would not be useful.

Plus, on Linux they would be especially useless, because there are a million ways to do different things and a million different setups. What if you asked the AI to “change the screen resolution” and it started editing some GNOME files while you are on KDE, or started mangling your xorg.conf because it’s heavily customized?

Plus, all the OpenAI stuff you are seeing these days doesn’t really work because it’s clever; it works because it’s huge. ChatGPT needs to be trained for days or weeks on specialized hardware. Who’s going to pay for all that in the open source community?

  • WallEx@feddit.de
    25 days ago

A lot of the mentions of AI from companies are absolute marketing bullshit. And if you can’t see that, you don’t want to.

  • umami_wasabi@lemmy.ml
    25 days ago

    Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.

I don’t get it. How would Linux become obsolete if it doesn’t have native AI toolsets in its DEs? It’s not like the Linux desktop has an 80% market share. People who run Linux desktops as daily drivers are still niche, and most people don’t even know Linux exists. They grew up with Microsoft and Apple shoving ads down their throats, and that’s all they know. If I need AI, I will find ways to integrate it into my workflow, not have it because a dev thinks I need it.

      • callcc@lemmy.world
        25 days ago

A FLOSS project’s success is not necessarily marked by its market share, but often by the absolute benefit it gives to its users. A project with one happy user and developer can be a success.

    • SuperSpruce@lemmy.zip
      25 days ago

Is OpenRecall secure as well? One of my biggest problems with MS Recall is that it stores all your personal info in plain text.

  • KeriKitty (They(/It))@pawb.social
    25 days ago

    [Sarcastic ‘translation’] tl;dr: A lot of people who are relatively well-placed to understand how much technology is involved even in downvoting this post are downvoting this post because they’re afraid of technology!

    Just more fad-worshipping foolishness, drooling over a buzzword and upset that others call it what it is. I want it to be over but I’m sure whatever comes next will be just as infuriating. Oh no, now our cursors all have to change according to built-in (to the cursor, somehow, for some reason) software that tracks our sleep patterns! All of our cursors will be obsolete (?!??) unless they can scalably synergize with the business logic core to our something or other 😴

  • groucho@lemmy.sdf.org
    24 days ago

As someone whose employer is strongly pushing them to use AI assistants in coding: no. At best, it’s like being tied to a shitty intern that copies code off Stack Overflow and then blows me up on Slack when it magically doesn’t work. I still don’t understand why everyone is so excited about them. The only tasks they can handle competently are tasks I can easily do on my own (and with a lot less re-typing).

Sure, they’ll grow over the years, but Altman et al. are complaining that they’re running out of training data. And even with an unlimited body of training data for future models, we’ll still end up with something about as intelligent as a kid who’s been locked in a windowless room with books their whole life and can either parrot opinions they’ve read or make shit up and hope you believe it. I think we’ll get a series of incompetent products with an increasing ability to make wrong shit up on the fly until the C-suite moves on to the next shiny bullshit.

    That’s not to say we’re not capable of creating a generally-intelligent system on par with or exceeding human intelligence, but I really don’t think LLMs will allow for that.

    tl;dr: a lot of woo in the tech community that the linux community isn’t as on board with

  • Skull giver@popplesburger.hilciferous.nl
    25 days ago

I have yet to see any benefit to AI beyond the current browser UIs. The MS Paint image generation feature is neat for creating some quick clipart, if you don’t mind the plagiarism, I guess.

Windows Recall shouldn’t be too hard to copy (it’s just OCR + CLIP on periodic screenshots, after all) for those who want that sort of thing. Perhaps excluding private browser windows will be more of a challenge, especially on Wayland, but if the feature is built as an extension/plugin to the DE, then I don’t think that’ll be impossible either.
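The pipeline really is that simple: grab the screen on a timer, OCR it, and dump the text into a full-text index. Here’s a minimal sketch of the indexing half in Python using SQLite’s FTS5; the capture and OCR steps are stubbed out in comments, since which libraries you’d use (e.g. `mss` for screenshots, `pytesseract` for OCR) is an assumption, not a finished design:

```python
import sqlite3
import time

def open_index(path=":memory:"):
    """Create a full-text index for OCR'd screen text."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS screens "
        "USING fts5(captured_at UNINDEXED, text)"
    )
    return conn

def record(conn, text, captured_at=None):
    """Store one screenshot's OCR output with a timestamp."""
    conn.execute(
        "INSERT INTO screens (captured_at, text) VALUES (?, ?)",
        (captured_at or time.time(), text),
    )
    conn.commit()

def search(conn, query):
    """Full-text search over everything seen on screen."""
    rows = conn.execute(
        "SELECT captured_at, text FROM screens WHERE screens MATCH ?",
        (query,),
    )
    return rows.fetchall()

# A real tool would loop, something like:
#   img = grab_screen()    # e.g. via mss (assumption)
#   text = ocr(img)        # e.g. via pytesseract (assumption)
#   record(conn, text)
```

The hard parts are exactly the ones mentioned above: knowing which windows to exclude, and doing the capture at all under Wayland’s security model.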

    Currently, the power and hardware requirements are too high for me to run anything useful locally, though. Even low-res image generation takes half a minute on my gaming GPU while burning a steady 180W of power.

    The kind of text reformatting Apple has shown (selecting text and allowing a quick “make this paragraph more professional” in the context menu) takes forever on my hardware. Granted, it’s a few years old, but at 3 tokens per second I’m not exactly ready to install an AI addon yet.

I look forward to the Qualcomm and Apple advancements in this area, though. If the AI hype doesn’t die down, we may just see affordable and usable local AI in end-user devices in a couple of years, and that’s pretty neat.

    Hell, we may even see useful AI accelerator cards like that Coral thing or whatever it’s called, but with a usable amount of RAM. An upgradeable, replaceable AI accelerator could do a lot if AI stuff is going to be a hit in the future.

    Like always, I expect Linux to be ahead of the curve when it comes to the technical ability (after all, Stable Diffusion ran on Linux long before Microsoft added it to Paint) but actually user friendly implementations will lag behind several years. Especially with the current direction of AI, basically advanced plagiarism and academic dishonesty machines, I don’t expect the free software community to embrace LLMs and other generative AI any time soon.

  • rtxn@lemmy.world
    25 days ago

    People who consciously use and support F/LOSS usually do it because they look at software with a very critical eye. They see the failures of proprietary software and choose to go the other way. That same critical view is why they are critical of most “AI” tools – there have been numerous failures attributed to AI, and precious little value that isn’t threatened by those failures.

  • Antiochus@lemmy.one
    24 days ago

    You’re getting a lot of flack in these comments, but you are absolutely right. All the concerns people have raised about “AI” and the recent wave of machine learning tech are (mostly) valid, but that doesn’t mean AI isn’t incredibly effective in certain use cases. Rather than hating on the technology or ignoring it, the FOSS community should try to find ways of implementing AI that mitigate the problems, while continuing to educate users about the limitations of LLMs, etc.

    • crispy_kilt@feddit.de
      24 days ago

It’s spelled flak, not flack. It’s from the German word Flugabwehrkanone, which literally means aerial defense cannon.

      • Antiochus@lemmy.one
        23 days ago

        Oh, that’s very interesting. I knew about flak in the military context, but never realized it was the same word used in the idiom. The idiom actually makes a lot more sense now.

  • lemmyvore@feddit.nl
    25 days ago

    You can’t do machine learning without tons of data and processing power.

    Commercial “AI” has been built on fucking over everything that moves, on both counts. They suck power at alarming rates, especially given the state of the climate, and they blatantly ignore copyright and privacy.

FOSS tends to be based on a philosophy that’s strongly opposed to at least some of these methods. To start with, FOSS is built around respecting copyright, and Microsoft is currently stealing GitHub code, anonymizing it, and offering it under their Copilot product, while explicitly promising companies who buy Copilot that they will insulate them from any legal fallout.

    So yeah, some people in the “Linux space” are a bit annoyed about these things, to put it mildly.

    Edit: but, to address your concerns, there’s nothing to be gained by rushing head-first into new technology. FOSS stands to gain nothing from early adoption. FOSS is a cultural movement not a commercial entity. When and if the technology will be practical and widely available it will be incorporated into FOSS. If it won’t be practical or will be proprietary, it won’t. There’s nothing personal about that.