Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • Rose@slrpnk.net
    3 months ago

    The technology side of generative AI is fine. It’s interesting and promising technology.

    The business side sucks, and the AI companies are just the latest continuation of the tech grift: squeezing as much money as possible out of the latest hyped tech, laws and social or environmental impact be damned.

    We need legislation to catch up. We also need society to be able to catch up. We can’t let the AI bros continue to foist more “helpful tools” on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.

    • theherk@lemmy.world
      3 months ago

      I agree, but I’d take it a step further and say we need legislation to far surpass the current conditions. For instance, I think it should be governments leading the charge in this field, as a matter of societal progress and national security.

  • Glitch@lemmy.dbzer0.com
    3 months ago

    I don’t dislike AI; I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate it.

  • Justdaveisfine@midwest.social
    3 months ago

    I would likely have different thoughts on it if I (and others) were able to consent to my data being used to train it, or even to consent to having it at all rather than it just showing up in an unwanted update.

  • snooggums@lemmy.world
    3 months ago

    I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.

    Then I want all AI that is offered commercially or in commercial products to be required to verify its training data and to be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts.

    AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.

    • Xaphanos@lemmy.world
      3 months ago

      So, a lot of our AI customers have no real use for LLMs. They’re pharmaceutical and genetics companies looking for treatments and cures for things like pancreatic cancer and Parkinson’s.

      It is a big problem to paint all generative AI with the “stealing IP” brush.

      It seems likely to me that an AI may be the only controller that can handle all of the rapidly changing parameters needed to maintain a safe fusion process. Yes, it needs safeties. But it needs research, too.

      I urge much more consideration of the specific uses of this new technology. I agree that IP theft is bad. Let’s target the bad parts carefully.

    • Opinionhaver@feddit.uk
      3 months ago

      The term artificial intelligence is broader than many people realize. It doesn’t refer to a single technology or a specific capability, but rather to a category of systems designed to perform tasks that would normally require human intelligence. That includes everything from pattern recognition, language understanding, and problem-solving to more specific applications like recommendation engines or image generation.

      When people say something “isn’t real AI,” they’re often working from a very narrow or futuristic definition - usually something like human-level general intelligence or conscious reasoning. But that’s not how the term has been used in computer science or industry. A chess-playing algorithm, a spam filter, and a large language model can all fall under the AI umbrella. The boundaries of AI shift over time: what once seemed like cutting-edge intelligence often becomes mundane as we get used to it.

      So rather than being a misleading or purely marketing term, AI is just a broad label we’ve used for decades to describe machines that do things we associate with intelligent behavior. The key is to be specific about which kind of AI we’re talking about - like “machine learning,” “neural networks,” or “generative models” - rather than assuming there’s one single thing that AI is or isn’t.

  • FuryMaker@lemmy.world
    3 months ago

    Lately, I just wish it didn’t lie or make stuff up. And when you draw attention to false information, it often doubles down, or apologises and then just repeats the BS.

    If it doesn’t know something, it should just admit it.

    • Croquette@sh.itjust.works
      3 months ago

      LLMs don’t know that they’re wrong. They just mimic how we talk, and there is no conscious choice behind the words used.

      They just try to predict which word to use next, trained on an ungodly amount of data.
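      The "predict which word comes next" idea can be sketched as a toy first-order Markov chain over words. This is only an illustration with a made-up miniature corpus; a real LLM uses a neural network over tokens with vastly more data and context, but the core move is the same: emit a statistically likely continuation, with no notion of true or false.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "ungodly amount" of training text
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows each word: a first-order Markov chain
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most common continuation seen in training.
    # No understanding, no notion of being "wrong" -- just frequency.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat", because "cat" follows "the" most often here
```

      The model answers confidently even when the corpus gives it no good basis, which is the forum point in miniature: fluent output, no fact-checking step anywhere.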

  • HeartyOfGlass@lemm.ee
    3 months ago

    My fantasy is for “everyone” to realize there’s absolutely nothing “intelligent” about current AI. There is no rationalization. It is incapable of understanding & learning.

    ChatGPT et al are search engines. That’s it. It’s just a better Google. Useful in certain situations, but pretending it’s “intelligent” is outright harmful. It’s harmful to people who don’t understand that & take its answers at face value. It’s harmful to business owners who buy into the smoke & mirrors. It’s harmful to the future of real AI.

    It’s a fad. Like NFTs and Bitcoin. It’ll have its die-hard fans, but we’re already seeing the cracks - it’s absorbed everything humanity’s published online & it still can’t write a list of real book recommendations. Kids using it to “vibe code” are learning how useless it is for real projects.

  • Brave Little Hitachi Wand@lemmy.world
    3 months ago

    Part of what makes me so annoyed is that there’s no realistic scenario I can think of that would feel like a good outcome.

    Emphasis on realistic, before anyone describes some insane turn of events.

    • venusaur@lemmy.worldOP
      3 months ago

      Some jobs are automated and prices go down. That’s realistic enough. To be fair, there’s likely both good and bad in that scenario. So tack on some level of UBI. Still realistic? That’d be pretty good.

  • Hemingways_Shotgun@lemmy.ca
    3 months ago

    I don’t have negative sentiments towards AI. I have negative sentiments towards the uses it’s being put to.

    There are places where AI can be super exciting and useful, namely places where the ability to quickly and accurately process large amounts of data can be critically life-saving: air traffic control, language translation, emergency response preparedness, etc.

    But right now it’s being used to paint shitty pictures so that companies don’t have to pay actual artists.

    If I had a choice, I’d say no AI in the arts; save it for the data processing applications and leave the art to the humans.

  • november@lemmy.vg
    3 months ago

    I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That’s what I want.

    • Libra00@lemmy.ml
      3 months ago

      So your argument against AI is that it’s making us dumb? Just like people have claimed about every technology since the invention of writing? The essence of the human experience is change: we invent new tools, and those tools change how we interact with the world. That’s how it’s always been, and there have always been people saying the internet is making us dumb, or the TV, or books, or whatever.

      • november@lemmy.vg
        3 months ago

        Get back to me after you have a few dozen conversations with people who openly say “Well I asked ChatGPT and it said…” without providing any actual input of their own.

        • Libra00@lemmy.ml
          3 months ago

          Oh, you mean like people have been saying about books for 500+ years?

          • Cethin@lemmy.zip
            3 months ago

            Not remotely the same thing. Books almost always have context on what they are, like a listed author, and hopefully citations if they’re about real things. You can figure out more about them. LLMs create confident-sounding outputs that are just predictions of what an output should look like based on the input. They don’t reason and don’t tell you how they generated their responses.

            The problem is that LLMs are sold to people as Artificial Intelligence, so they sound smart. In actuality, they don’t think at all; they just generate confident-sounding results. It’s literally companies selling con(fidence) men as a product, and people fully trust these con men.

            • Libra00@lemmy.ml
              3 months ago

              Yeah, nobody has ever written a book that’s full of bullshit, bad arguments, and obvious lies before, right?

              Obviously anyone who uses any technology needs to be aware of the limitations and pitfalls, but to imagine that this is some entirely new kind of uniquely-harmful thing is to fail to understand the history of technology and society’s responses to it.

              • Cethin@lemmy.zip
                3 months ago

                You can look up an author and figure out whether they’re a reliable source of information. Most authors either write bullshit or don’t, at least on a particular subject. LLMs are unreliable: sometimes they return bullshit and sometimes they don’t, and you never know which, but it’ll sound just as confident either way. Also, people are led to believe they’re actually thinking about their responses, and they aren’t. They aren’t considering whether something is real or not, only whether it is a statistically probable output.

                • Libra00@lemmy.ml
                  3 months ago

                  You should check your sources when you’re googling or using ChatGPT too (most models I’ve seen now cite sources you can check when they’re reporting factual claims); that’s not unique to those things. Yeah, LLMs might be more likely to give bad info, but people are unreliable too: they’re biased and flawed, often have an agenda, and are frequently, confidently wrong. Guess who writes books? Mostly people. So until we’re ready to apply that standard to all sources of information, it seems unreasonable to arbitrarily hold LLMs to a higher standard just because they’re new.

              • november@lemmy.vg
                3 months ago

                Yeah, nobody has ever written a book that’s full of bullshit, bad arguments, and obvious lies before, right?

                Lies are still better than ChatGPT. ChatGPT isn’t even capable of lying. It doesn’t know anything. It outputs statistically probable text.

                • Libra00@lemmy.ml
                  3 months ago

                  How exactly? Bad information is bad information, regardless of the source.

    • givesomefucks@lemmy.world
      3 months ago

      AI people always want to ignore the environmental damage as well…

      Like all that electricity and water are just super abundant things humans have plenty of.

      Every time some idiot asks AI instead of googling it themselves, the planet gets a little more fucked.

      • Libra00@lemmy.ml
        3 months ago

        Are you not aware that Google also runs on giant data centers that eat enormous amounts of power too?

        • Aksamit@slrpnk.net
          3 months ago

          Multiple things can be bad at the same time, they don’t all need to be listed every time any one bad thing is mentioned.

          • Libra00@lemmy.ml
            3 months ago

            I wasn’t listing other bad things, and this isn’t whataboutism; it was a specific criticism of telling people not to use one thing because it uses a ton of power/water when the thing they’re told to use instead also uses a ton of power/water.

            • Aksamit@slrpnk.net
              3 months ago

              Yeah, you’re right. I think I misread your/their comment initially or something. Sorry about that.

              And AI is in search engines now too, so even if asking chatfuckinggpt uses more water than googling something used to, Google now has its own additional fresh-water depleter inserting unwanted AI into whatever you look up.

              We’re fucked.

              • Libra00@lemmy.ml
                3 months ago

                Fair enough.

                Yeah, the integration of AI into search will just make it eat even more power, of course.

          • Libra00@lemmy.ml
            3 months ago

            Per: https://www.rwdigital.ca/blog/how-much-energy-do-google-search-and-chatgpt-use/

            Google search currently uses 1.05 GWh/day. ChatGPT currently uses 621.4 MWh/day.

            The per-query cost for Google is about 10% of what it is for GPT, but Google gets used quite a lot more. So for one user, “just use Google” is fine, but since we’re making prescriptions for all of society here, consider that there are ~300 million cars in the US: even if they were all Honda Civics, they would still burn a shitload of gas and create a shitload of fossil fuel emissions. All I’m saying is that if the goal is to reduce emissions, we should look at the big picture, which will let you understand that taking the bus will do you a lot more good than trading in your F-150 for a Civic.
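            The per-query arithmetic behind that ~10% figure can be sketched like this. Note the daily query volumes are my own assumed round numbers for illustration, not figures from the linked blog; only the daily energy totals come from the comment above.

```python
# Daily energy figures quoted in the comment above
google_wh_per_day = 1.05e9    # 1.05 GWh/day, in watt-hours
chatgpt_wh_per_day = 621.4e6  # 621.4 MWh/day, in watt-hours

# Assumed daily query volumes (rough illustrative estimates, not from the source)
google_queries_per_day = 8.5e9
chatgpt_queries_per_day = 5.0e8

# Energy per single query under these assumptions
google_wh_per_query = google_wh_per_day / google_queries_per_day      # ~0.12 Wh
chatgpt_wh_per_query = chatgpt_wh_per_day / chatgpt_queries_per_day   # ~1.24 Wh

# A Google query costs roughly 10% of a ChatGPT query under these assumptions,
# yet Google's total daily energy use is higher because of its much larger volume
print(round(google_wh_per_query / chatgpt_wh_per_query, 2))  # 0.1
```

            The takeaway matches the comment: per-query cost and total fleet cost point in opposite directions, which is exactly the Civic-versus-bus framing.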

            • givesomefucks@lemmy.world
              3 months ago

              Google search currently uses 1.05 GWh/day. ChatGPT currently uses 621.4 MWh/day

              And oranges are orange

              It doesn’t matter what the totals are when people are talking about one or the other for a single use.

              Fewer people commute to work on private jets than on buses; are you going to say jets are fine and buses are the issue?

              Because that’s where your logic ends up

      • garbagebagel@lemmy.world
        3 months ago

        This is my #1 issue with it. My work is super pushing AI. The other day I was trying to show a colleague how to do something in Teams, and as I was trying to explain (and they were ignoring where I was telling them to click), they said, “you know, this would be a great use of AI to figure it out!”

        I said no and asked them to give me their fucking mouse.

        People are really out there fucking with extremely powerful wasteful AI for something as stupid as that.

    • nimpnin@sopuli.xyz
      3 months ago

      People haven’t ”thought for themselves” since the printing press was invented. You gotta be more specific than that.

      • MudMan@fedia.io
        3 months ago

        Ah, yes, the 14th century. That renowned period of independent critical thought and mainstream creativity. All downhill from there, I tell you.

        • nimpnin@sopuli.xyz
          3 months ago

          Independent thought? All relevant thought is highly dependent on other people and their thoughts.

          That’s exactly why I bring this up. Having systems that teach people to think in similar ways enables us to build complex stuff and have a modern society.

          That’s why it’s really weird to hear this “people should think for themselves” criticism of AI. It’s a similar justification to antivaxxers saying you “should do your own research”.

          Surely there are better reasons to oppose AI?

          • MudMan@fedia.io
            3 months ago

            I agree on the sentiment, it was just a weird turn of phrase.

            Social media has done a lot to temper my techno-optimism about free distribution of information, but I’m still not ready to flag the printing press as the decay of free-thinking.

            • nimpnin@sopuli.xyz
              3 months ago

              Things are weirder than they seem on the surface.

              A math professor colleague of mine calls an extremely restrictive use of language “rigor”, for example.

              • Libra00@lemmy.ml
                3 months ago

                The point isn’t that it’s restrictive, the point is that words have precise technical meanings that are the same across authors, speakers, and time. It’s rigorous because of that precision and consistency, not just because it’s restrictive. It’s necessary to be rigorous with use of language in scientific fields where clear communication is difficult but important to get right due to the complexity of the ideas at play.

                • nimpnin@sopuli.xyz
                  3 months ago

                  Yeah sure buddy.

                  Have you tried to shoehorn real-life stuff into mathematical notation? It is restrictive. You have pre-defined strict boxes with no blurry lines. Free-form thoughts are a lot more flexible than that.

                  Consistency is restrictive. I don’t know why you take issue with that.

          • Soleos@lemmy.world
            3 months ago

            The usage of “independent thought” has never meant “independent of all outside influence”; it has simply meant going through the process of reasoning (thinking through a chain of logic) instead of accepting and regurgitating the conclusions of others without any reasoning of one’s own. It carries a similar lay meaning to being an independent adult: we all rely on others in some way, but an independent adult can usually accomplish the activities of daily living through their own actions.

            • nimpnin@sopuli.xyz
              3 months ago

              Yeah but that’s not what we are expecting people to do.

              In our extremely complicated world, most thinking relies on trusting sources. You can’t independently study and derive most things.

              Otherwise everybody should do their own research about vaccines. But the reasonable thing is to trust a lot of other, more knowledgeable people.

              • Soleos@lemmy.world
                3 months ago

                My comment doesn’t suggest people have to run their own research study or develop their own treatise on every topic. It suggests people make a conscious choice, preferably with reasonable judgment, about which sources to trust, and develop a lay understanding of the argument or conclusion they’re repeating. Otherwise you end up with people on the left and right reflexively saying “communism bad” or “capitalism bad” because their social media environment repeats it a lot, though they’d be hard-pressed to give even a loosely representative definition of either.

                • nimpnin@sopuli.xyz
                  3 months ago

                  This has very little to do with the criticism given by the first commenter. And you can use AI and still do this; they are not in any way exclusive.

    • venusaur@lemmy.worldOP
      3 months ago

      I totally understand your point of view. AI seems like the nail in the coffin for digital dominance over humans. It will debilitate people by today’s standards.

      Can we compare gen AI tools to other tools that currently eliminate some level of labor for us, e.g. drag-and-drop programming tools?

      Where do we draw the line? Can people then think and create in different ways using different tools?

      Some GPTs are already integrating historical conversations. We’re past Markov chains.

    • anomnom@sh.itjust.works
      3 months ago

      Maybe if the actual costs, especially the environmental costs of its energy use, were included in each query, we’d start thinking for ourselves again. It’s not worth it for most of the things it’s used for at the moment.

    • helloworld55@lemm.ee
      3 months ago

      I agree with this sentiment, but I don’t see it actually convincing anyone of the dangers of AI. It reminds me a lot of how teachers said calculators won’t always be available and we need to learn how to do mental math. That didn’t convince anyone then.

  • AsyncTheYeen@lemmy.world
    3 months ago

    People have negative sentiments towards AI under a capitalist system, where the most successful is equal to the most profitable, and that does not translate into the most useful for humanity.

    We have the technology to feed everyone, and yet we don’t. We have the technology to house everyone, and yet we don’t. We have the technology to teach everyone, and yet we don’t.

    Capitalist democracy is not real democracy.

    • Randomgal@lemmy.ca
      3 months ago

      This is it. People don’t have feelings for a machine. People have feelings for the system and the oligarchs running things, but said oligarchs keep telling you to hate the inanimate machine.

  • Retro_unlimited@lemmy.world
    3 months ago

    I was pro AI in the past, but seeing the evil ways these companies use AI just disgusts me.

    They steal their training data, and they manipulate the algorithm to manipulate the users. It’s all around evil how the big companies use AI.