I promise this question is asked in good faith. I do not currently see the point of generative AI, and I want to understand why there’s hype. There are ethical concerns, but we’ll set ethics aside for this question.

In creative works like writing or art, it feels soulless and poor quality. In programming, at best it’s a shortcut that avoids deeper learning; at worst it spits out garbage code that you spend more time debugging than if you had just written it yourself.

When I see AI ads directed at individuals, the selling point is convenience. But I would feel robbed of the human experience if I used AI in place of human interaction.

So what’s the point of it all?

  • Flaqueman@sh.itjust.works

    Money. It’s always about money. But more seriously, I also wonder what the point is, since all my interactions with GenAI have been disappointment after disappointment. But I’ve read devs saying that it’s great at creating drafts.

  • Contramuffin@lemmy.world

    The best use is to ask it questions that you’re not sure how to ask. Sometimes you come across a problem that you’re not really even sure how to phrase, which makes Googling difficult. LLMs would at least give you a better sense of what to Google.

  • Schorsch@feddit.org

    It’s kinda handy if you don’t want to take the time to write a boring email to your insurance or whatever.

    • Pechente@feddit.org

      I get the point here, but I think it’s the wrong approach. If you feel the email needs too much business fluff, just write it more casually and get to the point quicker.

    • Random Dent@lemmy.ml

      Yeah that’s how I use it, essentially as an office intern. I get it to write cover letters and all the other mindless piddly crap I don’t want to do so I can free up some time to do creative things or read a book or whatever. I think it has some legit utility in that regard.

    • Odelay42@lemmy.world

      I sorta disagree, though, based on my experience with LLMs.

      The email it generates will need to be read carefully and probably edited to make sure it conveys your point accurately. Especially if it’s related to something as serious as insurance.

      If you already have to specifically create the prompt, then scrutinize and edit the output, you might as well have just written the damn email yourself.

      It seems only useful for writing slop that doesn’t matter and only gets consumed by other machines, dutifully logged away in a slop container.

      • Random Dent@lemmy.ml

        It does sort of solve the ‘blank page problem’ though IMO. It sometimes takes me ages to start something like a boring insurance letter because I open up LibreOffice and the blank page just makes me want to give up. If I have AI just fart out a letter and then I start to edit it, I’m already mid-project so it actually does save me some time in that way.

        • iamanurd@midwest.social

          I agree. By the time I’m done, I’ve written most of the document. It gets me past the part where I procrastinate because I don’t know how to begin.

      • Scrubbles@poptalk.scrubbles.tech

        For those of us who are bad at writing, though, that’s exactly why we use it. I’m bad with greetings, structure, the things people expect, and I’ve had people get offended by my emails because they come off as rude. I don’t notice those things. For that, LLMs have been a godsend. Yes, I of course have to validate the output, but it usually conveys the message I’m trying to get across.

  • GuyFi@lemmy.sdf.org

    I have personally found it fantastic as a programming aid and as a writing aid for song lyrics. The art it creates lacks soul and any sense of being actually good, but it’s great as an “oh, I could do this cool thing” inspiration machine.

  • dingus@lemmy.world

    Never used it until recently. Now I use it to vent because I’m a crazy person.

  • UnRelatedBurner@sh.itjust.works

    Just today I needed a PDF with filler English text, not lorem ipsum. ChatGPT was perfect for that. Other times, when I’m writing something, I use it to check grammar. It’s way better at that than Grammarly IMO, and faster, and it makes the decisions for me, BUT PROOF-READ IT. If you really fuck the tenses up it won’t know how to correct them; it’ll make things up. Besides these: text manipulation. I could learn vim, write a script, or I could just type “remove the special characters”, hit enter -> done.
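
    For comparison, the script route for that last one might look something like this (a rough Python sketch; what counts as a “special character” here is my own assumption):

    ```python
    import re
    import sys

    # Keep letters, digits, whitespace, and basic punctuation; drop everything else.
    # Adjust the character class to taste: 'special' is whatever you decide it is.
    ALLOWED = re.compile(r"[^A-Za-z0-9\s.,!?'-]")

    def strip_special(text: str) -> str:
        return ALLOWED.sub("", text)

    if __name__ == "__main__":
        print(strip_special(sys.stdin.read()))
    ```

    Pipe text through it and you get the cleaned version back; the one-line prompt just saves you writing and remembering this.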

    I use Perplexity for syntax. I don’t code with it, but it’s the perfect one-stop shop for “how does this work in this lang again” while coding. For advanced/new/unpopular APIs it’s back to the old-school docs, but you can try giving it the link so it parses them for you; it’s usually wonky though.

  • mindbleach@sh.itjust.works

    What doesn’t exist yet, but is obviously possible, is automatic tweening. Human animators spend a lot of time drawing the drawings between other drawings. If they could just sketch out what’s going on, about once per second, they could probably do a minute in an hour. This bullshit makes that feasible.

    We have the technology to fill in crisp motion at whatever framerate the creator wants. If they’re unhappy with the machine’s guesswork, they can insert another frame somewhere in-between, and the robot will reroute to include that instead.

    We have the technology to let someone ink and color one sketch in a scribbly animatic, and fill that in throughout a whole shot. And then possibly do it automatically for all labeled appearances of the same character throughout the project.

    We have the technology to animate any art style you could demonstrate, as easily as ink-on-celluloid outlines or Phong-shaded CGI.

    Please ignore the idiot money robots who are rendering eye-contact-mouth-open crowd scenes in mundane settings in order to sell you branded commodities.

      • mindbleach@sh.itjust.works

        I had not. There’s a variety of demos for guessing what comes between frames, or what fills in between lines… because those are dead easy to train from. This technology will obviously be integrated into the process of animation, so anything predictable Just Works, and anything fucky is only as hard as it used to be.

    • Mr_Blott@feddit.uk

      For the 99% of us who don’t know what tweening is and were scared to Google it in case it was perverted: it’s short for in-betweening and means the intermediate frames of an animation drawn between two key frames.

  • theunknownmuncher@lemmy.world

    It has value in natural language processing, like turning unstructured natural language data into structured data. It’s not suitable for all situations, though, such as ones that cannot tolerate hallucinations.
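
    As a concrete illustration of that structured-data use, here is a minimal sketch assuming the official OpenAI Python client; the model name, prompt, and field names are all made up for the example:

    ```python
    import json
    from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_ticket(raw_email: str) -> dict:
        """Turn a free-form support email into a structured record (hypothetical fields)."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # arbitrary choice of chat model
            messages=[
                {"role": "system",
                 "content": "Reply with JSON only, using the keys customer, product, issue, urgency."},
                {"role": "user", "content": raw_email},
            ],
        )
        # The reply is plain text; it can be malformed or hallucinated, so validate downstream.
        return json.loads(resp.choices[0].message.content)

    # extract_ticket("Hi, my Foo 3000 won't turn on and I need it for work tomorrow!")
    ```

    The point is the shape of the task: messy text in, machine-readable fields out, with validation on top because hallucination is still possible.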

    It’s also good for reorganizing information and presenting it in a different format, and for classifying the semantic meaning of text. It’s good for pretty much anything dealing with semantic meaning, really.

    I often see people trying to use generative AI as a knowledge store, such as asking an AI assistant factual questions, but that is an invalid use case.

  • nafzib@feddit.online

    I have had some decent experiences with Copilot and coding in C#. I asked it to help me figure out what was wrong with a LINQ query I was doing with an XDocument, and it pointed me in the right direction so I could figure it out. It also occasionally offers some super useful autocomplete blocks of code that actually match the pattern of what I’m doing.

    As for art and such, sometimes people just want to see some random bizarre thing realized visually that they don’t have the ability (or time/dedication) to realize themselves, and it’s not something serious that they would be commissioning an artist for anyway. I used Bing Image Creator recently to generate a little character portrait for an online D&D game I’m playing in, since I couldn’t find quite what I was looking for with an image search (which is what I usually do for those).

    I’ve seen managers at my job use it to generate fun, relevant imagery for slideshows that otherwise would’ve been random boring stock images (or just text).

    It has actual helpful uses, but every major corporation with a stake in it has added to, or listened to, the propaganda really hard, which has caused problems for some people, like the idiot who proudly fired all of his employees because he had replaced their jobs with automation and AI, then started hunting for actual employees to hire again a couple of months later because everything was terrible and nothing worked right.

    They’re just tools that can potentially aid people, but they’re terrible replacements for actual people. I write automated tests for a living, and companies will always need people for that. If they fired me and the other QAs tomorrow, things would be okay for a short while thanks to the automation we’ve built, but as more and more code changes go into our numerous and labyrinthine systems, more and more bugs would get through without someone to maintain the automation.

  • TORFdot0@lemmy.world

    If you don’t know what you’re doing and ask an LLM for code, you’re going to waste time debugging it without understanding it. But if you’re just asking it for boilerplate stuff, or asking it to add comments and console printouts to existing code for debugging, it’s really great for that. Sometimes it needs chastising or corrections, but so do humans.

    I find it very useful, but not worth the environmental cost or even the monetary cost. With how enshittified Google has become, though, I find that ChatGPT has become a necessary evil for finding reliable answers to simple queries.

  • weeeeum@lemmy.world

    I think LLMs could be great if they were used for education and learning and trained on good data. The Encyclopaedia Britannica is building an AI trained exclusively on its own data.

    It also leaves room for writers to keep adding to the database, providing broader knowledge for the AI, so people keep their jobs.

  • waka@discuss.tchncs.de

    Another valid use for GPTs is getting started on ideas and projects: sorting out mental messes, pulling useful data out of huge clusterfucks of text, getting a general direction.

    Current downsides: you cannot expect factual answers on topics it has no access to, as it’ll hallucinate on these without telling you; many GPT providers use your data, so you cannot ask it about sensitive topics directly; and it’ll forget data points if your conversation goes on too long.

    As for image generation, it’s still often stuck in the uncanny valley. Right now it mostly benefits animation-style work in the amateur realm; I can’t say how much it is used professionally.

    All of these are things you could certainly do yourself, and often better or faster than an AI. But sometimes you just need a good-enough solution, and that’s where GPTs shine more and more often. It’s just another form of automation: fine if used for repetitive or mindless tasks. Just don’t expect it to build you a piece of fully working, bug-free software merely by asking. That’s not how automation works. At least not to date.

  • Affidavit@lemm.ee

    I’d say there are probably as many genuine use-cases for AI as there are people in denial that AI has genuine use-cases.

    Top of my head:

    • Text editing. Write something (e.g. e-mails, websites, novels, even code) and have an LLM rewrite it to suit a specific tone and identify errors.
    • Creative art. You claim generative AI art is soulless and poor quality; to me, that indicates a lack of familiarity with what generative AI is capable of. There are tools to create entire songs from scratch, replace the voice of one artist with another, remove unwanted background noise from songs, improve the quality of old songs, separate or add vocal tracks, turn 2D models into 3D models, create images from text, convert simple images into complex ones, fill in missing details in images, upscale and colourise images, and separate foregrounds from backgrounds.
    • Note taking and summarisation (e.g. summarising meeting minutes or summarising a conversation or events that occur).
    • Video games. Imagine the replay value of a video game if every time you play there are different quests, maps, NPCs, unexpected twists, and different puzzles? The technology isn’t developed enough for this at the moment, but I think this is something we will see in the coming years. Some games (Skyrim and Fallout 4 come to mind) have a mod that gives each NPC AI generated dialogue that takes into account the NPC’s personality and history.
    • Real-time assistance for a variety of tasks. Consider a call centre environment as one example: a model can be optimised to evaluate calls for language, empathy, and correctness of information. A model could also be set up with a call centre’s knowledge base, listen to the call, locate information relevant to the caller’s enquiry, and tell the agent where it is (or even suggest what to say, though this is currently prone to hallucination); see the toy sketch of the retrieval step below.
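
    A toy sketch of that retrieval step (plain keyword overlap standing in for a real embedding search; the knowledge-base entries are invented):

    ```python
    # Toy retrieval: score each knowledge-base article by word overlap with the
    # caller's question. A production system would use embeddings and a real
    # vector search, but the shape of the idea is the same.
    KNOWLEDGE_BASE = {
        "refund policy": "Refunds are available within 30 days with a receipt.",
        "password reset": "Send the customer a reset link from the account page.",
        "shipping times": "Standard shipping takes 3-5 business days.",
    }

    def best_article(query: str) -> tuple[str, str]:
        words = set(query.lower().split())
        def score(item: tuple[str, str]) -> int:
            title, body = item
            return len(words & set((title + " " + body).lower().split()))
        return max(KNOWLEDGE_BASE.items(), key=score)

    print(best_article("Caller is asking about your refund policy for a return"))
    ```

    An agent-assist tool would surface the matched article while the call is still in progress.
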
  • mindbleach@sh.itjust.works

    Video generators are going to eat Hollywood alive. A desktop computer can render anything, just by feeding in a rough sketch and describing what it’s supposed to be. The input could be some kind of animatic, or yourself and a friend in dollar-store costumes, or literal white noise. And it’ll make that look like a Pixar movie. Or a photorealistic period piece starring a dead actor. Or, given enough examples, how you personally draw shapes using chalk. Anything. Anything you can describe to the point where the machine can say it’s more [thing] or less [thing], it can make every frame more [thing].

    Boring people will use this to churn out boring fluff. Do you remember Terragen? It’s landscape rendering software, and it was great for evocative images of imaginary mountains against alien skies. Image sites banned it, by name, because a million dorks went ‘look what I made!’ and spammed their no-effort hey-neat renders. Technically unique - altogether dull. Infinite bowls of porridge.

    Creative people will use this to film their pet projects without actors or sets or budgets or anyone else’s permission. It’ll be better with any of those - but they have become optional. You can do it from text alone, as a feral demo that people think is the whole point. The results are massively better from even clumsy effort to do things the hard way. Get the right shapes moving around the screen, and the robot will probably figure out which ones are which, and remove all the pixels that don’t look like your description.

    The idiots in LA think they’re gonna fire all the people who write stories. But this gives those weirdos all the power they need to put the wild shit inside their heads onto a screen in front of your eyeballs. They’ve got drawers full of scripts they couldn’t hassle other people into making. Now a finished movie will be as hard to pull off as a decent webcomic. It’s gonna get wild.

    And this’ll be great for actors, in ways they don’t know yet.

    Audio tools mean every voice actor can be a Billy West. You don’t need to sound like anything, for your performance to be mapped to some character. Pointedly not: “mapped to some actor.” Why would an animated character have to sound like any specific person? Do they look like any specific person? Does a particular human being play Naruto, onscreen? No. So a game might star Nolan North, exclusively, without any two characters really sounding alike. And if the devs need to add a throwaway line later, then any schmuck can half-ass the tone Nolan picked for little Suzy, and the audience won’t know the difference. At no point will it be “licensing Nolan North’s voice.” You might have no idea what he sounds like. He just does a very convincing… everybody.

    Video tools will work the same way for actors. You will not need to look like anything, to play a particular character. Stage actors already understand this - but it’ll come to movies and shows in the form of deep fakes for nonexistent faces. Again: why would a character have to look like any specific person? They might move like a particular actor, but what you’ll see is somewhere between motion-capture and rotoscoping. It’s CGI… ish. And it thinks perfect photorealism is just another artistic style.