• chaogomu@kbin.social
    1 year ago

    The thing is, the LLM doesn’t actually know anything, and lies about it.

    So you go to How Stuff Works now, and you get bullshit lies instead of real information. You'll also get text that looks like language at first glance but is gibberish pretending to be an article, because sometimes the language model changes topics midway through and doesn't correct itself. It can't correct itself; it doesn't actually know what it's saying.

    See, these language models are pre-trained; that's the P in ChatGPT. They just regurgitate their training data, reassembled in ways that sort of look like more of the same training data.
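    A toy way to see what "reassembling training data so it looks like more training data" means: a bigram Markov sampler. This is a deliberately crude illustration, not how GPT actually works (a transformer is vastly more sophisticated), but it shows the same basic property the comment describes: each next word is chosen only because it followed the previous word somewhere in the training text, with no understanding of the whole.

    ```python
    import random
    from collections import defaultdict

    # Tiny "training corpus" -- purely illustrative.
    training = "the cat sat on the mat the dog sat on the rug".split()

    # Record which words follow which in the training data.
    follows = defaultdict(list)
    for a, b in zip(training, training[1:]):
        follows[a].append(b)

    def generate(start, n):
        """Emit up to n words, each sampled from what followed the last word."""
        out = [start]
        for _ in range(n):
            candidates = follows.get(out[-1])
            if not candidates:
                break  # dead end: the last word never had a successor
            out.append(random.choice(candidates))
        return " ".join(out)

    print(generate("the", 8))
    ```

    Every adjacent word pair in the output appears somewhere in the training text, so locally it always "looks right", yet the generator has no idea what any sentence is about. That locally-plausible, globally-clueless quality is the failure mode being described.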

    There are some hard-coded filters and canned responses, but other than that, nope, just a spew of garbage out from the random garbage in.

    And yet, all sorts of people think this shit is ready to take over writing duties for everyone, saving money and winning court cases.

    • sugar_in_your_tea@sh.itjust.works
      1 year ago

      Yeah, this is why I can’t really take anyone seriously when they say it’ll take over the world. It’s certainly cool, but it’s always going to be limited in usefulness.

      Some areas I can see it being really useful are:

      • generating believable text - scams, placeholder text, and general structure
      • distilling existing information - especially if it can actually cite sources, but even then I’d take it with a grain of salt
      • trolling people/deep fakes

      That’s about it.