Image from a based Chinese artist on Twitter @Amogha_Pasa

  • Hasch@lemmygrad.ml · 4 days ago

    Non-generative models like RNNs or symbolic AI, sure. Generative models warrant a special caveat.

    Such models, especially LLMs, differ from other technology in that even when proletarians use them for their own aims, they carry a subtle danger hidden behind the facade of a simple chatbot. Over time, the user is misled into feeling that they are talking to another human, which has already led to suicides and murders instigated by such chatbots.

    This is not because there is anything spooky about the technology itself (it is just linear algebra and probability theory running on a big computer), but because the human mind tends to project its own thoughts and feelings onto other people, animals, and indeed objects. However, unlike with other humans, the user of an LLM will be far less careful, observant, and restrained in these fantasies, especially since the model is eager to please and will encourage them to continue down that line.

    If a socialist society is to put generative models to use, it must somehow prevent “discussions” with the model from venturing into psychologically suspicious territory. An LLM must never replace human interaction, it must not venture guesses about your friends and family, and above all it must NEVER be used for therapy. Nothing good can come from pretending it cares about you. Only then, and after following the many other guardrails already in place for both training and use, can we even claim to enjoy its benefits.