• WalnutLum@lemmy.ml
    edited 1 month ago

    Reminder that all these chat-formatted LLMs are just text-completion engines trained on text formatted like a chat. You’re not having a conversation with it; it’s “completing” the chat history you provide, by randomly(!) choosing the next text tokens that seem like they best fit the text so far.
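
    To make that concrete, here’s a minimal sketch of how a “conversation” gets flattened into one completion prompt. The <|user|>/<|assistant|> markers are made up for illustration; every model family defines its own chat template:

    ```python
    # Sketch: a "chat" is just serialized into one flat text prompt.
    # The <|user|> / <|assistant|> markers are illustrative, not any real template.
    chat_history = [
        ("user", "What's the capital of France?"),
        ("assistant", "Paris."),
        ("user", "And of Germany?"),
    ]

    prompt = ""
    for role, text in chat_history:
        prompt += f"<|{role}|>\n{text}\n"
    prompt += "<|assistant|>\n"  # the model simply keeps completing from here

    print(prompt)
    ```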

    If you don’t directly provide, in the chat history and/or the text-completion prompt, the information you’re trying to retrieve, you’re essentially fishing a sea of random tokens for text that merely seems like it fits the question.

    It will always complete the text. Even when every token it could choose fits the context only minimally, it picks the best text it can, but it will always complete the text.
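
    A toy sampler shows why: softmax plus a weighted draw always hands back some token, even when nothing fits well. The logit numbers below are invented:

    ```python
    import math
    import random

    def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
        # Softmax with temperature; higher temperature = more random picks.
        scaled = [v / temperature for v in logits.values()]
        top = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(v - top) for v in scaled]
        # random.choices always returns a token -- there is no "nothing fits" branch.
        return random.choices(list(logits), weights=weights, k=1)[0]

    # Invented scores where no candidate fits the context well;
    # a token still comes out every single time.
    bad_fit = {" blue": 0.11, " seven": 0.10, " cheese": 0.09}
    print(sample_next_token(bad_fit))
    ```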

    This is how they work, and anything else is usually the company putting in a bunch of guide bumpers that reformat prompts to coax the model into responding in a “smarter” way (see GPT-4o and “chain-of-thought” reasoning).
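
    One of the simplest such bumpers is just rewriting the prompt before completion, e.g. the well-known “think step by step” trick. This is a sketch under that assumption, not any vendor’s actual pipeline:

    ```python
    def add_reasoning_bumper(user_prompt: str) -> str:
        # Sketch of prompt reformatting: the model never "decides" to reason;
        # the serving layer rewrites the text it is asked to complete.
        return (
            f"{user_prompt}\n"
            "Let's think step by step, then state the final answer.\n"
        )

    print(add_reasoning_bumper("Is 17077 prime?"))
    ```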

    • HackerJoe@sh.itjust.works
      1 month ago

      They were trained on Reddit. How much would you trust a chatbot whose brain consists of the entirety of Reddit put in a blender?

      I am amazed it works as well as it does. Gemini only occasionally tells people to kill themselves.