• GissaMittJobb@lemmy.ml
    5 months ago

    LLMs do not work that way. They are a bit less smart about it.

    This is also why the first few generations of LLMs could never solve trivial math problems properly - it’s because they don’t actually do the math, so to speak.
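As a toy analogy (not a real LLM, just a hypothetical illustration of recall without computation): a system that has only memorized question–answer pairs will answer seen problems correctly and fail on anything outside its training set, because no arithmetic is ever performed.

```python
# Hypothetical "training set" of memorized (a, b) -> answer pairs.
memorized = {("2", "3"): "5", ("10", "7"): "17"}

def toy_predict(a: str, b: str) -> str:
    # Pure recall: the pair is looked up, never actually added.
    return memorized.get((a, b), "unknown")

print(toy_predict("2", "3"))    # seen during "training", answered correctly
print(toy_predict("41", "59"))  # unseen, so recall fails
```

Real LLMs generalize far better than a lookup table, but the point stands: pattern completion over tokens is not the same as executing an addition algorithm.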

    • tyler@programming.dev
      5 months ago

      Overtraining has actually been shown to result in emergent math behavior (in multiple independent studies), so that is no longer true. In those studies, the training math samples were “poisoned” with incorrect answers to example math questions. Initially the LLM responds with the incorrect memorized answers, but when overtrained it finally “figures out” the underlying math and is able to solve the problems correctly, even for the poisoned questions.