• AppleTea@lemmy.zip · 7 days ago

    “AI” in fiction has meant a machine with a mind like the one people have. It’s had that meaning for decades. Very recently, there are programmes that do predictive text like what your phone does, but large. You can call the predictive text programme an “AI”, but as the novelty wears off, it’s gonna sound more and more like advertising and less like a real description.

    • jsomae@lemmy.ml · 7 days ago

      I think it’s incredible that so much of what the human brain can do can be emulated with predictive models. It makes sense in retrospect – human brains are doing prediction at every level that we can model.

      • AppleTea@lemmy.zip · edited · 7 days ago

        A statistical model strings a sentence together with a great big web of statistical weights, settling onto the next most probable word, one by one. People write with the intent to share a meaning. It is not the same.

        That statistical (or “predictive”, if we’re gussying it up) model has no understanding in it - no more than any other programme. It’s a physical chain reaction, a calculation that runs until the sums even out to a state of rest. Wipe the web of statistical weights clean, and re-weigh them so the sums spit out the colour of pixels in a JPEG rather than the content of a .txt document.

        Hell, weigh the web at random and have it spit out nonsense numbers. It’ll do that for as long as you keep the programme up. It will never ask you why you took the meaning out of its task. The machine makes no distinction between the sorts of calculation you run on it – people are the ones who project meaning onto the blinking lights.
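The "settling onto the next most probable word, one by one" mechanism can be sketched in a few lines. This is a toy illustration with an entirely made-up table of weights standing in for the billions a real model learns; the control flow is the chain reaction being described: look up scores, pick the most probable word, append, repeat.

```python
# Toy next-word predictor: a hand-made "web of statistical weights"
# mapping each word to scores for possible next words.
weights = {
    "the":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "sat":  {"down": 0.9, "up": 0.1},
    "down": {},  # nothing follows, so generation stops
}

def generate(start, max_words=10):
    words = [start]
    while len(words) < max_words:
        options = weights.get(words[-1], {})
        if not options:
            break
        # settle on the next most probable word, one by one
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # -> the cat sat down
```

Nothing in the loop knows or cares what the strings mean; swap the table for random numbers and it runs exactly the same way.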

        • jsomae@lemmy.ml · 7 days ago

          You could say the same thing about rewiring a human’s neurons randomly. It’s not the powerful argument you think it is.

          We don’t really know exactly how brains work. But when, say, Wernicke’s area is damaged (but not Broca’s area), then you can get people spouting meaningless but syntactically valid sentences that look a lot like autocorrect. So it could be that there’s some part of our language process which is essentially no more or less powerful than an LLM.

          Anyway, it turns out that you can do a lot with LLMs, and they can reason (insofar as they can produce logically valid chains of text, which is good enough). The takeaway for me is not that LLMs are really smart – rather it’s that the MVP of intelligence is a lot lower a bar than anyone was expecting.