It can predict that the word “scales” is unlikely to appear near “books”. Do you understand what I mean now? Sorry, neural networks can’t understand things. Can you make predictions based on what senses you received now?
Well, given that an LLM produced the nonsense riddle above, it obviously cannot predict that. It can predict the structure of a riddle perfectly well; it can even get the rhyming right! But the extra layer of meaning involved in a riddle is beyond what LLMs are able to do at the moment. At least, all of the ones I've seen seem to fall flat at this level of abstraction.
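For what it's worth, the "scales"/"books" claim above is easy to poke at empirically. Here is a minimal sketch (not from either commenter) of how one might compare next-word probabilities with an off-the-shelf model, assuming the Hugging Face `transformers` library and the public "gpt2" checkpoint; the prefix sentence is just an illustrative example.

```python
# Minimal sketch: compare how likely a language model thinks two candidate
# words are as the next word after a book-ish context. Assumes the Hugging
# Face `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_word_prob(prefix: str, word: str) -> float:
    """Probability the model assigns to `word` as the next token after `prefix`."""
    prefix_ids = tokenizer.encode(prefix, return_tensors="pt")
    # Use the first sub-token of the candidate word; the leading space matters for GPT-2.
    word_id = tokenizer.encode(" " + word)[0]
    with torch.no_grad():
        logits = model(prefix_ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)[word_id].item()

prefix = "The library shelves were stacked with old"  # hypothetical test context
print(f"P('books'  | context) = {next_word_prob(prefix, 'books'):.4f}")
print(f"P('scales' | context) = {next_word_prob(prefix, 'scales'):.4f}")
```

On a context like this, one would expect "books" to come out far more probable than "scales", which is the co-occurrence prediction the comment is describing; whether that counts as "understanding" is exactly what the thread is arguing about.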