booty [he/him]

  • 0 Posts
  • 87 Comments
Joined 4 years ago
Cake day: August 11th, 2020


  • Knowing the answer to some of history’s biggest mysteries, because you were there, but being unable to speak about them because (1) that would expose you, and (2) nobody would believe you anyway, since nobody expects you to be THAT old.

    IDK, I feel like researching to find supporting evidence for a theory you already know is correct would be much easier than trying to piece together a theory from no information. I think you could put the truth out there as a credible and well-regarded theory, even if there are incorrect alternative theories that people also have to consider.


  • I definitely heard the phrase before the show came out.

    No, you absolutely did not, unless you were talking to dorks who had started using out-of-context, nonsensical phrases from their favorite fantasy book. If you think you heard the term as anything other than a reference to ASOIAF, you are misremembering. Its origins do not go back to the 1800s. In this context, the term refers to a child who has lived their entire life during the years-long summers of the world of ASOIAF. That is what it means.


  • My first instinct was A, at the base of the neck. But now that I think about it, I agree with this more. You could argue that the joint is where the neck really begins, and that the narrow part beneath it is still part of the body. And I think it would look better (and more professional!) if our weevil friend wore his tie there.


  • Have you ever used an LLM?

    Here’s a screenshot I took after spending literally 10 minutes with ChatGPT while it very confidently stated incorrect answers to a simple question over and over (from this thread). Not only is it completely incapable of coming up with a very simple correct answer to a very simple question, it is also completely incapable of responding coherently to the fact that none of its answers are correct. Humans don’t behave this way. Nothing that understands what is being said would respond this way. It responds this way because it has no understanding of the meaning of anything that is being said. It is responding based on statistical likelihoods of words and phrases following one another, like a Markov chain but slightly more advanced (see the toy sketch below).
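
    To make the Markov chain comparison concrete, here is a minimal sketch of that idea in Python. The corpus is made up for illustration; the point is that text gets generated purely from co-occurrence statistics, with no model of meaning anywhere.

```python
import random
from collections import defaultdict

# Toy corpus; a real model is trained on billions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Order-1 Markov chain: for each word, record every word that follows it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    """Emit words by sampling statistically likely successors."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # no meaning involved, only frequency
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat ate the mat and the cat sat"
```

    An LLM replaces the word-count table with a neural network conditioned on the whole prompt, but the job is the same: pick a plausible next token.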


  • I don’t see how it could be measured except from looking at inputs & outputs.

    Okay, then consider that when you input something into an LLM and regenerate the response a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning, proving that it does not have any functional understanding of anything and instead simply outputs random noise that sometimes looks like what someone who actually understood the content in question would produce.


  • So are LLMs reliable for research like that?

    No. Of course not. They’re not reliable for anything. They don’t have any kind of database of facts and don’t know or attempt to know anything at all.

    They’re just a more advanced version of your phone’s predictive text. All they do is try to figure out which words most likely go in what order as a response to the prompt. That’s it. There is no logic of any kind dictating what an LLM outputs.
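
    As a rough picture of that “predictive text” framing, here is a tiny sketch. The word counts below are invented; a phone keyboard learns something like this from your typing history, and an LLM’s learned distribution plays the same role, just conditioned on the whole prompt instead of a single previous word.

```python
# Hypothetical next-word counts a phone keyboard might have learned.
next_word_counts = {
    "good": {"morning": 41, "night": 30, "luck": 12},
    "see": {"you": 57, "it": 9, "the": 8},
}

def suggest(previous_word):
    """Return the statistically most likely next word: no facts, no logic."""
    candidates = next_word_counts.get(previous_word, {})
    return max(candidates, key=candidates.get) if candidates else None

print(suggest("good"))  # -> "morning"
print(suggest("see"))   # -> "you"
```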