Let’s grow
Actually, popcorn is surprisingly a good source of fiber. It’s a pretty healthy snack if you avoid the butter/oil/salt.
I’m afraid oils are pretty dang unhealthy. See 1, 2, 3, and more here.
There is truth to what you speak, but that doesn’t change the fact I stated above. And eating only plants certainly lowers the death toll and inhumane treatment that you are contributing to by a huge amount.
Killing and eating animals is pretty fucked.
Now this is what I assume usually happens on 4chan.
Woah, when and how did you learn that?
Ur fukd up
I don’t understand… but I like it.
I mean, this is a possibility. But a lot of workers’ rights were improved under the Biden administration; I wouldn’t be surprised if this one happens as well.
In related news: watch Democracy Now’s coverage of “The Night Won’t End,” which features the audio from Hind with translation in part of it.
I’m a big hugger. I wish there was more affection between men. I often worry I’m making other men uncomfortable, and then in turn I get uncomfortable about it. The whole thing makes me far more stressed than I wish it did, honestly.
I liked this bit.
“My entire bladder hurts. I was holding back throw-up. My legs are killing me,” said Daniel Allen, one of the individuals on the ride who called the 25-minute ordeal “just crazy.”
Teens who were stuck on the ride told KOIN 6 News being suspended in the air like that made them think about life.
“People praying to God, screaming for their life, throwing up, passing out, it was bad,” Jordan Harding said.
A woman who witnessed the ride getting stuck from the ground recalled praying for those upside down and strapped in for nearly a half-hour.
“The ride went up, the kids got stuck on the ride and they were just dangling,” said Lavina Waters. “And somebody came in and said ‘hey, the kids are stuck on the ride’ and I look up, and sure enough, they were stuck on the ride.”
Why you gotta rip on pitbulls, man? Don’t compare them with the likes of those.
I’ve been wanting to find more peeps to follow on Mastodon for a while, so I did some research. Here are some lists and resources to find people.
Follow #introduction #introductions #followfriday
Trunk — https://communitywiki.org/trunk
Fediverse — https://fediverse.info/explore/people
Fedi Directory — https://fedi.directory/
I saw a YT vid where they mentioned that you hear things differently at a 45-degree angle because of the different heights of your ears. So it makes sense as a behavior associated with trying to better understand and observe something.
Definitely. The thing you might want to consider as well is what you are using it for. Is it professional? Not reliable enough. Is it to try to understand things a bit better? Well, it’s hard to say if it’s reliable enough, but it’s heavily biased just as any source might be, so you have to take that into account.
I don’t have the experience to tell you how to suss out its biases. Sometimes, you can push it in one direction or another with your wording. Or with follow-up questions. Hallucinations are a thing but not the only concern. Cherrypicking, lack of expertise, the bias of the company behind the llm, what data the llm was trained on, etc.
I have a hard time understanding what a good way to double-check your llm is. I think this is a skill we are currently learning, just as we have been learning how to suss out the bias in a headline or an article based on its author, publication, platform, etc. But for llms, it feels fuzzier right now. For certain issues, it may be less reliable than for others as well. Anyways, that’s my ramble on the issue. Wish I had a better answer; if only I could ask someone smarter than me.
Oh, here’s GPT-4o’s take.
When considering the accuracy and biases of large language models (LLMs) like GPT, there are several key factors to keep in mind:
1. Training Data and Biases
2. Accuracy and Hallucinations
3. Context and Ambiguity
4. Updates and Recency
5. Mitigating Biases and Ensuring Accuracy
6. Ethical Considerations
In summary, while LLMs can provide valuable assistance in generating text and answering queries, their accuracy is not guaranteed, and their outputs may reflect biases present in their training data. Users should use them as tools to aid in tasks, but not as infallible sources of truth. It is essential to apply critical thinking and, when necessary, consult additional reliable sources to verify information.