

Took me a second to realise this was LW.
I always had a vague outline of just how fucked up the rats are, but reading this put everything sharply into perspective for me. God damn, this was good.
And just a couple paragraphs before that:
Reading HPMOR gave me a sense of crushing second-hand despair that I’ve only previously experienced when finding out things about Chris-Chan. It really is that bad.
(the real cognitohazard was the friends we made along the way)
…You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer’s self-hype) I would be even more pissed at him and sneer even harder at him. He basically set himself up as a critical savior to mankind, one of the only people clear-sighted enough to see the real dangers and the most important question… and then he totally failed to deliver. Not only that, he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!
As the cherry on top of this shit sundae, the bubble caused by said hype dealt devastating damage to the Internet and the world at large in spite of failing to create the unaligned AGI Yud was doomsaying about, and made people more vulnerable to falling for the plagiarism-fueled lying machines behind said bubble.
Now we need to make a logic puzzle involving two people and one cup. Perhaps they are trying to share a drink equitably. Each time, they drink one third of the cup’s remaining volume.
Step one: Drink two-thirds of the cup’s volume
Step two: Piss one sixth of the cup’s volume
Problem solved
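For what it’s worth, the “take turns drinking a third of what’s left” scheme isn’t actually equitable: summing the geometric series, the first drinker ends up with 3/5 of the cup and the second with 2/5. A quick sketch (hypothetical, just simulating the turns):

```python
# Two people alternate drinking one third of the cup's remaining volume.
# Track each person's total; the split converges to 3/5 vs 2/5.
remaining = 1.0
totals = [0.0, 0.0]
for turn in range(100):  # plenty of turns for the leftover to vanish
    sip = remaining / 3
    totals[turn % 2] += sip
    remaining -= sip

print(round(totals[0], 6), round(totals[1], 6))  # → 0.6 0.4
```

So the joke answer really is as good as any: the “proper” procedure never splits the drink evenly in the first place.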
Two ferrymen and three boats are on the left bank of a river. Each boat holds exactly one man. How can they get both men and all three boats to the right bank?
Officially, you can’t. Unofficially, just have one of the ferrymen tow a boat.
Found a neat tangent whilst going through that thread:
The single most common disciplinary offense on scpwiki for the past year+ has been people posting AI-generated articles, and it is EXTREMELY rare for any of those cases to involve a work that had been positively received
On a personal note, I expect the Foundation to become a reliable source of post-'22 human-made work, for the same reasons I recently gave for Newgrounds:
An explicit ban on AI slop, which deters AI bros and allows staff to nuke it on sight
A complete lack of an ad system, which prevents content farms from setting up shop
Dedicated quality control systems (deletion and rewrite policies, in this case) which prevent slop from gaining a foothold and drowning out human-made work
Nitpicking, but at what point do we start calling it race pseudoscience?
“Hating Black People” would be a more fitting name.
Not to mention, these discussions partially enabled the rise of technofascism, making Yudkowsky and friends outright complicit in this shit.
Found a primo response in the replies:
Hot take: A lying machine that destroys your intelligence and mental health is unsafe for everyone, mentally ill or no
And sadly, it wasn’t somebody’s porn alt.
Wow Trannyporno really put that depraved woman in her place
That can’t be a real username. That fucking can’t.
Ran across a BlueSky thread that fits this perfectly - it’s a social sciences and humanities reading list on AI in education.
Here’s my first shot at it:
“Imagine if the stereotypical high-school nerd became a supervillain.”
>proceeds to punt this goal decades or centuries down the road by helping to justify a tech bubble which consumes tons of R&D resources for no apparent benefit and will tie up further resources in the future to adapt to an aggravated climate crisis, and by inspiring a slew of technofascists too dumb to tell the difference between tech that benefits mankind and tech that exploits and oppresses
Not to mention, the aforementioned bubble’s given us shit like some jackasses’ ghoulish (and failed) attempt to “revive” George Carlin, attempts to automate end-of-life care, “AI seances” designed to scam the grieving, and God-knows-what-else.
So, the very concept of “defeating death with technology” has probably been thoroughly discredited as impossible, inherently ghoulish, or a combo of the two.
It is technically correct to call Yud a “renowned AI researcher”, but saying someone’s renowned in a pseudoscience such as AI is hardly singing their praises.
I didn’t mean to link something else, I just mangled my description. Thanks for catching it.
A piece like this would dovetail nicely with Baldur’s deep-dive into AI’s link to esoteric fascism. Hope to see it get finished.
I haven’t seen STRANGE ÆONS’ video about HPMOR here, so I’d strongly recommend checking that out.