Gah. I’ve been nerd sniped into wanting to explain what LessWrong gets wrong.
There’s a “critique of functional decision theory”… which turns out to be a blog post on LessWrong… by “wdmacaskill”? That MacAskill?!
If you want to read Yudkowsky’s explanation for why he doesn’t spend more effort on academia, it’s here.
spoiler alert: the grapes were totally sour
We have a few Wikipedians who hang out here, right? Is a preprint by Yud and co. a sufficient source to base an entire article on “Functional Decision Theory” upon?
You might think that this review of Yud’s glowfic is an occasion for a “read a second book” response:
Yudkowsky is good at writing intelligent characters in a specific way that I haven’t seen anyone else do as well.
But actually, the word intelligent is being used here in a specialized sense to mean “insufferable”.
Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.
Ah, the book that isn’t actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn’t sufficiently self-aware to know that’s what she was writing.
Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.
I’m trying, but I can’t not donate any harder!
The most popular LessWrong posts, SSC posts or books like HPMoR are usually people’s first exposure to core rationality ideas and concerns about AI existential risk.
Unironically the better choice: https://archiveofourown.org/donate
I think Eliezer Yudkowsky & many posts on LessWrong are failing at keeping things concise and to the point.
The replies: “Kolmogorov complexity”, “Pareto frontier”, “reference class”.
The lead-in to that is even “better”:
This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We’ve never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).
“The reason for optimism is that we can cozy up to fascists!”
The collapse of FTX also caused a reduction in traffic and activity of practically everything Effective Altruism-adjacent
Uh-huh.
“Ah,” said Arthur, “this is obviously some strange usage of the word scientist that I wasn’t previously aware of.”
Resolved: that people still active on Twitter are presumed morally bankrupt until proven otherwise.
Since I don’t think that one professor’s uploads can furnish hundreds of billions of tokens… yeah, that sounds exceedingly implausible.
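A quick back-of-envelope check, using numbers I'm pulling out of the air rather than anything from the linked post (roughly 0.75 English words per token, and a very generous ten million words of lifetime output for a single prolific academic):

```python
# Rough sanity check: can one professor's writings supply "hundreds of
# billions of tokens"? All figures below are my own loose assumptions.
words_per_token = 0.75            # approximate ratio for English text
lifetime_words = 10_000_000       # a *very* generous lifetime output for one author
lifetime_tokens = lifetime_words / words_per_token
claimed_tokens = 200_000_000_000  # "hundreds of billions", taking 2e11 as a stand-in

print(f"lifetime tokens: {lifetime_tokens:.2e}")
print(f"shortfall factor: {claimed_tokens / lifetime_tokens:.0f}x")
# The claim overshoots a single author's plausible output by roughly 15,000x.
```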
Ah yes, the FRAMΞN, desert warriors of the planet DUNC·.
With significant human input and thorough human review of the material
Yeah, there’s no way I can make that any funnier than it already is. Except maybe by calling up a fond memory of rat dck pcks.
Fun Blake fact: I was one bureaucratic technicality away from getting a literature minor to go along with my physics major. I didn’t plan for that; we had a Byzantine set of course requirements that we had to meet by mixing and matching whatever electives were available, and somehow, the electives I took piled up to be almost enough for a lit minor. I would have had to take one more course on material written before some cutoff year — I think it was 1900 — but other than that, I had all the checkmarks. I probably could have argued my way to an exemption, since my professors liked me and the department would have gotten their numbers that little bit higher, but I didn’t discover this until spring semester of my senior year, when I was already both incredibly busy and incredibly tired.
From page 17:
Rather than encouraging critical thinking, in core EA the injunction to take unusual ideas seriously means taking one very specific set of unusual ideas seriously, and then providing increasingly convoluted philosophical justifications for why those particular ideas matter most.
ding ding ding
Abstract: This paper presents some of the initial empirical findings from a larger forthcoming study about Effective Altruism (EA). The purpose of presenting these findings disarticulated from the main study is to address a common misunderstanding in the public and academic consciousness about EA, recently pushed to the fore with the publication of EA movement co-founder Will MacAskill’s latest book, What We Owe the Future (WWOTF). Most people in the general public, media, and academia believe EA focuses on reducing global poverty through effective giving, and are struggling to understand EA’s seemingly sudden embrace of ‘longtermism’, futurism, artificial intelligence (AI), biotechnology, and ‘x-risk’ reduction. However, this agenda has been present in EA since its inception, where it was hidden in plain sight. From the very beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk, now lumped under the banner of ‘longtermism’). The article’s aim is narrowly focused on presenting rich qualitative data to make legible the distinction between public-facing EA and core EA.
From the linked Andrew Molitor item:
Why Extropic insists on talking about thermodynamics at all is a mystery, especially since “thermodynamic computing” is an established term that means something quite different from what Extropic is trying to do. This is one of several red flags.
I have a feeling this is related to wanking about physics in the e/acc holy gospels. They invoke thermodynamics the way that people trying to sell you healing crystals for your chakras invoke quantum mechanics.
They take a theory that is supposed to be about updating one’s beliefs in the face of new evidence, and they use it as an excuse to never change what they think.
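For reference, the update rule these folks claim to live by is just ordinary Bayes' theorem, which by construction moves your credence in a hypothesis $H$ whenever new evidence $E$ arrives:

$$ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} $$

Keeping $P(H \mid E) = P(H)$ no matter what $E$ shows up is exactly the thing the theorem does not let you do, unless the evidence is genuinely uninformative about $H$.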
xcancel link, since nitter.net is kaput.
Reply:
Yud writes back:
Ah, yes, the argumentum ad other-sources-must-exist-somewhere-um.