• 6 Posts
  • 119 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • Here: https://glowfic.com/posts/4508

    Be warned, the first three quarters of the thread don’t have much of a plot and are basically two to three characters talking, then the last quarter time-skips ahead and delivers massive, clunky worldbuilding dumps. (This is basically par for the course with glowfic; the format favors dialogue-heavy, interaction-driven stories, and it’s really easy to just let the plot meander. Planecrash, for all of its bloat and diversions into eugenics lectures, is actually relatively plot-heavy for glowfic.)

    On the upside, the first three quarters almost read like a sneer on rationalists.







  • Thanks for the lore, and sorry that you had to ingest all that at some point.

    Ironically, one of the biggest lore drops about dath ilan happens in a story I thought at the time was a parody of rationalists and of the concept of dath ilan (Eliezer used a new pen name for the story). The main dath ilan character (isekai’d into an Earth mostly similar to our own but with magic and uh… other worldbuilding conceits I won’t get into here) jumps to wildly absurd conclusions at basically every moment of the story, and unlike HJPEV is actually wrong about basically every conclusion she jumps to. Of course, she’s a woman, and it comes up towards the ending that she is below average for dath ilan intelligence (but still above the Earth average, obviously), so don’t give Eliezer too much credit for allowing a rationalist character to be mostly wrong for once.

    I don’t know how he came up with the name… other fanfic writers in rationalist-adjacent space have complained about his amateurish attempts at conlanging, so there probably isn’t a sophisticated conlang explanation about phonemes involved. You might be on the right track guessing at weird anagrams?



  • In Eliezer’s “utopian” worldbuilding fiction concept, dath ilan, they erased their entire history just to cover up any mention of any concept that might inspire someone to think of “superintelligence” (and, as an added bonus, to purge other wrong-think concepts). The ~~Philosopher Kings~~ Keepers have also discouraged investment and improvement in computers (because somehow, despite not holding any direct power, despite the massive financial incentives, and despite dath ilan being described as capitalist and libertarian, the Keepers can just sort of say that their internal secret prediction market predicts bad vibes from improving computers too much, and everyone falls in line). According to several worldbuilding posts, dath ilan has built an entire secret city, funded with 2% of the entire world’s GDP, to solve AI safety in utter secrecy.


  • Given the USA legislature’s incompetence, I imagine they would leave some sort of massive loophole. Depending on the exact wording, you could get around it with accelerators that are technically not GPUs (as you suggest), or by subdividing companies so each subdivision is technically under the limit, or by cranking up the size of individual GPUs so that 8 GPUs is a massive amount of compute. Of course, I really doubt it would get that far in the first place; look at how they killed California’s attempt at the most moderate AI legislation.
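
    For a sense of scale, here is a minimal toy sketch of those last two loopholes in Python. All of the FLOPS figures and the 100-subsidiary count are made-up illustrative assumptions (only the 8-GPU cap comes from the comment above), not numbers from any actual bill.

```python
# Toy arithmetic: an "8 GPUs per entity" cap vs. two of the obvious loopholes.
# All performance numbers are made up purely for illustration.

CAP_PER_ENTITY = 8  # hypothetical legal limit on GPUs per company

def total_flops(num_entities: int, gpus_per_entity: int, flops_per_gpu: float) -> float:
    """Aggregate compute across all nominally separate entities."""
    return num_entities * gpus_per_entity * flops_per_gpu

# Baseline: one company obeying the cap with ordinary GPUs (assume ~1e15 FLOPS each).
baseline = total_flops(1, CAP_PER_ENTITY, 1e15)

# Loophole 1: spin off 100 shell subsidiaries, each individually under the cap.
subdivided = total_flops(100, CAP_PER_ENTITY, 1e15)

# Loophole 2: still "8 GPUs", but each one is a wafer-scale monster 50x bigger.
bigger_gpus = total_flops(1, CAP_PER_ENTITY, 5e16)

print(f"capped baseline: {baseline:.1e} FLOPS")
print(f"subdivided:      {subdivided:.1e} FLOPS ({subdivided / baseline:.0f}x the cap)")
print(f"bigger GPUs:     {bigger_gpus:.1e} FLOPS ({bigger_gpus / baseline:.0f}x the cap)")
```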


  • It’s a microcosm of lesswrong’s dysfunction: IQ veneration, elitism, and misunderstanding the problem in the first place. And even overlooking those problems, I think intellect only moderately correlates with an appreciation for science and an ability to understand it. Someone can think certain scientific subjects are really cool but have only a layman’s grasp of the technical details. Someone can do decently in introductory college-level physics with just a willingness to work hard and a decent grasp of math. And Eliezer could have avoided tangents about nuclear reactors or whatever to focus on stuff relevant to AI.


  • There’s also Eliezer’s nihilistic outlook, which is deftly woven into his parables-- his personal philosophy draws heavily from Godel Escher Bach, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of his parables, to realize that they’re not just entertaining- they say something deep about the nature of Intelligence. As a consequence, people who dislike IABIED truly ARE idiots- of course they wouldn’t appreciate, for instance, the motivation behind Eliezer’s existential catchphrase “Tsuyoku Naritai!”, which itself is a cryptic reference to Japanese culture. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Nate Soares’ genius unfolds itself on their copy of IABIED. What fools… how I pity them. 😂 And yes, by the way, I DO have a rationalist tattoo. And no, you cannot see it. It’s for the math pet’s eyes only- and even they have to demonstrate that they’re within 5 IQ points of my own (preferably lower) beforehand.







  • Poor historical accuracy in favor of meme potential is why our reality is so comically absurd. You can basically use the simulation hypothesis to justify anything you want by proposing some weird motive or goals of the simulators. It almost makes God-of-the-gaps religious arguments seem sane and well-founded by comparison!


  • Within the worldbuilding of the story, the way the logic is structured makes sense in a ruthless utilitarian way (although Scott’s narration and framing are way too sympathetic to the murderously autistic angel that did it), but taken in the context of the sort of racism Scott likes to promote outside the story, yeah, it is really bad.

    We had previous discussion of Unsong on the old site. (Kind of cringing at the fact that I liked the story at one point and only gradually noticed all the problematic content and the poor writing quality.)


  • I’ve seen this concept mixed with the simulation “hypothesis”. The logic goes that if future simulators are running a “rescue simulation” but only care (or at least care more) about the interesting or more agentic people (i.e. rich/white/westerner/lesswronger), they might fully simulate only those people and leave simpler scripts piloting the others (i.e. poor/irrational/foreign).

    So basically literally positing a mechanism by which they are the only real people and other people are literally NPCs.


  • Depends what you mean by “steelman”. If you take their definition at its word, then they fail to try all the time; just look at any of their attempts at understanding leftist writing or thought. Of course, it often actually means “entirely rebuild the opposing argument into something different” (because they don’t have a basic humanities education or don’t want to actually properly read leftist thought), and they can’t resist doing that!