• 6 Posts
  • 130 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • this explicitly isn’t happening because the private sector is clamoring to get some of that EY expertise

    I mean, Peter Thiel might like him to bend the knee, and I’m sure OpenAI/Anthropic would love to have him as a shill, but idk if they’d actually pay 600K for it. Also, it would be a betrayal of every belief about AI Eliezer claims to have, so in principle it really shouldn’t take lucrative compensation to keep him from it.

    paying me less would require me to do things that take up time and energy in order to get by with a smaller income

    Well… it is an improvement on cults making their members act as the leader’s servants/slaves because the leader’s time/effort is allegedly so valuable!


  • even assuming sufficient computation power, storage space, and knowledge of physics and neurology

    but sufficiently detailed simulation is something we have no reason to think is impossible.

    So, I actually agree with you broadly on the abstract principle, but I’ve increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct…

    • We don’t have the neurology knowledge to do a neural-level simulation, and it would be extremely computationally expensive to actually simulate all the neural features properly in full detail: well beyond the biggest supercomputers we have now, and “Moore’s law” (scare quotes deliberate) has been slowing down enough that I don’t think we’ll get there. (Rough numbers are sketched at the end of this comment.)

    • A simulation from the physics level up is even more out of reach in terms of computational power required.

    As you say:

    I think there would be other, more efficient means well before we get to that point

    We really really don’t have the neuroscience/cognitive science to find a more efficient way. And it is possible all of the neural features really are that important to overall cognition, so you won’t be able to do it that much more “efficiently” in the first place…

    Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
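
    To put rough numbers on that first bullet point: here’s a minimal back-of-envelope sketch, where every constant (aside from the commonly cited synapse count) is my own ballpark assumption rather than anything from the linked post.

    ```python
    # Back-of-envelope cost of simulating every synapse in continuous time.
    # Every constant is a round ballpark assumption, not a measurement; only
    # the order of magnitude matters.

    SYNAPSES = 1e14                # ~100 trillion synapses (commonly cited figure)
    TIME_STEP_S = 1e-4             # 0.1 ms integration step for synapse/membrane dynamics
    OPS_PER_SYNAPSE_PER_STEP = 10  # assumed ops to update one synapse's state each step

    steps_per_second = 1 / TIME_STEP_S  # 10,000 steps per simulated second
    ops_per_second = SYNAPSES * steps_per_second * OPS_PER_SYNAPSE_PER_STEP

    EL_CAPITAN_PEAK_FLOPS = 1.7e18  # roughly the peak of today's biggest cluster

    print(f"simulation cost: ~{ops_per_second:.0e} ops per simulated second")
    print(f"El Capitans needed for real time (at peak flops, which is generous, "
          f"since this workload is memory-bound): {ops_per_second / EL_CAPITAN_PEAK_FLOPS:.0f}")
    ```

    Even with these charitable assumptions you land around 10^19 ops per simulated second, several exascale machines’ worth of peak compute, and real spiking simulations run far below peak because they are memory-bound.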



  • So, one point I have to disagree with:

    More to the point, we know that thought is possible with far less processing power than a Microsoft Azure datacenter by dint of the fact that people can do it. Exact estimates on the storage capacity of a human brain vary, and aren’t the most useful measurement anyway, but they’re certainly not on the level of sheer computational firepower that venture capitalist money can throw at trying to nuke a problem from space. The problem simply doesn’t appear to be one of raw power, but rather one of basic capability.

    There are a lot of ways to try to quantify the human brain’s computational power: storage (which this article focuses on, but I think it’s the wrong measure), operations per second, number of neural weights, etc. Obviously the brain isn’t literally a computer and neuroscience still has a long way to go, so the estimates you can get are spread over something like 5 orders of magnitude (I’ve seen arguments from 10^13 flops up to 10^18 or even higher, and flops is of course the wrong way to look at the brain anyway). Datacenter computational power has caught up to the lower estimates, yes, but not the higher ones. The biggest supercomputing clusters, like El Capitan for example, are in the 10^18 range. My own guess would be at the higher end, around 10^18, with the caveat/clarification that evolution has optimized the brain for what it does really, really well, so that compute is being used extremely efficiently. Like one talk I went to in grad school that stuck with me… the eyeball’s microsaccades basically act as a frequency filter on visual input. So literally before the visual signal has even reached the brain, the information has already been processed in a clever and efficient way that isn’t captured in any naive flop estimate!

    AI boosters picked estimates of human brain power that would put it within range of just one more scaling, as part of their marketing. Likewise for the number of neurons/synapses. The human brain has 80 billion neurons with an estimated 100 trillion synapses. GPT-4.5, which is believed to be where they peaked on number of weights (i.e. they gave up on straight scaling up because it got too pricey), is estimated (because of course they keep it secret) at something like 10 trillion parameters. Parameters are vaguely analogous to synapses, but synapses are so much more complicated and nuanced. And even accepting that premise, the biggest model was still only about 1/10th the size needed to match a human brain (and they may have lacked the data to even train it right).
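
    To put those figures side by side, here’s a quick arithmetic check using only the numbers quoted in this comment (the GPT-4.5 parameter count is, as noted, an outside estimate):

    ```python
    # Side-by-side of the figures quoted above. The GPT-4.5 number is an outside
    # estimate, since OpenAI does not publish parameter counts.

    brain_neurons = 8e10      # ~80 billion neurons
    brain_synapses = 1e14     # ~100 trillion synapses
    gpt45_parameters = 1e13   # ~10 trillion parameters (estimated)

    brain_flops_low, brain_flops_high = 1e13, 1e18  # spread of brain "compute" estimates
    el_capitan_peak_flops = 1.7e18                  # biggest current supercomputing cluster

    print(f"synapses per neuron: {brain_synapses / brain_neurons:.0f}")
    print(f"spread in brain estimates: {brain_flops_high / brain_flops_low:.0e}x (~5 orders of magnitude)")
    print(f"synapses per GPT-4.5 parameter: {brain_synapses / gpt45_parameters:.0f}x")
    print(f"El Capitan peak vs the high-end brain estimate: {el_capitan_peak_flops / brain_flops_high:.1f}x")
    ```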

    So yeah, it’s a minor factual issue and the overall points are good; I just thought I would point it out, because it’s one the AI boosters distort to make it look like they are getting close to human-level.




  • Very ‘ideological turing test’ failure levels.

    Yeah, his rationale is something something “threats” something something “decision theory”, which has the obvious but insane implication that you should actually ignore all protests (even peaceful protests that meet his lib centrist ideals of what protests ought to be), because responding would mean giving in to the protestors’ “threats” (i.e. minor inconveniences, at least in the case of lib-brained protests) and thus incentivizing them to threaten you in the first place.

    he tosses the animal rights people (partially) under the bus for no reason. EA animal rights will love that.

    He’s been like this for a while, basically assuming that obviously animals don’t have qualia, and that obviously you are stupid and don’t understand neurology/philosophy if you think otherwise. No, he did not even explain the basis for his certainty about this.


  • I haven’t looked into the Zizians in a ton of detail even now, among other reasons because I do not think attention should be a reward for crime.

    And it doesn’t occur to him to look into the Zizians in order to understand how cults keep springing up from the group he is a major thought leader in? If it were just one cult, I would sort of understand the desire to just shut one’s eyes (though it certainly wouldn’t be a truth-seeking desire), but the Zizians are something like the third cult, or the 5th or 6th if we count broadly cult-adjacent groups, and that is without counting the entire rationalist project as a cult. For full-on religious cults we have Leverage Research and the rationalist-Buddhist cult; for high-demand groups we have the Vassarites, Dragon Army’s group home, and a few other sketchy group living situations (Nonlinear comes to mind).

    Also, have an xcancel link, because screw Elon and some of the comments are calling Eliezer out on stuff: https://xcancel.com/allTheYud/status/1989825897483194583#m

    Funny sneer in the replies:

    I read the Sequences and all I got was this lousy thread about the glomarization of Eliezer Yudkowsky’s BDSM practices

    Serious sneer in the replies:

    this seems like a good time to point folks towards my articles titled “That Time Eliezer Yudkowsky recommended a really creepy sci-fi book to his audience and called it SFW” and “That Time Eliezer Yudkowsky Wrote A Really Creepy Rationalist Sci-fi Story and called it PG-13”


  • This is somewhat reassuring, as it suggests that he doesn’t fully understand how cultural critiques of LW affect the perception of LW more broadly;

    This. Reddit isn’t exactly mainstream common knowledge per se, but I still find it encouraging and indicative that the common-sense perspective is winning out: whenever I see the topic of lesswrong or AI Doom come up on unrelated subreddits, I’ll see a bunch of top-upvoted comments mentioning the cult spin-offs, or that the main thinker’s biggest achievement is Harry Potter fanfic, or Roko’s Basilisk, or any of the other easily comprehensible indicators that these are not serious thinkers with legitimate thoughts.


  • “You don’t understand how Eliezer has programmed half the people in your company to believe in that stuff,” he is reported to have told Altman at a dinner party in late 2023. “You need to take this more seriously.” Altman “tried not to roll his eyes,” according to Wall Street Journal reporter Keach Hagey.

    I wonder exactly when this was. The attempted ousting of Sam Altman was on November 17, 2023. So either this warning was timely (but something Sam already had the pieces in place to make a counterplay against), or a bit too late (as Sam had just beaten an attempt by the true believers to oust him).

    Sam Altman has proved adept at keeping the plates spinning and wheedling his way through various deals, but I agree with the common sentiment here that his underlying product just doesn’t work well enough, in a unique/proprietary enough way, for him to actually turn it into a profitable company. Pivot-to-AI and Ed Zitron have a guess of 2027 for the plates to come crashing down, but with an IPO on the way to infuse more cash into OpenAI, I wouldn’t be that surprised if he delays the bubble pop all the way to 2030 and personally gets away cleanly, with no legal liability and some stock sales lining his pockets.



  • Here: https://glowfic.com/posts/4508

    Be warned: the first three quarters of the thread don’t have much of a plot and are basically two to three characters talking; then the last quarter time-skips ahead and gives massive, clunky worldbuilding dumps. (This is basically par for the course with glowfic; the format favors dialogue-heavy interaction and makes it really easy to just let the plot meander. Planecrash, for all of its bloat and diversions into eugenics lectures, is actually relatively plot-heavy for glowfic.)

    On the upside, the first three quarters almost read like a sneer on rationalists.







  • Thanks for the lore, and sorry that you had to ingest all that at some point.

    Ironically, one of the biggest lore drops about dath ilan happens in a story I thought at the time was a parody of rationalists and the concept of dath ilan (Eliezer used a new pen name for the story). The main dath ilan character (isekai’d into an Earth mostly similar to our own, but with magic and uh… other worldbuilding conceits I won’t get into here) jumps to absurd, wild conclusions at basically every moment of the story, and unlike HJPEV is actually wrong about basically every conclusion she jumps to. Of course, she’s a woman, and it comes up towards the ending that she is below average for dath ilan intelligence (but still above the Earth average, obviously), so don’t give Eliezer too much credit for allowing a rationalist character to be mostly wrong for once.

    I don’t know how he came up with the name… other fanfic writers in rationalist-adjacent space have complained about his amateurish attempts at conlanging, so there probably isn’t a sophisticated conlang explanation about phonemes involved. You might be on the right track guessing at weird anagrams?



  • In Eliezer’s “utopian” worldbuilding fiction concept, dath ilan, they erased their entire history just to cover up any mention of any concept that might inspire someone to think of “superintelligence” (and, as an added bonus, to purge other wrong-think concepts). The Keepers (basically philosopher kings) have also discouraged investment and improvement in computers (because somehow, despite not holding any direct power, and despite the massive financial incentives and dath ilan being described as capitalist and libertarian, the Keepers can just sort of say that their internal secret prediction market predicts bad vibes from improving computers too much, and everyone falls in line). According to several worldbuilding posts, dath ilan has built an entire secret city, funded with 2% of the entire world’s GDP, to solve AI safety in utter secrecy.