• 4 Posts
  • 72 Comments
Joined 2 years ago
Cake day: August 29th, 2023



  • Actually, as some of the main opponents of the would-be AGI creators, we sneerers are vital to the simulation’s integrity.

    Also, since the simulator will probably cut us all off once they’ve seen the ASI get started, by delaying and slowing down rationalists’ quest to create AGI and ASI, we are prolonging the survival of the human race. Thus we are the most altruistic and morally best humans in the world!



  • He’s set up a community primed to think the scientific establishment’s focus on falsifiability and peer review is fundamentally worse than “Bayesian” methods, and that you don’t need credentials or even conventional education or experience to have revolutionary good ideas, and strengthened the already existing myth of lone genii pushing science forward (as opposed to systematic progress). Attracting cranks was an inevitable outcome. In fact, Eliezer occasionally praises cranks when he isn’t able to grasp their sheer crankiness (for instance, GeneSmith’s ideas are total nonsense for anyone with more familiarity with genetics than skimming relevant-sounding scientific publications and garbage pop-sci journalism, but Eliezer commented favorably). The only thing that has changed is ChatGPT and its clones now glazing cranks, making them even more deluded. And of course, someone (cough Eliezer) was hyping up ChatGPT as far back as GPT-2, so it’s only to be expected that cranks would think LLMs were capable of providing legitimate useful feedback.

    Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk

    He’s deliberately cultivated an audience willing to hear cranks out, so this is exactly what he deserves.








  • He claims he was explaining what others believe, not what he believes, but if that is so, why is he so aggressively defending the stance?

    Literally the only difference between Scott’s beliefs and AI 2027 as a whole is that his prophecy estimate is a year or two later. (I bet he’ll be playing up that difference as AI 2027 fails to happen in 2027, then also doesn’t happen in 2028.)

    Elsewhere in the thread he whines to the mods that the original poster is spamming every vaguely lesswrong- or EA-related subreddit with engagement bait. That poster is katxwoods… as in Kat Woods… as in a member of Nonlinear, the EA “organization” whose idea of philanthropic research was nonstop exotic vacations around the world. And, iirc, they are most infamous among us sneerers for “hiring” an underpaid (really underpaid, like couldn’t afford basic necessities) intern they also used as a 24/7 live-in errand girl, drug runner, and sexual servant.




  • I was just about to point out several angles this post neglects, but it looks like from the edit that this post is just intended to address a narrower question. Among the angles outside the intended question: philanthropy by the ultra-wealthy often serves as a tool for reputation laundering and influence building. I guess the same criticism can be made about a lot of conventional philanthropy, but I don’t think that should absolve EA.

    This post somewhat frames the question as a comparison between EA and conventional philanthropy and foreign aid efforts… which, okay, but that is a low bar, especially when you look at some of the stuff the US has done with its foreign aid.




  • Yeah, he thinks Cyc was a switch from the brilliant meta-heuristic soup of Eurisko to the dead end of expert systems, but according to the article I linked, Cycorp was still programming extensive heuristics and meta-heuristics into the expert system entries they were making, as part of its general resolution-based inference engine. It’s just that Cyc wasn’t able to do anything useful with these heuristics, and in fact they were slowing it down extensively, so they started turning them off in 2007 and completely turned off the general inference system in 2010!

    To be ~~fair~~ far too charitable to Eliezer, this little factoid has cites from 2022 and 2023, when Lenat wrote more about lessons from Cyc, so it’s not like Eliezer could have known this back in 2008. To ~~sneer~~ be actually fair to Eliezer, he should have figured that the guy who actually wrote and used Eurisko, talked about how Cyc was an extension of it, and repeatedly referred back to the lessons of Eurisko would in fact try to include a system of heuristics and meta-heuristics in Cyc! To properly sneer at Eliezer… it probably wouldn’t have helped even if Lenat had kept the public up to date on the latest lessons from Cyc through academic articles; Eliezer doesn’t actually keep up with the literature as it’s published.



  • You need to translate them into lesswrongese before you try interpreting them together.

    probability: he made up a number to go with his feelings about a topic

    subjective: the number is even more made up and feelings based than is normal for lesswrong

    noticeable: the number is really tiny, but big enough for Eliezer to fearmonger about!

    No, you don’t get to actually know what the number is; if you did, you could penalize Eliezer for predicting it wrongly or question why that number specifically. Just trust that the bayesianified language shows Eliezer thought really hard about it.