In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending “The Curve” – a conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session he attended there, Casey said:
That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of superintelligent AI.
His view is that there is almost no scenario in which we could build a superintelligence that wouldn’t either enslave us, hurt us, or kill all of us, right? So he’s been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.
People fired a bunch of questions at him. And we should say, he’s a really polarizing figure, and I think is sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.
And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.
[…]
Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we’re going to get into a world where these models are incredibly powerful.
And all that stuff just turned out to be true. So, that’s why they have credibility with me, right? Everything they believe, you know, we could hit some sort of limit that they didn’t see coming.
Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they’ve built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?
But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.
So what harms has Mr. Yudkowsky enumerated? Off the top of my head I can remember:
- Diamondoid bacteria
- What if there’s like a dangerous AI in the closet server and it tries to convince you to connect your Nintendo 3DS to it so it can wreak havoc on the internet, and your only job is to ignore it and play your Nintendo, but it’s so clever and sexy
- What if we’re already in hell: the hell of living in a universe where people get dust in their eyes sometimes?
- What if we’re already in purgatory? If so we might be able to talk to future robot gods using time travel; well, not real time travel, more like make-believe time travel. Wouldn’t that be spooky?
Prediction: it can talk itself out of the box.
Reality: it can be talked into revealing its secret prompt.
E: also
Started to make a lot of predictions that just basically came true
Lol. Guess we are all going to die because Yud has not taught us rationality.
I was wondering why the name Kevin Roose sounded familiar and ah, right
Kevin hopes to be Casey when he grows up
gap’s closer now
And all that stuff just turned out to be true
Literally what stuff, that AI would get somewhat better as technology progresses?
I seem to remember Yud specifically wasn’t that impressed with machine learning and thought so-called AGI would come about through ELIZA-type AIs.