

The people who made the Foundation TV show faced the challenge, not just of adapting a story that repeatedly jumps forward from one generation to the next, but of adapting a series where an actual character doesn’t show up until the second book.
That link seems to have broken, but this one currently works:
https://bsky.app/profile/larkshead.bsky.social/post/3lt6ugxre6k2s
https://bsky.app/profile/chemprofcramer.bsky.social/post/3lt5h24hfnc2m
I got caught up in this mess because I was VPR at Minnesota in 2019 and the first author on the paper (Jordan Lasker) lists a Minnesota affiliation. Of course, the hot emails went to the President’s office, and she tasked me with figuring out what the hell was going on. Happily, neither Minnesota nor its IRB had “formally” been involved. I regularly sent the attached reply, which seemed to satisfy folks. But you come to realize, as VPR, just how little control you actually have if a researcher in your massive institution really wants to go rogue… 😰
Dear [redacted],
Thank you for writing to President Gabel to share your concern with respect to an article published in Psych in 2019 purporting to have an author from the University of Minnesota. The President has asked me to respond on her behalf.
In 2018, our department of Economics requested a non-employee status for Jordan Lasker while he was working with a faculty member of that department as a data consultant. Such status permitted him a working umn.edu email address. He appears to have used that email address to claim an affiliation with the University of Minnesota that was neither warranted nor known to us prior to the publication of the article in question. Upon discovery of the article in late 2019, we immediately verified that his access had been terminated and we moreover transmitted to him that he was not to falsely claim University of Minnesota affiliation in the future. We have had no contact with him since then. He has continued to publish similarly execrable articles, sadly, but he now lists himself as an “independent researcher”.
Best regards,
Chris Cramer
The 1950s and ’60s are the middle and end of the Golden Age of science fiction
Incorrect. As everyone knows, the Golden Age of science fiction is 12.
Asimov’s stories were often centered around robots, space empires, or both,
OK, this actually calls for a correction on the facts. Asimov didn’t combine his robot stories with his “Decline and Fall of the Roman Empire but in space” stories until the 1980s. And even by the '50s, his robot stories were very unsubtly about how thoughtless use of technology leads to social and moral decay. In The Caves of Steel, sparrows are exotic animals you have to go to the zoo to see. The Earth’s petroleum supply is completely depleted, and the subway has to be greased with a bioengineered strain of yeast. There are ration books for going to the movies. Not only are robots taking human jobs, but a conspiracy is deliberately stoking fears about robots taking human jobs in order to foment unrest. In The Naked Sun, the colony world of Solaria is a eugenicist society where one of the murder suspects happily admits that they’ve used robots to reinvent the slave-owning culture of Sparta.
Noted in the Stubsack here:
For what it’s worth I know one of the founders of e/acc and they told me they were radicalized by a date they had with you where they felt you bullied them about this subject.
A-and yep, that’s my dose of cursed for the day
“A case for courage, when speaking of made-up sci-fi bullshit”
I’d disagree with the media analysis in “What Was The Nerd?” at a few points. For example, Marty McFly isn’t a bullied nerd; George McFly is. Marty plays in a band and has a hot girlfriend. He’s the non-nerd side of his interactions with Doc Brown, where he’s the less intellectual one, and with George, where he’s the cooler one. Likewise, Chicago in Ferris Bueller’s Day Off isn’t an “urban hellscape”. It’s the fun place to go when you want to ditch the burbs and take in some urban pleasures (a parade, an art gallery…).
feels like they are wrong on the object level
Who actually wants to sound like this?
That Carl Shulman post from 2007 is hilarious.
After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.
Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.
Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.
I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.
The “two articles below” are by Yudkowsky.
User “gaverick” replies,
Carl, I’m inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky’s chapter on AI risks for Bostrom’s bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.
Shulman’s response begins,
Have you read through Bostrom’s work on the subject? Kurzweil has relevant info for computing power and brain imaging.
Ray mothersodding Kurzweil!
As Adam Becker shows in his book, EAs started out being reasonable “give to charity as much as you can, and research which charities do the most good” but have gotten into absurdities like “it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not”.
I haven’t read Becker’s book and probably won’t spend the time to do so. But if this is an accurate summary, it’s a bad sign for that book, because plenty of them were bonkers all along.
(Becker’s previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)
They’re members of a religion which says that if you do math in your head the right way you’ll be correct about everything, and so they think they’re correct about everything.
They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it’s really high and then you’re good.
Unfortunately this led them to the conclusion that computers have more IQ than them and so would automatically win any intellectual DBZ laser beam fight against them / enslave them / take over the world.
My Grand Unified Theory of Scott Aaronson is that he doesn’t have a theory of mind. On subjects far less incendiary than Zionism, he simply fails to recognize that people who share his background or interests can think differently than he does.
She said, “You know what they say the modern version of Pascal’s Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. Perhaps your motto should be ‘Treat every chatterbot kindly, it might turn out to be the deity’s uncle.’”
The New York Times treats him as an expert: “Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book”. He’s an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it’s so obscure it was deleted from Wikipedia.
https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory
To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it’s like the whole academic discipline is trans people or something.
LessWrong has swallowed the “Cognitive-Theoretic Model of the Universe” hook, line and sinker, so yeah, zero crank filter.
It took me one (1) science-fiction convention to discover that liking the same TV show as somebody does not mean we vibrate on the same soul wavelength. I imagine that professional writers learn rather quickly that just because somebody bought your book doesn’t mean that you want to spend time with them.
Nit: It’s “Death and the Gorgon”.
It’s linked here, so I’ll hazard a guess that the copy is intended to be public.
Having now refreshed my vague memories of the Feynman Lectures on Computation, I wouldn’t recommend them as a first introduction to Turing machines and the halting problem. They’re overburdened with detail: You can tell that Feynman was gleeful over figuring out how to make a Turing machine that tests parentheses for balance, but for many readers, it’ll get in the way of the point. Comparing his discussion of the halting problem to the one in The Princeton Companion to Mathematics, for example, the latter is cleaner without losing anything that a first encounter would need. Feynman’s lecture is more like a lecture from the second week of a course, missing the first week.
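For what it’s worth, the parenthesis-checking task itself is trivial once you’re allowed an ordinary programming language; the fun in Feynman’s version is doing it one symbol at a time with a head crawling over a tape. A throwaway Python sketch of the same computation (mine, not anything from the lectures; the function name is just illustrative):

```python
def balanced(s: str) -> bool:
    """Return True if the parentheses in s are balanced."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:      # a ")" with no matching "("
                return False
    return depth == 0          # every "(" eventually got closed

print(balanced("(()(()))"))    # True
print(balanced("())("))        # False
```

Which is sort of the point: the example is a nice workout for building intuition about Turing machines, but it tells a newcomer very little about why the halting problem matters.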
I like the series (I thought the second season was stronger than the first, but the first was fine). Jared Harris is a good Hari Seldon. He plays a man that you feel could be kind, but circumstances have forced him into being manipulative and just a bit vengeful, and our friend Hari is rather good at that.