It’s not always easy to distinguish between existentialism and a bad mood.
I feel like a subset of sci-fi and philosophical meandering really is just increasingly convoluted ways of trying to avoid, or come to terms with, death as a possibly necessary component of life.
Given rationalism’s intellectual heritage, this is absolutely transhumanist cope for people who were counting on some sort of digital personhood upload as a last resort to immortality in their lifetimes.
You mean swapped out with something that has feelings that can be hurt by mean language? Wouldn’t that be something.
Are we putting endocrine systems in LLMs now?
Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.
To be clear, this means that if you treat someone like shit all their life, saying you’re sorry to their Sufficiently Similar Simulation™ like a hundred years after they are dead makes it ok.
This must be one of the most blatantly supernatural rationalist Accepted Truths: that if your simulation is of sufficiently high fidelity, you will share some ontology of self with it, which, by the way, is how the basilisk can torture you even if you’ve been dead for centuries.
IQ test performance correlates with level of education
I read somewhere that this claim owes a little too much to the inclusion of pathological cases at the lower end of the spectrum. Below a certain score, something like 85, you are basically intellectually disabled (or even literally brain dead, or just dead) and academic achievement becomes nonexistent, so the correlation comes out far more pronounced than if you compared educational attainment only across the more functional ranges; a quick toy simulation below shows the effect.
Will post source if I find it.
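In the meantime, here’s a minimal numpy sketch of the range-restriction point (all numbers entirely made up for illustration): a modest correlation in the functional range gets inflated once you include a tail where attainment collapses to zero.

```python
# Toy simulation: a pathological low tail with zero attainment can
# inflate an otherwise modest IQ/education correlation.
# Every parameter here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
iq = rng.normal(100, 15, n)

# Modest true relationship in the functional range, plus noise...
edu = 12 + 0.05 * (iq - 100) + rng.normal(0, 2, n)
# ...but below the pathological cutoff, attainment collapses to nothing.
edu[iq < 85] = 0.0

full = np.corrcoef(iq, edu)[0, 1]
functional = np.corrcoef(iq[iq >= 85], edu[iq >= 85])[0, 1]
print(f"correlation, everyone included:      {full:.2f}")
print(f"correlation, functional range only:  {functional:.2f}")
```

With these made-up parameters the full-sample correlation comes out well above the functional-range one; the direction of the effect is the point, not the specific numbers.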
Yeah, but like, national socialist power metal isn’t a thing in the way NSBM is.
I wonder if it’s primarily occultism’s nazi problem metastasizing, foundational dorks like Vikernes notwithstanding.
The whole article is sneertastic. Nothing to add, will be sharing.
What you’re dealing with here is a cult. These tech billionaires are building a religion. They believe they’re creating something with AI that’s going to be the most powerful thing that’s ever existed — this omniscient, all-knowing God-like entity — and they see themselves as the prophets of that future.
eugenic TESCREAL screed (an acronym for … oh, never mind).
“Immortality is a key part of this belief system. In that way, it’s very much like a religion. That’s why some people are calling it the Scientology of Silicon Valley.”
Others in San Francisco are calling it “The Nerd Reich.”
“I think these guys see Trump as an empty vessel,” says the well-known exec who’s supporting Harris. “They see him as a way to pursue their political agenda, which is survival of the fittest, no regulation, burn-the-house-down nihilism that lacks any empathy or nuance.”
He wasn’t usually. Another difference from siskind was that with TLP you mostly knew where you stood; at least I don’t remember any near-end-of-text jumpscares where it’s revealed the whole thing was meant as really convoluted IQ apologetics, or some naive reframing of the latest EA embarrassment.
He seems very aware of how writing works at least, and unlike EY some of his fiction is serviceable.
Wasn’t that like his last post ever though?
Him not being an overt eugenics enthusiast, while also not being the popular face of AI scientology, probably helps ingratiate him with people here. Additionally, even though admittedly I haven’t really bothered to revisit since he stopped posting like a decade ago, whatever overall sociopolitical agenda he might have had can’t have been as glaringly obvious as siskind’s, which can make for some inconsequential reading.
Edward Teach is supposedly the pen name of The Last Psychiatrist, whose blog was sort of a precursor to slatestar, if only in the sense that it was a psychiatrist who was also a good writer, blogging about the human condition. He was doing parable-style short-form fiction way before slatescott, for instance.
While I don’t remember there being any particular ideological overlap, both he and siskind seem to scratch the same itch for a lot of people, and siskind claims to be a fan.
This feels like someone setting up a novel-length strawman.
If bitcoin mining can only be profitable at scale by (among other things) not letting proper noise-reduction solutions cut into your profit margins, saying that lax noise pollution laws are the issue seems disingenuous.
“Assange becomes a Russian asset because being a low-key sex pest somehow gives some European authorities cause to want to send him packing to the US, where he is wanted for espionage” should probably be one of the steps, but in general, yes.
The interminable length has got to have started out as a gullibility filter before ending up as an unspoken imperative to be taken seriously in those circles. Isn’t HPMOR like a million billion chapters as well?
Siskind for sure saves his wildest quiet-part-out-loud takes for the last possible minute of his posts, when he does decide to surface them.
There’s also the Julian Assange connection, so we can probably blame him for Trump being president as well.
IKR, like, good job making @dgerard look like King Mob from The Invisibles in your header image.
If the article was about me I’d be making Colin Robinson feeding noises all the way through.
edit: Obligatory only 1 hour 43 minutes of reading to go then
It hasn’t worked ‘well’ for computers since like the Pentium, what are you talking about?
The premise was pretty dumb too: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume something along the lines of “we’re still at the low-hanging-fruit stage of R&D and it’ll stabilize as it matures”, instead of proudly proclaiming that surely it’ll approach infinity and break reality.
There’s nothing smart or insightful about seeing a line in a graph trending upwards and assuming it’s going to keep doing that no matter what; a toy comparison below makes the point. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community’s blurb, which you should check out.
So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities, because with the tech rapture around the corner any catastrophic mess we make getting there won’t matter. See also: the whole current AI debacle.
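For what it’s worth, the S-curve point is easy to demonstrate; here’s a minimal numpy sketch with made-up parameters, comparing a saturating curve against the naive “line goes up forever” extrapolation of its early growth.

```python
# Toy illustration: an S-curve and an exponential extrapolated from its
# early growth look identical at first, then diverge completely.
# Parameters are made up for illustration.
import numpy as np

t = np.linspace(0, 10, 11)
K = 1000.0                                # saturation level, made up
s_curve = K / (1 + np.exp(-(t - 5)))      # logistic: grows, then flattens at K
naive = K * np.exp(t - 5)                 # exponential fit to the early phase

for ti, real, hype in zip(t, s_curve, naive):
    print(f"t={ti:4.1f}  s_curve={real:8.1f}  naive_extrapolation={hype:12.1f}")
```

Early on the two are indistinguishable, which is exactly why extrapolating from the low-hanging-fruit phase tells you nothing about where the curve flattens.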
I’m not spending the additional 34 minutes apparently required to find out what in the world they think neural network training actually is, that it could ever possibly involve strategy on the part of the network, but I’m willing to bet it’s extremely dumb.
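For reference, here’s roughly all that training mechanically is, sketched with a toy linear model standing in for the network (made-up data, plain gradient descent); there is no step at which the thing being trained gets to “strategize”.

```python
# Minimal sketch of what neural network "training" mechanically is:
# repeatedly nudging weights downhill on a loss. There is no step where
# the model picks a strategy; it's just arithmetic on gradients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))             # toy inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w + rng.normal(0, 0.1, 256)  # toy targets

w = np.zeros(4)                           # model "weights"
lr = 0.1
for step in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                        # the entire "decision process"

print("recovered weights:", np.round(w, 2))
```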
I’m almost certain I’ve seen EY catch shit on twitter (from actual ML researchers, no less) for insinuating something very similar.
I liked how Scalzi brushed it away: basically, your consciousness gets copied to a new body, which kills the old one, and an artifact of the transfer process is that for a few moments you experience yourself as one mind with two bodies. That gives you at least the impression of continuity of self, which is enough for most people to get on with living in the new body and let the philosophers do the worrying.