

All I know is that I didn’t do anything to make those mushrooms grow in a circle like that and the sweetbread I left there in the morning was completely gone by lunchtime and that evening all my family’s shoes got fixed up.
I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you’re an asshole to the frontend there’s a nonzero chance that a human person is still going to have to deal with it.
Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with “hello this is YourNet with $CompanyName Support.” I’m not taking chances around unthinkingly answering an email with “alright you shitty robot. Don’t lie to me or I’ll barbecue this old Commodore 64 that was probably your great uncle or whatever.”
If it has morals, it’s hard to tell how much of it is illusory and token prediction!
It’s generally best to assume 100% is illusory and pareidolia. These systems are incredibly effective at mirroring whatever you project onto them back at you.
New rule: you’re not allowed to tell people to shut up and look at the numbers unless you’re actually good at math.
If I have the right read of his personality (based, I must admit, solely on his public work) I would guess that it’s the narcissism of assuming that anything that disagrees with his preferred sequence of events (that is to say, the singularity happening in his lifetime and with him playing a key role) is necessarily incorrect.
If that’s true then how has he maintained whatever passes for his career in sci-fi whining these days?
I feel like I’ve learned a lot about world cultures from forum threads and YouTube videos unpacking and debating translation errors in anime and games, and I’m not sure if this says more about my media diet or the world at large.
Back in my edgy atheist era I found the ontological argument in apologetics to be compelling, though not terribly convincing. And here I thought they would content themselves reinventing Pascal’s wager. Now let me see if I can find one of those copypastas about the best conceivable plate of nachos.
I’m even more appalled at the description they’re using, because that is categorically not what happened here. Buncha finance bros convinced themselves they were doing the right thing by stealing money to live like lords in the fuckin’ Bahamas is more like it.
There’s got to be a pithy way of describing something so stupid that an adult person couldn’t possibly believe it unless it was in their direct personal interest to do so. Like, “kernel mode” isn’t even a particularly clever or interesting form of prompt injection, and probably predates a lot of the current preprocessors. It’s the most blatant version of “it scared me after I told it to tell me a scary story” I think we’ve seen yet, and while that would be one thing for a young person on a forum it doesn’t strike me that this one’s had much time or incentive to grow up.
I was thinking more Bunsen Honeydew, actually.
I am of course referring here to KAT WOODS the fictional corporate person, and not x_X_69_kat-of-the-family-woods_69_X_x the flesh and blood woman created by our Lord and Savior.
Heartwarming: the worst person you know just outed themselves as a fucking moron
Even the people who are disagreeing are still kinda sneerable though. Like this guy:
Even in the worst case, DOGE firing too many people is not a particularly serious danger. Aside from Skynet, you should be worried about people using AI to help engineer deadly viruses or nuclear weapons, not firing government employees.
That’s still assuming that the AI is a valuable tool for the purpose of genetic engineering or nuclear weapons manufacturing or whatever! Like, the hard part of building a nuke is very much in acquiring the materials, engineering everything to go off at the right time, and actually building it without killing yourself. Very little of that is meaningfully assisted by LLMs even if they did work as advertised. And there are so many people in that very thread alone going into detail on how biological engineering is incredibly hard in ways that similarly aren’t bottlenecked by the kinds of things current AI structures can do. The degree to which they’re comically missing the point of the folks who keep trying to explain reality is off the charts.
True. I’ll admit my primary thought was about the Chinese Belt and Road initiative, but I didn’t want to discount other aspiring neocolonialists trying their hands at it.
I’m not very up on my Elder Scrolls lore, but I think this is where I’m supposed to say something about CHIM?
I would be more inclined to agree if there was an actual better alternative waiting to fill in the gap. Instead we’re probably going to see the loss of US soft power be replaced by EU, Russian, and particularly Chinese soft power. I’m not sufficiently propagandized to say that’s strictly worse than being under US soft power, especially as practiced by the kinds of people that support EA. But it also isn’t really an improvement in terms of enabling autonomous development.
Yeah. I don’t think you need the full ideological framework and all its baggage to get to “medical interventions and direct cash transfers are consistently shown to have strong positive impacts relative to the resources invested.” That framework prevents you from adding on “they also avoid some of the negative impact that foreign aid can have on domestic institution-building processes,” which is a really important consideration. Of course, that assumes the goal is to mitigate and remediate the damage done by colonialism and imperialism rather than perpetuating the same structures in a way that the imperialists at the top can feel good about. And for a lot of the donor class that EA orgs are chasing I don’t think that’s actually the case.
I also think that some of the long-termism criticisms are not so easily severable from the questions he does address about epistemology and listening to the local people receiving aid. The long-termist nutjobs aren’t an aberration of EA-type utilitarianism. They are its logical conclusion. Even if this chapter ends with common sense prevailing over sci-fi nonsense, it’s worth noting that this kind of absurdity can’t arise if you define effectiveness as listening to people and helping them get what they need rather than creating your own metrics that may or may not correlate outside of the most extreme cases.
I love this. Especially the ending, talking about the titanic struggle to make AI competent enough to outsmart the people who think it’s going to be omniscient. Glad to see I’ve got another writer to chase down that I had somehow missed previously.
I mean, I think the whole AI consciousness narrative emerged from science fiction writers who wanted to interrogate the economic and social consequences of totally dehumanizing labor, similar to R.U.R. and Metropolis. The concept had sufficient legs that it got used to explore things like “what does it mean to be human?” in a whole bunch of stories. Some were pretty good (Bicentennial Man, Asimov 1976) and others much less so (Bicentennial Man, Columbus 1999). I think the TESCREAL crowd had a lot of overlap with the kind of people who created, expanded, and utilized the narrative device and experimented with related technologies in computer science and robotics, but saying they originated it gives them far too much credit.