• 0 Posts
  • 150 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • I mean I think the whole AI consciousness trope emerged from science fiction writers who wanted to interrogate the economic and social consequences of totally dehumanizing labor, similar to R.U.R. and Metropolis. The concept had sufficient legs that it got used to explore things like “what does it mean to be human?” in a whole bunch of stories. Some were pretty good (The Bicentennial Man, Asimov 1976) and others much less so (Bicentennial Man, Columbus 1999). I think the TESCREAL crowd had a lot of overlap with the kind of people who created, expanded, and utilized the narrative device and experimented with related technologies in computer science and robotics, but saying they originated it gives them far too much credit.



  • I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you’re an asshole to the frontend there’s a nonzero chance that a human person is still going to have to deal with it.

    Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with “hello this is YourNet with $CompanyName Support.” I’m not taking chances around unthinkingly answering an email with “alright you shitty robot. Don’t lie to me or I’ll barbecue this old commodore 64 that was probably your great uncle or whatever”




  • There’s got to be a pithy way of describing something so stupid that an adult couldn’t possibly believe it unless it was in their direct personal interest to do so. Like, “kernel mode” isn’t even a particularly clever or interesting form of prompt injection, and probably predates a lot of the current preprocessors. It’s the most blatant version of “it scared me after I told it to tell me a scary story” I think we’ve seen yet, and while that would be one thing for a young person on a forum, it doesn’t strike me that this one’s had much time or incentive to grow up.




  • Heartwarming: the worst person you know just outed themselves as a fucking moron

    Even the people who are disagreeing are still kinda sneerable though. Like this guy:

    Even in the worst case, DOGE firing too many people is not a particularly serious danger. Aside from Skynet, you should be worried about people using AI to help engineer deadly viruses or nuclear weapons, not firing government employees.

    That’s still assuming that the AI is a valuable tool for genetic engineering or nuclear weapons manufacturing in the first place! Like, the hard part of building a nuke is very much in acquiring the materials, engineering everything to go off at the right time, and actually building it without killing yourself. Very little of that is meaningfully assisted by LLMs even if they did work as advertised. And there are so many people in that very thread alone going into detail on how biological engineering is incredibly hard in ways that similarly aren’t bottlenecked by anything current AI architectures can do. The degree to which they comically miss the point of the folks who keep trying to explain reality is off the charts.




  • I would be more inclined to agree if there were an actual better alternative waiting to fill the gap. Instead we’re probably going to see the lost US soft power replaced by EU, Russian, and particularly Chinese soft power. I’m not sufficiently propagandized to say that’s strictly worse than being under US soft power, especially as practiced by the kinds of people that support EA. But it also isn’t really an improvement in terms of enabling autonomous development.


  • Yeah. I don’t think you need the full ideological framework and all its baggage to get to “medical interventions and direct cash transfers are consistently shown to have strong positive impacts relative to the resources invested.” That framework prevents you from adding on “they also avoid some of the negative impact that foreign aid can have on domestic institution-building processes,” which is a really important consideration. Of course, that assumes the goal is to mitigate and remediate the damage done by colonialism and imperialism rather than perpetuating the same structures in a way that the imperialists at the top can feel good about. And for a lot of the donor class that EA orgs are chasing, I don’t think that’s actually the case.


  • I also think that some of the long-termism criticisms are not so easily severable from the questions he does address about epistemology and listening to the local people receiving aid. The long-termist nutjobs aren’t an aberration of EA-type utilitarianism. They are its logical conclusion. Even if this chapter ends with common sense prevailing over sci-fi nonsense, it’s worth noting that this kind of absurdity can’t arise if you define effectiveness as listening to people and helping them get what they need, rather than creating your own metrics that may or may not correlate outside of the most extreme cases.