• 2 Posts
  • 115 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • Please calm down.

    for some reason this has gotten people very worked up

    Seriously I don’t know what I said that is so controversial or hard to understand.

    I don’t know why it’s controversial here.

    imagine coming into a conversation with people you don’t fucking know, taking a swing and a miss at one of them, and then telling the other parties in the conversation that they need to calm down — about racism.

    the rest of your horseshit post is just you restating your original point. we fucking got it. and since you missed ours, here it is one more time:

    race science isn’t real. we’re under no obligation to use terms invented by racists that describe nothing. if we’re feeling particularly categorical about our racists on a given day, or pointing out that one is using the guise of race science? sure, use the term if you want.

    tone policing people who want to call a racist a racist ain’t fucking it. what in the fuck do you think you added to this conversation? what does anyone gain from your sage advice that “X is Y but not all Y is X” when the other poster never said that Y is X, but instead that X doesn’t exist?

    so yeah no I’m not calm, go fuck yourself. we don’t need anyone tone policing conversations about racism in favor of the god damn racists


  • Race pseudoscience is racist

    yes, V0ldek said this

    but not all racism is racial pseudoscience

    they didn’t say this though, you did. race science is an excuse made up by racists to legitimize their own horseshit, just like how fascists invent a thousand different names to avoid being called what they are. call a spade a fucking spade.

    why are you playing bullshit linguistic games in a discussion about racism? this is the exact same crap the “you can’t call everyone a nazi you know, that just waters down the term” tone police would pull when I’d talk about people who, shockingly, turned out to be fucking nazis.

    “all nazis are fascists but not all fascists are nazis” who gives a shit, really. fascists and racists are whatever’s convenient for them at the time. a racist will and won’t believe in race science at any given time because it’s all just a convenient justification for the racist to do awful shit.



  • no problem! I don’t mean to give you homework, just threads to read that might be of interest.

    yeah, a few of us are Philosophy Tube fans, and I remember they’ve done a couple of good videos about parts of TESCREAL — their Effective Altruism and AI videos specifically come to mind.

    if you’re familiar with Behind the Bastards, they’ve done a few episodes I can recommend dissecting TESCREAL topics too:

    • their episodes about the Zizians are definitely worth a listen; they explore and critique the group as a cult offshoot of LessWrong Rationalism.
    • they did a couple of older episodes on AI cults and their origins that are very good too.

  • also fair enough. you might still enjoy a scroll through our back archive of threads if you’ve got time for it — there is a historical context to transhumanism that people like Musk exploit to further their own goals, and that’s definitely something to be aware of, especially as TESCREAL elements gain overt political power. there are positive versions of transhumanism and the article calls one of them out — the Culture is effectively a model for socialist transhumanism — but one must be familiar with the historical baggage of the philosophy or risk giving cover to people currently looking to cause harm under transhumanism’s name.


  • fair enough!

    but I don’t actually enjoy arguing and don’t have the skills for formalized “debate” anyway.

    it’s ok, nobody does. that’s why we ban it unless it’s amusing (which effectively bans debate for everyone unless they know their audience well enough to not fuck up) — shitty debatelords take up a lot of thread space and mental energy and give essentially nothing back.

    wherever “here” is

    SneerClub is a fairly old community if you count its Reddit origins; part of what we do here is sneering at technofascists and other adherents to the TESCREAL belief package, though SneerClub itself tends to focus on the LessWrong Rationalists. that’s the context we tend to apply to articles like the OP.


  • There is a certain irony to everyone involved in this argument, if it can be called that.

    don’t do this “debatefan here” crap here, thanks

    This, and similar writing I’ve seen, seems to make a fundamental mistake in treating time like only the next few, decades maybe, exist, that any objective that takes longer than that is impossible and not even worth trying, and that any problem that emerges after a longer period of time may be ignored.

    this isn’t the article you’re thinking of. this article is about Silicon Valley technofascists making promises rooted in Golden Age science fiction as a manipulation tactic. at no point does the article state that, uh, long-term objectives aren’t worth trying because they’d take a long time??? and you had to ignore a lot of the text of the article, including a brief exploration of the techno-optimists and their fascist ties (and contrasting cases where futurism specifically isn’t fascist-adjacent), to come to the wrong conclusion about what the article’s about.

    unless you think the debunked physics and unrealistic crap in Golden Age science fiction will come true if only we wish long and hard enough, in which case: aw, precious, this article is about you!


  • some experts genuinely do claim it as a possibility

    zero experts claim this. you’re falling for a grift. specifically,

    i keep using Claude as an example because of the thorough welfare evaluation that was done on it

    asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.

    i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this though he’s not exactly a Michael Levin mind philosopher he just wants to score points by implying it has agency

    you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?

    Like it has atleast the same amount of value as like letting an insect out instead of killing it

    that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.

    you say you acknowledge the harms done by LLMs, but I’m not seeing it.


  • centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

    i won’t say that claude is conscious but i won’t say that it isn’t either and its always better to air on the side of caution

    the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

    claims that LLMs, in spite of all known theories of computer science and information theory, are conscious should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

    if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

    schizoposting

    fuck off with this

    even if its wise imo to try not to be abusive to AI’s just incase

    describe the “incase” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?