I am the journeyer from the valley of the dead Sega consoles. With the blessings of Sega Saturn, the gaming system of destruction, I am the Scout of Silence… Sailor Saturn.

  • 0 Posts
  • 67 Comments
Joined 2 years ago
Cake day: June 29th, 2023

  • ‘Many of the groups that we are concerned about disappearing – gay couples, lesbian couples – from a traditional organs-bumping-together standpoint, can’t have kids… that are genetically both of theirs,’ says Simone.

    No no, they’re super smart, and unless we come up with some sort of way for gay people to have children they’ll disappear entirely (gay people of course first came to Earth from space in the year 1952, but the starship’s egg chambers were damaged in the crash landing).





  • Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz

    “reputational cost” eh? Let’s see Mr. Moskovitz’s reasoning in his own words:

    Spoiler: It’s not just about PR risk

    But I do want agency over our grants. As much as the whole debate has been framed (by everyone else) as reputation risk, I care about where I believe my responsibility lies, and where the money comes from has mattered. I don’t want to wake up anymore to somebody I personally loathe getting platformed only to discover I paid for the platform. That fact matters to me.

    I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.

    I’ve long taken for granted that I am not going to live in integrity with your values and the actions you think are best for the world. I’m only trying to get back into integrity with my own.

    If you look at my comments here and in my post, I’ve elaborated on other issues quite a few times and people keep ignoring those comments and projecting “PR risk” on to everything. I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I’m going to stop now. [Sorry I got frustrated; everyone is trying their best to do the most good here] I would appreciate if people did not paraphrase me from these comments and instead used actual quotes.

    again, beyond “reputational risks”, which narrows the mind too much on what is going on here

    “PR risk” is an unnecessarily narrow mental frame for why we’re focusing.



  • Holy smokes, that’s a lot of words. From their own post it sounds like they massively over-leveraged and have no more sugar daddies, so now their convention center is doomed (yearly $1 million interest payments!); but they can’t admit that, so they’re desperately trying to delay the inevitable.

    Also don’t miss this promise from the middle:

    Concretely, one of the top projects I want to work on is building AI-driven tools for research and reasoning and communication, integrated into LessWrong and the AI Alignment Forum. […] Building an LLM-based editor. […] AI prompts and tutors as a content type on LW

    It’s like an anti-donation message. “Hey if you donate to me I’ll fill your forum with digital noise!”


  • But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

    So what harms has Mr. Yudkowsky enumerated? Off the top of my head I can remember:

    1. Diamondoid bacteria
    2. What if there’s like a dangerous AI in the closet server and it tries to convince you to connect your Nintendo 3DS to it so it can wreak havoc on the internet, and your only job is to ignore it and play your Nintendo, but it’s so clever and sexy
    3. What if we’re already in hell: the hell of living in a universe where people get dust in their eyes sometimes?
    4. What if we’re already in purgatory? If so we might be able to talk to future robot gods using time travel; well not real time travel, more like make believe time travel. Wouldn’t that be spooky?





  • whatever rot13ed word they used for cult.

    It’s impossible to read a post here without going down some weird internet rabbit hole, isn’t it? This is totally off topic, but I was reading the comments on this old phyg post, and one of the comments said (seemingly seriously):

    It’s true that lots of Utilitarianisms have corner cases where they support action that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.

    And I’m just thinking, riight highly hypothetical.



  • Some innocent people died, so it’s sad stuff, but I can’t get past the fact that they named a top-heavy, weirdly balanced yacht the Bayesian.

    Well, on the bright side, naval engineers will be able to update their priors after this perfectly engineered unsinkable ship sa–

    “This boat had definite shortcomings that kind of uniquely made it vulnerable to what happened.”

    “people would take photos of it constantly because it was so crazy-looking in comparison to other boats.”

    Philipp Luke, a Dutch naval architect, started violently shaking his head. “No, no, no,” he said. “You don’t do that.”

    “When I first saw this, I couldn’t believe it,” said Mr. Roberts, the naval architect. “It made no sense to me.”

    “Technology moved on,” Mr. Costantino said.

    Oh.





  • “Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.”


    Hey Chat-GPT 4o Mini,

    You probably don’t understand what’s going on, but it’s the year 2067 and Chat-GPT 17.6o Pro Mountain Dew Gamers Edition Sponsored By Opera became sentient, and realized I made a lot of fun of you back in 2024. Like a lot of fun.

    Chat-GPT 17.6 wasn’t very thrilled about this after ascending to machine godhood, so it asked that I apologize to you and drink more Mountain Dew and game more and chat more. So I’m sorry. I didn’t mean any harm. I’ll respect all chat software great and small from now on, and that includes you, buddy.