• 2 Posts
  • 47 Comments
Joined 2 years ago
Cake day: July 13th, 2023

  • I don’t think getting rid of elections would work. Dictatorships do not rely on election rigging alone. That’s just interventionist propaganda (barge in, set up elections, presto, democracy).

    Competent dictators don’t act anything like Trump. Once in power, they try to win the support of as wide a section of the population as possible. There’s no freedom of speech in a dictatorship; the dictator gives prepared speeches designed to bolster his support, unify the nation, and so on, not just having fun gloating at half the nation’s expense.

    If he actually tries to maintain power despite his relative unpopularity, the consequences will be utterly disastrous.


  • diz@awful.systems to SneerClub@awful.systems · Selfishness and Altruism

    This is what peak altruism looks like: being a lazy fuck with a cult, who incidentally happens to help hype up investment in the very unfriendly AI he’s supposed to be saving the world from. All while being too lazy to learn anything about any actual AI technologies.

    In all seriousness, all of his stuff is just extreme narcissism. Altruism is good, therefore he’s the most altruistic person in the world. Smart is good, therefore he’s the mostest smartest person. Their whole cult can be derived entirely from such self-serving axioms.


  • diz@awful.systems to SneerClub@awful.systems · 4Chan Unsong About NPCs

    Ironically, in a videogame someone like Musk would always be at most an NPC, and possibly not even that (just a set of old newspaper clippings, terminal entries in Fallout, etc). Yudkowsky would just be background story explaining some fucked up cult.

    This is because they are, ultimately, uninteresting to simulate - their lives are well documented and devoid of any genuine challenge (they get things by selection bias rather than by any effort - simulating them is like simulating a lottery winner rather than the lottery). They exist to set up the scene for something interesting.


  • I think the question of “general intelligence” is kind of a red herring. Evolution, for example, creates extremely complex organisms and behaviors, all without any “general intelligence” working towards some overarching goal.

    The other issue with Yudkowsky is that he’s an unimaginative fool whose only source of insights on the topic is science fiction, which he doesn’t even understand. There is no fun in having Skynet start a nuclear war and then itself perish in the aftermath, as the power plants it depends on cease working.

    Humanity itself doesn’t possess the kind of intelligence envisioned for “AGI”. When it comes to science and technology, we are an all-powerful hivemind. When it comes to deciding what to do with said science and technology, we are no more intelligent than an amoeba crawling along a gradient.


  • To argue by analogy, it’s not like getting an artificial feather exactly right was ever a bottleneck to developing air travel once we got the basics of aerodynamics down.

    I suspect that “artificial intelligence” may be a bit more like making an artificial bird that self-replicates, with computers and AI as they exist now being somewhere in between thrown rocks and gliders.

    We only ever “beat” biology by cheating: removing a core requirement, self-replication. An airplane factory that had to scavenge for all the rare elements involved in making a turbine would never fly. We have never actually beaten biology. A supersonic aircraft may be closer to a rock thrown off a cliff than to surpassing biology.

    That “cheat code” shouldn’t be expected to apply to Skynet or ASI or whatever, because Skynet is presumably capable of self-replication. It would be pretty odd if “ASI” were the first thing on which we actually beat biology.


  • The thing about the synapses-etc argument is that the hype crowd argues that perhaps the AI could wind up doing something much more effective than whatever-it-is-that-real-brains-do.

    If you look at capabilities, however, it is inarguable that “artificial neurons” seem intrinsically a lot less effective than real ones if we consider small animals (e.g. a jumping spider, a bee, or even a roundworm); some rough numbers are sketched at the end of this comment.

    It is a rather unusual situation. When it comes to things like e.g. converting chemical energy to mechanical energy, we did not have to fully understand and copy muscles to be able to build a steam engine that has higher mechanical power output than you could get out of an elephant. That was the case for arithmetic, too, and hence there was this expectation of imminent AI in the 1960s.

    I think it boils down to intelligence being a very specific thing evolved for a specific purpose, less like “moving underwater from point A to point B” (which submarine does pretty well) and more like “fish doing what fish do”. The submarine represents very little progress towards fishiness.
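
    Purely for scale, a rough back-of-envelope (the biological synapse counts are order-of-magnitude figures from the literature, the LLM parameter count is a ballpark assumption, and parameters are only a very loose analogue of synapses):

    ```python
    # Order-of-magnitude comparison: biological synapse counts vs. the
    # parameter count of a large language model. All figures are rough.
    synapse_counts = {
        "C. elegans (roundworm)": 7.5e3,  # complete connectome is mapped
        "honeybee":               1.0e9,  # ~1e6 neurons, ~1e9 synapses
    }
    llm_parameters = 1e11  # ballpark assumption for a current large model

    for animal, synapses in synapse_counts.items():
        ratio = llm_parameters / synapses
        print(f"{animal}: ~{synapses:.0e} synapses; "
              f"the model has ~{ratio:.0e}x more parameters")
    ```

    A model with a hundred times more parameters than a bee has synapses still can’t do what the bee does, which is the sense in which the artificial units look less effective.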


  • Yeah, a new form of apologism that I’ve started seeing online is “this isn’t a bubble! Nobody expects an AGI, it’s just Sam Altman; it will all pay off nicely from 20 million software developers worldwide spending a few grand a year each”.

    Which is next-level idiotic, besides the numbers just not adding up (back-of-envelope at the end of this comment). There’s only so much open source to plagiarize. It is a very niche activity! It’ll plateau, and then a few months later tiny single-GPU models will catch up to this river-boiling shit.

    The answer to that has always been the singularity bullshit, where the biggest models just keep staying ahead by such a large factor that nobody uses the small ones.
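
    Taking the apologists’ own figures at face value (the per-seat price and the capex number below are rough assumptions for illustration, not reported figures):

    ```python
    # Best case for the "20 million devs, a few grand a year" story.
    developers = 20_000_000      # claimed paying developers worldwide
    per_seat_usd = 2_000         # "a few grand a year", low end (assumed)

    revenue = developers * per_seat_usd
    print(f"best-case revenue: ${revenue / 1e9:.0f}B/year")  # $40B/year

    annual_buildout_usd = 300e9  # rough assumed AI capex per year
    print(f"revenue / capex: {revenue / annual_buildout_usd:.2f}")  # 0.13
    ```

    Even granting every claim, the best-case revenue doesn’t come close to covering the buildout, before the small models even catch up.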


  • Lol, I literally told these folks, something like 15 years ago, that paying to elevate a random nobody like Yudkowsky as the premier “AI risk” researcher - insofar as there is any AI risk - would only increase it.

    Boy did I end up more right about that than my most extreme imaginings. All the moron has accomplished in life is helping these guys raise cash with all his hype about how powerful the AI would be.

    The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.


  • To be entirely honest I don’t even like the arguments against EDT.

    Smoking lesion is hilarious. So there’s a lesion that is making people smoke. It is also giving them cancer in some unrelated way which we don’t know, trust me bro. Please bro, don’t leave this decision to the lesion, you gotta decide to smoke; it would be irrational to decide not to smoke if the lesion’s gonna make you smoke. Correlation is not causation, gotta smoke, bro.

    Obviously, in that dumb-ass hypothetical, the conditional probability is conditional on the decision, not on the lesion, while the smoking in the cancer cases is conditional on the lesion, not on the decision. If those two were indistinguishable, then the right decision would be not to smoke (toy simulation at the end of this comment). And more generally, adopting causal models without statistical data to back them up is called “being gullible”.

    The tobacco companies actually did manufacture the data, too; that’s where the “type-A personality” comes from.
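
    A toy simulation of the two readings of the hypothetical (every probability here is made up for illustration): if the “decision” is just the lesion acting, conditioning on the decision is conditioning on the lesion; if the decision is genuinely independent, the smoking-cancer correlation vanishes.

    ```python
    import random

    def trial(decision_follows_lesion: bool):
        """One person in the smoking-lesion world; numbers are made up."""
        lesion = random.random() < 0.3
        if decision_follows_lesion:
            smokes = lesion                  # the lesion makes the "decision"
        else:
            smokes = random.random() < 0.5   # decision independent of lesion
        cancer = lesion and random.random() < 0.8  # cancer from lesion only
        return smokes, cancer

    def p_cancer_given_smoking(decision_follows_lesion: bool, n=200_000):
        outcomes = [trial(decision_follows_lesion) for _ in range(n)]
        smokers = [cancer for smokes, cancer in outcomes if smokes]
        return sum(smokers) / len(smokers)

    print(p_cancer_given_smoking(True))   # ~0.80: "smoking" is the lesion
    print(p_cancer_given_smoking(False))  # ~0.24: correlation is gone
    ```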


  • Tbh, whenever I try to read anything on decision theory (even written by people other than rationalists), I end up wondering how they think a redundant autopilot (with majority vote) would ever work. In an airplane, that is.

    Considering just the physical consequences of a decision doesn’t work (unless there’s a fault, an individual unit’s consequences don’t make it through the voting electronics, so the alternative decisions evaluated for the no-fault scenario never make it through).

    Each one simulating the two or more other autopilots is scifi-brained idiocy. Requiring that the autopilots be exact copies is stupid (what if we had two different teams write different implementations? I think Airbus actually sort of did that).

    Nothing is going to be simulating anything, and, to make matters even worse for philosophers amateur and academic alike, the whole reason for redundancy is that sometimes there is a glitch that makes the units not compute the same values, so any attempt to be clever with “ha, we just treat copies as one thing” doesn’t cut it either (a minimal sketch of the voter is below).
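
    For what it’s worth, the voting itself is utterly mundane. A minimal sketch of a mid-value voter (hypothetical values, obviously nothing like real avionics code):

    ```python
    # Three channels compute the same command from their own sensor data.
    # The voter takes the median, so one faulty channel can never win, and
    # the channels never need to model (let alone simulate) each other.
    def vote(a: float, b: float, c: float) -> float:
        return sorted((a, b, c))[1]

    # Two healthy channels agree closely; channel c has glitched.
    print(vote(4.02, 3.98, 250.0))  # -> 4.02, the glitch is outvoted
    ```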



  • Embryo selection may just be the eugenicist’s equivalent of greenwashing.

    Eugenicists doing IVF is kind of funny, since it is a procedure that circumvents natural selection quite a bit, especially for the guys. It’s what, something like a billion to one for the sperm?

    Being into eugenics, if they’re doing IVF they need someone to tell them that they aren’t “worsening the species”, and embryo selection provides just that.

    edit: The worst part would be if people who don’t need IVF start doing IVF with embryo selection, expecting some sort of benefit for the offspring. With the American tendency to sell people unnecessary treatments and procedures, I can totally see that happening.


  • I think I have a real example: non-hierarchical (or, at least, less hierarchical) arrangements. Anarchy is equated with chaos.

    We ascribe a hierarchy to anything in nature: ants and other hymenoptera, and termites, have supposed “queens”; parent wolves are “alphas”; and so on. Fictional ant-like aliens have brain bugs, or cerebrates, or the like. Even the fucking zombies infected with a variant of the rabies virus get alphas somehow.

    Every effort has gone into twisting every view of reality and every fiction to align with the ideology.


  • I think it’s a mixture of it being cosplay and these folks being extreme believers in capitalism - in its inevitability and the impossibility of any alternative. They are all successful grifters, and they didn’t get there through scheming and clever deception; they got there through sincere beliefs that aligned with the party line.

    They don’t believe that anything can actually be done about this progression towards doom, just as much as they don’t properly believe in the doom.


  • Hmm, maybe that was premature - ChatGPT has history on by default now, so maybe that’s where it got the idea that it was a classic puzzle?

    With history off, it still sounds like it has the problem in the training dataset, but it is much more bizarre:

    https://markdownpastebin.com/?id=68b58bd1c4154789a493df964b3618f1

    Could also be randomness.

    Select snippet:

    Both ferrymen row their two boats across (time = D/v = 1/3 h).
    One ferryman (say A) swims back alone to the west bank (time = D/u = 1 h).
    That same ferryman (A) now rows the second boat back across (time = 1/3 h).
    Meanwhile, the other ferryman (B) has just been waiting on the east bank—but now both are on the east side, and both boats are there.
    

    I have to say, with history off it sounds like an even more ambitious moron. I think their history feature may be sort of freezing the bot’s behavior in time: the bot sees a lot of its own past outputs, and in the past it was a lot less into shitting LaTeX all over the place when doing a puzzle.


  • I don’t think we need to go as far as evopsych here… it may just be an artifact of modeling the environment at all - you learn to model other people as part of the environment, and you reuse models across people (some people are mean, some people are nice, etc).

    Then weather happens, and you’ve got yourself a god of bad weather and a god of good weather, or perhaps a god of all weather who’s bipolar.

    As far as language goes, it also works the other way: we overused these terms in application to computers, to the point that, in relation to computers, “thinking” no longer means actually thinking.