• 8 Posts
  • 53 Comments
Joined 3 years ago
Cake day: July 19th, 2023


  • Okay guys, I rolled my character. His name is Traveliezer Interdimensky and he has 18 INT (19 on skill checks, see my sheet.) He’s a breeding stud who can handle twenty women at once despite having only 10 STR and CON. I was thinking that we’d start with Interdimensky trapped in Hell where he’s forced to breed with all these beautiful women and get them pregnant, and the rest of the party is like outside or whatever, they don’t have to go rescue me, I mean rescue him. Anyway I wanted to numerically quantify how much Hell wants me, I mean him, to stay and breed all these beautiful women, because that’s something they’d totally do.


  • The author also proposes a framework for analyzing claims about generative AI. I don’t know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:

    • Lethality: the bots will kill us all
    • Inevitability: the bots are unstoppable and will definitely be created in the future
    • Exceptionalism: the bots are wholly unlike any past technology and we are unprepared to understand them
    • Superintelligent: the bots are better than people at thinking

    I would add to this a P, for Plausibility or Personhood or Personality: the incorrect claim that the bots are people. Maybe call it PILES.



  • Fundamentally, Chapman’s essay is about how subcultures transition from valuing function to valuing aesthetics. Subcultures start with form following function by necessity. However, people adopt the subculture because they like the surface appearance of those forms, so the subculture eventually hollows out into a system that follows the iron law of bureaucracy: non-functional due to over-investment in the façade and the tearing-down of Chesterton’s fences. Chapman’s not the only person to notice this pattern; other instances of it run the spectrum from right to left.

    I think that seeing this pattern is fine, but worrying about it makes one into Scott Alexander, paranoid about societal manipulation and constantly worrying about in-group and out-group status. We should note the pattern but stop endorsing instances of it which attach labels to people; after all, the pattern’s fundamentally about memes, not humans.

    So, on Chapman. I think that he’s a self-important nerd who reached criticality after binge-reading philosophy texts in graduate school. I could have sworn that this was accompanied by psychedelic drugs, but I can’t confirm or cite that, and I don’t think that we should underestimate the psychoactive effect of reading philosophy from the 1800s. In his own words:

    [T]he central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers Continental philosophy and social theory, realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints. That describes precisely two people in the real world: me, and my sometime-collaborator Phil Agre.

    He’s explicitly not allied with our good friends, but at the same time they move in the same intellectual circles. I’m familiar with that sort of frustration. Like, he rejects neoreaction by citing Scott Alexander’s rejection of neoreaction (source); that’s a somewhat-incoherent view suggesting that he’s politically naïve. His glossary for his eternally-unfinished Continental-style tome contains the following statement on Rationalism (embedded links and formatting removed):

    Rationalisms are ideologies that claim that there is some way of thinking that is the correct one, and you should always use it. Some rationalisms specifically identify which method is right and why. Others merely suppose there must be a single correct way to think, but admit we don’t know quite what it is; or they extol a vague principle like “the scientific method.” Rationalism is not the same thing as rationality, which refers to a nebulous collection of more-or-less formal ways of thinking and acting that work well for particular purposes in particular sorts of contexts.

    I don’t know. Sometimes he takes Yudkowsky seriously in order to critique him. (source, source) But the critiques are always very polite, no sneering. Maybe he’s really that sort of Alan Watts character who has transcended petty squabbles. Maybe he didn’t take enough LSD. I once was on LSD when I was at the office working all day; I saw the entire structure of the corporation, fully understood its purpose, and — unlike Chapman, apparently — came to the conclusion that it is bad. Similarly, when I look at Yudkowsky or Yarvin trying to do philosophy, I often see bad arguments and premises. Being judgemental here is kind of important for defending ourselves from a very real alt-right snowstorm of mystic bullshit.

    Okay, so in addition to the opening possibilities of being naïve and hiding his power level, I suggest that Chapman could be totally at peace or permanently rotated in five dimensions from drugs. I’ve gotta do five, so a fifth possibility is that he’s not writing for a human audience, but aiming to be crawled by LLM data-scrapers. Food for thought for this community: if you say something pseudo-profound near LessWrong then it is likely to be incorporated into LLM training data. I know of multiple other writers deliberately doing this sort of thing.


  • I don’t have any experience writing physics simulators myself…

    I think that this is your best path forward. Go simulate some rigid-body physics. Simulate genetics with genetic algorithms. Simulate chemistry with Petri nets. Simulate quantum computing. Simulate randomness with random-number generators. You’ll learn a lot about the limitations that arise at each step as we idealize the real world into equations that are simple enough to compute. Fundamentally, you’re proposing that Boltzmann brains are plausible, and the standard physics retort (quoting Carroll 2017, Why Boltzmann brains are bad) is that they “are cognitively unstable: they cannot simultaneously be true and justifiably believed.”
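    If it helps, the first of those exercises can be tiny. Here’s a minimal sketch of my own (the numbers and function are illustrative, not from any source): a point mass falling under gravity, integrated with semi-implicit Euler and compared against the closed-form answer, so the discretization error is visible.

    ```python
    # Toy rigid-body simulation: a point mass dropped from rest,
    # integrated with semi-implicit Euler. Comparing against the
    # closed-form fall time shows the discretization error that
    # appears as soon as we turn physics into update rules.
    def simulate_fall(height, dt, g=9.81):
        """Return the simulated time (s) to fall `height` meters."""
        y, v, t = height, 0.0, 0.0
        while y > 0:
            v += g * dt   # update velocity, then position (semi-implicit)
            y -= v * dt
            t += dt
        return t

    analytic = (2 * 100 / 9.81) ** 0.5   # t = sqrt(2h/g) for h = 100 m
    coarse = simulate_fall(100, dt=0.1)
    fine = simulate_fall(100, dt=0.0001)
    print(coarse, fine, analytic)   # the finer step lands closer
    ```

    Shrinking the timestep moves the simulated answer toward the analytic one; that gap is exactly the cost of idealizing the world into computable update rules.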

    A lesser path would be to keep going with consciousness and neuroscience. In that case, go read Hofstadter 2007, I Am a Strange Loop, to understand what it could possibly mean for a pattern to be substrate-independent.

    If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.

    No, you’re likely to suffer the ELIZA Effect. Previously, on Awful, I’ve explained what’s going on in terms of memes. If you want to read a sci-fi story instead, I’d recommend Watts’ Blindsight. You are overrating the phenomenon of intelligence.


  • I’m going to be a little indirect and poetic here.

    In Turing’s view, if a computer were to pass the Turing Test, the calculations it carried out in doing so would still constitute thought even if carried out by a clerk on a sheet of paper with no knowledge of how a teletype machine would translate them into text, or even by a distributed mass of clerks working in isolation from each other so that nothing resembling a thinking entity even exists.

    Yes. In Smullyan’s view, the acoustic patterns in the air would still constitute birdsong even if whistled by a human with no beak, or even by a vibrating electromagnetically-driven membrane which is located far from the data that it is playing back, so that nothing resembling a bird even exists. Or, in Aristoteles’ view, the syntactic relationship between sentences would still constitute syllogism even if attributed to a long-dead philosopher, or even verified by a distributed mass of mechanical provers so that no single prover ever localizes the entirety of the modus ponens. In all cases, the pattern is the representation; the arrangement which generates the pattern is merely a substrate.

    Consider the notion that thought is a biological process. It’s true that, if all of the atoms and cells comprising the organism can be mathematically modeled, a Turing Machine would then be able to simulate them. But it doesn’t follow from this that the Turing Machine would then generate thought. Consider the analogy of digestion. Sure, a Turing Machine could model every single molecule of a steak and calculate the precise ways in which it would move through and be broken down by a human digestive system. But all this could ever accomplish would be running a simulation of eating the steak. If you put an actual ribeye in front of a computer there is no amount of computational power that would allow the computer to actually eat and digest it.

    Putting an actual ribeye in front of a human, there is no amount of computational power that would allow the human to actually eat and digest it, either. The act of eating can’t be provoked merely by thought; there must be some sort of mechanical linkage between thoughts and the relevant parts of the body. Turing & Champernowne invented a program that plays chess and also were known (apocryphally, apparently) to play “run-around-the-house chess” or “Turing chess” which involved standing up and jogging for a lap in-between chess moves. The ability to play Turing chess is cognitively embodied but the ability to play chess is merely the ability to represent and manipulate certain patterns.

    At the end of the day what defines art is the existence of intention behind it — the fact that some consciousness experienced thoughts that it subsequently tried to communicate. Without that there’s simply lines on paper, splotches of color, and noise. At the risk of tautology, meaning exists because people mean things.

    Art is about the expression of memes within a medium; it is cultural propagation. Memes are not thoughts, though; the fact that some consciousness experienced and communicated memes is not a product of thought but a product of memetic evolution. The only other thing that art can carry is what carries it: the patterns which emerge from the encoding of the memes upon the medium.


  • He very much wants you to know that he knows that the Zizians are trans-coded and that he’s okay with that, he’s cool, he welcomes trans folks into Rationalism, he’s totally an ally, etc. How does he phrase that, exactly?

    That cult began among, and recruited from, a vulnerable subclass of a class of people who had earlier found tolerance and shelter in what calls itself the ‘rationalist’ community. I am not explicitly naming that class of people because the vast supermajority of them have not joined murder cults, and what other people do should not be their problem.

    I mean, yes in the abstract, but would it really be so hard to say that MIRI supports trans rights? What other people do, when those other people form a majority of a hateful society, is very much a problem for the trans community! So much for status signaling.


  • This is a list of apostates. The idea is not to actually detail the folks who do the most damage to the cult’s reputation, but to attack the few folks who were once members and left because they were no longer interested in being part of a cult. These attacks are usually motivated by emotions as much as a desire to maintain control over the rest of the cult; in all cases, the sentiment is that the apostate dared to defy leadership. Usually, attacks on apostates are backed up by some sort of enforcement mechanism, from calls for stochastic terrorism to accusations of criminality; here, there’s not actually a call to do anything external, possibly because Habryka realizes that the optics are bad but more likely because Habryka doesn’t really have much power beyond those places where he’s already an administrator. (That said, I would encourage everybody to become aware of, say, CoS’s Fair Game policy or Noisy Investigation policy to get an idea of what kinds of attacks could occur.)

    There are several prominent names that aren’t here. I’d guess that Habryka hasn’t been meditating over this list for a long time; it’s just the first few people that came to mind when he wrote this note. This is somewhat reassuring, as it suggests that he doesn’t fully understand how cultural critiques of LW affect the perception of LW more broadly; he doesn’t realize how many people e.g. Breadtube reaches. Also, he doesn’t understand that folks like SBF and Yarvin do immense reputational damage to rationalist-adjacent projects, although he seems to understand that the main issue with Zizians is not that they are Cringe but that they have been accused of multiple violent felonies.

    Not many sneers to choose from, but I think one commenter gets it right:

    In other groups with which I’m familiar, you would kick out people you think are actually a danger, or who you think might do something that brings your group into disrepute. But otherwise, I think it’s a sign of being a cult if you kick people out for not going along with the group dogma.





  • Here’s a few examples of scientifically-evidenced concepts that provoke Whorfian mind-lock, where people are so attached to existing semantics that they cannot learn new concepts. If not even 60% of folks get a concept, then plenty of people within one standard deviation of average don’t get it either.

    • There are four temporal tenses in a relativistic setting, not three. “Whorfian mind-lock” was originally coined during a discussion where a logician begs an astrophysicist to understand relativity. Practically nobody accepts this at first, to the point where there aren’t English words for discussing or using the fourth tense.
    • Physical reality is neither objective nor subjective, but contextual (WP, nLab) or participatory. For context, at most about 6-7% of philosophers believed this in a 2020 survey. A friend-of-community physicist recently missed this one too; it’s known to be a very subtle point despite its bluntness.
    • Classical logic is not physically realizable (WP, nLab) and thus not the ultimate tool for all deductive work. This one fares much better, at most around 45% of philosophers in the same 2020 survey.

    @[email protected] Please reconsider the use of “100IQ smoothbrain” as a descriptor. 100IQ is average, assuming IQ is not bogus. (Also if IQ is not bogus then please y’all get the fuck off my 160+IQ lawn pollinator’s & kitchen garden.)


  • It’s important to understand that the book’s premise is fairly hollow. Yudkowsky’s rhetoric really only gets going once we agree that (1) intelligence is comparable, (2) humans have a lot of intelligence, (3) AGIs can exist, (4) AGIs can be more intelligent than humans, and finally (5) an AGI can exist which has more intelligence than any human. They conclude from those premises that AGIs can command and control humans with their intelligence.

    However, what if we analogize AGIs and humans to humans and housecats? Cats have a lot of intelligence, humans can exist, humans can be more intelligent than housecats, and many folks might believe that there is a human who is more intelligent than any housecat. Assuming intelligence is comparable, does it follow that that human can command and control any housecat? Nope, not in the least. Cats often ignore humans; moreover, they appear to be able to choose to ignore humans. This is in spite of the fact that cats appear to have some sort of empathy for humans and perceive us as large slow unintuitive cats. A traditional example in philosophy is to imagine that Stephen Hawking owns a housecat; since Hawking is incredibly smart and capable of spoken words, does it follow that Hawking is capable of e.g. talking the cat into climbing into a cat carrier? (Aside: I recall seeing this example in one of Sean Carroll’s papers, but it’s also popularized by Cegłowski’s 2016 talk on superintelligence. I’m not sure who originated it, but I’d be unsurprised if it were Hawking himself; he had that sort of humor.)


  • [omitted a paragraph psychoanalyzing Scott]

    I don’t think that he was trying to make a threat. I think that he was trying to explain the difficulties of being a cryptofascist! Scott’s entire grey-tribe persona collapses if he ever draws a solid conclusion; he would lose his audience if he shifted from cryptofascism to outright ethnonationalism because there are about twice as many moderates as fascists. Scott’s grift only continues if he is skeptical and nuanced about HBD; being an open believer would turn off folks who are willing to read words but not to be hateful. His “appreciat[ion]” is wholly for his brand and revenue streams.

    This also contextualizes the “revenge”. If another content creator publishes these emails as part of their content then Scott has to decide how to fight the allegations. If the content is well-sourced mass-media journalism then Scott “leave[s] the Internet” by deleting and renaming his blog. If the content is another alt-right crab in the bucket then Scott “seek[s] some sort of horrible revenge” by attacking the rest of the alt-right as illiterate, lacking nuance, and unable to cite studies. No wonder he doesn’t talk about us or to us; we’re not part of his media strategy, so he doesn’t know what to do about us.

    In this sense, we’re moderates too; none of us are hunting down Scott IRL. But that moderation is necessary in order to have the discussion in the first place.


  • Hi Scott! I guess that you’re lurking in our “living room” now. Exciting times!

    The charge this time was that I’m a genocidal Zionist who wants to kill all Palestinian children purely because of his mental illness and raging persecution complex.

    No, Scott. The community’s charge is that you’ve hardened your heart against admitting or understanding the ongoing slaughter, which happens to rise to the legal definition of genocide, because of your religious beliefs and geopolitical opinions. My personal charge was that you lack the imagination required for peace or democracy; now, I wonder whether you lack the compassion required as well.

    [Some bigoted religious bro] is what the global far left has now allied itself with. [Some bigoted religious bro] is what I’m right now being condemned for standing against, with commenter after commenter urging me to seek therapy.

    Nope, the global far left — y’know, us Godless communists — are still not endorsing belief in Jehovah, regardless of which flavor of hate is on display. Standing in solidarity with the oppressed does not ever imply supporting their hate; concretely, today we can endorse feeding and giving healthcare to Palestinians without giving them weapons.




  • I was not prepared for this level of DARVO. I was already done with him after last time and can’t do better than repeat myself:

    It’s somewhat depressing that [he] cannot even imagine a democratic one-state solution, let alone peace across the region; it’s more depressing that [his] empathy is so blatantly one-sided.

    Even Peter Woit had no problem recognizing Scott’s bile and posted a good take on this:

    Scott formulates this as an abstract moral dilemma, but of course it’s about the very concrete question of what the state of Israel should do about the two million people in Gaza. Scott’s answer to this is clear: they want to kill us and our children, so we have to kill them all, children included. This is completely crazy, as is defining Zionism as this sort of genocidal madness.


  • A word of rhetorical advice. If somebody accuses you of religious fervor, don’t nitpick their wording or fine-read their summaries. Instead, relax a little and look for ways to deflate their position by forcing them to relax with you. Like, if you’re accused of being “near-religious” in your beliefs or evangelizing, consider:

    • “Ha, yeah, we’re pretty intense, huh? But it’s just a matter of wording. We don’t actually believe it when you put it like that.” (managing expectations, powertalking)
    • “Oh yeah, we’re really working hard to prepare for the machine god. That’s why it takes us years just to get a position paper out.” (sarcastic irony)
    • “Oh, if you think that we’re intense, just wait until you talk to the Zizians/Thiel-heads/Final Fantasy House folks.” (Hbomberguy’s scapegoat)
    • “Haha! That isn’t even close to our craziest belief.” (litotes)
    • “It’s not really a cult. More of a roleplaying group. I think that we talk more about Catan than AI.” (bathos)

    You might notice that all of these suck. Well, yeah; another word of rhetorical advice is to not take a position that you can’t dialectically defend with evidence.


  • We aren’t. Speaking for all Discordians (something that I’m allowed to do), we see Rationalism as part of the larger pattern of Bureaucracy. Discordians view the cycle of existence as having five stages: Chaos, Discord, Confusion, Bureaucracy, and The Aftermath. Rationalism is part of Bureaucracy, associated with villainy, anti-progress, and candid antagonists. None of this is good or bad, it just is; good and bad are our opinions, not a deeper truth.

    Now, if you were to talk about Pastafarians, then you’d get a different story; but you didn’t, so I won’t.


  • I’m now remembering a minor part of the major plot point in Illuminatus! concerning the fnords. The idea was that normies are memetically influenced by “fnord” but the Discordians are too sophisticated for that. Discordian lore is that “fnord” is actually code for a real English word, but which one? Traditionally it’s “Communism” or “socialism”, but that’s two options. So, rather than GMA, what if there’s merely multiple different fnords set up by multiple different groups with overlapping-yet-distinct interests? Then the relevant phenomenon isn’t the forgetting and emotional reactions associated with each fnord, but the fnordability of a typical human. By analogy with gullibility (believing what you hear because of how it’s spoken) and suggestibility (doing what you’re told because of how it’s phrased), fnordability might be accepting what you read because of the presence of specific codewords.


  • This author has independently rediscovered a slice of what’s known as the simulators viewpoint: the opinion that a large-enough language model primarily learns to simulate scenarios. The earliest source that lays out all of the ingredients, which you may want to not click if you’re allergic to LW-style writing or bertology, is a 2022 rationalist rant called Simulators. I’ve summarized it before on Stack Exchange; roughly, LLMs are not agents, oracles, genies, or tools; but general-purpose simulators which simulate conversations that agents, oracles, genies, or tools might have.

    Something about this topic is memetically repulsive. Consider previously, on Lobsters. Or more gently, consider the recent post on a non-anthropomorphic view of LLMs, which is also in the simulators viewpoint, discussed previously, on Lobsters and previously, on Awful. Aside from scratching the surface of the math to see whether it works, folks seem to not actually be able to dig into the substance, and I don’t understand why not. At least here the author has a partial explanation:

    When we personify AI, we mistakenly make it a competitor in our status games. That’s why we’ve been arguing about artificial intelligence like it’s a new kid in school: is she cool? Is she smart? Does she have a crush on me? The better AIs have gotten, the more status-anxious we’ve become. If these things are like people, then we gotta know: are we better or worse than them? Will they be our masters, our rivals, or our slaves? Is their art finer, their short stories tighter, their insights sharper than ours? If so, there’s only one logical end: ultimately, we must either kill them or worship them.

    If we take the simulators viewpoint seriously then the ELIZA effect becomes a more serious problem for society in the sense that many people would prefer to experience a simulation of idealized reality than reality itself. Hyperreality is one way to look at this; another is supernormal stimulus, and I’ve previously explained my System 3 thoughts on this as well.

    There’s also a section of the Gervais Principle on status illegibility; when a person fails to recognize a chatbot as a computer, they become likely to give them bogus legibility-oriented status, and because the depth of any conversation is limited by the depth of the shallowest conversant, they will put the chatbot on a throne, pedestal, or therapist’s recliner above themselves. Symmetrically, perhaps folks do not want to comment because they have already put the chatbot into the lowest tier of social status and do not want to reflect on anything that might shift that value judgement by making its inner reasoning more legible.


  • I think it’s worth being a little more mathematically precise about the structure of the bag. A path is a sequence of words. Any language model is equivalent to a collection of weighted paths. So, when they say:

    If you fill the bag with data from 170,000 proteins, for example, it’ll do a pretty good job predicting how proteins will fold. Fill the bag with chemical reactions and it can tell you how to synthesize new molecules.

    Yes, but we think that protein folding is NP-complete; it’s not just about which amino acids are in the bag, but the paths along them. Similarly, Stockfish is amazingly good at playing chess, which is PSPACE-complete, partially due to knowing the structure between families of positions. But evidence suggests that NP-completeness and PSPACE-completeness are natural barriers, so that either protein folding has simple rules or LLMs can’t e.g. predict the stock market, and either chess has simple rules or LLMs can’t e.g. simulate quantum mechanics. There’s no free lunch for optimization problems either. This is sort of like the Blockhead argument in reverse; Blockhead can’t be exponentially large while carrying on a real-time conversation, and contrapositively the relatively small size of a language model necessarily represents a compressed simplified system.
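    To make “weighted paths” concrete, here’s a toy bigram model of my own devising (a drastic simplification; real LLMs share only the shape of the idea): each sentence is a path through word-pairs, and its score is the product of edge weights learned from a tiny corpus.

    ```python
    from collections import defaultdict

    # Toy "weighted paths" model: bigram counts over a tiny corpus.
    # A sentence is a path through words; its probability is the
    # product of conditional edge weights along that path.
    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "the cat chased the dog",
    ]

    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1

    def path_probability(sentence):
        """Product of bigram edge weights along the sentence's path."""
        p = 1.0
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            total = sum(counts[a].values())
            p *= counts[a][b] / total if total else 0.0
        return p

    print(path_probability("the cat sat on the mat"))    # a seen path
    print(path_probability("mat the on sat cat the"))    # right words, wrong order
    ```

    A path the weights have seen scores above zero; the same words in an unseen order score zero. It’s the paths, not the contents of the bag, that carry the structure.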

    In fact, an early 1600s bag of words wouldn’t just have the right words in the wrong order. At the time, the right words didn’t exist.

    Yeah, that’s Whorfian mind-lock, and it can be a real issue sometimes. However, in practice, people slap together a portmanteau or onomatopoeia and get on with the practice of things. Moreover, Zipf processes naturally reduce the size of words as they are used more, producing a language that is naturally evolved to be within a constant factor of the optimal size. That is, the right words evolve to exist and common words evolve to be small.

    But that’s obvious if we think about paths instead of words. Multiple paths can be equivalent in probability, start and end with the same words, and yet have different intermediate words. Whorfian issues only arise when we lack any intermediate words for any of those paths, so that none of them can be selected.
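    The Zipf claim (common words evolving to be small, within a constant factor of optimal) is easy to sanity-check numerically. Here’s a toy sketch of my own, assuming an idealized Zipf distribution and Shannon-style code lengths rather than real language data: give each word a length of ⌈−log₂ p⌉ bits and the expected length lands within one bit of the entropy bound, the optimum for any encoding.

    ```python
    import math

    # Sanity check: under a Zipf distribution, giving frequent words
    # short codes (Shannon lengths, ceil(-log2 p)) keeps the expected
    # word length within one bit of the entropy lower bound.
    N = 10_000
    weights = [1 / r for r in range(1, N + 1)]   # Zipf: p(rank) ∝ 1/rank
    Z = sum(weights)
    probs = [w / Z for w in weights]

    entropy = -sum(p * math.log2(p) for p in probs)            # optimum
    expected_len = sum(p * math.ceil(-math.log2(p)) for p in probs)

    print(entropy, expected_len)   # expected_len < entropy + 1
    ```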

    A more reasonable objection has to do with the size of definitions. It’s well-known folklore in logic that extension by definition is mandatory in any large body of work because it’s the only way to prevent some proofs from exploding due to combinatorics. LLMs don’t have any way to define one word in terms of other words, whether by macro-clustering sequences or lambda-substituting binders, and they end up learning so much nuance that they are unable to actually respect definitions during inference. This doesn’t matter for humans because we’re not logical or rational, but it stymies any hope that e.g. Transformers, RWKV, or Mamba will produce a super-rational Bayesian Ultron.