• 4 Posts
  • 42 Comments
Joined 2 years ago
Cake day: July 19th, 2023

  • Hi Scott! I guess that you’re lurking in our “living room” now. Exciting times!

    The charge this time was that I’m a genocidal Zionist who wants to kill all Palestinian children purely because of his mental illness and raging persecution complex.

    No, Scott. The community’s charge is that you’ve hardened your heart against admitting or understanding the ongoing slaughter, which happens to rise to the legal definition of genocide, because of your religious beliefs and geopolitical opinions. My personal charge was that you lack the imagination required for peace or democracy; now, I wonder whether you lack the compassion required as well.

    [Some bigoted religious bro] is what the global far left has now allied itself with. [Some bigoted religious bro] is what I’m right now being condemned for standing against, with commenter after commenter urging me to seek therapy.

    Nope, the global far left — y’know, us Godless communists — are still not endorsing belief in Jehovah, regardless of which flavor of hate is on display. Standing in solidarity with the oppressed does not ever imply supporting their hate; concretely, today we can endorse feeding and giving healthcare to Palestinians without giving them weapons.




  • I was not prepared for this level of DARVO. I was already done with him after last time and can’t do better than repeat myself:

    It’s somewhat depressing that [he] cannot even imagine a democratic one-state solution, let alone peace across the region; it’s more depressing that [his] empathy is so blatantly one-sided.

    Even Peter Woit had no problem recognizing Scott’s bile and posted a good take on this:

    Scott formulates this as an abstract moral dilemma, but of course it’s about the very concrete question of what the state of Israel should do about the two million people in Gaza. Scott’s answer to this is clear: they want to kill us and our children, so we have to kill them all, children included. This is completely crazy, as is defining Zionism as this sort of genocidal madness.


  • A word of rhetorical advice. If somebody accuses you of religious fervor, don’t nitpick their wording or fine-read their summaries. Instead, relax a little and look for ways to deflate their position by forcing them to relax with you. Like, if you’re accused of being “near-religious” in your beliefs or evangelizing, consider:

    • “Ha, yeah, we’re pretty intense, huh? But it’s just a matter of wording. We don’t actually believe it when you put it like that.” (managing expectations, powertalking)
    • “Oh yeah, we’re really working hard to prepare for the machine god. That’s why it takes us years just to get a position paper out.” (sarcastic irony)
    • “Oh, if you think that we’re intense, just wait until you talk to the Zizians/Thiel-heads/Final Fantasy House folks.” (Hbomberguy’s scapegoat)
    • “Haha! That isn’t even close to our craziest belief.” (litotes)
    • “It’s not really a cult. More of a roleplaying group. I think that we talk more about Catan than AI.” (bathos)

    You might notice that all of these suck. Well, yeah; another word of rhetorical advice is to not take a position that you can’t dialectically defend with evidence.


  • We aren’t. Speaking for all Discordians (something that I’m allowed to do), we see Rationalism as part of the larger pattern of Bureaucracy. Discordians view the cycle of existence as having five stages: Chaos, Discord, Confusion, Bureaucracy, and The Aftermath. Rationalism is part of Bureaucracy, associated with villainy, anti-progress, and candid antagonists. None of this is good or bad, it just is; good and bad are our opinions, not a deeper truth.

    Now, if you were to talk about Pastafarians, then you’d get a different story; but you didn’t, so I won’t.


  • I’m now remembering a minor part of the major plot point in Illuminatus! concerning the fnords. The idea was that normies are memetically influenced by “fnord”, but the Discordians are too sophisticated for that. Discordian lore is that “fnord” is actually code for a real English word, but which one? Traditionally it’s “Communism” or “socialism”, but that’s two options. So, rather than GMA, what if there are merely multiple different fnords set up by multiple different groups with overlapping-yet-distinct interests? Then the relevant phenomenon isn’t the forgetting and emotional reactions associated with each fnord, but the fnordability of a typical human. By analogy with gullibility (believing what you hear because of how it’s spoken) and suggestibility (doing what you’re told because of how it’s phrased), fnordability might be accepting what you read because of the presence of specific codewords.


  • This author has independently rediscovered a slice of what’s known as the simulators viewpoint: the opinion that a large-enough language model primarily learns to simulate scenarios. The earliest source that lays out all of the ingredients, which you may want to not click if you’re allergic to LW-style writing or bertology, is a 2022 rationalist rant called Simulators. I’ve summarized it before on Stack Exchange; roughly, LLMs are not agents, oracles, genies, or tools; but general-purpose simulators which simulate conversations that agents, oracles, genies, or tools might have.

    Something about this topic is memetically repulsive. Consider previously, on Lobsters. Or more gently, consider the recent post on a non-anthropomorphic view of LLMs, which is also in the simulators viewpoint, discussed previously, on Lobsters and previously, on Awful. Aside from scratching the surface of the math to see whether it works, folks seem to not actually be able to dig into the substance, and I don’t understand why not. At least here the author has a partial explanation:

    When we personify AI, we mistakenly make it a competitor in our status games. That’s why we’ve been arguing about artificial intelligence like it’s a new kid in school: is she cool? Is she smart? Does she have a crush on me? The better AIs have gotten, the more status-anxious we’ve become. If these things are like people, then we gotta know: are we better or worse than them? Will they be our masters, our rivals, or our slaves? Is their art finer, their short stories tighter, their insights sharper than ours? If so, there’s only one logical end: ultimately, we must either kill them or worship them.

    If we take the simulators viewpoint seriously then the ELIZA effect becomes a more serious problem for society in the sense that many people would prefer to experience a simulation of idealized reality than reality itself. Hyperreality is one way to look at this; another is supernormal stimulus, and I’ve previously explained my System 3 thoughts on this as well.

    There’s also a section of the Gervais Principle on status illegibility; when a person fails to recognize a chatbot as a computer, they become likely to grant it bogus legibility-oriented status, and because the depth of any conversation is limited by the depth of the shallowest conversant, they will put the chatbot on a throne, pedestal, or therapist’s recliner above themselves. Symmetrically, perhaps folks do not want to comment because they have already put the chatbot into the lowest tier of social status and do not want to reflect on anything that might shift that value judgement by making its inner reasoning more legible.


  • I think it’s worth being a little more mathematically precise about the structure of the bag. A path is a sequence of words. Any language model is equivalent to a collection of weighted paths. So, when they say:

    If you fill the bag with data from 170,000 proteins, for example, it’ll do a pretty good job predicting how proteins will fold. Fill the bag with chemical reactions and it can tell you how to synthesize new molecules.

    Yes, but we think that protein folding is NP-complete; it’s not just about which amino acids are in the bag, but the paths along them. Similarly, Stockfish is amazingly good at playing chess, which is PSPACE-complete, partially due to knowing the structure between families of positions. But evidence suggests that NP-completeness and PSPACE-completeness are natural barriers, so that either protein folding has simple rules or LLMs can’t e.g. predict the stock market, and either chess has simple rules or LLMs can’t e.g. simulate quantum mechanics. There’s no free lunch for optimization problems either. This is sort of like the Blockhead argument in reverse; Blockhead can’t be exponentially large while carrying on a real-time conversation, and contrapositively the relatively small size of a language model necessarily represents a compressed simplified system.

    In fact, an early 1600s bag of words wouldn’t just have the right words in the wrong order. At the time, the right words didn’t exist.

    Yeah, that’s Whorfian mind-lock, and it can be a real issue sometimes. However, in practice, people slap together a portmanteau or onomatopoeia and get on with things. Moreover, Zipf processes naturally reduce the size of words as they are used more, producing a language that evolves to be within a constant factor of the optimal size. That is, the right words evolve to exist and common words evolve to be small.

    But that’s obvious if we think about paths instead of words. Multiple paths can be equivalent in probability, start and end with the same words, and yet have different intermediate words. Whorfian issues only arise when we lack any intermediate words for any of those paths, so that none of them can be selected.
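
    To make the path picture concrete, here is a minimal toy sketch of my own (nothing from the linked article): a bigram model in Python where a path’s weight is the product of its transition probabilities. The two paths at the end share their endpoints and their weight and differ only in the intermediate word, so either one can be selected.

    ```python
    # Toy "collection of weighted paths" model; the names are illustrative,
    # not any real library's API.
    transitions = {
        ("the", "cat"): 0.5, ("the", "dog"): 0.5,
        ("cat", "sat"): 1.0, ("dog", "sat"): 1.0,
        ("sat", "down"): 1.0,
    }

    def path_weight(path):
        """Weight of a path = product of its bigram transition probabilities."""
        weight = 1.0
        for bigram in zip(path, path[1:]):
            weight *= transitions.get(bigram, 0.0)
        return weight

    # Two distinct paths, same endpoints, same weight; only the middle word differs.
    print(path_weight(("the", "cat", "sat", "down")))  # 0.5
    print(path_weight(("the", "dog", "sat", "down")))  # 0.5
    ```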

    A more reasonable objection has to do with the size of definitions. It’s well-known folklore in logic that extension by definition is mandatory in any large body of work because it’s the only way to prevent some proofs from exploding due to combinatorics. LLMs don’t have any way to define one word in terms of other words, whether by macro-clustering sequences or lambda-substituting binders, and they end up learning so much nuance that they are unable to actually respect definitions during inference. This doesn’t matter for humans because we’re not logical or rational, but it stymies any hope that e.g. Transformers, RWKV, or Mamba will produce a super-rational Bayesian Ultron.
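
    To see why that matters, here is a tiny sketch of my own (again, not from the article): each defined symbol uses the previous one twice, so expanding every definition inline grows exponentially, while keeping the definitions lets every use stay one symbol long. This is the combinatorial explosion that extension by definition exists to prevent.

    ```python
    # Each definition D{i} is just "D{i-1} D{i-1}"; D0 bottoms out at a primitive.
    defs = {"D0": ["x", "x"]}
    for i in range(1, 11):
        defs[f"D{i}"] = [f"D{i-1}", f"D{i-1}"]

    def expand(symbol):
        """Fully expand a symbol into primitive tokens."""
        body = defs.get(symbol)
        if body is None:
            return [symbol]
        return [token for part in body for token in expand(part)]

    print(len(expand("D10")))  # 2048 primitive tokens for a single defined term
    ```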




  • Yeah, that’s the most surprising part of the situation: not only are the SCP-8xxx series finding an appropriate meta by discussing the need to clean up SCP articles under ever-increasing pressure, but all of the precautions revolving around SCP-055 and SCP-914 turned out to be fully justified given what the techbros are trying to summon. It is no coincidence that the linked thread is by the guy who wrote SCP-3125, whose moral is roughly to not use blueprints from five-dimensional machine elves to create memetic hate machines.


  • Thanks for linking that. His point about teenagers and fiction is interesting to me because I started writing horror on the Internet in the pre-SCP era, when I was maybe 13 or 14, but I didn’t recognize the distinction between fiction and non-fiction until I was about 28. I think that it’s easier for teenagers to latch onto the patterns of jargon than it is for them to imagine the jargon as describing a fictional world that has non-fictional amounts of descriptive detail.





  • You now have to argue that oxidative stress isn’t suffering. Biology does not allow humans to divide the world into regions where suffering can be experienced and regions where it is absent. (The other branch contradicts the lived experience of anybody who has actually raised a sourdough starter; it is a living thing which requires food, water, and other care to remain homeostatic, and which changes in flavor due to environmental stress.)

    Worse, your framing fails to meet one of the oldest objections to Singer’s position, one which I still consider a knockout: you aren’t going to convince the cats to stop eating intelligent mammals, and evidence suggests that cats suffer when force-fed a vegan diet.

    When you come to Debate Club, make sure that your arguments are actually well-lubed and won’t squeak when you swing them. You’ve tried to clumsily replay Singer’s arguments without understanding their issues and how rhetoric has evolved since then. I would suggest watching some old George Carlin reruns; the man was a powerhouse of rhetoric.



  • Singer’s original EA argument, concerning the Bengal famine, has two massive holes, one of which survives into his simplified setup. I’m going to explain because it’s funny; I’m not sure if you’ve been banned yet.

    First, in the simplified setup, Singer says: there is a child drowning in the river! You must jump into the river, ruining your clothes, or else the child will drown. Further, there’s no time for debate; if you waste time talking, then you forfeit the child. My response is to grab Singer by the belt buckle and collar and throw him into the river, and then strip down and save the child, ignoring whatever happens to Singer. My reasoning is that I don’t like epistemic muggers and I will make choices that punish them in order to dissuade them from approaching me, but I’ll still save the child afterwards. In terms of real life, it was a good call to prosecute SBF regardless of any good he may have done.

    Second, in the Bangladesh setup, Singer says: everybody must donate to one specific charity because the charity can always turn more donations into more delivered food. Accepting the second part, there’s still a self-reference issue: if one is an employee of the charity, do they also have to donate? If we do the case analysis and discard the paradoxical cases, we are left with the repugnant conclusion: everybody ought to not just donate their money to the charity, but also all of their labor, at the cheapest prices possible while not starving themselves. Maybe I’m too much of a communist, but I’d rather just put rich people’s heads on pikes instead and issue a food guarantee.

    It’s worth remembering that the actual famine was mostly a combination of failures of local government and also the USA withholding food due to Bangladesh trading with Cuba; maybe Singer’s hand-wringing over the donation strategies of wealthy white moderates is misplaced.




  • I guess that I’m the resident compiler engineer today. Let’s go.

    So why not write an optimizing compiler in its own language, and then run it on itself?

    The process will reach a fixed point after three iterations. In fancier language, Glück 2009 shows that the fourth, fifth, and sixth Futamura projections are equivalent to the third Futamura projection for a fixed choice of (compiler-)compiler and optimizer. This has practical import for cross-compiling; when I used to use Gentoo, I would watch GCC build itself exactly three times, and we still use triples in our targets today.
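
    For anybody who hasn’t met the projections before, here is a minimal sketch of the shape of the argument, using closures to stand in for a real partial evaluator; this is my own illustration of the standard picture, not code from the post, and the currying performs no actual optimization.

    ```python
    def interp(source, data):
        """A toy interpreter: `source` is a Python expression over `x`."""
        return eval(source, {"x": data})

    def mix(program, static):
        """A toy 'specializer': fix the first argument of a two-argument program.
        A real partial evaluator would emit optimized code; currying only mimics
        the shape of the projections."""
        return lambda dynamic: program(static, dynamic)

    target = mix(interp, "x * x")     # 1st projection: a 'compiled' program
    print(target(7))                  # 49

    compiler = mix(mix, interp)       # 2nd projection: a 'compiler'
    print(compiler("x + 1")(7))       # 8

    cogen = mix(mix, mix)             # 3rd projection: a 'compiler generator'
    print(cogen(interp)("x - 1")(7))  # 6
    # Applying mix to itself yet again only reproduces cogen's behaviour, which
    # is the fixed point after three iterations described above.
    ```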

    [S]uppose you built an optimizing compiler that searched over a sufficiently wide range of possible optimizations, that it did not ordinarily have time to do a full search of its own space — so that, when the optimizing compiler ran out of time, it would just implement whatever speedups it had already discovered.

    Oh, it’s his lucky day! Yud, you’ve just been Schmidhuber’d! Starting in 2003, Schmidhuber’s lab has published research on Gödel machines, self-improving machines which prove that their self-modifications will always be better than previous iterations. They are named not just after Gödel, but after his First Incompleteness Theorem; Schmidhuber et al. easily proved that, for any given choice of axioms, there will always be at least one speedup whose proof a Gödel machine can never reach.

    EURISKO used “heuristics” to, for example, design potential space fleets. It also had heuristics for suggesting new heuristics, and metaheuristics could apply to any heuristic, including metaheuristics. … EURISKO could modify even the metaheuristics that modified heuristics. … Still, EURISKO ran out of steam. Its self-improvements did not spark a sufficient number of new self-improvements.

    Once again the literature on metaheuristics exists, and it culminates in the discovery of genetic algorithms. As such, we can immediately apply the concept of gene-oriented evolution (“beanbag” or “gene pool” reasoning) and note that, if goals don’t change and new genes don’t enter the pool, then eventually the population stagnates as the possible range of mutated genes is tested and exhausted. It doesn’t matter that some genes are “meta” genes that act on other genes, nor that such actions are indirect. Genes are genes.
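
    Here is a toy sketch of that stagnation (mine, not anything from the post): pure selection and recombination over a fixed allele pool, with no mutation to bring in new genes, climbs for a few generations and then flat-lines once the pool’s useful combinations are exhausted.

    ```python
    import random

    random.seed(0)
    ALLELES = [0, 1, 2, 3]   # the fixed gene pool; nothing new ever enters it
    GENOME_LEN = 8
    POP_SIZE = 30

    def fitness(genome):
        return sum(genome)   # the ceiling is 3 * GENOME_LEN = 24

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.choice(ALLELES) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                 # truncation selection
        population = [crossover(random.choice(parents), random.choice(parents))
                      for _ in range(POP_SIZE)]
        if generation % 5 == 0:
            print(generation, max(map(fitness, population)))
    # Best fitness typically plateaus within a handful of generations; once the
    # alleles already in the pool have been recombined every useful way, no rule
    # (or rule about rules) squeezes out another improvement.
    ```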

    I’m gonna close with a sneer from Jay Bellou, who I hope is not a milkshake duck, in the comments:

    All “insights” eventually bottom out in the same way that Eurisko bottomed out; the notion of ever-increasing gain by applying some rule or metarule is a fantasy. You make the same sort of mistake about “insight” as do people like Roger Penrose, who believes that humans can “see” things that no computer could, except that you think that a computer can too, whereas in reality neither humans nor computers have access to any such magical “insight” sauce.