
  • I’ll gladly endorse most of what the author is saying.

    This isn’t really a debate club, and I’m not really trying to change your mind. I’ll just end on this note:

    I’ll start with the topline findings, as it were: I think the idea of a so-called “Artificial General Intelligence” is a pipe dream that does not realistically or plausibly extend from any currently existent computer technology. Indeed, my strong suspicion is that AGI is wholly impossible for computers as we presently understand them.

    Neither the author nor I really suggest that it is impossible for machines to think (indeed, humans are biological machines), only that it is likely (nothing so stark as inherent) that Turing machines cannot. “Computable” in the essay means something specific.

    Simulation != Simulacrum.

    And because I can’t resist, I’ll just clarify that when I said:

    Even if you (or anyone) can’t design a statistical test that can detect the difference in a sequence of heads or tails, that doesn’t mean one doesn’t exist.

    It means that the test does (or possibly can) exist; it’s just not achievable by humans. [Although I will also note that for methods that don’t rely on measuring the physical world (pseudo-random number generators), the tests designed by humans are more than adequate to discriminate the generated list from the real thing.]
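
    As a concrete illustration of what such a test can look like, here is a minimal Wald–Wolfowitz runs test in Python (a toy sketch of my own, not something from the essay; a decent PRNG will pass it, but it readily catches naive fakes such as over-alternating sequences):

    ```python
    import math

    def runs_test_z(bits):
        """z-score for the number of runs in a binary sequence.
        |z| well above ~2 suggests the sequence is not an i.i.d. fair coin."""
        n1 = sum(bits)               # count of heads (1s)
        n2 = len(bits) - n1          # count of tails (0s)
        runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
        expected = 2 * n1 * n2 / (n1 + n2) + 1
        variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
            (n1 + n2) ** 2 * (n1 + n2 - 1)
        )
        return (runs - expected) / math.sqrt(variance)

    # A "random-looking" sequence that alternates far too regularly:
    suspicious = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
    print(runs_test_z(suspicious))  # ~3.0: too many runs for a fair coin
    ```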


  • Even if true, why couldn’t the electrochemical processes be simulated too?

    • You’re missing the argument: even if you simulate the process of digestion perfectly, no actual digestion takes place in the real world.
    • Even if you simulate biological processes perfectly, no actual biology occurs.
    • The main argument from the author is that trying to divorce intelligence from biological imperatives can be very foolish, which is why they highlight that even a cat is smarter than an LLM.

    But even if it is, it’s “just” a matter of scale.

    • Fundamentally, what the author is saying is that it’s a difference in kind, not a difference in quantity.
    • Nothing actually guarantees that the laws of physics are computable, and nothing guarantees that our best model actually fits reality (aside from being a very good approximation).
    • Even numerically solving the Hamiltonians of quantum mechanics is extremely difficult in practice (see the sketch below).
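
    To make that last point concrete, here is a toy sketch of my own (Python/NumPy; nothing like it appears in the thread) of why exact numerics get out of hand: even a small spin chain needs a 2^n × 2^n matrix, so memory grows as 4^n.

    ```python
    import numpy as np
    from functools import reduce

    # Pauli matrices and identity for a single spin.
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    I2 = np.eye(2)

    def site_op(op, site, n):
        """Embed a single-site operator at position `site` in an n-spin chain."""
        ops = [I2] * n
        ops[site] = op
        return reduce(np.kron, ops)

    def ising_hamiltonian(n, J=1.0, h=0.5):
        """Dense transverse-field Ising Hamiltonian on n spins: 2**n x 2**n."""
        H = np.zeros((2**n, 2**n))
        for i in range(n - 1):  # nearest-neighbour ZZ coupling
            H -= J * site_op(sz, i, n) @ site_op(sz, i + 1, n)
        for i in range(n):      # transverse field in X
            H -= h * site_op(sx, i, n)
        return H

    H = ising_hamiltonian(8)         # already 256 x 256 for just 8 spins;
    print(np.linalg.eigvalsh(H)[0])  # n = 20 would need terabytes as doubles
    ```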

    I do know how to write a program that produces indistinguishable results from a real coin for a simulation.

    • Even if you (or anyone) can’t design a statistical test that can detect the difference in a sequence of heads or tails, that doesn’t mean one doesn’t exist.
    • Importantly, you are also restricting yourself to only the heads-or-tails sequence, ignoring the coin moving the air, pulling on the planet, and plopping back down into a hand. I challenge you to actually write a program that can achieve these things.
    • Also, decent random-number generation is, properly speaking, not actually achievable through pure computation [unless, again, you simulate physics; but then you still have to choose genuinely random starting conditions, even if you assume you have a capable simulator]. Modern computers use things like component temperature, execution timing, and user interaction to add “entropy” to random-number generation, not direct computation (see the sketch below).
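
    A minimal sketch of that split (Python, my own illustration): a seeded PRNG is pure, replayable computation, while the OS entropy pool mixes in physical measurements such as timing jitter and device noise.

    ```python
    import os
    import random

    # Deterministic PRNG: the same seed always replays the same "coin flips".
    prng = random.Random(42)
    flips_a = [prng.getrandbits(1) for _ in range(8)]
    prng = random.Random(42)
    flips_b = [prng.getrandbits(1) for _ in range(8)]
    assert flips_a == flips_b  # pure computation, fully reproducible

    # OS entropy pool: seeded from physical sources (device timings,
    # hardware noise), so there is no seed you can replay.
    flips_os = [byte & 1 for byte in os.urandom(8)]
    print(flips_a, flips_os)
    ```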

    In summary:

    • When reducing any problem to a “simpler” one, you have to be careful about what you ignore.
    • The simulation argument is a bit irrelevant; as a small aside, it is not guaranteed to be possible in principle, and it is certainly intractable with our current physical models/technology.
    • Human intelligence has a lot of externalities and cannot be reduced to pure “functional objects”.
      • If it were just about input/output, you could be fooled by a tape recorder and a simple filing system, but I think you’ll agree those aren’t intelligent. The output has meaning to you, but it doesn’t have meaning for the tape recorder.

  • That’s because there are absolutely reams of writing out there about Sonnet 18—it could draw from thousands of student essays and cheap study guides, which allowed it to remain at least vaguely coherent. But when forced away from a topic for which it has ample data to plagiarize, the illusion disintegrates.

    Indeed, any intelligence present is that of the pilfered commons, and that of the reader.

    I had the same thought about the few times LLMs appear to succeed at translation (where proper translation requires understanding). It’s not exactly doing nothing, but a lot of the work is done by the reader striving to make sense of what they read; because humans are clever, they can sometimes glimpse the meaning through the filter of AI mapping one set of words onto another, given enough context. (Until they really can’t, or the subtleties of language completely reverse the meaning when not handled with the proper care.)



  • My hunch would be that he has matured at least somewhat since then, but who knows.

    More broadly speaking, even when not analysing their own actions this way, they tend to characterize (in a very manosphere way) the actions of others as “status-seeking”, as the primary motivator for most actions. I would definitely call that a self-report.



  • We have:

    No more sycophancy—now the AI tells you what it believes. […] We get common knowledge, which recently seems like an endangered species.

    Followed by:

    We could also have different versions of articles optimized for different audiences. The question is, how many audiences, but I think that for most articles, two good options would be “for a 12 years old child” and “standard encyclopedia article”. Maybe further split the adult audience to “layman” and “expert”?

    You have got to love the consistency.

    And the accidentally (or not so accidentally?) imperialistic:

    The first idea is translation to languages other than English. Those languages often have fewer speakers, and consequently fewer Wikipedia volunteers. But for AI encyclopedia, volunteers are not a bottleneck. The easiest thing it could do is a 1:1 translation from the English version. But it could also add sources written in the other language, optimize the article for a different audience, etc.

    And there is also a deep misunderstanding of translation: there is no such thing as a 1:1 translation; it always requires re-interpretation.





  • I attempted a point-by-point sneer, but there is a bit too much silliness and not enough cohesion to produce something readable.

    So focusing on “Post-critique”:

    OP misspells the names of some of his “enemy” authors, in passages directly cribbed from Wikipedia, suggesting no real analysis.

    […], such texts included Ricouer’s Freud and Philosophy: An Essay on Interpretation, Wittgenstein’s Philosophical Investigations and On Certainty, Merleau-Ponty’s Phenomenology of Perception, Hannah Arendt’s The Human Condition, and Kierkegaard’s works […]

    Ricouer should be Ricœur, or at the very least Ricoeur. (Incidentally, OP also gives a very poor summary of his work.)

    Complete and arbitrary marriage of epistemic post-critique and literary post-critique, which as far as I can see have nothing to do with each other beyond sharing a name, and in fact even seem a bit at odds with each other in how they relate to recontextualisation.

    I would say this is obviously bot vomit, but I have known humans to be this lazy and thickheaded.



  • Oof on the part of the author though:

    Eliezer Yudkowsky: Nope.

    Algernoq (the blogpost author): I assume this is a “Nope, because of secret author evidence that justifies a one-word rebuttal” or a “Nope, you’re wrong in several ways but I have higher-value things to do than retype the sequences”. (Also, it’s an honor; I share your goal but take a different road.) […]

    Richard_Kennaway: What goal do you understand yourself to share with Eliezer, and what different road?

    Algernoq: I don’t deserve to be arrogant here, not having done anything yet. The goal: I had a sister once, and will do what I can to end death. The road: I’m working as an engineer (and, on reflection, failing to optimize) instead of working on existential risk-reduction. My vision is to build realistic (non-nanotech) self-replicating robots to brute-force the problem of inadequate science funding. I know enough mechanical engineering but am a few years away from knowing enough computer science to do this.