
  • We have:

    No more sycophancy—now the AI tells you what it believes. […] We get common knowledge, which recently seems like an endangered species.

    Followed by:

    We could also have different versions of articles optimized for different audiences. The question is, how many audiences, but I think that for most articles, two good options would be “for a 12 years old child” and “standard encyclopedia article”. Maybe further split the adult audience to “layman” and “expert”?

    You have got to love the consistency.

    And the accidentally (or not so accidentally?) imperialistic:

    The first idea is translation to languages other than English. Those languages often have fewer speakers, and consequently fewer Wikipedia volunteers. But for AI encyclopedia, volunteers are not a bottleneck. The easiest thing it could do is a 1:1 translation from the English version. But it could also add sources written in the other language, optimize the article for a different audience, etc.

    And also a deep misunderstanding of translation: there is no such thing as a 1:1 translation; it always requires re-interpretation.


  • I attempted a point-by-point sneer, but there is a bit too much silliness and not enough cohesion to produce something readable.

    So focusing on “Post-critique”:

    OP misspells the names of some of his “enemy” authors, in a way directly cribbed from Wikipedia, suggesting no real analysis.

    […], such texts included Ricouer’s Freud and Philosophy: An Essay on Interpretation, Wittgenstein’s Philosophical Investigations and On Certainty, Merleau-Ponty’s Phenomenology of Perception, Hannah Arendt’s The Human Condition, and Kierkegaard’s works […]

    Ricouer should be Ricœur or at the very least Ricoeur. (Incidentally, OP also gives a very poor summary of his work.)

    Complete and arbitrary marriage of epistemic post-critique and literary post-critique, which, as far as I can see, have nothing to do with each other beyond sharing a name, and in fact even seem a bit at odds with each other in how they relate to recontextualisation.

    I would say this is obviously bot vomit, but I have known humans to be this lazy and thickheaded.

  • Oof on the part of the author, though:

    Eliezer Yudkowsky: Nope.

    Algernoq (the blogpost author): I assume this is a “Nope, because of secret author evidence that justifies a one-word rebuttal” or a “Nope, you’re wrong in several ways but I have higher-value things to do than retype the sequences”. (Also, it’s an honor; I share your goal but take a different road.) […]

    Richard_Kennaway: What goal do you understand yourself to share with Eliezer, and what different road?

    Algernoq: I don’t deserve to be arrogant here, not having done anything yet. The goal: I had a sister once, and will do what I can to end death. The road: I’m working as an engineer (and, on reflection, failing to optimize) instead of working on existential risk-reduction. My vision is to build realistic (non-nanotech) self-replicating robots to brute-force the problem of inadequate science funding. I know enough mechanical engineering but am a few years away from knowing enough computer science to do this.