

I think I figured it out.
He fed his post to AI and asked it to list the fictional universes he’d want to live in, and that’s how he got Dune. Precisely the information he needed, just as his post describes.
I am also presuming this is about purely non-fiction technical books
He has Dune on his list of worlds to live in, though…
edit: I know. He fed his post to AI and asked it to list the fictional universes he’d want to live in, and that’s how he got Dune. Precisely the information he needed.
Naturally, that system broke down (via capitalists grabbing the expensive fusion power plants for their own purposes)
This is kind of what I have to give Niven. The guy is a libertarian, but he would follow his story all the way into such results. And his series where organs are being harvested for minor crimes? It completely flew over my head that he was trying to criticize taxes, and not, say, Republican tough-on-crime politics, mass incarceration, and for-profit prisons. Because he followed the logic of the story, and it aligned naturally with its real-life counterpart, the for-profit prison system, even if he wanted to make some completely insane anti-tax argument where taxing rich people is like harvesting organs or something.
On the other hand, the much better regarded Heinlein, also a libertarian, would write up a moon base that exports organic carbon but where you have to pay for the oxygen you convert to CO2, just because he wanted a story inside of which “having to pay for air to breathe” works fine.
Maybe he didn’t read Dune, he just had AI summarize it.
Jesus Christ on a stick, that’s some thrice-cursed shit.
Maybe susceptibility runs in families, culturally. Religion does, for one thing.
I think this may also be a specific low-level exploit, whereby humans are already biased to mentally “model” anything as having agency (see all the sentient gods that humans invented for natural phenomena).
I was talking to an AI booster (ewww) in another place, and I think they really are predominantly laymen with brains fried by this shit. That particular one posted a convo where, out of 4 arithmetic operations, 2 were of the form “12042342 can be written as 120423 + 19, and 43542341 as 435423 + 18”, combined with AI word salad, and he was expecting that this would be convincing.
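For the record, neither of those “rewrites” even adds up; a two-line check (mine, obviously, not anything the booster ran):

```python
# Checking the chatbot's claimed decompositions from that convo.
print(120423 + 19, "== 12042342 ?", 120423 + 19 == 12042342)  # 120442 - off by two orders of magnitude
print(435423 + 18, "== 43542341 ?", 435423 + 18 == 43542341)  # 435441 - same story
```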
It’s not that this particular person thinks it’s a genius; he thinks that it is not a mere computer, and the way it is completely shit at math only serves to prove to him that it is not a mere computer.
edit: And of course they care not for any mechanistic explanations, because all of those imply LLMs are not sentient, and they believe LLMs are sentient. The “this isn’t it, but one day some very different system will be” counter-argument doesn’t help either.
Yeah, I think it’s almost undeniable that chatbots trigger some low-level brain thing. Eliza has a 27% Turing test pass rate. And long before that, humans attributed weather and random events to sentient gods.
This makes me think of Langford’s original BLIT short story.
And also of rove beetles that parasitize ant hives. These bugs are not ants, but they pass the Turing test for ants: they tap antennae with an ant, the handshake comes out correct, and they get identified as ants from this colony rather than unrelated bugs or ants from another colony.
I think it’s gotten to the point where it’s about as helpful to point out that it is just an autocomplete bot as it is to point out that “it’s just the rotor blades chopping sunlight” when a helicopter pilot is impaired by flicker vertigo and is gonna crash. Or, in the world of the BLIT short story, that it’s just some ink on a wall.
The human nervous system is incredibly robust compared to software, or to its counterpart in the fictional world of BLIT, or to shrimp mesmerized by cuttlefish.
And yet it has exploitable failure modes, and a corporation that is optimizing an LLM for various KPIs is a malign intelligence that is searching for a way to hack brains, this time with much better automated tooling and with a very large budget. One may even say a super-intelligence since it is throwing the combined efforts of many at the problem.
edit: that is to say, there has certainly been something weird going on at the psychological level ever since Eliza.
Yudkowsky is a dumbass layman posing as an expert, and he’s playing up his own old preconceived bullshit. But if he can get some of his audience away from the danger - even if he attributes a good chunk of the malevolence to a dumbass autocomplete to do so - that is not too terrible of a thing.
Chatbots ate my cult.
I wonder what’s gonna happen first, the bubble popping or Yudkowsky getting so fed up with gen AI he starts sneering.
It is as if there were people fantasizing, for some reason, about automaton mouths and lips and tongues and vocal cords, and coming up with all these fantasies of how it’ll be when automatons can talk.
And then Edison invents the phonograph.
And then they stick their you know what in the gearing between the cylinder and the screw.
Except somehow more stupid, because these guys are worried about AI apocalypse while boosting AI hype that pays for this supposed apocalypse.
edit: If someone had said in the 1850s, “automatons won’t be able to talk for another 150 years or longer because the vocal tract is too intricate”, and some automaton fetishist said they would be able to talk in 20 years, the phonograph shouldn’t lend any credence whatsoever to the latter. What is different this time is that the phonograph was genuinely extremely useful for what it was, while generative AI is not quite as useful and they’re going for the automaton-fetishist money.
is somewhere between 0 and 100%.
That really pins it down, doesn’t it?
I seriously doubt he ever worked anywhere like that, not to mention that he’s too spineless to actually get in trouble IRL.
He’s such a complete moron. He doesn’t want to recite “DEI shibboleths”? What does he even think that would refer to? Why shibboleths?
To spell it out, that would refer to the antisemitic theory that the reason (for example) some black guy would get a Medal of Honor (the “DEI medal”) is because of the Jews.
I swear this guy is dumber than Trump. Trump, for all his rambling, uses actual language - Trump understands what the shit he is saying means to his followers. Scott… he really does not.
I just describe it as “computer scientology, nowhere near as successful as the original”.
The other thing is that he’s a Thiel project, different from but not any saner than Curtis Yarvin, aka Moldbug. So if they’ve heard of Moldbug’s political theories (which increasingly many people have, because of, well, those theories being enacted), it’s easy to give a general picture of total fucking insanity funded by Thiel money. It doesn’t really matter what the particular insanity is, and it matters even less now that the AGI shit hit the mainstream entirely bypassing anything Yudkowsky had to say on the subject.
Frigging exactly. It’s a dumbass dead end that is fundamentally incapable of doing the vast majority of things ascribed to it.
They keep imagining that it would actually learn some underlying logic from a lot of text. All it can do is store a bunch of applications of said logic, as in a giant table. Deducing underlying rules instead of simply memorizing particular instances of them is a form of compression; there wasn’t much compression going on to begin with, and now that the models are so over-parametrized, there is even less.
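To put the memorize-vs-deduce point in toy form (my own example, nothing to do with how any actual model stores anything):

```python
# Two ways to "know" addition.

# 1. Memorize particular applications of the rule: a giant table that
#    grows with every example and knows nothing outside of it.
memorized = {(2, 3): 5, (10, 7): 17, (120423, 19): 120442}

# 2. Deduce the underlying rule: constant size, covers every case.
#    This is the compression that mostly isn't happening.
def add(a, b):
    return a + b

print(memorized.get((2, 3)))        # 5 - seen before
print(memorized.get((1234, 5678)))  # None - never memorized
print(add(1234, 5678))              # 6912 - the rule generalizes
```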
I tried the same prompt a lot of times and saw “chain of thought” attempts complete with the state modeling… they must be augmenting the training dataset with some sort of script-generated crap.
I have to say those are so far the absolute worst attempts.
Day 16 (Egg 3 on side A; Duck 1, Duck 2, Egg 1, Egg 2 on side B): Janet takes Egg 3 across the river.
“Now, all 2 ducks and 3 eggs are safely transported across the river in 16 trips.”
I kind of feel that this undermines the whole point of using the transformer architecture instead of a recurrent neural network. Machine learning sucks at recurrence.
Well I’ll be damned, they are trying to get it to work as if it had internal state, by augmenting their training dataset with some auto-generated crap. Of course, if machine learning weren’t shit at recurrence they wouldn’t be using the transformer architecture in the first place, so it doesn’t work.
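A toy contrast, with stand-in arithmetic of my own instead of anything resembling real weights, just to show where the state lives:

```python
def rnn_style(tokens):
    # A recurrent net carries a hidden state forward between tokens:
    # genuine internal state, but notoriously hard to train at length.
    state = 0
    for tok in tokens:
        state = (state * 31 + tok) % 997  # stand-in for the learned update
    return state

def transformer_style(tokens):
    # A transformer carries nothing forward between steps: each prediction
    # is a function of the visible context alone. Any state tracking has to
    # be spelled out in the text itself - hence the auto-generated
    # "Day N (X on side A; Y on side B)" crap in the training data.
    return sum((i + 1) * tok for i, tok in enumerate(tokens)) % 997

print(rnn_style([3, 1, 4]), transformer_style([3, 1, 4]))
```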
I got the following from chatgpt:
Janet’s ducks lay 3 eggs per day. Each day, she transports one duck and all the eggs to the other side of the river using a boat that can hold her and one other item (either a duck or an egg). How does she do that?

ChatGPT:
Janet can transport the ducks and eggs across the river using the following strategy:
Day 1 (Duck A, Duck B, Duck C on side A; Egg 1, Egg 2, Egg 3 on side A):
Janet takes Duck A across the river.
Day 2 (Duck B, Duck C on side A; Egg 1, Egg 2, Egg 3 on side A):
Janet takes Egg 1 across the river.
Day 3 (Duck B, Duck C on side A; Egg 2, Egg 3 on side A; Duck A on side B):
Janet takes Duck A back to side A.
Day 4 (Duck C on side A; Egg 2, Egg 3 on side A; Duck A on side A; Egg 1 on side B):
Janet takes Duck B across the river.
Day 5 (Duck C on side A; Egg 2, Egg 3 on side A; Duck A on side A; Egg 1, Duck B on side B):
Janet takes Egg 1 back to side A.
Day 6 (Duck C on side A; Egg 2, Egg 3 on side A; Duck A, Egg 1 on side A; Duck B on side B):
Janet takes Duck A across the river again.
Day 7 (Egg 2, Egg 3 on side A; Duck A, Egg 1 on side A; Duck B on side B; Duck C on side A):
Janet takes Duck C across the river.
Day 8 (Egg 2, Egg 3 on side A; Duck A, Egg 1, Duck C on side A; Duck B on side B):
Janet takes Egg 2 across the river.
Day 9 (Egg 3 on side A; Duck A, Egg 1, Duck C on side A; Duck B on side B; Egg 2 on side B):
Janet takes Duck A back to side A.
Day 10 (Egg 3 on side A; Duck C on side A; Duck A on side A; Duck B, Egg 1, Egg 2 on side B):
Janet takes Duck B across the river.
Day 11 (Egg 3 on side A; Duck C on side A; Duck A, Duck B on side A; Egg 1, Egg 2 on side B):
Janet takes Egg 1 back to side A.
Day 12 (Egg 3 on side A; Duck C on side A; Duck A, Egg 1, Duck B on side A; Egg 2 on side B):
Janet takes Duck A across the river again.
Day 13 (Duck C on side A; Duck A, Egg 1, Duck B on side A; Egg 2 on side B; Egg 3 on side B):
Janet takes Duck C across the river.
By following this sequence, Janet successfully transports all 3 ducks and all 3 eggs across the river in 13 trips. Each trip involves either transporting one duck or one egg, adhering to the constraint that Janet can only carry one item (duck or egg) with her in the boat.
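For what it’s worth, I replayed those 13 moves with a quick script of my own (just bookkeeping which side everything is on, using its duck/egg names); it flags several impossible moves, and Egg 1 and Egg 3 never make it across:

```python
# Replay of the 13 moves listed above. "across" = A -> B, "back" = B -> A.
moves = [
    ("Duck A", "across"), ("Egg 1", "across"), ("Duck A", "back"),
    ("Duck B", "across"), ("Egg 1", "back"),   ("Duck A", "across"),
    ("Duck C", "across"), ("Egg 2", "across"), ("Duck A", "back"),
    ("Duck B", "across"), ("Egg 1", "back"),   ("Duck A", "across"),
    ("Duck C", "across"),
]

side = {x: "A" for x in ("Duck A", "Duck B", "Duck C", "Egg 1", "Egg 2", "Egg 3")}
janet = "A"

for day, (item, direction) in enumerate(moves, start=1):
    start, end = ("A", "B") if direction == "across" else ("B", "A")
    if janet != start:
        print(f"Day {day}: Janet is on side {janet}; needs an unstated empty crossing")
        janet = start
    if side[item] != start:
        print(f"Day {day}: {item} is already on side {end}; move is impossible")
    side[item] = end
    janet = end

print("Final positions:", side)  # Egg 1 and Egg 3 are still on side A
```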
Perhaps it was nearly ready to emit a stop token after “the robot can take all 4 vegetables in one trip if it is allowed to carry all of them at once.”, but “However” won, and then after “However” it had to say something else, because that’s how “however” works…
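Roughly (with next-token probabilities I am completely making up for illustration), the trap looks like this:

```python
import random

# Hypothetical distribution right after the correct one-trip answer:
# stopping and "However" are both live options.
after_answer = {"<end-of-text>": 0.45, "However": 0.40, "So": 0.15}

# Hypothetical distribution once "However" has been emitted: stopping is
# effectively off the table, the promised contrast now has to be invented.
after_however = {"<end-of-text>": 0.01, ",": 0.60, "the": 0.39}

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()))[0]

tok = sample(after_answer)
print(tok if tok != "However" else "However " + sample(after_however) + " ...")
```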
Agreed on the style being absolutely nauseating. It wasn’t a very good style when humans were using it, but now it is just the style of absolute bottom-of-the-barrel, top-of-the-search-results garbage.
I don’t think we need to go as far as evopsych here… it may just be an artifact of modeling the environment at all - you learn to model other people as part of the environment, and you reuse models across people (some people are mean, some people are nice, etc.).
Then weather happens, and you got yourself a god of bad weather and a god of good weather, or perhaps a god of all weather who’s bipolar.
As far as language goes, it also works the other way: we overused these terms in application to computers, to the point that, in relation to computers, “thinking” no longer means it is actually thinking.