“Ah,” said Arthur, “this is obviously some strange usage of the word scientist that I wasn’t previously aware of.”
Resolved: that people still active on Twitter are presumed morally bankrupt until proven otherwise.
Since I don’t think that one professor’s uploads can furnish hundreds of billions of tokens… yeah, that sounds exceedingly implausible.
Ah yes, the FRAMΞN, desert warriors of the planet DUNC·.
With significant human input and thorough human review of the material
Yeah, there’s no way I can make that any funnier than it already is. Except maybe by calling up a fond memory of rat dck pcks.
Fun Blake fact: I was one bureaucratic technicality away from getting a literature minor to go along with my physics major. I didn’t plan for that; we had a Byzantine set of course requirements that we had to meet by mixing and matching whatever electives were available, and somehow, the electives I took piled up to be almost enough for a lit minor. I would have had to take one more course on material written before some cutoff year — I think it was 1900 — but other than that, I had all the checkmarks. I probably could have argued my way to an exemption, since my professors liked me and the department would have gotten their numbers that little bit higher, but I didn’t discover this until spring semester of my senior year, when I was already both incredibly busy and incredibly tired.
From page 17:
Rather than encouraging critical thinking, in core EA the injunction to take unusual ideas seriously means taking one very specific set of unusual ideas seriously, and then providing increasingly convoluted philosophical justifications for why those particular ideas matter most.
ding ding ding
Abstract: This paper presents some of the initial empirical findings from a larger forthcoming study about Effective Altruism (EA). The purpose of presenting these findings disarticulated from the main study is to address a common misunderstanding in the public and academic consciousness about EA, recently pushed to the fore with the publication of EA movement co-founder Will MacAskill’s latest book, What We Owe the Future (WWOTF). Most people in the general public, media, and academia believe EA focuses on reducing global poverty through effective giving, and are struggling to understand EA’s seemingly sudden embrace of ‘longtermism’, futurism, artificial intelligence (AI), biotechnology, and ‘x-risk’ reduction. However, this agenda has been present in EA since its inception, where it was hidden in plain sight. From the very beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk, now lumped under the banner of ‘longtermism’). The article’s aim is narrowly focused on presenting rich qualitative data to make legible the distinction between public-facing EA and core EA.
From the linked Andrew Molitor item:
Why Extropic insists on talking about thermodynamics at all is a mystery, especially since “thermodynamic computing” is an established term that means something quite different from what Extropic is trying to do. This is one of several red flags.
I have a feeling this is related to wanking about physics in the e/acc holy gospels. They invoke thermodynamics the way that people trying to sell you healing crystals for your chakras invoke quantum mechanics.
They take a theory that is supposed to be about updating one’s beliefs in the face of new evidence, and they use it as an excuse to never change what they think.
“Consider it from the perspective of someone who does not exist and therefore has no preferences. Who would they pick?”
I think that in this particular instance, it’s OK to kinkshame
Superficially, it looks like he’s making a testable prediction. But that “prediction” is a number from a bullshit calculation (or maybe two or three different, mutually inconsistent bullshit calculations — it’s hard to be sure). So if someone wasted their time and did the experiment, he’d handwave away the null result by fiddling the input bullshit.
I will try to have some more comments about the physics when I have time and energy. In the meanwhile:
Entropy in thermodynamics is not actually a hard concept. It’s the ratio of the size of a heat flow to the temperature at which that flow is happening. (So, joules per kelvin, if you’re using SI units.) See episodes 46 and 47 of The Mechanical Universe for the old-school PBS treatment of the story. The last time I taught thermodynamics for undergraduates, we used Finn’s Thermal Physics, for the sophisticated reason that the previous professor used Finn’s Thermal Physics.
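Spelled out as a formula, that’s just the Clausius definition (standard textbook material, nothing fancier than what’s in Finn):

```latex
\mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}
```

i.e., a reversible heat flow Q at constant temperature T changes the entropy by Q/T.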
Entropy in information theory is also not actually that hard of a concept. It’s a numerical measure of how spread-out a probability distribution is.
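If you want to poke at that definition yourself, here is a minimal Python sketch (mine, not from anything linked above):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A sharply peaked distribution has low entropy...
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
# ...and a maximally spread-out (uniform) one has the most.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
```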
It’s relating the two meanings that is tricky and subtle. The big picture is something like this: A microstate is a complete specification of the positions and momenta of all the pieces of a system. We can consider a probability distribution over all the possible microstates, and then do information theory to that. This bridges the two definitions, if we are very careful about it. One thing that trips people up (particularly if they got poisoned by pop-science oversimplifications about “disorder” first) is forgetting the momentum part. We have to consider probabilities, not just for where the pieces are, but also for how they are moving. I suspect that this is among Vopson’s many problems. Either he doesn’t get it, or he’s not capable of writing clearly enough to explain it.
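For the record, the careful version of the bridge is the Gibbs entropy, where the sum runs over complete microstates (positions and momenta both):

```latex
S = -k_B \sum_i p_i \ln p_i
```

which is just the information-theoretic entropy of the probability distribution over microstates, rescaled by Boltzmann’s constant and a change of logarithm base.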
So these two were published in AIP Advances, from the American Institute of Physics, which looks like a serious journal about physics. Does anyone know about it? It occupies a space where I can’t easily find any obvious issues, but I also can’t find anyone saying “ye this is legit”. It claims to be peer-reviewed, and at least isn’t just a place where you dump a PDF and get a DOI in return.
I have never heard of anything important being published there. I think it’s the kind of journal where one submits a paper after it has been rejected by one’s first and second (and possibly third) choices.
However, after skimming, I can at least say that it doesn’t seem outlandish?
Oh, it’s worse than “outlandish”. It’s nonsensical. He’s basically operating at a level of “there’s an E in this formula and an E in this other formula, so I will set them equal and declare it revolutionary new physics”.
Here’s a passage from the second paragraph of the 2023 paper:
The physical entropy of a given system is a measure of all its possible physical microstates compatible with the macrostate, S_Phys. This is a characteristic of the non-information bearing microstates within the system. Assuming the same system, and assuming that one is able to create N information states within the same physical system (for example, by writing digital bits in it), the effect of creating a number of N information states is to form N additional information microstates superimposed onto the existing physical microstates. These additional microstates are information bearing states, and the additional entropy associated with them is called the entropy of information, S_Info. We can now define the total entropy of the system as the sum of the initial physical entropy and the newly created entropy of information, S_tot = S_Phys + S_Info, showing that the information creation increases the entropy of a given system.
wat
Storing a message in a system doesn’t make new microstates. How could it? You’re just rearranging the pieces to spell out a message — selecting those microstates that are consistent with that message. Choosing from a list of available options doesn’t magically add new options to the list.
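A toy version of the counting argument, in case it helps (my illustration, emphatically not Vopson’s math):

```python
from itertools import product

# Every microstate of a 3-bit register: fixed by the physics, 2**3 = 8 of them.
all_states = list(product([0, 1], repeat=3))

# "Writing" the message 101 selects the microstates consistent with it.
message = (1, 0, 1)
consistent = [s for s in all_states if s == message]

print(len(all_states))  # 8 options before writing
print(len(consistent))  # 1 option after writing -- fewer, not more
```

Imposing a message only ever shrinks the set of compatible microstates; nothing new gets “superimposed”.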
Downvoting because you are a dorkus
And via the links at the bottom of that page, we learn that drugs are going to need a major makeover to become cool again.
… “Coming of Age” also, oddly, describes another form of novel cognitive dissonance; encountering people who did not think Eliezer was the most intelligent person they had ever met, and then, more shocking yet, personally encountering people who seemed possibly more intelligent than himself.
The latter link is to “Competent Elites”, a.k.a., “Yud fails to recognize that cocaine is a helluva drug”.
I’ve met Jurvetson a few times. After the first I texted a friend: “Every other time I’ve met a VC I walked away thinking ‘Wow, I and all my friends are smarter than you.’ This time it was ‘Wow, you are smarter than me and all my friends.’”
Uh-huh.
Quick, to the Bat-Wikipedia:
On November 13, 2017, Jurvetson stepped down from his role at DFJ Venture Capital in addition to taking leave from the boards of SpaceX and Tesla following an internal DFJ investigation into allegations of sexual harassment.
Not smart enough to keep his dick in his pants, apparently.
Then, from 2006 to 2009, in what can be interpreted as an attempt to discover how his younger self made such a terrible mistake, and to avoid doing so again, Eliezer writes the 600,000 words of his Sequences, by blogging “almost daily, on the subjects of epistemology, language, cognitive biases, decision-making, quantum mechanics, metaethics, and artificial intelligence”
Or, in short, cult shit.
Between his Sequences and his Harry Potter fanfic, come 2015, Eliezer had promulgated his personal framework of rational thought — which was, as he put it, “about forming true beliefs and making decisions that help you win” — with extraordinary success. All the pieces seemed in place to foster a cohort of bright people who would overcome their unconscious biases, adjust their mindsets to consistently distinguish truth from falseness, and become effective thinkers who could build a better world … and maybe save it from the scourge of runaway AI.
Which is why what happened next, explored in tomorrow’s chapter — the demons, the cults, the hells, the suicides — was, and is, so shocking.
Or not. See above, RE: cult shit.
You thought water was great, but have you tried H2O2?
an hackernews:
a high correlation between intelligence and IQ
motherfuckers out here acting like “intelligence” is sufficiently well-defined that a correlation between it and anything else can be computed
intelligence can be reasonably defined as “knowledge and skills to be successful in life, i.e. have higher-than-average income”
eat a bag of dicks
So if it turns out, as people like Penrose assert, that the brain has a certain quantum je-ne-sais-quoi, then all bets for representing the totality of even the simplest neural state with conventional computing hardware are off.
No, that’s not what Penrose asserts. His whole thing has been to say that quantum mechanics needs to be changed, that quantum mechanics is wrong in a way that matters for understanding brains.
They published Homeopathy.