  • https://bsky.app/profile/chemprofcramer.bsky.social/post/3lt5h24hfnc2m

    I got caught up in this mess because I was VPR at Minnesota in 2019 and the first author on the paper (Jordan Lasker) lists a Minnesota affiliation. Of course, the hot emails went to the President’s office, and she tasked me with figuring out what the hell was going on. Happily, neither Minnesota nor its IRB had “formally” been involved. I regularly sent the attached reply, which seemed to satisfy folks. But you come to realize, as VPR, just how little control you actually have if a researcher in your massive institution really wants to go rogue… 😰

    Dear [redacted],

    Thank you for writing to President Gabel to share your concern with respect to an article published in Psych in 2019 purporting to have an author from the University of Minnesota. The President has asked me to respond on her behalf.

    In 2018, our department of Economics requested a non-employee status for Jordan Lasker while he was working with a faculty member of that department as a data consultant. Such status permitted him a working umn.edu email address. He appears to have used that email address to claim an affiliation with the University of Minnesota that was neither warranted nor known to us prior to the publication of the article in question. Upon discovery of the article in late 2019, we immediately verified that his access had been terminated and we moreover transmitted to him that he was not to falsely claim University of Minnesota affiliation in the future. We have had no contact with him since then. He has continued to publish similarly execrable articles, sadly, but he now lists himself as an “independent researcher”.

    Best regards,

    Chris Cramer


  • The 1950s and ’60s are the middle and end of the Golden Age of science fiction

    Incorrect. As everyone knows, the Golden Age of science fiction is 12.

    Asimov’s stories were often centered around robots, space empires, or both,

    OK, this actually calls for a correction on the facts. Asimov didn’t combine his robot stories with his “Decline and Fall of the Roman Empire but in space” stories (i.e., the Foundation series) until the 1980s. And even by the '50s, his robot stories were very unsubtly about how thoughtless use of technology leads to social and moral decay. In The Caves of Steel, sparrows are exotic animals you have to go to the zoo to see. The Earth’s petroleum supply is completely depleted, and the subway has to be greased with a bioengineered strain of yeast. There are ration books for going to the movies. Not only are robots taking human jobs, but a conspiracy is deliberately stoking fears about robots taking human jobs in order to foment unrest. In The Naked Sun, the colony world of Solaria is a eugenicist society where one of the murder suspects happily admits that they’ve used robots to reinvent the slave-owning culture of Sparta.


  • I’d disagree with the media analysis in “What Was The Nerd?” at a few points. For example, Marty McFly isn’t a bullied nerd. George McFly is. Marty plays in a band and has a hot girlfriend. He’s the non-nerd side of his interactions with both Doc Brown, where he’s the less intellectual one, and George, where he’s the cooler one. Likewise, Chicago in Ferris Bueller’s Day Off isn’t an “urban hellscape”. It’s the fun place to go when you want to ditch the burbs and take in some urban pleasures (a parade, an art gallery…).


  • That Carl Shulman post from 2007 is hilarious.

    After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.

    Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.

    Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.

    I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.

    The “two articles below” are by Yudkowsky.

    User “gaverick” replies,

    Carl, I’m inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky’s chapter on AI risks for Bostrom’s bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.

    Shulman’s response begins,

    Have you read through Bostrom’s work on the subject? Kurzweil has relevant info for computing power and brain imaging.

    Ray mothersodding Kurzweil!


  • jhbadger:

    As Adam Becker shows in his book, EAs started out being reasonable “give to charity as much as you can, and research which charities do the most good” but have gotten into absurdities like “it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not”.

    I haven’t read Becker’s book and probably won’t spend the time to do so. But if this is an accurate summary, it’s a bad sign for that book, because plenty of EAs were bonkers all along.

    (Becker’s previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)


  • astrange:

    They’re members of a religion which says that if you do math in your head the right way you’ll be correct about everything, and so they think they’re correct about everything.

    They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it’s really high and then you’re good.

    Unfortunately this led them to the conclusion that computers have more IQ than them and so would automatically win any intellectual DBZ laser beam fight against them / enslave them / take over the world.


  • Having now refreshed my vague memories of the Feynman Lectures on Computation, I wouldn’t recommend them as a first introduction to Turing machines and the halting problem. They’re overburdened with detail: You can tell that Feynman was gleeful over figuring out how to make a Turing machine that tests parentheses for balance, but for many readers, it’ll get in the way of the point. Comparing his discussion of the halting problem to the one in The Princeton Companion to Mathematics, for example, the latter is cleaner without losing anything that a first encounter would need. Feynman’s lecture is more like a lecture from the second week of a course, missing the first week.
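
    For the curious: the machine Feynman plays with is in the same spirit as the sketch below, a single-tape Turing machine that accepts exactly the balanced-parenthesis strings by repeatedly crossing out an innermost “()” pair. This is my own minimal reconstruction in Python, not Feynman’s actual transition table; the state names and the `balanced` helper are mine, purely for illustration.

    ```python
    # Minimal sketch of a balanced-parentheses Turing machine (not Feynman's
    # exact construction). Strategy: sweep right to the first ')', cross it
    # out with 'X', sweep left to the nearest '(', cross that out too, and
    # repeat; accept iff nothing but 'X' remains at the end.

    BLANK = "_"

    # transitions[(state, read)] = (write, move, next_state); move is -1 or +1
    transitions = {
        # Sweep right looking for the first ')'.
        ("find_close", "("): ("(", +1, "find_close"),
        ("find_close", "X"): ("X", +1, "find_close"),
        ("find_close", ")"): ("X", -1, "find_open"),  # cross out, go match it
        ("find_close", BLANK): (BLANK, -1, "check"),  # no ')' left: final scan
        # Sweep left for the '(' matching the ')' we just crossed out.
        ("find_open", "X"): ("X", -1, "find_open"),
        ("find_open", "("): ("X", +1, "find_close"),  # matched; resume sweep
        ("find_open", BLANK): (BLANK, +1, "reject"),  # unmatched ')'
        # Sweep left, making sure no unmatched '(' survives.
        ("check", "X"): ("X", -1, "check"),
        ("check", "("): ("(", -1, "reject"),          # unmatched '('
        ("check", BLANK): (BLANK, +1, "accept"),
    }

    def balanced(s: str) -> bool:
        """Run the machine on s (padded with blanks) and report acceptance."""
        tape = [BLANK] + list(s) + [BLANK]
        head, state = 1, "find_close"
        while state not in ("accept", "reject"):
            write, move, state = transitions[(state, tape[head])]
            tape[head] = write
            head += move
            if head == len(tape):  # defensive: grow the tape on demand
                tape.append(BLANK)
        return state == "accept"

    for s in ["", "()", "(())()", "(()", ")("]:
        print(repr(s), balanced(s))
    ```

    Of course, a one-pass counter does the same job in any ordinary language; the fun of the Turing-machine version, and presumably the source of Feynman’s glee, is doing it with nothing but local reads, writes, and moves.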