• 5 Posts
  • 99 Comments
Joined 2 years ago
Cake day: June 27th, 2023

  • xcancel link, since nitter.net is kaput.

    New diet villain just dropped. Believe or disbelieve this specific one, “fat” or even “polyunsaturated fat” increasingly looks like a failure as a natural category. Only finer-grained concepts like “linoleic acid” are useful for carving reality at the joints.

    Reply:

    This systematic review and meta-analysis doesn’t seem to indicate that linoleic acid is unusually bad for all-cause mortality or cardiovascular disease events.

    https://doi.org/10.1002/14651858.CD011094.pub4

    Yud writes back:

    And is there another meta-analysis showing the opposite? I kinda just don’t trust those anymore, unless somebody I trust vouches for the meta-analysis.

    Ah, yes, the argumentum ad other-sources-must-exist-somewhere-um.

  • The lead-in to that is even “better”:

    This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We’ve never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

    “The reason for optimism is that we can cozy up to fascists!”


  • Fun Blake fact: I was one bureaucratic technicality away from getting a literature minor to go along with my physics major. I didn’t plan for that; we had a Byzantine set of course requirements that we had to meet by mixing and matching whatever electives were available, and somehow, the electives I took piled up to be almost enough for a lit minor. I would have had to take one more course on material written before some cutoff year — I think it was 1900 — but other than that, I had all the checkmarks. I probably could have argued my way to an exemption, since my professors liked me and the department would have gotten their numbers that little bit higher, but I didn’t discover this until spring semester of my senior year, when I was already both incredibly busy and incredibly tired.

  • Abstract: This paper presents some of the initial empirical findings from a larger forthcoming study about Effective Altruism (EA). The purpose of presenting these findings disarticulated from the main study is to address a common misunderstanding in the public and academic consciousness about EA, recently pushed to the fore with the publication of EA movement co-founder Will MacAskill’s latest book, What We Owe the Future (WWOTF). Most people in the general public, media, and academia believe EA focuses on reducing global poverty through effective giving, and are struggling to understand EA’s seemingly sudden embrace of ‘longtermism’, futurism, artificial intelligence (AI), biotechnology, and ‘x-risk’ reduction. However, this agenda has been present in EA since its inception, where it was hidden in plain sight. From the very beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk, now lumped under the banner of ‘longtermism’). The article’s aim is narrowly focused on presenting rich qualitative data to make legible the distinction between public-facing EA and core EA.