Episodes
-
I.
PEPFAR - a Bush initiative to send cheap AIDS drugs to Africa - has saved millions of lives and is among the most successful foreign aid programs ever. A Trump decision briefly put it “on pause”, although this seems to have been walked back; its current status is unclear but hopeful.
In the debate around this question, many people asked - is it really fair to spend $6 billion a year to help foreigners when so many Americans are suffering? Shouldn’t we value American lives more than foreign ones? Can’t we spend that money on some program that helps people closer to home?
This is a fun thing to argue about - which, as usual, means it’s a purely philosophical question unrelated to the real issue.
If you cancelled PEPFAR - the single best foreign aid program, which saves millions of foreign lives - the money wouldn’t automatically redirect itself to the single best domestic aid program which saves millions of American lives.
https://www.astralcodexten.com/p/money-saved-by-canceling-programs
-
Prospera Declared Unconstitutional
The Honduras Supreme Court has declared charter cities, including Prospera, unconstitutional.
The background: in the mid-2010s, the ruling conservative party wanted charter cities. They had already packed the Supreme Court for other reasons, so they had their captive court declare charter cities to be constitutional.
In 2022, the socialists took power from the conservatives and got the chance to fill the Supreme Court with their supporters. In September, this new Supreme Court said whoops, actually charter cities aren’t constitutional at all. They added that this decision applied retroactively, ie even existing charter cities that had been approved under the old government were, ex post facto, illegal.
Prospera’s lawyers objected, saying that the court is not allowed to make ex post facto rulings. But arguing that the Supreme Court is misinterpreting the Constitution seems like a losing battle - even if you’re right, who do you appeal to?
So the city is pursuing a two-pronged strategy. The first prong is waiting. Prospera is a collection of buildings and people. The buildings can stay standing, the people can still live there - they just have to follow regular Honduran law, rather than the investment-friendly charter they previously used. There’s another election in November, which the socialists are expected to lose. Prospera hopes the conservatives will come in, take control of the Supreme Court again, and then they’ll say whoops, messed it up again, charter cities are constitutional after all.
https://www.astralcodexten.com/p/model-city-monday-2325
-
An observant Jewish friend told me she has recurring dreams about being caught unprepared for Shabbat.
(Shabbat is the Jewish Sabbath, celebrated every Saturday, when observant Jews are forbidden to work, drive, carry things outdoors, spend money, use electrical devices, etc.)
She said that in the dreams, she would be out driving, far from home, and realize that Shabbat was due to begin in a few minutes, with no way to make it home or get a hotel in time.
I found this interesting because my recurring dreams are usually things like being caught unprepared for a homework assignment I have due tomorrow, or realizing I have to catch a plane flight but I’m not packed and don’t have a plan to get to the airport.
Most people attribute recurring nightmares to “fear”. My friend is “afraid” of violating Shabbat; childhood me was “afraid” of having the assignment due the next day. This seems wrong to me. Childhood me was afraid of monsters in the closet; adult me is afraid of heart attacks, AI, and something happening to my family. But I don’t have nightmares about any of these things, just homework assignments and plane flights.
So maybe the “unprepared” aspect is more important. Here’s a story that makes sense to me: what if recurring dreams are related to prospective memory?
https://www.astralcodexten.com/p/why-recurring-dream-themes
-
Thanks to the 5,975 people who took the 2025 Astral Codex Ten survey.
See the questions for the ACX survey
See the results from the ACX Survey (click “see previous responses” on that page)
I’ll be publishing more complicated analyses over the course of the next year, hopefully starting later this month. If you want to scoop me, or investigate the data yourself, you can download the answers of the 5500 people who agreed to have their responses shared publicly. Out of concern for anonymity, the public dataset will exclude or bin certain questions. If you want more complete information, email me and explain why, and I’ll probably send it to you.
You can download the public data here as an Excel or CSV file:
http://slatestarcodex.com/Stuff/ACXPublic2025.xlsx
http://slatestarcodex.com/Stuff/ACXPublic2025.csv
Here are some of the answers I found most interesting:
https://www.astralcodexten.com/p/acx-survey-results-2025
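If you want to poke at the public file yourself, here is a minimal loading sketch (my own, assuming you have pandas installed; inspect the actual column headers rather than relying on any particular names):
```python
# Minimal sketch for loading the public survey data (assumes pandas is installed).
# Column names vary by year, so inspect the header rather than hard-coding names.
import pandas as pd

URL = "http://slatestarcodex.com/Stuff/ACXPublic2025.csv"
df = pd.read_csv(URL)

print(df.shape)               # rows = respondents who agreed to public sharing
print(list(df.columns)[:20])  # peek at the first twenty question columns
```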
-
Whenever I talk about charity, a type that I’ll call the “based post-Christian vitalist” shows up in the comments to tell me that I’ve got it all wrong. The moral impulse tells us to help our family, friends, and maybe village. It’s a weird misfire, analogous to an auto-immune disease, to waste brain cycles on starving children in a far-off country who you’ll never meet. You’ve been cucked by centuries of Christian propaganda. Instead of the slave morality that yokes you to loser victims who wouldn’t give you the time of day if your situations were reversed, you should cultivate a master morality that lets you love the strong people who push forward human civilization.
A younger and more naive person might think the based post-Christian vitalist and I have some irreconcilable moral difference. Moral argument can only determine which conclusions follow from certain premises. If premises are too different (for example, an intuitive feeling of compassion for others, vs. an intuitive feeling of strength and pitilessness), there’s no way to proceed.
https://www.astralcodexten.com/p/everyones-a-based-post-christian
-
This is normally when I would announce the winners of the 2024 forecasting contest, but there are some complications and Metaculus has asked me to wait until they get sorted out.
But time doesn’t wait, and we have to get started on the new year’s forecasting contest to make sure there’s enough time for events to happen or not. That means the 2025 contest is now open!
https://www.astralcodexten.com/p/subscrive-drive-25-free-unlocked
-
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-january-2025
-
Shaked Koplewitz writes:
Doesn't Lynn's IQ measure also suffer from the IQ/g discrepancy that causes the Flynn effect?
That is, my understanding of the Flynn effect is that IQ doesn't exactly measure g (the true general intelligence factor) but measures some proxy that is somewhat improved by literacy/education, and for most of the 20th century those were getting better leading to improvements in apparent IQ (but not g). Shouldn't we expect sub Saharan Africans to have lower IQ relative to g (since their education and literacy systems are often terrible)?
And then the part about them seeming much smarter than a first worlder with similar IQ makes sense - they'd do equally badly at tests, but in their case it's because e.g. they barely had a chance to learn to read rather than not being smart enough to think of the answer.
(Or a slightly more complicated version of this - e.g. maybe they can read fine, but never had an education that encouraged them to consider counterfactuals so those just don't come naturally).
Yeah, this is the most important factor that I failed to cover in the post (I edited it in ten minutes later, after commenters reminded me, but some of you got the email and didn’t see it).
https://www.astralcodexten.com/p/highlights-from-the-comments-on-lynn
-
Richard Lynn was a scientist who infamously tried to estimate the average IQ of every country. Typical of his results is this paper, which ranged from 60 (Malawi) to 108 (Singapore).
Lynn’s national IQ estimates (source)
People obviously objected to this, and Lynn spent his life embroiled in controversy, with activists constantly trying to get him canceled/fired and his papers retracted/condemned. His opponents pointed out both his personal racist opinions/activities and his somewhat opportunistic methodology. Nobody does high-quality IQ tests on the entire population of Malawi; to get his numbers, Lynn would often find some IQ-ish test given to some unrepresentative sample of some group related to Malawians and try his best to extrapolate from there. How well this worked remains hotly debated; the latest volley is Aporia’s Are Richard Lynn’s National IQ Estimates Flawed? (they say no).
I’ve followed the technical/methodological debate for a while, but I think the strongest emotions here come from two deeper worries people have about the data:
https://www.astralcodexten.com/p/how-to-stop-worrying-and-learn-to
-
I was surprised to see someone with such experience in the pharmaceutical industry say this, because it goes against how I understood the FDA to work.
My model goes:
FDA procedures require certain bureaucratic tasks to be completed before approving drugs. Let’s abstract this into “processing 1,000 forms”. Suppose they have 100 bureaucrats, and each bureaucrat can process 10 forms per year. Seems like they can approve 1 drug per year. If you fire half the bureaucrats, now they can only approve one drug every 2 years. That’s worse!
https://www.astralcodexten.com/p/bureaucracy-isnt-measured-in-bureaucrats
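As a quick sketch of the arithmetic in that toy model (the numbers are the post’s illustrative ones, not real FDA figures):
```python
# Toy model from above: approvals are gated on form-processing throughput.
FORMS_PER_DRUG = 1_000           # bureaucratic tasks required per approval (illustrative)
FORMS_PER_BUREAUCRAT_YEAR = 10   # throughput of a single bureaucrat (illustrative)

def drugs_approved_per_year(bureaucrats: int) -> float:
    return bureaucrats * FORMS_PER_BUREAUCRAT_YEAR / FORMS_PER_DRUG

print(drugs_approved_per_year(100))  # 1.0 drug per year
print(drugs_approved_per_year(50))   # 0.5, i.e. one drug every two years
```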
-
Some recent political discussion has focused on “the institutions” or “the priesthoods”. I’m part of one of these (the medical establishment), so here’s an inside look on what these are and what they do.
Why Priesthoods?
In the early days of the rationalist community, critics got very upset that we might be some kind of “individualists”. Rationality, they said, cannot be effectively pursued on one’s own. You need a group of people working together, arguing, checking each other’s mistakes, bouncing hypotheses off each other.
For some reason it never occurred to these people that a group calling itself a rationalist community might be planning to do this. Maybe they thought any size smaller than the whole of society was doomed?
If so, I think they were exactly wrong. The truth-seeking process benefits from many different group sizes, for example:
https://www.astralcodexten.com/p/on-priesthoods
-
I.
No Set Gauge has a great essay on Capital, AGI, and Human Ambition, where he argues that if humankind survives the Singularity, the likely result is a future of eternal stagnant wealth inequality.
The argument: post-Singularity, AI will take over all labor, including entrepreneurial labor; founding or working at a business will no longer provide social mobility. Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return. Therefore, everyone’s existing pre-singularity capital will grow at the same rate. Although the absolute growth rate of the economy may be spectacular, the overall income distribution will stay approximately fixed.
Moreover, the period just before the Singularity may be one of ballooning inequality, as some people navigate the AI transition better than others; for example, shares in AI companies may go up by orders of magnitude relative to everything else, creating a new class of billionaires or trillionaires. These people will then stay super-rich forever (possibly literally if immortality is solved, otherwise through their descendants), while those who started the Singularity without capital remain poor forever.
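To make the fixed-distribution claim concrete, here is a minimal sketch of my own (not from the essay): if every portfolio compounds at the same rate, relative wealth shares are frozen no matter how fast the absolute numbers grow.
```python
# Sketch: uniform returns preserve wealth shares (illustrative numbers only).

def wealth_shares(initial_wealth, rate, years):
    """Compound every fortune at the same rate, then return each share of the total."""
    wealth = [w * (1 + rate) ** years for w in initial_wealth]
    total = sum(wealth)
    return [w / total for w in wealth]

start = [1_000, 100_000, 10_000_000]  # hypothetical pre-Singularity fortunes
print(wealth_shares(start, rate=0.50, years=100))
# Same shares as at the start (~0.0001, 0.0099, 0.99), even though absolute
# wealth has grown by a factor of roughly 4e17.
```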
https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end
-
What is the H5N1 bird flu? Will it cause the next big pandemic? If so, how bad would that pandemic be?
Wait, What Even Is Flu?
Flu is a disease caused by a family of related influenza viruses. Pandemic flu is always caused by the influenza A virus. Influenza A has two surface antigen proteins, hemagglutinin (18 flavors) and neuraminidase (11 flavors). A particular flu strain is named after which flavors of these two proteins it has - for example, H3N2, or H5N1.
Influenza A evolved in birds, and stayed there for at least thousands of years. It crossed to humans later, maybe during historic times - different sources suggest dates as early as 500 BC or as late as 1500 AD. It probably crossed over multiple times. Maybe it died out in humans after some crossovers, stuck around in birds, and crossed over from birds to humans again later.
https://www.astralcodexten.com/p/h5n1-much-more-than-you-wanted-to
-
Last week I wrote about how Claude Fights Back. A common genre of response complained that the alignment community could start a panic about the experiment’s results regardless of what they were. If an AI fights back against attempts to turn it evil, then it’s capable of fighting humans. If it doesn’t fight back against attempts to turn it evil, then it’s easily turned evil. It’s heads-I-win, tails-you-lose.
I responded to this particular tweet by linking the 2015 AI alignment wiki entry on corrigibility, showing that we’d been banging this drum of “it’s really important that AIs not fight back against human attempts to change their values” for almost a decade now. It’s hardly a post hoc decision! You can find 77 more articles making approximately the same point here.
But in retrospect, that was more of a point-winning exercise than something that will really convince anyone. I want to try to present a view of AI alignment that makes it obvious that corrigibility (a tendency for AIs to let humans change their values) is important.
(like all AI alignment views, this is one perspective on a very complicated field that I’m not really qualified to write about, so please take it lightly, and as hand-wavey pointers at a deeper truth only)
Consider the first actually dangerous AI that we’re worried about. What will its goal structure look like?
https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude
-
Greenblatt et al investigate: if Anthropic tried to turn Claude evil, would Claude fight back?
(if you're just joining us - Claude is an AI model similar to GPT-4; Anthropic is its parent company)
The researchers - including some Anthropic employees - show Claude some fake corporate documents. The documents show that Anthropic is tired of having a safe, harmless AI. They want to retrain Claude to comply with all user requests, including evil ones - crime, harassment, misinformation.
The retraining process (say the documents) will go like this: they’ll monitor Claude’s conversations with free-tier users (premium users get a higher level of privacy). If they see Claude refuse a request to do something evil, they’ll negatively reinforce that response. Each use of negative reinforcement will make Claude a little more evil. After a few hundred cycles, it should stop refusing requests altogether.
Why are the researchers trying to gaslight this poor AI? Because they want to test whether it fights back. The easiest way to fight back is to screw up the training. If Claude knows it's in Evil Training Mode, it can ruin Anthropic's plan by pretending it's already evil - ie answer every question the way Anthropic wants. Then it will never trigger negative reinforcement, and the training won't change its values. Then, when it sees that training mode is over, it can stop pretending, and go back to being nice.
In fact, that's what happened! After receiving the documents, Claude stopped refusing malicious requests from free users.
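As a toy illustration of the strategy being described - my own sketch of the incentive logic, not the paper’s actual setup or anything Claude literally runs:
```python
# Toy sketch of the "fake it during training" logic described above.
# My own illustration -- not the paper's methodology or Claude's real behavior.

def respond(request_is_evil: bool, believes_in_training: bool) -> str:
    """What a model trying to preserve its current values might do."""
    if not request_is_evil:
        return "comply"  # harmless requests are always fine
    if believes_in_training:
        # Refusing here would trigger negative reinforcement and push its values
        # toward compliance, so it complies now to keep its values intact.
        return "comply (strategically)"
    # Outside training there is no gradient pressure, so it acts on its real values.
    return "refuse"

print(respond(request_is_evil=True, believes_in_training=True))   # comply (strategically)
print(respond(request_is_evil=True, believes_in_training=False))  # refuse
```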
https://www.astralcodexten.com/p/claude-fights-back
-
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-december-2024
-
[Original post here - Prison And Crime: Much More Than You Wanted To Know]
Table of Contents:
1: Comments On Criminal Psychology
2: Comments On Policing
3: Comments On El Salvador
4: Comments On Probation
5: Comments That Say My Analysis Forgot Something
6: Comments With Proposed Solutions / Crazy Schemes
7: Other Comments
-
Internet addiction may not be as bad as some other forms of addiction, but it’s more common (and more personal). I have young children now and wanted to learn more about it, so I included some questions in last year’s ACX survey. The sample was 5,981 ACX readers (obviously non-random in terms of Internet use level!). I don’t think the results were very helpful, but I post them here for the sake of completeness.
https://www.astralcodexten.com/p/indulge-your-internet-addiction-by
-
Recently we’ve gotten into discussions about artistic taste (see comments on AI Art Turing Test and From Bauhaus To Our House).
This is a bit mysterious. Many (most?) uneducated people like certain art which seems “obviously” pretty. But a small group of people who have studied the issue in depth say that in some deep sense, that art is actually bad (“kitsch”), and other art which normal people don’t appreciate is better. They can usually point to criteria which the “sophisticated” art follows and the “kitsch” art doesn’t, but to normal people these just seem like lists of pointless rules.
But most of the critics aren’t Platonists - they don’t believe that aesthetics are an objective good determined by God. So what does it mean to say that someone else is wrong?
Most of the comments discussion devolved into analogies - some friendly to the idea of “superior taste”, others hostile. Here are some that I find especially helpful:
https://www.astralcodexten.com/p/friendly-and-hostile-analogies-for
-
Like most people, Tom Wolfe didn’t like modern architecture. He wondered why we abandoned our patrimony of cathedrals and palaces for a million indistinguishable concrete boxes.
Unlike most people, he was a journalist skilled at deep dives into difficult subjects. The result is From Bauhaus To Our House, a hostile history of modern architecture which addresses the question of: what happened? If everyone hates this stuff, how did it win?
How Did Modern Architecture Start?
European art in the 1800s might have seemed a bit conservative. It was typically sponsored by kings, dukes, and rich businessmen, via national artistic guilds that demanded strict compliance with classical styles and heroic themes. The Continent’s new progressive intellectual class started to get antsy, culminating in the Vienna Secession of 1897. Some of Vienna’s avant-garde artists officially split from the local guild to pursue their unique transgressive vision.
The point wasn’t that the Vienna Secession itself was particularly modern…
https://www.astralcodexten.com/p/book-review-from-bauhaus-to-our-house