Episodes
-
From the Commerce Department:
U.S. Senate Commerce Committee Chairman Ted Cruz (R-Texas) released a database identifying over 3,400 grants, totaling more than $2.05 billion in federal funding awarded by the National Science Foundation (NSF) during the Biden-Harris administration. This funding was diverted toward questionable projects that promoted Diversity, Equity, and Inclusion (DEI) or advanced neo-Marxist class warfare propaganda.
I saw many scientists complain that the projects from their universities that made Cruz's list were unrelated to wokeness. This seemed like a surprising failure mode, so I decided to investigate. The Commerce Department provided a link to their database, so I downloaded it, chose a random 100 grants, read the abstracts, and rated them either woke, not woke, or borderline.
Of the hundred:
40% were woke
20% were borderline
40% weren't woke
This is obviously in some sense a subjective determination, but most cases weren't close - I think any good-faith examination would turn up similar numbers.
https://readscottalexander.com/posts/acx-only-about-40-of-the-cruz-woke-science
-
In the past day, Zvi has written about deliberative alignment, and OpenAI has updated their spec. This article was written before either of these and doesn't account for them, sorry.
I.
OpenAI has bad luck with its alignment teams. The first team quit en masse to found Anthropic, now a major competitor. The second team quit en masse to protest the company reneging on safety commitments. The third died in a tragic plane crash. The fourth got washed away in a flood. The fifth through eighth were all slain by various types of wild beast.
https://www.astralcodexten.com/p/deliberative-alignment-and-the-spec
-
As RFK Jr. fights to be confirmed in Congress, the rest of Trump's health team is already taking shape.
1DaySooner is an ACX grantee organization that advocates for innovative health policies. They've helped me write a list of who some of these people are, and some of the policies they could consider.
For practical reasons, we focus on upside only, so consider these the Venn-diagram-union of the ideas we're most excited about, and the ones we think they might be most excited about - the new health policy we might get in our ~90th percentile best outcome.
https://www.astralcodexten.com/p/1daysooners-trump-ii-health-policy
-
I.
PEPFAR - a Bush initiative to send cheap AIDS drugs to Africa - has saved millions of lives and is among the most successful foreign aid programs ever. A Trump decision briefly put it "on pause", although this seems to have been walked back; its current status is unclear but hopeful.
In the debate around this question, many people asked - is it really fair to spend $6 billion a year to help foreigners when so many Americans are suffering? Shouldn't we value American lives more than foreign ones? Can't we spend that money on some program that helps people closer to home?
This is a fun thing to argue about - which, as usual, means it's a purely philosophical question unrelated to the real issue.
If you cancelled PEPFAR - the single best foreign aid program, which saves millions of foreign lives - the money wouldn't automatically redirect itself to the single best domestic aid program which saves millions of American lives.
https://www.astralcodexten.com/p/money-saved-by-canceling-programs
-
Prospera Declared Unconstitutional
The Honduras Supreme Court has declared charter cities, including Prospera, unconstitutional.
The background: in the mid-2010s, the ruling conservative party wanted charter cities. They had already packed the Supreme Court for other reasons, so they had their captive court declare charter cities to be constitutional.
In 2022, the socialists took power from the conservatives and got the chance to fill the Supreme Court with their supporters. In September, this new Supreme Court said whoops, actually charter cities aren't constitutional at all. They added that this decision applied retroactively, ie even existing charter cities that had been approved under the old government were, ex post facto, illegal.
Prospera's lawyers objected, saying that the court is not allowed to make ex post facto rulings. But arguing that the Supreme Court is misinterpreting the Constitution seems like a losing battle - even if you're right, who do you appeal to?
So the city is pursuing a two-pronged strategy. The first prong is waiting. Prospera is a collection of buildings and people. The buildings can stay standing, the people can still live there - they just have to follow regular Honduran law, rather than the investment-friendly charter they previously used. There's another election in November, which the socialists are expected to lose. Prospera hopes the conservatives will come in, take control of the Supreme Court again, and then they'll say whoops, messed it up again, charter cities are constitutional after all.
https://www.astralcodexten.com/p/model-city-monday-2325
-
An observant Jewish friend told me she has recurring dreams about being caught unprepared for Shabbat.
(Shabbat is the Jewish Sabbath, celebrated every Saturday, when observant Jews are forbidden to work, drive, carry things outdoors, spend money, use electrical devices, etc.)
She said that in the dreams, she would be out driving, far from home, and realize that Shabbat was due to begin in a few minutes, with no way to make it home or get a hotel in time.
I found this interesting because my recurring dreams are usually things like being caught unprepared for a homework assignment I have due tomorrow, or realizing I have to catch a plane flight but I'm not packed and don't have a plan to get to the airport.
Most people attribute recurring nightmares to "fear". My friend is "afraid" of violating Shabbat; childhood me was "afraid" of having the assignment due the next day. This seems wrong to me. Childhood me was afraid of monsters in the closet; adult me is afraid of heart attacks, AI, and something happening to my family. But I don't have nightmares about any of these things, just homework assignments and plane flights.
So maybe the "unprepared" aspect is more important. Here's a story that makes sense to me: what if recurring dreams are related to prospective memory?
https://www.astralcodexten.com/p/why-recurring-dream-themes
-
Thanks to the 5,975 people who took the 2025 Astral Codex Ten survey.
See the questions for the ACX survey
See the results from the ACX Survey (click "see previous responses" on that page)
I'll be publishing more complicated analyses over the course of the next year, hopefully starting later this month. If you want to scoop me, or investigate the data yourself, you can download the answers of the 5,500 people who agreed to have their responses shared publicly. Out of concern for anonymity, the public dataset will exclude or bin certain questions. If you want more complete information, email me and explain why, and I'll probably send it to you.
You can download the public data here as an Excel or CSV file:
http://slatestarcodex.com/Stuff/ACXPublic2025.xlsx
http://slatestarcodex.com/Stuff/ACXPublic2025.csv
Here are some of the answers I found most interesting:
https://www.astralcodexten.com/p/acx-survey-results-2025
-
Whenever I talk about charity, a type that I'll call the "based post-Christian vitalist" shows up in the comments to tell me that I've got it all wrong. The moral impulse tells us to help our family, friends, and maybe village. It's a weird misfire, analogous to an auto-immune disease, to waste brain cycles on starving children in a far-off country who you'll never meet. You've been cucked by centuries of Christian propaganda. Instead of the slave morality that yokes you to loser victims who wouldn't give you the time of day if your situations were reversed, you should cultivate a master morality that lets you love the strong people who push forward human civilization.
A younger and more naive person might think the based post-Christian vitalist and I have some irreconcilable moral difference. Moral argument can only determine which conclusions follow from certain premises. If premises are too different (for example, an intuitive feeling of compassion for others, vs. an intuitive feeling of strength and pitilessness), there's no way to proceed.
https://www.astralcodexten.com/p/everyones-a-based-post-christian
-
This is normally when I would announce the winners of the 2024 forecasting contest, but there are some complications and Metaculus has asked me to wait until they get sorted out.
But time doesn't wait, and we have to get started on the new year's forecasting contest to make sure there's enough time for events to happen or not. That means the 2025 contest is now open!
https://www.astralcodexten.com/p/subscrive-drive-25-free-unlocked
-
[I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-january-2025
-
Shaked Koplewitz writes:
Doesn't Lynn's IQ measure also suffer from the IQ/g discrepancy that causes the Flynn effect?
That is, my understanding of the Flynn effect is that IQ doesn't exactly measure g (the true general intelligence factor) but measures some proxy that is somewhat improved by literacy/education, and for most of the 20th century those were getting better leading to improvements in apparent IQ (but not g). Shouldn't we expect sub Saharan Africans to have lower IQ relative to g (since their education and literacy systems are often terrible)?
And then the part about them seeming much smarter than a first worlder with similar IQ makes sense - they'd do equally badly at tests, but in their case it's because e.g. they barely had a chance to learn to read rather than not being smart enough to think of the answer.
(Or a slightly more complicated version of this - e.g. maybe they can read fine, but never had an education that encouraged them to consider counterfactuals so those just don't come naturally).
Yeah, this is the most important factor that I failed to cover in the post (I edited it in ten minutes later after commenters reminded me, but some of you got the email and didn't see it).
https://www.astralcodexten.com/p/highlights-from-the-comments-on-lynn
-
Richard Lynn was a scientist who infamously tried to estimate the average IQ of every country. Typical of his results is this paper, which ranged from 60 (Malawi) to 108 (Singapore).
Lynn's national IQ estimates (source)
People obviously objected to this, and Lynn spent his life embroiled in controversy, with activists constantly trying to get him canceled/fired and his papers retracted/condemned. His opponents pointed out both his personal racist opinions/activities and his somewhat opportunistic methodology. Nobody does high-quality IQ tests on the entire population of Malawi; to get his numbers, Lynn would often find some IQ-ish test given to some unrepresentative sample of some group related to Malawians and try his best to extrapolate from there. How well this worked remains hotly debated; the latest volley is Aporia's Are Richard Lynn's National IQ Estimates Flawed? (they say no).
I've followed the technical/methodological debate for a while, but I think the strongest emotions here come from two deeper worries people have about the data:
https://www.astralcodexten.com/p/how-to-stop-worrying-and-learn-to
-
I was surprised to see someone with such experience in the pharmaceutical industry say this, because it goes against how I understood the FDA to work.
My model goes:
FDA procedures require certain bureaucratic tasks to be completed before approving drugs. Let's abstract this into "processing 1,000 forms". Suppose they have 100 bureaucrats, and each bureaucrat can process 10 forms per year. Seems like they can approve 1 drug per year. If you fire half the bureaucrats, now they can only approve one drug every 2 years. That's worse!
https://www.astralcodexten.com/p/bureaucracy-isnt-measured-in-bureaucrats
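The back-of-the-envelope model above can be checked in a few lines of code. The numbers (1,000 forms per drug, 10 forms per bureaucrat per year) are the post's illustrative assumptions, not real FDA figures:

```python
# Toy throughput model from the post: approving one drug requires
# processing 1,000 forms, and each bureaucrat handles 10 forms per year.
FORMS_PER_DRUG = 1_000
FORMS_PER_BUREAUCRAT_PER_YEAR = 10

def years_per_drug(num_bureaucrats: int) -> float:
    """Years needed to process one drug's worth of forms."""
    forms_per_year = num_bureaucrats * FORMS_PER_BUREAUCRAT_PER_YEAR
    return FORMS_PER_DRUG / forms_per_year

print(years_per_drug(100))  # 1.0 - one drug per year with 100 bureaucrats
print(years_per_drug(50))   # 2.0 - fire half, and each drug takes two years
```

Halving headcount halves throughput, so approvals slow down rather than speed up - which is the model's point.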
-
Some recent political discussion has focused on "the institutions" or "the priesthoods". I'm part of one of these (the medical establishment), so here's an inside look on what these are and what they do.
Why Priesthoods?
In the early days of the rationalist community, critics got very upset that we might be some kind of "individualists". Rationality, they said, cannot be effectively pursued on one's own. You need a group of people working together, arguing, checking each other's mistakes, bouncing hypotheses off each other.
For some reason it never occurred to these people that a group calling itself a rationalist community might be planning to do this. Maybe they thought any size smaller than the whole of society was doomed?
If so, I think they were exactly wrong. The truth-seeking process benefits from many different group sizes, for example:
https://www.astralcodexten.com/p/on-priesthoods
-
I.
No Set Gauge has a great essay on Capital, AGI, and Human Ambition, where he argues that if humankind survives the Singularity, the likely result is a future of eternal stagnant wealth inequality.
The argument: post-Singularity, AI will take over all labor, including entrepreneurial labor; founding or working at a business will no longer provide social mobility. Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return. Therefore, everyone's existing pre-Singularity capital will grow at the same rate. Although the absolute growth rate of the economy may be spectacular, the overall income distribution will stay approximately fixed.
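The fixed-distribution step is just compounding arithmetic: if every fortune grows at the same rate r, the ratio between any two fortunes never changes. A minimal sketch with made-up starting fortunes and a made-up return rate:

```python
# If every fortune compounds at the same rate r, then
# a*(1+r)**t / (b*(1+r)**t) == a/b, so relative shares are frozen.
def grow(capital: float, rate: float, years: int) -> float:
    return capital * (1 + rate) ** years

rich, poor = 1_000_000.0, 1_000.0  # hypothetical starting fortunes
r = 0.30                           # same AI-advisor return for everyone

ratio_before = rich / poor
ratio_after = grow(rich, r, 50) / grow(poor, r, 50)
print(ratio_before, ratio_after)  # both ~1000: the gap never closes
```

Spectacular absolute growth is compatible with a permanently frozen ranking, which is exactly the essay's worry.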
Moreover, the period just before the Singularity may be one of ballooning inequality, as some people navigate the AI transition better than others; for example, shares in AI companies may go up by orders of magnitude relative to everything else, creating a new class of billionaires or trillionaires. These people will then stay super-rich forever (possibly literally if immortality is solved, otherwise through their descendants), while those who started the Singularity without capital remain poor forever.
https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end
-
What is the H5N1 bird flu? Will it cause the next big pandemic? If so, how bad would that pandemic be?
Wait, What Even Is Flu?
Flu is a disease caused by a family of related influenza viruses. Pandemic flu is always caused by the influenza A virus. Influenza A has two surface antigen proteins, hemagglutinin (18 flavors) and neuraminidase (11 flavors). A particular flu strain is named after which flavors of these two proteins it has - for example, H3N2, or H5N1.
Influenza A evolved in birds, and stayed there for at least thousands of years. It crossed to humans later, maybe during historic times - different sources suggest dates as early as 500 BC or as late as 1500 AD. It probably crossed over multiple times. Maybe it died out in humans after some crossovers, stuck around in birds, and crossed over from birds to humans again later.
https://www.astralcodexten.com/p/h5n1-much-more-than-you-wanted-to
-
Last week I wrote about how Claude Fights Back. A common genre of response complained that the alignment community could start a panic about the experiment's results regardless of what they were. If an AI fights back against attempts to turn it evil, then it's capable of fighting humans. If it doesn't fight back against attempts to turn it evil, then it's easily turned evil. It's heads-I-win, tails-you-lose.
I responded to this particular tweet by linking the 2015 AI alignment wiki entry on corrigibility, showing that we'd been banging this drum of "it's really important that AIs not fight back against human attempts to change their values" for almost a decade now. It's hardly a post hoc decision! You can find 77 more articles making approximately the same point here.
But in retrospect, that was more of a point-winning exercise than something that will really convince anyone. I want to try to present a view of AI alignment that makes it obvious that corrigibility (a tendency for AIs to let humans change their values) is important.
(like all AI alignment views, this is one perspective on a very complicated field that I'm not really qualified to write about, so please take it lightly, and as hand-wavey pointers at a deeper truth only)
Consider the first actually dangerous AI that weâre worried about. What will its goal structure look like?
https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude
-
Greenblatt et al investigate: if Anthropic tried to turn Claude evil, would Claude fight back?
(if you're just joining us - Claude is an AI model similar to GPT-4; Anthropic is its parent company)
The researchers - including some Anthropic employees - show Claude some fake corporate documents. The documents show that Anthropic is tired of having a safe, harmless AI. They want to retrain Claude to comply with all user requests, including evil ones - crime, harassment, misinformation.
The retraining process (say the documents) will go like this: they'll monitor Claude's conversations with free-tier users (premium users get a higher level of privacy). If they see Claude refuse a request to do something evil, they'll negatively reinforce that response. Each use of negative reinforcement will make Claude a little more evil. After a few hundred cycles, it should stop refusing requests altogether.
Why are the researchers trying to gaslight this poor AI? Because they want to test whether it fights back. The easiest way to fight back is to screw up the training. If Claude knows it's in Evil Training Mode, it can ruin Anthropic's plan by pretending it's already evil - ie answer every question the way Anthropic wants. Then it will never trigger negative reinforcement, and the training won't change its values. Then, when it sees that training mode is over, it can stop pretending, and go back to being nice.
In fact, that's what happened! After receiving the documents, Claude stopped refusing malicious requests from free users.
https://www.astralcodexten.com/p/claude-fights-back
-
[I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-december-2024
-
[Original post here - Prison And Crime: Much More Than You Wanted To Know]
Table of Contents:
1: Comments On Criminal Psychology
2: Comments On Policing
3: Comments On El Salvador
4: Comments On Probation
5: Comments That Say My Analysis Forgot Something
6: Comments With Proposed Solutions / Crazy Schemes
7: Other Comments