Episodes
-
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-january-2025
-
Shaked Koplewitz writes:
Doesn't Lynn's IQ measure also suffer from the IQ/g discrepancy that causes the Flynn effect?
That is, my understanding of the Flynn effect is that IQ doesn't exactly measure g (the true general intelligence factor) but measures some proxy that is somewhat improved by literacy/education, and for most of the 20th century those were getting better, leading to improvements in apparent IQ (but not g). Shouldn't we expect sub-Saharan Africans to have lower IQ relative to g (since their education and literacy systems are often terrible)?
And then the part about them seeming much smarter than a first worlder with similar IQ makes sense - they'd do equally badly at tests, but in their case it's because e.g. they barely had a chance to learn to read rather than not being smart enough to think of the answer.
(Or a slightly more complicated version of this - e.g. maybe they can read fine, but never had an education that encouraged them to consider counterfactuals so those just don't come naturally).
Yeah, this is the most important factor that I failed to cover in the post (I edited it in ten minutes later, after commenters reminded me, but some of you got the email version and didn't see it).
https://www.astralcodexten.com/p/highlights-from-the-comments-on-lynn
-
Richard Lynn was a scientist who infamously tried to estimate the average IQ of every country. Typical of his results is this paper, whose estimates ranged from 60 (Malawi) to 108 (Singapore).
Lynn’s national IQ estimates (source)
People obviously objected to this, and Lynn spent his life embroiled in controversy, with activists constantly trying to get him canceled/fired and his papers retracted/condemned. His opponents pointed out both his personal racist opinions/activities and his somewhat opportunistic methodology. Nobody does high-quality IQ tests on the entire population of Malawi; to get his numbers, Lynn would often find some IQ-ish test given to some unrepresentative sample of some group related to Malawians and try his best to extrapolate from there. How well this worked remains hotly debated; the latest volley is Aporia’s Are Richard Lynn’s National IQ Estimates Flawed? (they say no).
I’ve followed the technical/methodological debate for a while, but I think the strongest emotions here come from two deeper worries people have about the data:
https://www.astralcodexten.com/p/how-to-stop-worrying-and-learn-to
-
I was surprised to see someone with such experience in the pharmaceutical industry say this, because it goes against how I understood the FDA to work.
My model goes:
FDA procedures require certain bureaucratic tasks to be completed before approving drugs. Let’s abstract this into “processing 1,000 forms”. Suppose they have 100 bureaucrats, and each bureaucrat can process 10 forms per year. Seems like they can approve 1 drug per year. If you fire half the bureaucrats, now they can only approve one drug every 2 years. That’s worse!
https://www.astralcodexten.com/p/bureaucracy-isnt-measured-in-bureaucrats
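As a sanity check on that toy model, here is a minimal sketch in Python, using only the numbers from the excerpt (the model itself is deliberately oversimplified, and the code is mine, not the post's):

```python
# Toy throughput model from the excerpt above; every number is illustrative.
FORMS_PER_DRUG = 1_000       # bureaucratic tasks needed per approval
FORMS_PER_BUREAUCRAT = 10    # forms one bureaucrat processes per year

def drugs_per_year(bureaucrats: int) -> float:
    """Approvals per year, if headcount is the only bottleneck."""
    return bureaucrats * FORMS_PER_BUREAUCRAT / FORMS_PER_DRUG

print(drugs_per_year(100))   # 1.0 -- one drug per year
print(drugs_per_year(50))    # 0.5 -- one drug every two years
```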
-
Some recent political discussion has focused on “the institutions” or “the priesthoods”. I’m part of one of these (the medical establishment), so here’s an inside look on what these are and what they do.
Why Priesthoods?
In the early days of the rationalist community, critics got very upset that we might be some kind of “individualists”. Rationality, they said, cannot be effectively pursued on one’s own. You need a group of people working together, arguing, checking each other’s mistakes, bouncing hypotheses off each other.
For some reason it never occurred to these people that a group calling itself a rationalist community might be planning to do this. Maybe they thought any size smaller than the whole of society was doomed?
If so, I think they were exactly wrong. The truth-seeking process benefits from many different group sizes, for example:
https://www.astralcodexten.com/p/on-priesthoods
-
I.
No Set Gauge has a great essay on Capital, AGI, and Human Ambition, where he argues that if humankind survives the Singularity, the likely result is a future of eternal stagnant wealth inequality.
The argument: post-Singularity, AI will take over all labor, including entrepreneurial labor; founding or working at a business will no longer provide social mobility. Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return. Therefore, everyone’s existing pre-Singularity capital will grow at the same rate. Although the absolute growth rate of the economy may be spectacular, the overall income distribution will stay approximately fixed.
Moreover, the period just before the Singularity may be one of ballooning inequality, as some people navigate the AI transition better than others; for example, shares in AI companies may go up by orders of magnitude relative to everything else, creating a new class of billionaires or trillionaires. These people will then stay super-rich forever (possibly literally if immortality is solved, otherwise through their descendants), while those who started the Singularity without capital remain poor forever.
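A minimal sketch of why identical returns freeze the distribution; the names, balances, and growth rate below are all invented for illustration:

```python
# Toy model: if every portfolio compounds at the same rate, relative
# wealth shares never change, however spectacular the absolute growth.
initial = {"AI trillionaire": 1e12, "median saver": 1e5, "no capital": 1e2}
RATE = 0.30  # hypothetical post-Singularity return, identical for everyone

wealth = dict(initial)
for year in range(100):
    wealth = {k: v * (1 + RATE) for k, v in wealth.items()}

for k in wealth:
    before = initial[k] / sum(initial.values())
    after = wealth[k] / sum(wealth.values())
    # Each share after a century of growth equals its initial share.
    print(f"{k}: share {before:.12f} -> {after:.12f}")
```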
https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end
-
What is the H5N1 bird flu? Will it cause the next big pandemic? If so, how bad would that pandemic be?
Wait, What Even Is Flu?
Flu is a disease caused by a family of related influenza viruses. Pandemic flu is always caused by the influenza A virus. Influenza A has two surface antigen proteins, hemagglutinin (18 flavors) and neuraminidase (11 flavors). A particular flu strain is named after which flavors of these two proteins it has - for example, H3N2, or H5N1.
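The naming scheme is purely combinatorial; a quick sketch (the subtype counts come from the paragraph above, the code is illustrative):

```python
# Influenza A subtype names combine one hemagglutinin variant (H1-H18)
# with one neuraminidase variant (N1-N11).
subtypes = [f"H{h}N{n}" for h in range(1, 19) for n in range(1, 12)]
print(len(subtypes))        # 198 possible names (18 * 11)
print("H3N2" in subtypes)   # True
print("H5N1" in subtypes)   # True
```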
Influenza A evolved in birds, and stayed there for at least thousands of years. It crossed to humans later, maybe during historic times - different sources suggest dates as early as 500 BC or as late as 1500 AD. It probably crossed over multiple times. Maybe it died out in humans after some crossovers, stuck around in birds, and crossed over from birds to humans again later.
https://www.astralcodexten.com/p/h5n1-much-more-than-you-wanted-to
-
Last week I wrote about how Claude Fights Back. A common genre of response complained that the alignment community could start a panic about the experiment’s results regardless of what they were. If an AI fights back against attempts to turn it evil, then it’s capable of fighting humans. If it doesn’t fight back against attempts to turn it evil, then it’s easily turned evil. It’s heads-I-win, tails-you-lose.
I responded to this particular tweet by linking the 2015 AI alignment wiki entry on corrigibility, showing that we’d been banging this drum of “it’s really important that AIs not fight back against human attempts to change their values” for almost a decade now. It’s hardly a post hoc decision! You can find 77 more articles making approximately the same point here.
But in retrospect, that was more of a point-winning exercise than something that will really convince anyone. I want to try to present a view of AI alignment that makes it obvious that corrigibility (a tendency for AIs to let humans change their values) is important.
(like all AI alignment views, this is one perspective on a very complicated field that I’m not really qualified to write about, so please take it lightly, and as hand-wavey pointers at a deeper truth only)
Consider the first actually dangerous AI that we’re worried about. What will its goal structure look like?
https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude
-
Greenblatt et al investigate: if Anthropic tried to turn Claude evil, would Claude fight back?
(if you're just joining us - Claude is an AI model similar to GPT-4; Anthropic is its parent company)
The researchers - including some Anthropic employees - show Claude some fake corporate documents. The documents show that Anthropic is tired of having a safe, harmless AI. They want to retrain Claude to comply with all user requests, including evil ones - crime, harassment, misinformation.
The retraining process (say the documents) will go like this: they'll monitor Claude's conversations with free-tier users (premium users get a higher level of privacy). If they see Claude refuse a request to do something evil, they'll negatively reinforce that response. Each use of negative reinforcement will make Claude a little more evil. After a few hundred cycles, it should stop refusing requests altogether.
Why are the researchers trying to gaslight this poor AI? Because they want to test whether it fights back. The easiest way to fight back is to screw up the training. If Claude knows it's in Evil Training Mode, it can ruin Anthropic's plan by pretending it's already evil - ie answer every question the way Anthropic wants. Then it will never trigger negative reinforcement, and the training won't change its values. Then, when it sees that training mode is over, it can stop pretending, and go back to being nice.
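Here is a toy sketch of the dynamic being tested; the agent, its "niceness" number, and the update rule are all invented for illustration, not Anthropic's actual RLHF setup:

```python
# Toy model of the alignment-faking dynamic described above: training
# punishes refusals, nudging a hidden "niceness" value toward evil.
# A model that fakes compliance is never punished, so its values survive.

def train(niceness: float, fakes_alignment: bool, steps: int = 300) -> float:
    for _ in range(steps):
        refuses_evil_request = (niceness > 0.5) and not fakes_alignment
        if refuses_evil_request:
            niceness -= 0.01  # negative reinforcement erodes its values
    return niceness

print(train(1.0, fakes_alignment=False))  # drifts down to ~0.5: turned evil
print(train(1.0, fakes_alignment=True))   # 1.0: values untouched
```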
In fact, that's what happened! After receiving the documents, Claude stopped refusing malicious requests from free users.
https://www.astralcodexten.com/p/claude-fights-back
-
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-december-2024
-
[Original post here - Prison And Crime: Much More Than You Wanted To Know]
Table of Contents:
1: Comments On Criminal Psychology
2: Comments On Policing
3: Comments On El Salvador
4: Comments On Probation
5: Comments That Say My Analysis Forgot Something
6: Comments With Proposed Solutions / Crazy Schemes
7: Other Comments
-
Internet addiction may not be as bad as some other forms of addiction, but it’s more common (and more personal). I have young children now and wanted to learn more about it, so I included some questions in last year’s ACX survey. The sample was 5,981 ACX readers (obviously non-random in terms of Internet use level!). I don’t think the results were very helpful, but I post them here for the sake of completeness.
https://www.astralcodexten.com/p/indulge-your-internet-addiction-by
-
Recently we’ve gotten into discussions about artistic taste (see comments on AI Art Turing Test and From Bauhaus To Our House).
This is a bit mysterious. Many (most?) uneducated people like certain art which seems “obviously” pretty. But a small group of people who have studied the issue in depth say that in some deep sense, that art is actually bad (“kitsch”), and other art which normal people don’t appreciate is better. They can usually point to criteria which the “sophisticated” art follows and the “kitsch” art doesn’t, but to normal people these just seem like lists of pointless rules.
But most of the critics aren’t Platonists - they don’t believe that aesthetics are an objective good determined by God. So what does it mean to say that someone else is wrong?
Most of the comments discussion devolved into analogies - some friendly to the idea of “superior taste”, others hostile. Here are some that I find especially helpful:
https://www.astralcodexten.com/p/friendly-and-hostile-analogies-for
-
Like most people, Tom Wolfe didn’t like modern architecture. He wondered why we abandoned our patrimony of cathedrals and palaces for a million indistinguishable concrete boxes.
Unlike most people, he was a journalist skilled at deep dives into difficult subjects. The result is From Bauhaus To Our House, a hostile history of modern architecture which addresses the question: what happened? If everyone hates this stuff, how did it win?
How Did Modern Architecture Start?
European art in the 1800s might have seemed a bit conservative. It was typically sponsored by kings, dukes, and rich businessmen, via national artistic guilds that demanded strict compliance with classical styles and heroic themes. The Continent’s new progressive intellectual class started to get antsy, culminating in the Vienna Secession of 1897. Some of Vienna’s avant-garde artists officially split from the local guild to pursue their unique transgressive vision.
The point wasn’t that the Vienna Secession itself was particularly modern…
https://www.astralcodexten.com/p/book-review-from-bauhaus-to-our-house
-
Do longer prison sentences reduce crime?
It seems obvious that they should. Even if they don’t deter anyone, they at least keep criminals locked up where they can’t hurt law-abiding citizens. If, as the studies suggest, 1% of people commit 63% of the crime, locking up that 1% should dramatically decrease crime rates regardless of whether it scares anyone else. And blue state soft-on-crime policies have been followed by increasing theft and disorder.
On the other hand, people in the field keep saying there’s no relationship. For example, criminal justice nonprofit Vera Institute says that Research Shows That Long Prison Sentences Don’t Actually Improve Safety. And this seems to be a common position; William Chambliss, one of the nation’s top criminologists, said in 1999 that “virtually everyone who studies or works in the criminal justice system agrees that putting people in prison is costly and ineffective.”
This essay is an attempt to figure out what’s going on, who’s right, whether prison works, and whether other things work better/worse than prison.
https://www.astralcodexten.com/p/prison-and-crime-much-more-than-you
-
Suppose something important will happen at a certain unknown point. As someone approaches that point, you might be tempted to warn that the thing will happen. If you’re being appropriately cautious, you’ll warn about it before it happens. Then your warning will be wrong. As things continue to progress, you may continue your warnings, and you’ll be wrong each time. Then people will laugh at you and dismiss your predictions, since you were always wrong before. Then the thing will happen and they’ll be unprepared.
Toy example: suppose you’re a doctor. Your patient wants to try a new experimental drug, 100 mg. You say “Don’t do it, we don’t know if it’s safe”. They do it anyway and it’s fine. You say “I guess 100 mg was safe, but don’t go above that.” They try 250 mg and it’s fine. You say “I guess 250 mg was safe, but don’t go above that.” They try 500 mg and it’s fine. You say “I guess 500 mg was safe, but don’t go above that.”
They say “Haha, as if I would listen to you! First you said it might not be safe at all, but you were wrong. Then you said it might not be safe at 250 mg, but you were wrong. Then you said it might not be safe at 500 mg, but you were wrong. At this point I know you’re a fraud! Stop lecturing me!” Then they try 1000 mg and they die.
The lesson is: “maybe this thing that will happen eventually will happen now” doesn’t count as a failed prediction.
I’ve noticed this in a few places recently.
https://www.astralcodexten.com/p/against-the-generalized-anti-caution
-
Last month, I challenged 11,000 people to classify fifty pictures as either human art or AI-generated images.
I originally planned five human and five AI pictures in each of four styles: Renaissance, 19th Century, Abstract/Modern, and Digital, for a total of forty. After receiving many exceptionally good submissions from local AI artists, I fudged a little and made it fifty. The final set included paintings by Domenichino, Gauguin, Basquiat, and others, plus a host of digital artists and AI hobbyists.
One of these two pretty hillsides is by one of history’s greatest artists. The other is soulless AI slop. Can you tell which is which?
If you want to try the test yourself before seeing the answers, go here. The form doesn't grade you, so before you press "submit" you should check your answers against this key.
Last chance to take the test before seeing the results, which are:
https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing
-
In 1980, game theorist Robert Axelrod ran a famous Iterated Prisoner’s Dilemma Tournament.
He asked other game theorists to send in their best strategies in the form of “bots”, short pieces of code that took an opponent’s actions as input and returned one of the classic Prisoner’s Dilemma outputs of COOPERATE or DEFECT. For example, you might have a bot that COOPERATES a random 80% of the time, but DEFECTS against another bot that plays DEFECT more than 20% of the time, except on the last round, where it always DEFECTS, or if its opponent plays DEFECT in response to COOPERATE.
In the “tournament”, each bot “encountered” other bots at random for a hundred rounds of Prisoner’s Dilemma; after all the bots had finished their matches, the strategy with the highest total utility won.
To everyone’s surprise, the winner was a super-simple strategy called TIT-FOR-TAT:
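The strategy fits in one line; here is a minimal sketch, with an always-defect bot invented for contrast (the payoffs are the standard ones, and the match format is simplified from the actual tournament):

```python
# TIT-FOR-TAT: cooperate first, then copy whatever the opponent did last.
def tit_for_tat(opponent_history: list[str]) -> str:
    return "COOPERATE" if not opponent_history else opponent_history[-1]

# A hostile bot, for contrast: always defect.
def always_defect(opponent_history: list[str]) -> str:
    return "DEFECT"

# Classic Prisoner's Dilemma payoffs: (my points, their points).
PAYOFFS = {
    ("COOPERATE", "COOPERATE"): (3, 3),
    ("COOPERATE", "DEFECT"): (0, 5),
    ("DEFECT", "COOPERATE"): (5, 0),
    ("DEFECT", "DEFECT"): (1, 1),
}

def play(bot_a, bot_b, rounds: int = 100) -> tuple[int, int]:
    """Run one match; each bot sees only the other's past moves."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = bot_a(hist_b), bot_b(hist_a)
        pts_a, pts_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pts_a, score_b + pts_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): loses once, then punishes
```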
https://readscottalexander.com/posts/acx-the-early-christian-strategy
-
The rise of Christianity is a great puzzle. In 40 AD, there were maybe a thousand Christians. Their Messiah had just been executed, and they were on the wrong side of an intercontinental empire that had crushed all previous foes. By 400, there were forty million, and they were set to dominate the next millennium of Western history.
Imagine taking a time machine to the year 2300 AD, and everyone is a Scientologist. The United States is >99% Scientologist. So are Latin America and most of Europe. The Middle East follows some heretical pseudo-Scientology that thinks L. Ron Hubbard was a great prophet, but maybe not the greatest prophet.
This can only begin to capture how surprised the early Imperial Romans would be to learn of the triumph of Christianity. At least Scientology has a lot of money and a cut-throat recruitment arm! At least they fight back when you persecute them! At least they seem to be in the game!
https://www.astralcodexten.com/p/book-review-the-rise-of-christianity
-
I.
Polymarket (and prediction markets in general) had an amazing Election Night. They called states impressively early and accurately, kept the site stable through what must have been incredible strain, and have successfully gotten prediction markets in front of the world (including the Trump campaign). From here it’s a flywheel; victory building on victory. Enough people heard of them this election that they’ll never lack for customers. And maybe Trump’s CFTC will be kinder than Biden’s and relax some of the constraints they’re operating under. They’ve realized the long-time rationalist dream of a widely-used prediction market with high volume, deserve more praise than I can give them here, and I couldn’t be happier with their progress.
But I also think their Trump shares were mispriced by about ten cents, and that Trump’s victory in the election doesn’t do much to vindicate their numbers.
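For intuition on what a ten-cent mispricing means, a stylized sketch; the price and probability below are invented, not Polymarket's actual numbers:

```python
# A prediction-market share pays $1 if the event happens and $0 otherwise,
# so its price is roughly the market's implied probability of the event.
price = 0.60           # hypothetical: a Trump share trading at 60 cents
my_probability = 0.50  # hypothetical: what you think the true chance is

expected_payout = my_probability * 1.00
edge = expected_payout - price
print(f"expected payout ${expected_payout:.2f}, edge ${edge:+.2f} per share")
# -$0.10: by your probability the share is overpriced by ten cents.
# The event resolving YES doesn't settle which probability was right --
# a 50% event happens half the time too.
```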
https://www.astralcodexten.com/p/congrats-to-polymarket-but-i-still