-
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-april-2024
-
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times.
This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen.
You can find the list below, in the following order:
Africa & Middle East
Asia-Pacific (including Australia)
Europe (including UK)
North America & Central America
South America
There should very shortly be a map of these meetups on the LessWrong community page.
https://www.astralcodexten.com/p/spring-meetups-everywhere-2024
-
Saar Wilf is an Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias.
His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it.
But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver, certainly not me - tries to do full Bayesian reasoning on fuzzy real-world problems. They’d be too hard to model. You’d make some philosophical mistake converting the situation into numbers, then end up much worse off than if you’d tried normal human intuition.
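The update rule itself is simple even when applying it to fuzzy real-world problems isn't. A minimal sketch of a single Bayesian update (the numbers here are invented for illustration, not taken from any Rootclaim analysis):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    p_evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / p_evidence

# Start at 50/50, then see evidence that's 3x more likely
# if the hypothesis is true than if it's false.
posterior = bayes_update(prior=0.5, likelihood_if_true=0.6, likelihood_if_false=0.2)
print(round(posterior, 2))  # 0.75
```

The hard part, as the paragraph above says, isn't this arithmetic; it's choosing defensible likelihood numbers for a messy real-world situation in the first place.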
Wilf spent years working on this problem, until he was satisfied his method could avoid these kinds of pitfalls. Then Rootclaim started posting analyses of different open problems to its site, rootclaim.com. Here are three:
-
It’s every blogger’s curse to return to the same arguments again and again. Matt Yglesias has to keep writing “maybe we should do popular things instead of unpopular ones”, Freddie de Boer has to keep writing “the way culture depicts mental illness is bad”, and for whatever reason, I keep getting in fights about whether you can have probabilities for non-repeating, hard-to-model events. For example:
What is the probability that Joe Biden will win the 2024 election?
What is the probability that people will land on Mars before 2050?
What is the probability that AI will destroy humanity this century?
The argument against: usually we use probability to represent an outcome from some well-behaved distribution. For example, if there are 400 white balls and 600 black balls in an urn, the probability of pulling out a white ball is 40%. If you pulled out 100 balls, close to 40 of them would be white. You can literally pull out the balls and do the experiment.
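You really can do the experiment; a quick simulation sketch of the urn (my illustration, not from the post):

```python
import random

# 400 white and 600 black balls. Draw with replacement many times
# and check that the long-run white fraction converges to 40%.
random.seed(0)
urn = ["white"] * 400 + ["black"] * 600
draws = [random.choice(urn) for _ in range(100_000)]
white_fraction = draws.count("white") / len(draws)
print(round(white_fraction, 2))  # close to 0.40
```

This is exactly the sense in which the urn probability is "well-behaved": the number is recoverable from repeated trials, which is what the Mars question seems to lack.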
In contrast, saying “there’s a 45% probability people will land on Mars before 2050” seems to come out of nowhere. How do you know? If you were to say “the probability humans will land on Mars is exactly 45.11782%”, you would sound like a loon. But how is saying that it’s 45% any better? With balls in an urn, the probability might very well be 45.11782%, and you can prove it. But with humanity landing on Mars, aren’t you just making this number up?
Since people on social media have been talking about this again, let’s go over it one more depressing, fruitless time.
https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist
-
I have data from two big Internet surveys, Less Wrong 2014 and Clearer Thinking 2023. Both asked questions about IQ:
The average LessWronger reported their IQ as 138.
The average ClearerThinking user reported their IQ as 130.
These are implausibly high. Only about 1 in 200 people have an IQ of 138 or higher. About 1 in 50 have an IQ of 130 or higher, and the ClearerThinking survey used crowdworkers (eg Mechanical Turk), who should be totally average.
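Those rarity figures follow from IQ being normed to a mean of 100 with standard deviation 15. A quick check of the tail fractions (a sketch assuming a normal distribution; the 1/200 and 1/50 in the text are round numbers):

```python
import math

def iq_tail(iq, mean=100.0, sd=15.0):
    """Fraction of the population at or above a given IQ, assuming IQ ~ N(100, 15)."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"IQ 138+: about 1 in {1 / iq_tail(138):.0f}")  # roughly 1 in 175-180
print(f"IQ 130+: about 1 in {1 / iq_tail(130):.0f}")  # about 1 in 44
```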
Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these don’t look like lies. Both surveys asked for SAT scores, which are known to correspond to IQ. The LessWrong average was 1446, corresponding to IQ 140. The ClearerThinking average was 1350, corresponding to IQ 134. People seem less likely to lie about their SATs, and least likely of all to optimize their lies for getting IQ/SAT correspondences right.
And the Less Wrong survey asked people what test they based their estimates off of. Some people said fake Internet IQ tests. But other people named respected tests like the WAIS, WISC, and Stanford-Binet, or testing sessions by Mensa (yes, I know you all hate Mensa, but their IQ tests are considered pretty accurate). The subset of about 150 people who named unimpeachable tests had slightly higher IQ (average 140) than everyone else.
Thanks to Spencer Greenberg of ClearerThinking, I think I’m finally starting to make progress in explaining what’s going on.
https://www.astralcodexten.com/p/the-mystery-of-internet-survey-iqs
-
Both the Atlantic’s critique of polyamory and my defense of it shared the same villain - “therapy culture”, the idea that you should prioritize “finding your true self” and make drastic changes if your current role doesn’t seem “authentically you”.
A friend recently suggested a defense of this framework, which surprised me enough that I now relay it to you.
https://www.astralcodexten.com/p/in-partial-grudging-defense-of-some
-
(inspired by Aid Airdrop Kills Five People In Gaza After Parachute Fails)
https://www.astralcodexten.com/p/verses-on-five-people-being-killed
-
Robots of prediction, predictions of robots
https://www.astralcodexten.com/p/mantic-monday-31124
-
There are ACX meetup groups all over the world. Lots of people are vaguely interested, but don't try them out until I make a big deal about it on the blog. Since learning that, I've tried to make a big deal about it on the blog twice annually, and it's that time of year again.
If you're willing to organize a meetup for your city, please fill out the organizer form.
https://www.astralcodexten.com/p/spring-meetups-everywhere-2024-call
-
The consensus says "biological race doesn't exist". But if race doesn't exist, how do we justify affirmative action, cultural appropriation, and all our other race-related practices? The consensus says that, although race doesn't exist biologically, it exists as a series of formative experiences. Black children are raised by black mothers in black communities, think of themselves as black, identify with black role models, and face anti-black prejudice. By the time they're grown up, they've had different experiences which give them a different perspective from white people. Therefore, it’s reasonable to think of them as a specific group, “the black race”, and have institutions to accommodate them even if they’re biologically indistinguishable.
I thought about this while reading A Professor Claimed To Be Native American; Did She Know She Wasn’t? (paywalled), Jay Kang's New Yorker article on Elizabeth Hoover. The story goes something like this (my summary):
https://www.astralcodexten.com/p/how-should-we-think-about-race-and
-
I. What’s Going On
We got 351 proposals for ACX Grants, but were only able to fund 34 of them. I’m not a professional grant evaluator and can’t guarantee there aren’t some jewels hidden among the remaining 317.
The plan has always been to run an impact market - a site where investors crowdfund some of the remaining grant proposals. If the project goes well, then philanthropists who missed it the first time (eg me) will pay the investors for funding it, potentially earning them a big profit. In our last impact market test, some people (okay, one person) managed to get 25x their initial investment by funding a charity which did really well.
So in my ideal world, we’d be running an impact market where you could invest your money in the remaining 317 proposals and make a profit if they did well. We’ve encountered two flaws on the way to that ideal world:
https://www.astralcodexten.com/p/acx-grants-followup-impact-market
-
Winners and takeaways from last year's prediction contest
I. The Annual Forecasting Contest
…is one of my favorite parts of this blog. I get a spreadsheet with what are basically takes - “Russia is totally going to win the war this year”, “There’s no way Bitcoin can possibly go down”. Then I do some basic math to it, and I get better takes. There are ways to look at a list of 3300 people’s takes and do math and get a take reliably better than all but a handful of them.
Why is this interesting, when a handful of people still beat the math? Because we want something that can be applied prospectively and reliably. If John Smith from Townsville was the highest scoring participant, it matters a lot whether he’s a genius who can see the future, or if he just got lucky. Part of the goal of this contest was to figure that out. To figure out if the most reliable way to determine the future was to trust one identifiable guy, to trust some mathematical aggregation across guys, or something else.
Here’s how it goes: in January 2023, I asked people to predict fifty questions about the upcoming year, like “Will Joe Biden be the leading candidate in the Democratic primary?” in the form of a probability (eg “90% chance”). About 3300 of you kindly took me up on that (“Blind Mode”).
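The "basic math" here is forecast aggregation. One standard approach takes the median of everyone's probabilities; another averages in log-odds space, which gives confident forecasters more pull. A minimal sketch of both (my illustration of standard techniques, not necessarily the contest's exact method):

```python
import math
from statistics import median

def mean_log_odds(probs):
    """Aggregate forecasts by averaging in log-odds space,
    then mapping back to a probability."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    avg = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-avg))

# Five forecasters' probabilities for the same yes/no question:
forecasts = [0.9, 0.85, 0.7, 0.6, 0.95]
print(round(median(forecasts), 2))         # 0.85
print(round(mean_log_odds(forecasts), 2))  # 0.84
```

The point of the contest is that simple aggregates like these reliably beat almost every individual forecaster, even though a handful of individuals beat the aggregate in any given year.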
https://www.astralcodexten.com/p/who-predicted-2023
-
All right, let’s do this again.
Write a review of a book. There’s no official word count requirement, but previous finalists and winners were often between 2,000 and 10,000 words. There’s no official recommended style, but check the style of last year’s finalists and winners or my ACX book reviews (1, 2, 3) if you need inspiration. Please limit yourself to one entry per person or team.
Then send me your review through this Google Form. The form will ask for your name, email, the title of the book, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you’re a finalist. DON’T INCLUDE YOUR NAME OR ANY HINT ABOUT YOUR IDENTITY IN THE GOOGLE DOC ITSELF, ONLY IN THE FORM. I want to make this contest as blinded as possible, so I’m going to hide that column in the form immediately and try to judge your docs on their merit.
https://www.astralcodexten.com/p/book-review-contest-rules-2024
-
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-february-2024
-
https://www.astralcodexten.com/p/less-utilitarian-than-thou
-
https://www.astralcodexten.com/p/who-does-polygenic-selection-help
-
[Original posts: Contra The Atlantic On Polyamory (subscriber only), You Don’t Hate Polyamory, You Hate People Who Write Books]
1: Comments I Can Respond To With Something Resembling Actual Statistics
2: Comments I Will Argue Against Despite Not Having Statistics, Sorry
3: Comments By People With Personal Anecdotes
4: Comments On Children
5: Other Comments
https://www.astralcodexten.com/p/highlights-from-the-comments-on-polyamory
-
AI forecasters come of age / Prediction market reality TV dating show? / OpenAI's Sora
https://www.astralcodexten.com/p/mantic-monday-21924
-
https://www.astralcodexten.com/p/x-fact-check-does-gender-integration
-
Libertarians don’t really have their own holiday. Communists have May Day. The woke have MLK’s birthday. Nationalists have July 4th or their local equivalent. But libertarians have nothing.
I propose Valentine’s Day. The way people think about love is the last relic of the way that libertarians think about everything.
https://www.astralcodexten.com/p/love-and-liberty