Episodes
-
As the Trump transition continues and we try to steer and anticipate its decisions on AI as best we can, there was continued discussion about one of the AI debate's favorite questions: Are we making huge progress real soon now, or is deep learning hitting a wall? My best guess is that it's kind of both: the pure scaling techniques of the past are, on their own, hitting a wall, but progress remains rapid, and the major companies are developing other ways to improve performance, starting with OpenAI's o1.
Point of order: It looks like as I switched phones, WhatsApp kicked me out of all of my group chats. If I was in your group chat, and you’d like me to stay, please add me again. If you’re in a different group you’d like me to join on either WhatsApp or Signal (or other platforms) and would like [...]
---
Outline:
(00:58) Language Models Offer Mundane Utility
(02:24) Language Models Don’t Offer Mundane Utility
(04:20) Can’t Liver Without You
(12:04) Fun With Image Generation
(12:51) Deepfaketown and Botpocalypse Soon
(14:11) Copyright Confrontation
(15:25) The Art of the Jailbreak
(15:54) Get Involved
(18:10) Math is Hard
(20:20) In Other AI News
(25:04) Good Advice
(27:19) AI Will Improve a Lot Over Time
(30:56) Tear Down This Wall
(38:04) Quiet Speculations
(38:54) The Quest for Sane Regulations
(47:04) The Quest for Insane Regulations
(49:43) The Mask Comes Off
(52:08) Richard Ngo Resigns From OpenAI
(55:44) Unfortunate Marc Andreessen Watch
(56:53) The Week in Audio
(01:05:00) Rhetorical Innovation
(01:09:44) Seven Boats and a Helicopter
(01:11:27) The Wit and Wisdom of Sam Altman
(01:12:10) Aligning a Smarter Than Human Intelligence is Difficult
(01:14:50) People Are Worried About AI Killing Everyone
(01:15:14) Other People Are Not As Worried About AI Killing Everyone
(01:17:32) The Lighter Side
The original text contained 10 images which were described by AI.
---
First published: November 14th, 2024
Source: https://www.lesswrong.com/posts/FC9hdySPENA7zdhDb/ai-90-the-wall
Narrated by TYPE III AUDIO.
-
Related: Book Review: On the Edge: The Gamblers

I have previously been heavily involved in sports betting. That world was very good to me. The times were good, as were the profits. It was a skill game, and a form of positive-sum entertainment, and I was happy to participate and help ensure the sophisticated customer got a high quality product. I knew it wasn’t the most socially valuable enterprise, but I certainly thought it was net positive.

When sports gambling was legalized in America, I was hopeful it too could prove a net positive force, far superior to the previous obnoxious wave of daily fantasy sports. It brings me no pleasure to conclude that this was not the case. The results are in. Legalized mobile gambling on sports, let alone casino games, has proven to be a huge mistake. The societal impacts are far worse than I expected.
Table [...]
---
Outline:
(01:02) The Short Answer
(02:01) Paper One: Bankruptcies
(07:03) Paper Two: Reduced Household Savings
(08:37) Paper Three: Increased Domestic Violence
(10:04) The Product as Currently Offered is Terrible
(12:02) Things Sharp Players Do
(14:07) People Cannot Handle Gambling on Smartphones
(15:46) Yay and Also Beware Trivial Inconveniences (a future full post)
(17:03) How Does This Relate to Elite Hypocrisy?
(18:32) The Standard Libertarian Counterargument
(19:42) What About Other Prediction Markets?
(20:07) What Should Be Done
The original text contained 3 images which were described by AI.
---
First published: November 11th, 2024
Source: https://www.lesswrong.com/posts/tHiB8jLocbPLagYDZ/the-online-sports-gambling-experiment-has-failed
Narrated by TYPE III AUDIO.
-
A lot happened in AI this week, but most people's focus was very much elsewhere.
I’ll start with what Trump might mean for AI policy, then move on to the rest. This is the future we have to live in, and potentially save. Back to work, as they say.
Table of Contents
Trump Card. What does Trump's victory mean for AI policy going forward?
Language Models Offer Mundane Utility. Dump it all in the screen captures.
Language Models Don’t Offer Mundane Utility. I can’t help you with that, Dave.
Here Let Me Chatbot That For You. OpenAI offers SearchGPT.
Deepfaketown and Botpocalypse Soon. Models persuade some Trump voters.
Fun With Image Generation. Human image generation, that is.
The Vulnerable World Hypothesis. Google AI finds a zero day exploit.
They Took Our Jobs. The future of [...]
---
Outline:
(00:23) Trump Card
(04:59) Language Models Offer Mundane Utility
(10:31) Language Models Don’t Offer Mundane Utility
(12:26) Here Let Me Chatbot That For You
(15:32) Deepfaketown and Botpocalypse Soon
(18:52) Fun With Image Generation
(20:05) The Vulnerable World Hypothesis
(22:28) They Took Our Jobs
(31:52) The Art of the Jailbreak
(33:32) Get Involved
(33:40) In Other AI News
(36:21) Quiet Speculations
(40:10) The Quest for Sane Regulations
(49:46) The Quest for Insane Regulations
(51:09) A Model of Regulatory Competitiveness
(53:49) The Week in Audio
(55:18) The Mask Comes Off
(58:48) Open Weights Are Unsafe and Nothing Can Fix This
(01:04:03) Open Weights Are Somewhat Behind Closed Weights
(01:09:11) Rhetorical Innovation
(01:13:23) Aligning a Smarter Than Human Intelligence is Difficult
(01:15:34) People Are Worried About AI Killing Everyone
(01:16:26) The Lighter Side
The original text contained 12 images which were described by AI.
---
First published: November 7th, 2024
Source: https://www.lesswrong.com/posts/xaqR7AxSYmcpsuEPW/ai-89-trump-card
Narrated by TYPE III AUDIO.
-
Following up on the Biden Executive Order on AI, the White House has now issued an extensive memo outlining its AI strategy. The main focus is on government adaptation and encouraging innovation and competitiveness, but there are also sections on safety and international governance. Who knows if, a week or two from now after the election, we will expect any of that to get a chance to be meaningfully applied. If AI is your big issue and you don’t know who to support, this is as detailed a policy statement as you’re going to get.
We also have word of a new draft AI regulatory bill out of Texas, along with similar bills moving forward in several other states. It's a bad bill, sir. It focuses on use cases, taking an EU-style approach to imposing requirements on those doing ‘high-risk’ things, and would likely do major damage to the [...]
---
Outline:
(01:37) Language Models Offer Mundane Utility
(06:39) Language Models Don’t Offer Mundane Utility
(15:40) In Summary
(17:53) Master of Orion
(20:01) Whispers in the Night
(25:10) Deepfaketown and Botpocalypse Soon
(25:39) Overcoming Bias
(29:43) They Took Our Jobs
(33:51) The Art of the Jailbreak
(44:36) Get Involved
(44:47) Introducing
(46:15) In Other AI News
(48:28) Quiet Speculations
(01:00:53) Thanks for the Memos: Introduction and Competitiveness
(01:08:22) Thanks for the Memos: Safety
(01:16:47) Thanks for the Memos: National Security and Government Adaptation
(01:20:55) Thanks for the Memos: International Governance
(01:25:43) EU AI Act in Practice
(01:32:34) Texas Messes With You
(01:50:12) The Quest for Sane Regulations
(01:57:00) The Week in Audio
(01:58:58) Rhetorical Innovation
(02:06:15) Roon Speaks
(02:15:45) The Mask Comes Off
(02:16:55) I Was Tricked Into Talking About Shorting the Market Again
(02:28:33) The Lighter Side
The original text contained 17 footnotes which were omitted from this narration.
The original text contained 14 images which were described by AI.
---
First published: October 31st, 2024
Source: https://www.lesswrong.com/posts/HHkYEyFaigRpczhHy/ai-88-thanks-for-the-memos
Narrated by TYPE III AUDIO.
-
We’re coming out firmly against it.
Our attitude:
The customer is always right. Yes, you should go ahead and fix your own damn pipes if you know how to do that, and ignore anyone who tries to tell you different. And if you don’t know how to do it, well, it's at your own risk.
With notably rare exceptions, it should be the same for everything else.
I’ve been collecting these for a while. It's time.
Campaign Talk
Harris-Walz platform includes a little occupational licensing reform, as a treat.
Universal Effects and Recognition
Ohio's ‘universal licensing’ law has a big time innovation, which is that work experience outside the state actually exists and can be used to get a license (WSJ).
Occupational licensing decreases the number of Black men in licensed professions by up to 19% [...]
---
Outline:
(00:43) Campaign Talk
(00:52) Universal Effects and Recognition
(03:57) Construction
(04:08) Doctors and Nurses
(05:01) Florists
(07:32) Fortune Telling
(09:41) Hair
(14:23) Lawyers
(16:07) Magicians
(16:36) Military Spouses
(17:21) Mountain Climbing
(18:07) Music
(18:20) Nurses
(19:49) Physical Therapists
(20:09) Whatever Could Be Causing All This Rent Seeking
(21:42) Tornado Relief
(22:10) Pretty Much Everything
The original text contained 9 images which were described by AI.
---
First published: October 30th, 2024
Source: https://www.lesswrong.com/posts/bac4wxb9F4sciuAh6/occupational-licensing-roundup-1
Narrated by TYPE III AUDIO.
-
There's more campaign talk about housing. The talk of needing more housing is highly welcome, with one prominent person after another (including Jerome Powell!) talking like a YIMBY.
A lot of the concrete proposals are of course terrible, but not all of them. I’ll start off covering all that along with everyone's favorite awful policy, which is rent control, then the other proposals. Then I’ll cover other general happenings.
Table of Contents
Rent Control.
The Administration Has a Plan.
Trump Has a Plan.
Build More Houses Where People Want to Live.
Prices.
Average Value.
Zoning Rules.
Zoning Reveals Value.
High Rise.
“Historic Preservation”.
Speed Kills.
Procedure.
San Francisco.
California.
Seattle.
Philadelphia.
Boston.
New York City.
St. Paul.
Florida.
Michigan.
The UK.
-
The big news of the week was the release of a new version of Claude Sonnet 3.5, complete with its ability (for now only through the API) to outright use your computer, if you let it. It's too early to tell how big an upgrade this is otherwise. ChatGPT got some interface tweaks that, while minor, are rather nice, as well.
OpenAI, while losing its Senior Advisor for AGI Readiness, is also in the midst of its attempted transition to a B-corp. The negotiations about who gets what share of that are heating up, so I also wrote about that as The Mask Comes Off: At What Price? My conclusion is that the deal as currently floated would be one of the largest thefts in history, out of the nonprofit, largely on behalf of Microsoft.
The third potentially major story is reporting on a new lawsuit against [...]
---
Outline:
(01:14) Language Models Offer Mundane Utility
(03:53) Language Models Don’t Offer Mundane Utility
(04:32) Deepfaketown and Botpocalypse Soon
(07:10) Character.ai and a Suicide
(12:23) Who and What to Blame?
(18:38) They Took Our Jobs
(19:51) Get Involved
(20:06) Introducing
(21:41) In Other AI News
(22:47) The Mask Comes Off
(27:26) Another One Bites the Dust
(31:30) Wouldn’t You Prefer a Nice Game of Chess
(32:55) Quiet Speculations
(34:54) The Quest for Sane Regulations
(38:10) The Week in Audio
(40:53) Rhetorical Innovation
(50:21) Aligning a Smarter Than Human Intelligence is Difficult
(01:00:50) People Are Worried About AI Killing Everyone
(01:02:46) Other People Are Not As Worried About AI Killing Everyone
(01:04:43) The Lighter Side
The original text contained 15 images which were described by AI.
---
First published: October 29th, 2024
Source: https://www.lesswrong.com/posts/3AcK7Pcp9D2LPoyR2/ai-87-staying-in-character
Narrated by TYPE III AUDIO.
-
Anthropic has released an upgraded Claude Sonnet 3.5, and the new Claude Haiku 3.5.
They claim across the board improvements to Sonnet, and it has a new rather huge ability accessible via the API: Computer use. Nothing could possibly go wrong.
Claude Haiku 3.5 is also claimed as a major step forward for smaller models. They are saying that on many evaluations it has now caught up to Opus 3.
Missing from this chart is o1, which is in some ways not a fair comparison since it uses so much inference compute, but does greatly outperform everything here on the AIME and some other tasks.
METR: We conducted an independent pre-deployment assessment of the updated Claude 3.5 Sonnet model and will share our report soon.
We only have very early feedback so far, so it's hard to tell how much what I will be [...]
---
Outline:
(01:32) OK, Computer
(05:16) What Could Possibly Go Wrong
(11:33) The Quest for Lunch
(14:07) Aside: Someone Please Hire The Guy Who Names Playstations
(17:15) Coding
(18:10) Startups Get Their Periodic Reminder
(19:36) Live From Janus World
(26:19) Forgot about Opus
The original text contained 3 images which were described by AI.
---
First published: October 24th, 2024
Source: https://www.lesswrong.com/posts/jZigzT3GLZoFTATG4/claude-sonnet-3-5-1-and-haiku-3-5
Narrated by TYPE III AUDIO.
-
The Information reports that OpenAI is close to finalizing its transformation to an ordinary Public Benefit B-Corporation. OpenAI has tossed its cap over the wall on this, giving its investors the right to demand refunds with interest if they don’t finish the transition in two years.
Microsoft very much wants this transition to happen. They would be the big winner, with an OpenAI that wants what is good for business. This also comes at a time when relations between Microsoft and OpenAI are fraying, and OpenAI is threatening to invoke its AGI clause to get out of its contract with Microsoft. That type of clause is the kind of thing they’re doubtless looking to get rid of as part of this.
The $37.5 billion question is, what stake will the non-profit get in the new OpenAI?
For various reasons that I will explore here, I think [...]
---
Outline:
(01:14) The Valuation in Question
(05:08) The Control Premium
(08:26) The Quest for AGI is OpenAI's Telos and Business Model
(11:37) OpenAI's Value is Mostly in the Extreme Upside
The original text contained 3 images which were described by AI.
---
First published: October 21st, 2024
Source: https://www.lesswrong.com/posts/5RweEwgJR2JxyCDPF/the-mask-comes-off-at-what-price
Narrated by TYPE III AUDIO.
-
Dario Amodei is thinking about the potential. The result is a mostly good essay called Machines of Loving Grace, outlining what can be done with ‘powerful AI’ if we had years of what was otherwise relative normality to exploit it in several key domains, and we avoided negative outcomes and solved the control and alignment problems. As he notes, a lot of pretty great things would then be super doable.
Anthropic also offers us improvements to its Responsible Scaling Policy (RSP, or what SB 1047 called an SSP). Still much left to do, but a clear step forward there.
Daniel Kokotajlo and Dean Ball have teamed up on an op-ed for Time on the need for greater regulatory transparency. It's very good.
Also, it's worth checking out the Truth Terminal saga. It's not as scary as it might look at first glance, but it is definitely [...]
---
Outline:
(01:01) Language Models Offer Mundane Utility
(05:10) Language Models Don’t Offer Mundane Utility
(11:21) Deepfaketown and Botpocalypse Soon
(19:52) They Took Our Jobs
(20:33) Get Involved
(20:48) Introducing
(21:58) In Other AI News
(26:08) Truth Terminal High Weirdness
(34:54) Quiet Speculations
(44:45) Copyright Confrontation
(45:02) AI and the 2024 Presidential Election
(46:02) The Quest for Sane Regulations
(51:00) The Week in Audio
(53:40) Just Think of the Potential
(01:15:09) Reactions to Machines of Loving Grace
(01:25:32) Assuming the Can Opener
(01:32:32) Rhetorical Innovation
(01:35:41) Anthropic Updates its Responsible Scaling Policy (RSP/SSP)
(01:41:35) Aligning a Smarter Than Human Intelligence is Difficult
(01:43:36) The Lighter Side
The original text contained 11 images which were described by AI.
---
First published: October 17th, 2024
Source: https://www.lesswrong.com/posts/zSNLvRBhyphwuYdeC/ai-86-just-think-of-the-potential
Narrated by TYPE III AUDIO.
-
It's monthly roundup time again, and it's happily election-free.
Thinking About the Roman Empire's Approval Rating
Propaganda works, ancient empires edition. This includes the Roman Republic being less popular than the Roman Empire and people approving of Sparta, whereas Persia and Carthage get left behind. They’re no FDA.
Polling USA: Net Favorable Opinion Of:
Ancient Athens: +44%
Roman Empire: +30%
Ancient Sparta: +23%
Roman Republic: +26%
Carthage: +13%
Holy Roman Empire: +7%
Persian Empire: +1%
Visigoths: -7%
Huns: -29%
YouGov / June 6, 2024 / n=2205
The Five Star Problem
What do we do about all 5-star ratings collapsing the way Peter describes here?
Peter Wildeford: TBH I am pretty annoyed that when I rate stuff the options are:
* “5 stars – everything was good enough I guess”
* “4 [...]
---
Outline:
(00:11) Thinking About the Roman Empire's Approval Rating
(01:13) The Five Star Problem
(06:35) Cooking at Home Being Cheaper is Weird
(08:18) With Fans Like These
(09:37) Journalist, Expose Thyself
(13:03) On Not Going the Extra Mile
(13:13) The Rocket Man Said a Bad Bad Thing
(16:27) The Joy of Bad Service
(19:07) Saying What is Not
(19:27) Concentration
(20:26) Should You Do What You Love?
(22:08) Should You Study Philosophy?
(24:31) The Destined Face
(25:09) Tales of Twitter
(34:14) Antisocial Media
(35:01) TikTok On the Clock
(39:07) Tier List of Champions
(40:50) Technology Advances
(42:15) Hotel Hype
(44:44) Government Working
(46:55) I Was Promised Flying Self-Driving Cars
(47:21) For Your Entertainment
(56:50) Cultural Dynamism
(58:43) Hansonian Features
(01:02:19) Variously Effective Altruism
(01:02:45) Nobel Intentions
(01:05:04) Gamers Gonna Game Game Game Game Game
(01:20:17) Sports Go Sports and the Problems with TV Apps These Days
(01:23:46) An Economist Seeks Lunch
(01:30:35) The Lighter Side
The original text contained 6 images which were described by AI.
---
First published: October 16th, 2024
Source: https://www.lesswrong.com/posts/Hq9ccwansFgqTueHA/monthly-roundup-23-october-2024
Narrated by TYPE III AUDIO.
-
Previous Economics Roundups: #1, #2, #3
Fun With Campaign Proposals (1)
Since this section discusses various campaign proposals, I’ll reiterate:
I could not be happier with my decision not to cover the election outside of the particular areas that I already cover. I have zero intention of telling anyone who to vote for. That's for you to decide.
All right, that's out of the way. On with the fun. And it actually is fun, if you keep your head on straight. Or at least it's fun for me. If you feel differently, no blame for skipping the section.
Last time the headliner was Kamala Harris and her no good, very bad tax proposals, especially her plan to tax unrealized capital gains.
This time we get to start with the no good, very bad proposals of Donald Trump.
This is the stupidest proposal [...]
---
Outline:
(00:10) Fun With Campaign Proposals (1)
(06:43) Campaign Proposals (2): Tariffs
(09:34) Car Seats as Contraception
(10:04) They Didn’t Take Our Jobs
(11:11) Yay Prediction Markets
(13:10) Very High Marginal Tax Rates
(15:52) Hard Work
(17:53) Yay Price Gouging (Yep, It's That Time Again)
(22:36) The Death of Chinese Venture Capital
(24:52) Economic Growth
(25:17) People Really Hate Inflation
(29:23) Garbage In, Garbage Out
(30:11) Insurance
(32:07) Yes, You Should Still Learn to Code
(32:29) Not Working From Home
(34:02) Various Older Economics Papers
The original text contained 5 images which were described by AI.
---
First published: October 15th, 2024
Source: https://www.lesswrong.com/posts/ru9YGuGscGuDHfXTJ/economics-roundup-4
Narrated by TYPE III AUDIO.
-
Both Geoffrey Hinton and Demis Hassabis were given the Nobel Prize this week, in Physics and Chemistry respectively. Congratulations to both of them along with all the other winners. AI will be central to more and more of scientific progress over time. This felt early, but not as early as you would think.
The two big capability announcements this week were OpenAI's canvas, their answer to Anthropic's artifacts to allow you to work on documents or code outside of the chat window in a way that seems very useful, and Meta announcing a new video generation model with various cool features, that they’re wisely not releasing just yet.
I also have two related corrections from last week, and an apology: Joshua Achiam is OpenAI's new head of Mission Alignment, not of Alignment as I incorrectly said. The new head of Alignment Research is Mia Glaese. That mistake [...]
---
Outline:
(01:30) Language Models Offer Mundane Utility
(09:10) Language Models Don’t Offer Mundane Utility
(13:11) Blank Canvas
(17:13) Meta Video
(18:58) Deepfaketown and Botpocalypse Soon
(21:22) They Took Our Jobs
(24:45) Get Involved
(26:01) Introducing
(26:14) AI Wins the Nobel Prize
(28:51) In Other AI News
(30:05) Quiet Speculations
(34:22) The Mask Comes Off
(37:17) The Quest for Sane Regulations
(41:02) The Week in Audio
(43:13) Rhetorical Innovation
(48:20) The Carbon Question
(50:27) Aligning a Smarter Than Human Intelligence is Difficult
(55:48) People Are Trying Not to Die
The original text contained 6 images which were described by AI.
---
First published: October 10th, 2024
Source: https://www.lesswrong.com/posts/wTriAw9mB6b5FwH5g/ai-85-ai-wins-the-nobel-prize
Narrated by TYPE III AUDIO.
-
Joshua Achiam is the OpenAI Head of Mission Alignment
I start off this post with an apology for two related mistakes from last week.
The first is the easy correction: I incorrectly thought he was the head of ‘alignment’ at OpenAI rather than his actual title ‘mission alignment.’
Both are important, and make one's views important, but they’re very different.
The more serious error, which got quoted some elsewhere, was: In the section about OpenAI, I noted some past comments from Joshua Achiam, and interpreted them as him lecturing EAs that misalignment risk from AGI was not real.
While in isolation I believe this is a reasonable way to interpret this quote, this issue is important to get right especially if I’m going to say things like that. Looking at it only that way was wrong. I both used a poor method to contact [...]
---
Outline:
(00:04) Joshua Achiam is the OpenAI Head of Mission Alignment
(01:50) Joshua Achiam Has a Very Different Model of AI Existential Risk
(05:00) Joshua is Strongly Dismissive of Alternative Models of AI X-Risk
(10:05) Would Ordinary Safety Practices Be Sufficient for AI?
(12:25) Visions of the Future
(14:53) Joshua Achiam versus Eliezer Yudkowsky
(22:47) People Are Going to Give AI Power
(29:32) Value is Complicated
(38:22) Conclusion
The original text contained 1 image which was described by AI.
---
First published: October 10th, 2024
Source: https://www.lesswrong.com/posts/WavWheRLhxnofKHva/joshua-achiam-public-statement-analysis
Narrated by TYPE III AUDIO.
-
Introduction: Better than a Podcast
Andrej Karpathy continues to be a big fan of NotebookLM, especially its podcast creation feature. There is something deeply alien to me about this proposed way of consuming information, but I probably shouldn’t knock it (too much) until I try it?
Others are fans as well.
Carlos Perez: Google with NotebookLM may have accidentally stumbled upon an entirely new way of interacting with AI. Its original purpose was to summarize literature. But one unexpected benefit is when it's used to talk about your expressions (i.e., conversations or lectures). This is when you discover the insight of multiple interpretations! Don’t just render a summary one time; have it do so several times. You’ll then realize how different interpretations emerge, often in unexpected ways.
Delip Rao gives the engine two words repeated over and over, the AI podcast hosts describe what it [...]
---
Outline:
(00:05) Introduction: Better than a Podcast
(03:16) Language Models Offer Mundane Utility
(04:04) Language Models Don’t Offer Mundane Utility
(09:24) Copyright Confrontation
(10:44) Deepfaketown and Botpocalypse Soon
(14:45) They Took Our Jobs
(19:23) The Art of the Jailbreak
(19:39) Get Involved
(20:00) Introducing
(20:37) OpenAI Dev Day
(34:40) In Other AI News
(38:03) The Mask Comes Off
(55:42) Quiet Speculations
(59:10) The Quest for Sane Regulations
(01:00:04) The Week in Audio
(01:01:54) Rhetorical Innovation
(01:19:08) Remember Who Marc Andreessen Is
(01:22:35) A Narrow Path
(01:30:36) Aligning a Smarter Than Human Intelligence is Difficult
(01:33:45) The Wit and Wisdom of Sam Altman
(01:35:25) The Lighter Side
The original text contained 10 images which were described by AI.
---
First published: October 3rd, 2024
Source: https://www.lesswrong.com/posts/bWrZhfaTD5EDjwkLo/ai-84-better-than-a-podcast
Narrated by TYPE III AUDIO.
-
It's over, until such a future time as either we are so back, or it is over for humanity.
Gavin Newsom has vetoed SB 1047.
Newsom's Message In Full
Quoted text is him, comments are mine.
To the Members of the California State Senate: I am returning Senate Bill 1047 without my signature.
This bill would require developers of large artificial intelligence (AI) models, and those providing the computing power to train such models, to put certain safeguards and policies in place to prevent catastrophic harm. The bill would also establish the Board of Frontier Models – a state entity – to oversee the development of these models.
It is worth pointing out here that mostly the ‘certain safeguards and policies’ was ‘have a policy at all, tell us what it is and then follow it.’ But there were some specific things that [...]
---
Outline:
(00:15) Newsom's Message In Full
(10:42) Newsom's Explanation Does Not Make Sense
(15:21) Newsom's Proposed Path of Use Regulation is Terrible for Everyone
(23:02) Newsom's Proposed Path of Use Regulation Doesn’t Prevent X-Risk
(26:49) Newsom Says He Wants to Regulate Small Entrepreneurs and Academia
(29:20) What If Something Goes Really Wrong?
(30:12) Could Newsom Come Around?
(35:10) Timing is Everything
(36:23) SB 1047 Was Popular
(39:41) What Did the Market Have to Say?
(41:51) What Newsom Did Sign
(54:00) Paths Forward
The original text contained 1 image which was described by AI.
---
First published: October 1st, 2024
Source: https://www.lesswrong.com/posts/6kZ6gW5DEZKFfqvZD/newsom-vetoes-sb-1047
Narrated by TYPE III AUDIO.
-
Previously: The Fundamentals, The Gamblers, The Business
We have now arrived at the topics most central to this book, aka ‘The Future.’
Rationalism and Effective Altruism (EA)
The Manifest conference was also one of the last reporting trips that I made for this book. And it confirmed for me that the River is real—not just some literary device I invented. (6706)
Yep. The River is real.
I consider myself, among many things, a straight up rationalist.
I do not consider myself an EA, and never have.
This completes the four quadrants of the two-by-two of [does Nate know it well, does Zvi know it well]. The first two, where Nate was in his element, went very well. The third clearly was less exacting, as one would expect, but pretty good.
Now I have the information advantage, even more than I did [...]
---
Outline:
(00:16) Rationalism and Effective Altruism (EA)
(06:01) Cost-Benefit Analysis
(09:04) How About Trying At All
(10:11) The Virtues of Rationality
(11:56) Effective Altruism and Rationality, Very Different of Course
(24:37) The Story of OpenAI
(30:19) Altman, OpenAI and AI Existential Risk
(38:26) Tonight at 11: Doom
(01:00:39) AI Existential Risk: They’re For It
(01:07:42) To Pause or Not to Pause
(01:11:11) You Need Better Decision Theory
(01:15:27) Understanding the AI
(01:19:43) Aligning the AI
(01:23:50) A Glimpse of Our Possible Future
(01:28:16) The Closing Motto
---
First published: September 27th, 2024
Source: https://www.lesswrong.com/posts/5qbcmKdfWc7vskrRD/book-review-on-the-edge-the-future
Narrated by TYPE III AUDIO.
-
We interrupt Nate Silver week here at Don’t Worry About the Vase to bring you some rather big AI news: OpenAI and Sam Altman are planning on fully taking their masks off, discarding the nonprofit board's nominal control and transitioning to a for-profit B-corporation, in which Sam Altman will have equity.
We now know who they are and have chosen to be. We know what they believe in. We know what their promises and legal commitments are worth. We know what they plan to do, if we do not stop them.
They have made all this perfectly clear. I appreciate the clarity.
On the same day, Mira Murati, the only remaining person at OpenAI who in any visible way opposed Altman during the events of last November, resigned without warning along with two other senior people, joining a list that now includes among others several OpenAI [...]
---
Outline:
(01:51) Language Models Offer Mundane Utility
(04:17) Language Models Don’t Offer Mundane Utility
(06:35) The Mask Comes Off
(18:51) Fun with Image Generation
(18:54) Deepfaketown and Botpocalypse Soon
(19:50) They Took Our Jobs
(20:49) The Art of the Jailbreak
(21:28) OpenAI Advanced Voice Mode
(26:03) Introducing
(28:29) In Other AI News
(30:30) Quiet Speculations
(34:00) The Quest for Sane Regulations
(42:21) The Week in Audio
(42:47) Rhetorical Innovation
(56:50) Aligning a Smarter Than Human Intelligence is Difficult
(01:00:36) Other People Are Not As Worried About AI Killing Everyone
(01:01:53) The Lighter Side
The original text contained 6 images which were described by AI.
---
First published: September 26th, 2024
Source: https://www.lesswrong.com/posts/FeqY7NWcFMn8haWCR/ai-83-the-mask-comes-off
Narrated by TYPE III AUDIO.
-
Previously: The Fundamentals, The Gamblers
Having previously handled the literal gamblers, we are ready to move on to those who Do Business using Riverian principles.
Or at least while claiming to use Riverian principles, since Silicon Valley doesn’t fit into the schema as cleanly as many other groups. That's where we begin this section, starting at the highest possible conceptual level.
Time to talk real money.
Why Can You Do This Trade?
First law of trading: For you to buy, someone must sell. Or for you to sell, someone must buy. And there can’t be someone else doing the trade before you did it.
Why did they do that, and why did no one else take the trade first? Until you understand why you are able to do this trade, you should be highly suspicious.
“Every single thing we do, I can [...]
---
Outline:
(00:41) Why Can You Do This Trade?
(03:08) In a World of Venture Capital
(10:54) Short Termism Hypothesis
(12:42) Non-Determinism and its Discontents
(14:57) The Founder, the Fox and the Hedgehog
(17:11) The Team to Beat
(24:22) Silicon Valley Versus Risk
(35:14) The Keynesian Beauty Contest
(40:57) The Secret of Their Success is Deal Flow
(50:00) The Valley Beside the River
(53:07) Checkpoint Three
(53:37) Fun With SBF and Crypto Fraud
(01:01:53) Other Crypto Thoughts Unrelated to SBF
(01:04:50) Checkpoint Four
The original text contained 1 image which was described by AI.
---
First published: September 25th, 2024
Source: https://www.lesswrong.com/posts/Hfb3pc9HwdcCP7pys/book-review-on-the-edge-the-business
Narrated by TYPE III AUDIO.
-
Previously: Book Review: On the Edge: The Fundamentals
As I said in the Introduction, I loved this part of the book. Let's get to it.
Poker and Game Theory
When people talk about game theory, they mostly talk about solving for the equilibrium, and how to play your best game or strategy (there need not be a formal game) against adversaries who are doing the same.
I think of game theory like Frank Sinatra thinks of New York City: “If I can make it there, I’ll make it anywhere.” If you can compete against people performing at their best, you’re going to be a winner in almost any game you play. But if you build a strategy around exploiting inferior competition, it's unlikely to be a winning approach outside of a specific, narrow setting. What plays well in Peoria doesn’t necessarily play well in New York. [...]
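For readers who want to see concretely what “solving for the equilibrium” means, here is a minimal sketch of the classic river bluffing toy game. This is my own illustration, not something taken from the book or the review; the function name and the example numbers are purely illustrative. Both players' frequencies fall out of simple indifference conditions.

```python
# A toy "solve for the equilibrium" example (illustrative only, not from the book):
# the classic river bluffing game. The bettor holds either the nuts or air and
# bets `bet` into `pot`; the caller must decide whether to call.

def bluffing_equilibrium(pot: float, bet: float) -> tuple[float, float]:
    """Return (bluff_freq, call_freq) at the mixed-strategy equilibrium.

    bluff_freq: share of the bettor's bets that are bluffs, chosen so the
        caller is indifferent: bluff_freq * (pot + bet) = (1 - bluff_freq) * bet.
    call_freq: how often the caller calls, chosen so a bluff breaks even:
        (1 - call_freq) * pot = call_freq * bet.
    """
    bluff_freq = bet / (pot + 2 * bet)
    call_freq = pot / (pot + bet)
    return bluff_freq, call_freq


if __name__ == "__main__":
    # Pot-sized bet: bluff about one time in three, call about half the time.
    bluff, call = bluffing_equilibrium(pot=100, bet=100)
    print(f"bluff {bluff:.1%} of bets, call {call:.1%} of the time")
```

With a pot-sized bet this recovers the familiar answers: roughly one bluff for every two value bets, and a call about half the time, which is the sense in which an equilibrium strategy cannot be exploited by an opponent playing their best.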
---
Outline:
(00:18) Poker and Game Theory
(06:53) Sports Randomized Sports
(11:17) Knowing Theory Versus Memorization Versus Practice
(16:15) More About Tells
(19:20) Feeling the Probabilities
(20:35) Feeling Sad About It
(28:33) The Iowa Gambling Task
(31:39) The Greatest Risk
(37:20) Tournament Poker Is Super High Variance
(42:42) The Art of the Degen
(48:43) Why Do They Insist on Calling it Luck
(51:56) The Poker Gender Gap
(54:36) A Potential Cheater
(58:30) Making a Close Decision
(01:00:19) Other Games at the Casino
(01:03:22) Slot Machines Considered Harmful
(01:08:23) Where I Draw the Line
(01:11:14) A Brief History of Vegas and Casinos (as told by Nate Silver)
(01:16:44) We Got Us a Whale
(01:21:41) Donald Trump and Atlantic City Were Bad At Casinos
(01:25:17) How To Design a Casino
(01:26:46) The Wide World of Winning at Sports Gambling
(01:41:01) Limatime
(01:43:45) The Art of Getting Down
(01:45:29) Oh Yeah That Guy
(01:55:34) The House Sometimes Wins
(02:01:24) The House Is Probably a Coward
(02:11:19) DFS and The Problem of Winners
(02:16:08) Balancing the Action
(02:18:44) The Market Maker
(02:22:45) The Closing Line is Hard to Beat
(02:25:11) Winning is Hard
(02:29:58) What Could Be, Unburdened By What Has Been
(02:34:52) Finding Edges Big and Small
(02:40:12) Checkpoint Two
The original text contained 2 images which were described by AI.
---
First published: September 24th, 2024
Source: https://www.lesswrong.com/posts/mkyMx4FtJrfuGnsrm/book-review-on-the-edge-the-gamblers
Narrated by TYPE III AUDIO.