Episodes
-
Today I'm reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.
Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He's best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and for making complex insights from his field accessible to a wider readership via his blog.
Scott is one of my biggest intellectual influences. His famous Who Can Name The Bigger Number essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.
Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.
Unfortunately, what I heard in the interview confirms my worst fears about the meaning of "safety" at today's AI companies: that they're laughably clueless about how to achieve any measure of safety, but instead of doing the adult thing and slowing down their capabilities work, they're pushing forward recklessly.
00:00 Introducing Scott Aaronson
02:17 Scott's Recruitment by OpenAI
04:18 Scott's Work on AI Safety at OpenAI
08:10 Challenges in AI Alignment
12:05 Watermarking AI Outputs
15:23 The State of AI Safety Research
22:13 The Intractability of AI Alignment
34:20 Policy Implications and the Call to Pause AI
38:18 Out-of-Distribution Generalization
45:30 Moral Worth Criterion for Humans
51:49 Quantum Mechanics and Human Uniqueness
01:00:31 Quantum No-Cloning Theorem
01:12:40 Scott Is Almost An Accelerationist?
01:18:04 Geoffrey Hinton's Proposal for Analog AI
01:36:13 The AI Arms Race and the Need for Regulation
01:39:41 Scott Aaronson's Thoughts on Sam Altman
01:42:58 Scott Rejects the Orthogonality Thesis
01:46:35 Final Thoughts
01:48:48 Lethal Intelligence Clip
01:51:42 Outro
Show Notes
Scott's Interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0
Scott's Blog: https://scottaaronson.blog
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Today I'm reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.
Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can't truly reason.
The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.
00:00 Introduction
02:54 Essentially N-Gram Models?
10:31 The Manhole Cover Question
20:54 Reasoning vs. Approximate Retrieval
47:03 Explaining Jokes
53:21 Caesar Cipher Performance
01:10:44 Creativity vs. Reasoning
01:33:37 Reasoning By Analogy
01:48:49 Synthetic Data
01:53:54 The ARC Challenge
02:11:47 Correctness vs. Style
02:17:55 AIs Becoming More Robust
02:20:11 Block Stacking Problems
02:48:12 PlanBench and Future Predictions
02:58:59 Final Thoughts
Show Notes
Rao's interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A
Rao's Twitter: https://x.com/rao2z
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.
Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in a 99.999% P(Doom).
00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What's Your P(Doom)™
14:04 Nethys's Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
36:51 Defining AGI
39:09 The Convergence of AI Models
47:32 The Role of "Unlicensed" Thinkers
52:07 The PauseAI Movement
58:20 Lethal Intelligence Video Clip
Show Notes
Eliezer Yudkowsky's post on "Death with Dignity": https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That's why he's joining me on Doom Debates for a very special AI + space crossover episode.
00:00 Fraser Cain's Background and Interests
05:03 What's Your P(Doom)™
07:05 Our Vulnerable World
15:11 Don't Look Up
22:18 Cosmology and the Search for Alien Life
31:33 Stars = Terrorists
39:03 The Great Filter and the Fermi Paradox
55:12 Grabby Aliens Hypothesis
01:19:40 Life Around Red Dwarf Stars?
01:22:23 Epistemology of Grabby Aliens
01:29:04 Multiverses
01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
01:47:25 Simulation Hypothesis
01:51:25 Final Thoughts
SHOW NOTES
Fraser's YouTube channel: https://www.youtube.com/@frasercain
Universe Today (space and astronomy news): https://www.universetoday.com/
Max Tegmark's book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256
Robin Hanson's ideas:
Grabby Aliens: https://grabbyaliens.com
The Great Filter: https://en.wikipedia.org/wiki/Great_Filter
Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml
---
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
---
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for a Part II! This time we're going straight to debating my favorite topic, AI doom.
00:00 Introduction
02:23 High-Level AI Doom Argument
17:06 How Powerful Could Intelligence Be?
22:34 "Knowledge Creation"
48:33 "Creativity"
54:57 Stand-Up Comedy as a Test for AI
01:12:53 Vaden & Ben's Goalposts
01:15:00 How to Change Liron's Mind
01:20:02 LLMs are Stochastic Parrots?
01:34:06 Tools vs. Agents
01:39:51 Instrumental Convergence and AI Goals
01:45:51 Intelligence vs. Morality
01:53:57 Mainline Futures
02:16:50 Lethal Intelligence Video
Show Notes
Vaden & Ben's Podcast: https://www.youtube.com/@incrementspod
Recommended playlists from their podcast:
* The Bayesian vs Popperian Epistemology Series
* The Conjectures and Refutations Series
Vaden's Twitter: https://x.com/vadenmasrani
Ben's Twitter: https://x.com/BennyChugg
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It's an AWESOME new animated intro to AI risk.
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human Compatible AI, and the co-founder of a new startup called Healthcare Agents.
Dr. Critch's P(Doom) is a whopping 85%! But his most likely doom scenario isn't what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of "industrial dehumanization".
00:00 Introduction
01:43 Dr. Critch's Perspective on LessWrong Sequences
06:45 Bayesian Epistemology
15:34 Dr. Critch's Time at MIRI
18:33 What's Your P(Doom)™
26:35 Doom Scenarios
40:38 AI Timelines
43:09 Defining "AGI"
48:27 Superintelligence
53:04 The Speed Limit of Intelligence
01:12:03 The Obedience Problem in AI
01:21:22 Artificial Superintelligence and Human Extinction
01:24:36 Global AI Race and Geopolitics
01:34:28 Future Scenarios and Human Relevance
01:48:13 Extinction by Industrial Dehumanization
01:58:50 Automated Factories and Human Control
02:02:35 Global Coordination Challenges
02:27:00 Healthcare Agents
02:35:30 Final Thoughts
---
Show Notes
Dr. Critch's LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai
Dr. Critch's Website: https://acritch.com/
Dr. Critch's Twitter: https://twitter.com/AndrewCritchPhD
---
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
It's time for AI Twitter Beefs #2:
00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)
11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman
18:10 Geoffrey Hinton vs. OpenAI & Meta
25:14 Samuel Hammond vs. Liron
30:26 Yann LeCun vs. Eliezer Yudkowsky
37:13 Roon vs. Eliezer Yudkowsky
41:37 Tyler Cowen vs. AI Doomers
52:54 David Deutsch vs. Liron
Twitter people referenced:
* Jack Clark: https://x.com/jackclarkSF
* Holly Elmore: https://x.com/ilex_ulmus
* PauseAI US: https://x.com/PauseAIUS
* Geoffrey Hinton: https://x.com/GeoffreyHinton
* Samuel Hammond: https://x.com/hamandcheese
* Yann LeCun: https://x.com/ylecun
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Roon: https://x.com/tszzl
* Beff Jezos: https://x.com/basedbeffjezos
* Carl Feynman: https://x.com/carl_feynman
* Tyler Cowen: https://x.com/tylercowen
* David Deutsch: https://x.com/DavidDeutschOxf
Show Notes
Holly Elmore's EA forum post about scouts vs. soldiers
Manifund info & donation page for PauseAI US: https://manifund.org/projects/pauseai-us-2025-through-q2
PauseAI.info - join the Discord and find me in the #doom-debates channel!
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are joining me to debate Bayesian vs. Popperian epistemology.
I'm on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.
We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes.
The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.
00:00 Introducing Vaden and Ben
02:51 Setting the Stage: Epistemology and AI Doom
04:50 What's Your P(Doom)™
13:29 Popperian vs. Bayesian Epistemology
31:09 Engineering and Hypotheses
38:01 Solomonoff Induction
45:21 Analogy to Mathematical Proofs
48:42 Popperian Reasoning and Explanations
54:35 Arguments Against Bayesianism
58:33 Against Probability Assignments
01:21:49 Popper's Definition of "Content"
01:31:22 Heliocentric Theory Example
01:31:34 "Hard to Vary" Explanations
01:44:42 Coin Flipping Example
01:57:37 Expected Value
02:12:14 Prediction Market Calibration
02:19:07 Futarchy
02:29:14 Prediction Markets as AI Lower Bound
02:39:07 A Test for Prediction Markets
02:45:54 Closing Thoughts
Show Notes
Vaden & Ben's Podcast: https://www.youtube.com/@incrementspod
Vaden's Twitter: https://x.com/vadenmasrani
Ben's Twitter: https://x.com/BennyChugg
Bayesian reasoning: https://en.wikipedia.org/wiki/Bayesian_inference
Karl Popper: https://en.wikipedia.org/wiki/Karl_Popper
Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/
Vaden's disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749
Vaden's referenced post about predictions being uncalibrated > 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
Sources for claim that superforecasters gave a P(doom) below 1%:
https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/
https://www.astralcodexten.com/p/the-extinction-tournament
Vaden's Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade.
If you haven't been following all the urgent warnings, I'm here to bring you up to speed.
* Human-level AI is coming soon
* It's an existential threat to humanity
* The situation calls for urgent action
Listen to this 15-minute intro to get the lay of the land.
Then follow these links to learn more and see how you can help:
* The Compendium
A longer written introduction to AI doom by Connor Leahy et al
* AGI Ruin - A List of Lethalities
A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity
* AISafety.info
A catalogue of AI doom arguments and responses to objections
* PauseAI.info
The largest volunteer org focused on lobbying world governments to pause development of superintelligent AI
* PauseAI Discord
Chat with PauseAI members, see a list of projects and get involved
---
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented "Assembly Theory" as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.
Today we're debating Lee's claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.
00:00 Introduction
04:20 Assembly Theory
05:10 Causation and Complexity
10:07 Assembly Theory in Practice
12:23 The Concept of Assembly Index
16:54 Assembly Theory Beyond Molecules
30:13 P(Doom)
32:39 The Statement on AI Risk
42:18 Agency and Intent
47:10 RescueBot's Intent vs. a Clock's
53:42 The Future of AI and Human Jobs
57:34 The Limits of AI Creativity
01:04:33 The Complexity of the Human Brain
01:19:31 Superintelligence: Fact or Fiction?
01:29:35 Final Thoughts
Lee's Wikipedia: https://en.wikipedia.org/wiki/Leroy_Cronin
Lee's Twitter: https://x.com/leecronin
Lee's paper on Assembly Theory: https://arxiv.org/abs/2206.02279
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good.
I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.
If Ben and a16z can't appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?
00:00 Introduction
00:49 Ben Horowitz on Nuclear Proliferation
02:12 Ben Horowitz on Open Source AI
05:31 Nuclear Non-Proliferation Treaties
10:25 Escalation Spirals
15:20 Rogue Actors
16:33 Nuclear Accidents
17:19 Safety Mechanism Failures
20:34 The Role of Human Judgment in Nuclear Safety
21:39 The 1983 Soviet Nuclear False Alarm
22:50 a16z's Disingenuousness
23:46 Martin Casado and Marc Andreessen
24:31 Nuclear Equilibrium
26:52 Why I Care
28:09 Wrap Up
Sources of this episode's video clips:
Ben Horowitz's interview on Upstream with Erik Torenberg: https://www.youtube.com/watch?v=oojc96r3Kuo
Martin Casado and Marc Andreessen talking about AI on the a16z Podcast: https://www.youtube.com/watch?v=0wIUK0nsyUg
Roger Skaer's TikTok: https://www.tiktok.com/@rogerskaer
George W. Bush and John Kerry Presidential Debate (September 30, 2004): https://www.youtube.com/watch?v=WYpP-T0IcyA
Barack Obama's Prague Remarks on Nuclear Disarmament: https://www.youtube.com/watch?v=QKSn1SXjj2s
John Kerry's Remarks at the 2015 Nuclear Nonproliferation Treaty Review Conference: https://www.youtube.com/watch?v=LsY1AZc1K7w
Show notes:
Nuclear War, A Scenario by Annie Jacobsen: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093
Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb: https://en.wikipedia.org/wiki/Dr._Strangelove
1961 Goldsboro B-52 Crash: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash
1983 Soviet Nuclear False Alarm Incident: https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
List of military nuclear accidents: https://en.wikipedia.org/wiki/List_of_military_nuclear_accidents
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Today I'm reacting to Arvind Narayanan's interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY
Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.
Arvind claims AI is "normal technology like the internet", and never sees fit to bring up the impact or urgency of AGI. So I'll take it upon myself to point out all the questions where someone who takes AGI seriously would give different answers.
00:00 Introduction
01:49 AI is "Normal Technology"?
09:25 Playing Chess vs. Moving Chess Pieces
12:23 AI Has To Learn From Its Mistakes?
22:24 The Symbol Grounding Problem and AI's Understanding
35:56 Human vs AI Intelligence: The Fundamental Difference
36:37 The Cognitive Reflection Test
41:34 The Role of AI in Cybersecurity
43:21 Attack vs. Defense Balance in (Cyber)War
54:47 Taking AGI Seriously
01:06:15 Final Thoughts
Show Notes
The original Nonzero podcast episode with Arvind Narayanan and Robert Wright: https://www.youtube.com/watch?v=MoB_pikM3NY
Arvind's new book, AI Snake Oil: https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference-ebook/dp/B0CW1JCKVL
Arvind's Substack: https://aisnakeoil.com
Arvind's Twitter: https://x.com/random_walker
Robert Wright's Twitter: https://x.com/robertwrighter
Robert Wright's Nonzero Newsletter: https://nonzero.substack.com
Rob's excellent post about symbol grounding (Yes, AIs "understand" things): https://nonzero.substack.com/p/yes-ais-understand-things
My previous episode of Doom Debates reacting to Arvind Narayanan on Harry Stebbings' podcast: https://www.youtube.com/watch?v=lehJlitQvZE
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Dr. Keith Duggar from Machine Learning Street Talk was the subject of my recent reaction episode about whether GPT o1 can reason. But instead of ignoring or blocking me, Keith was brave enough to come into the lion's den and debate his points with me… and his P(doom) might shock you!
First we debate whether Keith's distinction between Turing Machines and Discrete Finite Automata is useful for understanding limitations of current LLMs. Then I take Keith on a tour of alignment, orthogonality, instrumental convergence, and other popular stations on the "doom train", to compare our views on each.
Keith was a great sport and I think this episode is a classic!
00:00 Introduction
00:46 Keith's Background
03:02 Keith's P(doom)
14:09 Are LLMs Turing Machines?
19:09 Liron Concedes on a Point!
21:18 Do We Need >1MB of Context?
27:02 Examples to Illustrate Keith's Point
33:56 Is Terence Tao a Turing Machine?
38:03 Factoring Numbers: Human vs. LLM
53:24 Training LLMs with Turing-Complete Feedback
01:02:22 What Does the Pillar Problem Illustrate?
01:05:40 Boundary between LLMs and Brains
01:08:52 The 100-Year View
01:18:29 Intelligence vs. Optimization Power
01:23:13 Is Intelligence Sufficient To Take Over?
01:28:56 The Hackable Universe and AI Threats
01:31:07 Nuclear Extinction vs. AI Doom
01:33:16 Can We Just Build Narrow AI?
01:37:43 Orthogonality Thesis and Instrumental Convergence
01:40:14 Debating the Orthogonality Thesis
02:03:49 The Rocket Alignment Problem
02:07:47 Final Thoughts
Show Notes
Keith's show: https://www.youtube.com/@MachineLearningStreetTalk
Keith's Twitter: https://x.com/doctorduggar
Keith's fun brain teaser that LLMs can't solve yet, about a pillar with four holes: https://youtu.be/nO6sDk6vO0g?si=diGUY7jW4VFsV0TJ&t=3684
Eliezer Yudkowsky's classic post about the "Rocket Alignment Problem": https://intelligence.org/2018/10/03/rocket-alignment/
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.
You can now chat with me and other listeners in the #doom-debates channel of the PauseAI discord: https://discord.gg/2XXWXvErfA
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.
Not only are they staging regular protests in front of AI labs, they're barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.
Is civil disobedience the right strategy to pause or stop AI?
00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI's Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action
Show Notes
StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.
StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4
Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There's also a special #doom-debates channel in the PauseAI Discord just for us :)
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions!
00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07 How Liron Uses AI
53:41 Is "Doomer" a Good Term?
57:11 Liron's Favorite Books
01:05:21 Effective Altruism
01:06:36 The Doom Debates Community
---
Show Notes
PauseAI Discord: https://discord.gg/2XXWXvErfA
Robin Hanson's Grabby Aliens theory: https://grabbyaliens.com
Prof. David Kipping's response to Robin Hanson's Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0
My explanation of "AI completeness", but actually I made a mistake because the term I previously coined is "goal completeness": https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi
^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.
a16z's Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209
---
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.
00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger's
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron's IQ
01:00:24 Final Thoughts
Explanation of Double Crux: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
Best Doomer Arguments
The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com
LethalIntelligence.ai - Directory of people who are good at explaining doom
Rob Miles' Explainer Videos: https://www.youtube.com/c/robertmilesai
For Humanity Podcast with John Sherman - https://www.youtube.com/@ForHumanityPodcast
PauseAI community - https://PauseAI.info - join the Discord!
AISafety.info - Great reference for various arguments
Best Non-Doomer Arguments
Carl Shulman - https://www.dwarkeshpatel.com/p/carl-shulman
Quintin Pope and Nora Belrose - https://optimists.ai
Robin Hanson - https://www.youtube.com/watch?v=dTQb6N3_zu8
How I prepared to debate Robin Hanson
Ideological Turing Test (me taking Robin's side): https://www.youtube.com/watch?v=iNnoJnuOXFA
Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
In today's episode, instead of reacting to a long-form presentation of someone's position, I'm reporting on the various AI x-risk-related tiffs happening in my part of the world. And by "my part of the world" I mean my Twitter feed.
00:00 Introduction
01:55 Followup to my MSLT reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts
Links referenced in the episode:
* Eliezer Yudkowsky's interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I'm "debating": https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
* The problem with arguing "by definition", a classic LessWrong post: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition
Twitter people referenced:
* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Roon: https://x.com/tszzl
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Yoshua Bengio: https://x.com/yoshua_bengio
* Your boy: https://x.com/liron
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
How smart is OpenAI's new model, o1? What does "reasoning" ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?
Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!
00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts
Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g
Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk
Zvi Mowshowitz's authoritative GPT-o1 post: https://thezvi.wordpress.com/2024/09/16/gpt-4o1/
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day.
Harari just published a new book which is largely about AI. It's called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let's go through the latest interview he did as part of his book tour to see where he stands on AI extinction risk.
00:00 Introduction
04:30 Defining AI vs. non-AI
20:43 AI and Language Mastery
29:37 AI's Potential for Manipulation
31:30 Information is Connection?
37:48 AI and Job Displacement
48:22 Consciousness vs. Intelligence
52:02 The Alignment Problem
59:33 Final Thoughts
Source podcast: https://www.youtube.com/watch?v=78YN1e8UXdM
Follow Yuval Noah Harari: x.com/harari_yuval
Follow Steven Bartlett, host of Diary of a CEO: x.com/StevenBartlett
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
It's finally here, the Doom Debates / Dr. Phil crossover episode you've all been asking for!
The full episode is called "AI: The Future of Education?"
While the main focus was AI in education, I'm glad the show briefly touched on how we're all gonna die. Everything in the show related to AI extinction is clipped here.
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com