Episodes
It’s time for AI Twitter Beefs #2:
00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)
11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman
18:10 Geoffrey Hinton vs. OpenAI & Meta
25:14 Samuel Hammond vs. Liron
30:26 Yann LeCun vs. Eliezer Yudkowsky
37:13 Roon vs. Eliezer Yudkowsky
41:37 Tyler Cowen vs. AI Doomers
52:54 David Deutsch vs. Liron
Twitter people referenced:
* Jack Clark: https://x.com/jackclarkSF
* Holly Elmore: https://x.com/ilex_ulmus
* PauseAI US: https://x.com/PauseAIUS
* Geoffrey Hinton: https://x.com/GeoffreyHinton
* Samuel Hammond: https://x.com/hamandcheese
* Yann LeCun: https://x.com/ylecun
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Roon: https://x.com/tszzl
* Beff Jezos: https://x.com/basedbeffjezos
* Carl Feynman: https://x.com/carl_feynman
* Tyler Cowen: https://x.com/tylercowen
* David Deutsch: https://x.com/DavidDeutschOxf
Show Notes
Holly Elmore’s EA forum post about scouts vs. soldiers
Manifund info & donation page for PauseAI US: https://manifund.org/projects/pauseai-us-2025-through-q2
PauseAI.info - join the Discord and find me in the #doom-debates channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are joining me to debate Bayesian vs. Popperian epistemology.
I’m on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.
We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes.
The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.
00:00 Introducing Vaden and Ben
02:51 Setting the Stage: Epistemology and AI Doom
04:50 What’s Your P(Doom)™
13:29 Popperian vs. Bayesian Epistemology
31:09 Engineering and Hypotheses
38:01 Solomonoff Induction
45:21 Analogy to Mathematical Proofs
48:42 Popperian Reasoning and Explanations
54:35 Arguments Against Bayesianism
58:33 Against Probability Assignments
01:21:49 Popper’s Definition of “Content”
01:31:22 Heliocentric Theory Example
01:31:34 “Hard to Vary” Explanations
01:44:42 Coin Flipping Example
01:57:37 Expected Value
02:12:14 Prediction Market Calibration
02:19:07 Futarchy
02:29:14 Prediction Markets as AI Lower Bound
02:39:07 A Test for Prediction Markets
02:45:54 Closing Thoughts
Show Notes
Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg
Bayesian reasoning: https://en.wikipedia.org/wiki/Bayesian_inference
Karl Popper: https://en.wikipedia.org/wiki/Karl_Popper
Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/
Vaden’s disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749
Vaden’s referenced post about predictions being uncalibrated > 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
Sources for claim that superforecasters gave a P(doom) below 1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/ and https://www.astralcodexten.com/p/the-extinction-tournament
Vaden’s Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf
---
Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade.
If you haven't been following all the urgent warnings, I'm here to bring you up to speed.
* Human-level AI is coming soon
* It’s an existential threat to humanity
* The situation calls for urgent action
Listen to this 15-minute intro to get the lay of the land.
Then follow these links to learn more and see how you can help:
* The Compendium
A longer written introduction to AI doom by Connor Leahy et al
* AGI Ruin — A list of lethalities
A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity
* AISafety.info
A catalogue of AI doom arguments and responses to objections
* PauseAI.info
The largest volunteer org focused on lobbying world government to pause development of superintelligent AI
* PauseAI Discord
Chat with PauseAI members, see a list of projects and get involved
---
Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented “Assembly Theory” as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.
Today we’re debating Lee's claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.
00:00 Introduction
04:20 Assembly Theory
05:10 Causation and Complexity
10:07 Assembly Theory in Practice
12:23 The Concept of Assembly Index
16:54 Assembly Theory Beyond Molecules
30:13 P(Doom)
32:39 The Statement on AI Risk
42:18 Agency and Intent
47:10 RescueBot’s Intent vs. a Clock’s
53:42 The Future of AI and Human Jobs
57:34 The Limits of AI Creativity
01:04:33 The Complexity of the Human Brain
01:19:31 Superintelligence: Fact or Fiction?
01:29:35 Final Thoughts
Lee’s Wikipedia: https://en.wikipedia.org/wiki/Leroy_Cronin
Lee’s Twitter: https://x.com/leecronin
Lee’s paper on Assembly Theory: https://arxiv.org/abs/2206.02279
---
Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good.
I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.
If Ben and a16z can’t appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?
00:00 Introduction
00:49 Ben Horowitz on Nuclear Proliferation
02:12 Ben Horowitz on Open Source AI
05:31 Nuclear Non-Proliferation Treaties
10:25 Escalation Spirals
15:20 Rogue Actors
16:33 Nuclear Accidents
17:19 Safety Mechanism Failures
20:34 The Role of Human Judgment in Nuclear Safety
21:39 The 1983 Soviet Nuclear False Alarm
22:50 a16z’s Disingenuousness
23:46 Martin Casado and Marc Andreessen
24:31 Nuclear Equilibrium
26:52 Why I Care
28:09 Wrap Up
Sources of this episode’s video clips:
Ben Horowitz’s interview on Upstream with Erik Torenberg: https://www.youtube.com/watch?v=oojc96r3Kuo
Martin Casado and Marc Andreessen talking about AI on the a16z Podcast: https://www.youtube.com/watch?v=0wIUK0nsyUg
Roger Skaer’s TikTok: https://www.tiktok.com/@rogerskaer
George W. Bush and John Kerry Presidential Debate (September 30, 2004): https://www.youtube.com/watch?v=WYpP-T0IcyA
Barack Obama’s Prague Remarks on Nuclear Disarmament: https://www.youtube.com/watch?v=QKSn1SXjj2s
John Kerry’s Remarks at the 2015 Nuclear Nonproliferation Treaty Review Conference: https://www.youtube.com/watch?v=LsY1AZc1K7w
Show notes:
Nuclear War, A Scenario by Annie Jacobsen: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093
Dr. Strangelove or: How I learned to Stop Worrying and Love the Bomb: https://en.wikipedia.org/wiki/Dr._Strangelove
1961 Goldsboro B-52 Crash: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash
1983 Soviet Nuclear False Alarm Incident: https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
List of military nuclear accidents: https://en.wikipedia.org/wiki/List_of_military_nuclear_accidents
---
Today I’m reacting to Arvind Narayanan’s interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY
Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.
Arvind claims AI is “normal technology like the internet”, and never sees fit to bring up the impact or urgency of AGI. So I’ll take it upon myself to point out all the questions where someone who takes AGI seriously would give different answers.
00:00 Introduction
01:49 AI is “Normal Technology”?
09:25 Playing Chess vs. Moving Chess Pieces
12:23 AI Has To Learn From Its Mistakes?
22:24 The Symbol Grounding Problem and AI's Understanding
35:56 Human vs AI Intelligence: The Fundamental Difference
36:37 The Cognitive Reflection Test
41:34 The Role of AI in Cybersecurity
43:21 Attack vs. Defense Balance in (Cyber)War
54:47 Taking AGI Seriously
01:06:15 Final Thoughts
Show Notes
The original Nonzero podcast episode with Arvind Narayanan and Robert Wright: https://www.youtube.com/watch?v=MoB_pikM3NY
Arvind’s new book, AI Snake Oil: https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference-ebook/dp/B0CW1JCKVL
Arvind’s Substack: https://aisnakeoil.com
Arvind’s Twitter: https://x.com/random_walker
Robert Wright’s Twitter: https://x.com/robertwrighter
Robert Wright’s Nonzero Newsletter: https://nonzero.substack.com
Rob’s excellent post about symbol grounding (Yes, AIs ‘understand’ things): https://nonzero.substack.com/p/yes-ais-understand-things
My previous episode of Doom Debates reacting to Arvind Narayanan on Harry Stebbings’ podcast: https://www.youtube.com/watch?v=lehJlitQvZE
---
Dr. Keith Duggar from Machine Learning Street Talk was the subject of my recent reaction episode about whether GPT o1 can reason. But instead of ignoring or blocking me, Keith was brave enough to come into the lion’s den and debate his points with me… and his P(doom) might shock you!
First we debate whether Keith’s distinction between Turing Machines and Discrete Finite Automata is useful for understanding limitations of current LLMs. Then I take Keith on a tour of alignment, orthogonality, instrumental convergence, and other popular stations on the “doom train”, to compare our views on each.
Keith was a great sport and I think this episode is a classic!
00:00 Introduction
00:46 Keith’s Background
03:02 Keith’s P(doom)
14:09 Are LLMs Turing Machines?
19:09 Liron Concedes on a Point!
21:18 Do We Need >1MB of Context?
27:02 Examples to Illustrate Keith’s Point
33:56 Is Terence Tao a Turing Machine?
38:03 Factoring Numbers: Human vs. LLM
53:24 Training LLMs with Turing-Complete Feedback
01:02:22 What Does the Pillar Problem Illustrate?
01:05:40 Boundary between LLMs and Brains
01:08:52 The 100-Year View
01:18:29 Intelligence vs. Optimization Power
01:23:13 Is Intelligence Sufficient To Take Over?
01:28:56 The Hackable Universe and AI Threats
01:31:07 Nuclear Extinction vs. AI Doom
01:33:16 Can We Just Build Narrow AI?
01:37:43 Orthogonality Thesis and Instrumental Convergence
01:40:14 Debating the Orthogonality Thesis
02:03:49 The Rocket Alignment Problem
02:07:47 Final Thoughts
Show Notes
Keith’s show: https://www.youtube.com/@MachineLearningStreetTalk
Keith’s Twitter: https://x.com/doctorduggar
Keith’s fun brain teaser that LLMs can’t solve yet, about a pillar with four holes: https://youtu.be/nO6sDk6vO0g?si=diGUY7jW4VFsV0TJ&t=3684
Eliezer Yudkowsky’s classic post about the “Rocket Alignment Problem”: https://intelligence.org/2018/10/03/rocket-alignment/
📣 You can now chat with me and other listeners in the #doom-debates channel of the PauseAI discord: https://discord.gg/2XXWXvErfA
---
Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.
Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.
Is civil disobedience the right strategy to pause or stop AI?
00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action
Show Notes
StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.
StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4
Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There's also a special #doom-debates channel in the PauseAI Discord just for us :)
---
This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions!
00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07 How Liron Uses AI
53:41 Is “Doomer” a Good Term?
57:11 Liron’s Favorite Books
01:05:21 Effective Altruism
01:06:36 The Doom Debates Community
---
Show Notes
PauseAI Discord: https://discord.gg/2XXWXvErfA
Robin Hanson’s Grabby Aliens theory: https://grabbyaliens.com
Prof. David Kipping’s response to Robin Hanson’s Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0
My explanation of “AI completeness”, but actually I made a mistake because the term I previously coined is “goal completeness”: https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi
^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.
a16z's Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209
---
Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.
00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger's
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron’s IQ
01:00:24 Final Thoughts
Explanation of Double Crux: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
Best Doomer Arguments
The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com
LethalIntelligence.ai — Directory of people who are good at explaining doom
Rob Miles’ Explainer Videos: https://www.youtube.com/c/robertmilesai
For Humanity Podcast with John Sherman - https://www.youtube.com/@ForHumanityPodcast
PauseAI community — https://PauseAI.info — join the Discord!
AISafety.info — Great reference for various arguments
Best Non-Doomer Arguments
Carl Shulman — https://www.dwarkeshpatel.com/p/carl-shulman
Quintin Pope and Nora Belrose — https://optimists.ai
Robin Hanson — https://www.youtube.com/watch?v=dTQb6N3_zu8
How I prepared to debate Robin Hanson
Ideological Turing Test (me taking Robin’s side): https://www.youtube.com/watch?v=iNnoJnuOXFA
Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I
---
In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.
00:00 Introduction
01:55 Followup to my MSLT reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts
Links referenced in the episode:
* Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
* The problem with arguing “by definition”, a classic LessWrong post: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition
Twitter people referenced:
* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Roon: https://x.com/tszzl
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Yoshua Bengio: https://x.com/yoshua_bengio
* Your boy: https://x.com/liron
---
How smart is OpenAI’s new model, o1? What does “reasoning” ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?
Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!
00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts
Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g
Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk
Zvi Mowshowitz's authoritative GPT-o1 post: https://thezvi.wordpress.com/2024/09/16/gpt-4o1/
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
---
Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day.
Harari just published a new book which is largely about AI. It’s called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let’s go through the latest interview he did as part of his book tour to see where he stands on AI extinction risk.
00:00 Introduction
04:30 Defining AI vs. non-AI
20:43 AI and Language Mastery
29:37 AI's Potential for Manipulation
31:30 Information is Connection?
37:48 AI and Job Displacement
48:22 Consciousness vs. Intelligence
52:02 The Alignment Problem
59:33 Final Thoughts
Source podcast: https://www.youtube.com/watch?v=78YN1e8UXdM
Follow Yuval Noah Harari: x.com/harari_yuval
Follow Steven Bartlett, host of Diary of a CEO: x.com/StevenBartlett
---
It's finally here, the Doom Debates / Dr. Phil crossover episode you've all been asking for 😂
The full episode is called “AI: The Future of Education?"
While the main focus was AI in education, I'm glad the show briefly touched on how we're all gonna die. Everything in the show related to AI extinction is clipped here.
---
Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.
Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!
This is a cross-post from the For Humanity podcast hosted by John Sherman. For Humanity is basically a sister show of Doom Debates. Highly recommend subscribing!
00:00 John Sherman’s Intro
05:21 Diverging Views on AI Safety and Control
12:24 The Challenge of Defining Human Values for AI
18:04 Risks of Superintelligent AI and Potential Solutions
33:41 The Case for Narrow AI
45:21 The Concept of Utopia
48:33 AI's Utility Function and Human Values
55:48 Challenges in AI Safety Research
01:05:23 Breeding Program Proposal
01:14:05 The Reality of AI Regulation
01:18:04 Concluding Thoughts
01:23:19 Celebration of Life
This episode on For Humanity’s channel: https://www.youtube.com/watch?v=KcjLCZcBFoQ
For Humanity on YouTube: https://www.youtube.com/@ForHumanityPodcast
For Humanity on X: https://x.com/ForHumanityPod
Buy Roman’s new book: https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X
---
Jobst Landgrebe, co-author of Why Machines Will Never Rule The World: Artificial Intelligence Without Fear, argues that AI is fundamentally limited in achieving human-like intelligence or consciousness due to the complexities of the human brain which are beyond mathematical modeling.
Contrary to my view, Jobst has a very low opinion of what machines will be able to achieve in the coming years and decades.
He’s also a devout Christian, which makes our clash of perspectives funnier.
00:00 Introduction
03:12 AI Is Just Pattern Recognition?
06:46 Mathematics and the Limits of AI
12:56 Complex Systems and Thermodynamics
33:40 Transhumanism and Genetic Engineering
47:48 Materialism
49:35 Transhumanism as Neo-Paganism
01:02:38 AI in Warfare
01:11:55 Is This Science?
01:25:46 Conclusion
Source podcast: https://www.youtube.com/watch?v=xrlT1LQSyNU
---
Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan.
Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve, and the often misleading promises made by companies and researchers.
I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.
00:00 Introduction
01:21 Arvind’s Perspective on AI
02:07 Debating AI's Compute and Performance
03:59 Synthetic Data vs. Real Data
05:59 The Role of Compute in AI Advancement
07:30 Challenges in AI Predictions
26:30 AI in Organizations and Tacit Knowledge
33:32 The Future of AI: Exponential Growth or Plateau?
36:26 Relevance of Benchmarks
39:02 AGI
40:59 Historical Predictions
46:28 OpenAI vs. Anthropic
52:13 Regulating AI
56:12 AI as a Weapon
01:02:43 Sci-Fi
01:07:28 Conclusion
Original source: https://www.youtube.com/watch?v=8CvjVAyB4O4
Follow Arvind Narayanan: x.com/random_walker
Follow Harry Stebbings: x.com/HarryStebbings
---
Today I’m reacting to Bret Weinstein’s recent appearance on the Diary of a CEO podcast with Steven Bartlett. Bret is an evolutionary biologist known for his outspoken views on social and political issues.
Bret gets off to a promising start, saying that AI risk should be “top of mind” and poses “five existential threats”. But his analysis is shallow and ad-hoc, and ends in him dismissing the idea of trying to use regulation as a tool to save our species from a recognized existential threat.
I believe we can raise the level of AI doom discourse by calling out these kinds of basic flaws in popular media on the subject.
00:00 Introduction
02:02 Existential Threats from AI
03:32 The Paperclip Problem
04:53 Moral Implications of Ending Suffering
06:31 Inner vs. Outer Alignment
08:41 AI as a Tool for Malicious Actors
10:31 Attack vs. Defense in AI
18:12 The Event Horizon of AI
21:42 Is Language More Prime Than Intelligence?
38:38 AI and the Danger of Echo Chambers
46:59 AI Regulation
51:03 Mechanistic Interpretability
56:52 Final Thoughts
Original source: youtube.com/watch?v=_cFu-b5lTMU
Follow Bret Weinstein: x.com/BretWeinstein
Follow Steven Bartlett: x.com/StevenBartlett
---
California's SB 1047 bill, authored by CA State Senator Scott Wiener, is the leading attempt by a US state to regulate catastrophic risks from frontier AI in the wake of President Biden's 2023 AI Executive Order.
Today’s debate:
Holly Elmore, Executive Director of PauseAI US, representing the pro-SB 1047 side
Greg Tanaka, Palo Alto City Councilmember, representing the anti-SB 1047 side
Key Bill Supporters: Geoffrey Hinton, Yoshua Bengio, Anthropic, PauseAI, and about a 2/3 majority of California voters surveyed.
Key Bill Opponents: OpenAI, Google, Meta, Y Combinator, Andreessen Horowitz
Links
Greg mentioned that the "Supporters & Opponents" tab on this page lists organizations that registered their support or opposition; the vast majority of organizations listed there registered opposition to the bill: https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047
Holly mentioned surveys of California voters showing popular support for the bill:
1. Center for AI Safety survey shows 77% support: https://drive.google.com/file/d/1wmvstgKo0kozd3tShPagDr1k0uAuzdDM/view
2. Future of Life Institute survey shows 59% support: https://futureoflife.org/ai-policy/poll-shows-popularity-of-ca-sb1047/
Follow Holly: x.com/ilex_ulmus
Follow Greg: x.com/GregTanaka
---
Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.
I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.
00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts