Episodes

  • Podcast: Conversations with Tyler
    Episode: Peter Thiel on Political Theology
    Release date: 2024-04-17


    In this conversation recorded live in Miami, Tyler and Peter Thiel dive deep into the complexities of political theology, including why it’s a concept we still need today, why Peter’s against Calvinism (and rationalism), whether the Old Testament should lead us to be woke, why Carl Schmitt is enjoying a resurgence, whether we’re entering a new age of millenarian thought, the one existential risk Peter thinks we’re overlooking, why everyone just muddling through leads to disaster, the role of the katechon, the political vision in Shakespeare, how AI will affect the influence of wordcels, Straussian messages in the Bible, what worries Peter about Miami, and more.

    Read a full transcript enhanced with helpful links, or watch the full video.

    Recorded February 21st, 2024.

    Other ways to connect

    Follow us on X and Instagram
    Follow Tyler on X
    Follow Peter on X
    Sign up for our newsletter
    Join our Discord
    Email us: [email protected]
    Learn more about Conversations with Tyler and other Mercatus Center podcasts here.
  • Podcast: Making Sense with Sam Harris
    Episode: #361 — Sam Bankman-Fried & Effective Altruism
    Release date: 2024-04-01


    Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of “earning to give,” the mind of SBF, his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.


  • Podcast: Dwarkesh Podcast
    Episode: Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind
    Release date: 2024-03-28


    Had so much fun chatting with my good friends Trenton Bricken and Sholto Douglas on the podcast.

    No way to summarize it, except:

    This is the best context dump out there on how LLMs are trained, what capabilities they're likely to soon have, and what exactly is going on inside them.

    You would be shocked how much of what I know about this field, I've learned just from talking with them.

    To the extent that you've enjoyed my other AI interviews, now you know why.

    So excited to put this out. Enjoy! I certainly did :)

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

    There's a transcript with links to all the papers the boys were throwing down - may help you follow along.

    Follow Trenton and Sholto on Twitter.

    Timestamps

    (00:00:00) - Long contexts

    (00:16:12) - Intelligence is just associations

    (00:32:35) - Intelligence explosion & great researchers

    (01:06:52) - Superposition & secret communication

    (01:22:34) - Agents & true reasoning

    (01:34:40) - How Sholto & Trenton got into AI research

    (02:07:16) - Are feature spaces the wrong way to think about intelligence?

    (02:21:12) - Will interp actually work on superhuman models

    (02:45:05) - Sholto’s technical challenge for the audience

    (03:03:57) - Rapid fire



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
  • Podcast: Joe Carlsmith Audio
    Episode: An even deeper atheism
    Release date: 2024-01-11


    Who isn't a paperclipper?

    Text version here: https://joecarlsmith.com/2024/01/11/an-even-deeper-atheism

    This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief summaries of the essays that have been released thus far: https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi



  • Podcast: a16z Podcast
    Episode: Drones, Data, and Deterrence: Technology's Role in Public Safety
    Release date: 2024-01-10


    Flock is a public safety technology platform that operates in over 4,000 cities across the United States, and solves about 2,200 crimes daily. That’s 10 percent of reported crimes nationwide.

    Taken from a16z’s recent LP Summit, a16z General Partner David Ulevitch joins forces with Flock Safety’s founder, Garrett Langley, and Sheriff Kevin McMahill of the Las Vegas Metropolitan Police Department. Together, they cover the delicate balance between using technology to combat crime and respecting individual privacy, and explore the use of drones and facial recognition, building trust within communities, and the essence of objective policing.

    Resources:

    Find Garrett on Twitter: https://twitter.com/glangley

    Find Sheriff McMahill on Twitter: https://twitter.com/Sheriff_LVMPD

    Find David on Twitter: https://twitter.com/davidu

    Learn more about Flock Safety: https://www.flocksafety.com

    Stay Updated:

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.


  • Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
    Episode: Biden's Executive Order and AI Safety with Flo Crivello, Founder of Lindy AI
    Release date: 2023-11-03


    In this episode, Flo Crivello, founder of Lindy AI, joins Nathan to chat about Biden’s executive order, and the state of AI safety. They discuss Flo’s thoughts on the executive order, building AGI kill switches, self driving cars, and more. If you need an ERP platform, check out our sponsor NetSuite: https://netsuite.com/cognitive.

    We're hiring across the board at Turpentine and for Erik's personal team on other projects he's incubating. He's hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: eriktorenberg.com.

    SPONSORS: Netsuite | Omneky

    Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and 1,000,000s of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/cognitive

    NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: https://netsuite.com/cognitive and download your own customized KPI checklist.

    Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.

    RECOMMENDED PODCAST:

    Every week investor and writer of the popular newsletter The Diff, Byrne Hobart, and co-host Erik Torenberg discuss today’s major inflection points in technology, business, and markets – and help listeners build a diversified portfolio of trends and ideas for the future. Subscribe to “The Riff” with Byrne Hobart and Erik Torenberg: https://link.chtbl.com/theriff

    TIMESTAMPS:

    (00:00) Episode Preview

    (00:06:42) The natural order of technological progress

    (00:07:00) Self driving cars

    (00:10:57) Where is Flo accelerationist?

    (00:12:34) Artificial intelligence as a new form of life

    (00:17:08) - Sponsors: Oracle | Omneky

    (00:18:05) Silicon-based intelligence vs carbon-based intelligence

    (00:24:36) Executive Order

    (00:29:32) How would a GPU kill switch work?

    (00:31:24) “Let’s not regulate model development, but applications”

    (00:32:08) - Sponsor: Netsuite

    (00:36:00) GPT-4 is the most critical component for AGI

    (00:38:00) AGI in 2-8 years

    (00:39:26) Eureka moment from a general system

    (00:48:00) AI research with China

    (00:52:00) Does AI have subjective experience? The Mu response

    The Cognitive Revolution is brought to you by the Turpentine Media network.

    Producer: Vivian Meng

    Executive Producers: Amelia Salyers, and Erik Torenberg

    Editor: Graham Bessellieu

    For inquiries about guests or sponsoring the podcast, please email [email protected]


  • Podcast: Dwarkesh Podcast
    Episode: Paul Christiano - Preventing an AI Takeover
    Release date: 2023-10-31


    Paul Christiano is the world’s leading AI safety researcher. My full episode with him is out!

    We discuss:

    - Does he regret inventing RLHF, and is alignment necessarily dual-use?

    - Why he has relatively modest timelines (40% by 2040, 15% by 2030),

    - What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?

    - Why he’s leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon,

    - His current research into a new proof system, and how this could solve alignment by explaining models' behavior

    - and much more.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Open Philanthropy

    Open Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations.

    For more information and to apply, please see the application: https://www.openphilanthropy.org/research/new-roles-on-our-gcr-team/

    The deadline to apply is November 9th; make sure to check out those roles before they close.

    Timestamps

    (00:00:00) - What do we want the post-AGI world to look like?

    (00:24:25) - Timelines

    (00:45:28) - Evolution vs gradient descent

    (00:54:53) - Misalignment and takeover

    (01:17:23) - Is alignment dual-use?

    (01:31:38) - Responsible scaling policies

    (01:58:25) - Paul’s alignment research

    (02:35:01) - Will this revolutionize theoretical CS and math?

    (02:46:11) - How Paul invented RLHF

    (02:55:10) - Disagreements with Carl Shulman

    (03:01:53) - Long TSMC but not NVIDIA



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
  • Podcast: Tetragrammaton with Rick Rubin
    Episode: Tyler Cowen: From Avant-Garde to Pop (Bonus DJ Episode)
    Release date: 2023-10-18


    Tyler Cowen has long nurtured an obsession with music. It’s one of the few addictions Tyler believes is actually conducive to a fulfilling intellectual life.

    In this bonus episode, an addendum to Rick’s conversation with Tyler, Rick sits with Tyler as he plays and talks through the music that moves him: from the outer bounds of the avant-garde to contemporary pop music and all points in between.


  • Podcast: Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
    Episode: 233 | Hugo Mercier on Reasoning and Skepticism
    Release date: 2023-04-17


    Here at the Mindscape Podcast, we are firmly pro-reason. But what does that mean, fundamentally and in practice? How did humanity come into the idea of not just doing things, but doing things for reasons? In this episode we talk with cognitive scientist Hugo Mercier about these issues. He is the co-author (with Dan Sperber) of The Enigma of Reason, about how the notion of reason came to be, and more recently author of Not Born Yesterday, about who we trust and what we believe. He argues that our main shortcoming is not that we are insufficiently skeptical of radical claims, but that we are too skeptical of claims that don't fit our views.

    Support Mindscape on Patreon.

    Hugo Mercier received a Ph.D. in cognitive sciences from the École des Hautes Études en Sciences Sociales. He is currently a Permanent CNRS Research Scientist at the Institut Jean Nicod, Paris. Among his awards is the Prime d’excellence from the CNRS.

    Web site
    Google Scholar publications
    Amazon author page
    Twitter

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.


  • Podcast: The Book Club
    Episode: Tom Holland: Dominion
    Release date: 2019-12-04


    In this week's Book Club, Sam's guest is the historian Tom Holland, author of the new book Dominion: The Making of the Western Mind. The book, though as Tom remarks you might not know it from the cover, is essentially a history of Christianity -- and an account of the myriad ways, many of them invisible to us, that it has shaped and continues to shape Western culture. It's a book and an argument that takes us from Ancient Babylon to Harvey Weinstein's hotel room, draws in the Beatles and the Nazis, and orbits around two giant figures: St Paul and Nietzsche. Is there a single discernible, distinctive Christian way of thinking? Is secularism Christianity by other means? And are our modern-day culture wars between alt-righters and woke progressives a post-Christian phenomenon or, as Tom argues, essentially a civil war between two Christian sects?

    Presented by Sam Leith.

  • Podcast: Clearer Thinking with Spencer Greenberg
    Episode: How quickly is AI advancing? And should you be working in the field? (with Danny Hernandez)
    Release date: 2023-08-23


    Read the full transcript here.

    Along what axes and at what rates is the AI industry growing? What algorithmic developments have yielded the greatest efficiency boosts? When, if ever, will we hit the upper limits of the amount of computing power, data, money, etc., we can throw at AI development? Why do some people seemingly become fixated on particular tasks that particular AI models can't perform and draw the conclusion that AIs are still pretty dumb and won't be taking our jobs any time soon? What kinds of tasks are more or less easily automatable? Should more people work on AI? What does it mean to "take ownership" of our friendships? What sorts of thinking patterns employed by AI engineers can be beneficial in other areas of life? How can we make better decisions, especially about large things like careers and relationships?

    Danny Hernandez was an early AI researcher at OpenAI and Anthropic. He's best known for measuring macro progress in AI. For example, he helped show that the compute of the largest training runs was growing at 10x per year between 2012 and 2017. He also helped show an algorithmic equivalent of Moore's Law that was faster, and he's done work on scaling laws and mechanistic interpretability of learning from repeated data. He is currently focused on alignment research.

    Staff

    Spencer Greenberg — Host / Director
    Josh Castle — Producer
    Ryan Kessler — Audio Engineer
    Uri Bram — Factotum
    WeAmplify — Transcriptionists
    Miles Kestran — Marketing

    Music

    Broke for Free
    Josh Woodward
    Lee Rosevere
    Quiet Music for Tiny Robots
    wowamusic
    zapsplat.com

    Affiliates

    Clearer Thinking
    GuidedTrack
    Mind Ease
    Positly
    UpLift
  • Podcast: Invest Like the Best with Patrick O'Shaughnessy
    Episode: Samo Burja - The Great Founder Theory of History - [Invest Like the Best, EP.339]
    Release date: 2023-08-01


    My guest today is Samo Burja. Samo is the founder of the consulting firm Bismarck Analysis, and has dedicated his life’s work to understanding why there has never been an immortal society. His research focuses on institutions, the founders behind them, how they rise and why they always fall in the end. As you’ll hear, Samo has an encyclopedic grasp of history and his work has led him to some fascinating theories about human progress, the nature of exceptional founders, and the future of different societies across the world. Please enjoy my conversation with Samo Burja.

    Listen to Founders Podcast

    Founders Episode 311: James Cameron

    For the full show notes, transcript, and links to mentioned content, check out the episode page here.

    -----

    This episode is brought to you by Tegus. Tegus is the modern research platform for leading investors. Tired of running your own expert calls to get up to speed on a company? Tegus lets you ramp faster and find answers to critical questions more efficiently than any alternative method. The gold standard for research, the Tegus platform delivers unmatched access to timely, qualitative insights through the largest and most differentiated expert call transcript database. With over 60,000 transcripts spanning 22,000 public and private companies, investors can accelerate their fundamental research process by discovering highly-differentiated and reliable insights that can’t be found anywhere else in the market. As a listener, drive your next investment thesis forward with Tegus for free at tegus.co/patrick.

    -----

    Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes.

    Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more.

    Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here.

    Follow us on Twitter: @patrick_oshag | @JoinColossus

    Show Notes

    (00:02:52) - (First question) - The core thesis behind the Great Founder Theory

    (00:06:40) - Great ideas inevitably being discovered at some point in history

    (00:08:45) - The historic implications of a global adoption of the Great Founder Theory

    (00:10:51) - The different possible directions of future trends

    (00:17:08) - Distinctions between great founders versus live players

    (00:22:15) - Common misconceptions about what qualifies one as a great founder

    (00:24:38) - Noteworthy great founders in the United States

    (00:28:34) - Recurring observable traits and common themes of great founders

    (00:31:29) - Using caution when projecting a mythic lens onto great founders

    (00:37:53) - Social technology as the upstream effects of prior material technology

    (00:43:32) - Whether or not institutions play a role in propagating the work of great founders

    (00:49:08) - The role of power and differences between owned and borrowed power

    (00:56:51) - Additional ideas that play an outsized role in shaping the world

    (01:01:09) - A differing worldview to his own that he finds interesting

    (01:04:53) - Whether or not capital allocators can benefit from the Great Founder Theory

    (01:07:37) - The kindest thing anyone has ever done for him


  • Podcast: The Joe Walker Podcast (Jolly Swagman formerly)
    Episode: Stephen Wolfram — Constructing the Computational Paradigm
    Release date: 2023-08-16


    Stephen Wolfram is a physicist, computer scientist and businessman. He is the founder and CEO of Wolfram Research, the creator of Mathematica and Wolfram Alpha, and the author of A New Kind of Science.

    Full transcript available at: jnwpod.com.

    See omnystudio.com/listener for privacy information.


  • Podcast: Dwarkesh Podcast
    Episode: Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress
    Release date: 2023-08-08


    Here is my conversation with Dario Amodei, CEO of Anthropic.

    Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (00:00:00) - Introduction

    (00:01:00) - Scaling

    (00:15:46) - Language

    (00:22:58) - Economic Usefulness

    (00:38:05) - Bioterrorism

    (00:43:35) - Cybersecurity

    (00:47:19) - Alignment & mechanistic interpretability

    (00:57:43) - Does alignment research require scale?

    (01:05:30) - Misuse vs misalignment

    (01:09:06) - What if AI goes well?

    (01:11:05) - China

    (01:15:11) - How to think about alignment

    (01:31:31) - Is modern security good enough?

    (01:36:09) - Inefficiencies in training

    (01:45:53) - Anthropic’s Long Term Benefit Trust

    (01:51:18) - Is Claude conscious?

    (01:56:14) - Keeping a low profile



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
  • Podcast: No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
    Episode: Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and Inflection
    Release date: 2023-05-11


    Mustafa Suleyman, co-founder of DeepMind and now co-founder and CEO of Inflection AI, joins Sarah and Elad to discuss how his interests in counseling, conflict resolution, and intelligence led him to start an AI lab that pioneered deep reinforcement learning, lead applied AI and policy efforts at Google, and more recently found Inflection and launch Pi.

    Mustafa offers insights on the changing structure of the web, the pressure Google faces in the age of AI personalization, predictions for model architectures, how to measure emotional intelligence in AIs, and the thinking behind Pi: the AI companion that knows you, is aligned to your interests, and provides companionship.

    Sarah and Elad also discuss Mustafa’s upcoming book, The Coming Wave (releasing September 12, 2023), which examines the political ramifications of the AI and digital biology revolutions.

    No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

    Show Links:

    Forbes - Startup From Reid Hoffman and Mustafa Suleyman Debuts ChatBot
    Inflection.ai
    Mustafa-Suleyman.ai

    Sign up for new podcasts every week. Email feedback to [email protected]

    Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mustafasuleymn

    Show Notes:

    [00:06] - From Conflict Resolution to AI Pioneering

    [10:36] - Defining Intelligence

    [15:32] - DeepMind's Journey and Breakthroughs

    [24:45] - The Future of Personal AI Companionship

    [33:22] - AI and the Future of Personalized Content

    [41:49] - The Launch of Pi

    [51:12] - Mustafa’s New Book The Coming Wave


  • Podcast: Dwarkesh Podcast
    Episode: Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
    Release date: 2023-06-26


    The second half of my 7 hour conversation with Carl Shulman is out!

    My favorite part! And the one that had the biggest impact on my worldview.

    Here, Carl lays out how an AI takeover might happen:

    * AI can threaten mutually assured destruction from bioweapons,

    * use cyber attacks to take over physical infrastructure,

    * build mechanical armies,

    * spread seed AIs we can never exterminate,

    * offer tech and other advantages to collaborating countries, etc

    Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:

    * what is the far future best case scenario for humanity

    * what it would look like to have AI make thousands of years of intellectual progress in a month

    * how do we detect deception in superhuman models

    * does space warfare favor defense or offense

    * is a Malthusian state inevitable in the long run

    * why markets haven't priced in explosive economic growth

    * & much more

    Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Catch part 1 here

    Timestamps

    (00:00:00) - Intro

    (00:00:47) - AI takeover via cyber or bio

    (00:32:27) - Can we coordinate against AI?

    (00:53:49) - Human vs AI colonizers

    (01:04:55) - Probability of AI takeover

    (01:21:56) - Can we detect deception?

    (01:47:25) - Using AI to solve coordination problems

    (01:56:01) - Partial alignment

    (02:11:41) - AI far future

    (02:23:04) - Markets & other evidence

    (02:33:26) - Day in the life of Carl Shulman

    (02:47:05) - Space warfare, Malthusian long run, & other rapid fire



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
  • Podcast: Joe Carlsmith Audio
    Episode: Predictable updating about AI risk
    Release date: 2023-05-08


    How worried about AI risk will we feel in the future, when we can see advanced machine intelligence up close? We should worry accordingly now. Text version here: https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk


  • Podcast: Dwarkesh Podcast
    Episode: Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
    Release date: 2023-06-14


    In terms of the depth and range of topics, this episode is the best I’ve done.

    No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

    We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.

    This part is about Carl’s model of an intelligence explosion, which integrates everything from:

    * how fast algorithmic progress & hardware improvements in AI are happening,

    * what primate evolution suggests about the scaling hypothesis,

    * how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,

    * how quickly robots produced from existing factories could take over the economy.

    We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.

    The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.

    Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (00:00:00) - Intro

    (00:01:32) - Intelligence Explosion

    (00:18:03) - Can AIs do AI research?

    (00:39:00) - Primate evolution

    (01:03:30) - Forecasting AI progress

    (01:34:20) - After human-level AGI

    (02:08:39) - AI takeover scenarios



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
  • Podcast: Conversations with Tyler
    Episode: Peter Singer on Utilitarianism, Influence, and Controversial Ideas
    Release date: 2023-06-07


    Peter Singer is one of the world’s most influential living philosophers, whose ideas have motivated millions of people to change how they eat, how they give, and how they interact with each other and the natural world.

    Peter joined Tyler to discuss whether utilitarianism is only tractable at the margin, how Peter thinks about the meat-eater problem, why he might side with aliens over humans, at what margins he would police nature, the utilitarian approach to secularism and abortion, what he’s learned producing the Journal of Controversial Ideas, what he’d change about the current Effective Altruism movement, where Derek Parfit went wrong, to what extent we should respect the wishes of the dead, why professional philosophy is so boring, his advice on how to enjoy our lives, what he’ll be doing after retiring from teaching, and more.

    Read a full transcript enhanced with helpful links, or watch the full video.

    Recorded May 25th, 2023

    Other ways to connect

    Follow us on Twitter and Instagram
    Follow Tyler on Twitter
    Follow Peter on Twitter
    Email us: [email protected]
    Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

    Photo credit: Katarzyna de Lazari-Radek


  • Podcast: 80,000 Hours Podcast
    Episode: #152 – Joe Carlsmith on navigating serious philosophical confusion
    Release date: 2023-05-19


    What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones?

    Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what’s a species to do?

    In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

    Links to learn more, summary and full transcript.

    To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity's self-assured understanding of the world.

    The first idea is that we might be living in a computer simulation, because, in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any particular rebuttal to this 'simulation argument.'

    If true, it could revolutionise our comprehension of the universe and the way we ought to live...

    The other two ideas were cut for length — click here to read the full post.

    These are just three particular instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always take philosophy and arguments seriously, and try to act on them, it can lead to what seem like some pretty crazy and impractical places. So what should we do with this buffet of plausible-sounding but bewildering arguments?

    Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

    In today's challenging conversation, Joe and Rob discuss all of the above, as well as:

    What Joe doesn't like about the drowning child thought experiment
    An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
    What Joe doesn't like about the expression “the train to crazy town”
    Whether Elon Musk should place a higher probability on living in a simulation than most other people
    Whether the deterministic twin prisoner’s dilemma, if fully appreciated, gives us an extra reason to keep promises
    To what extent learning to doubt our own judgement about difficult questions -- so-called “epistemic learned helplessness” -- is a good thing
    How strong the case is that advanced AI will engage in generalised power-seeking behaviour

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris

    Audio mastering: Milo McGuire and Ben Cordell

    Transcriptions: Katy Moore