Episodes

  • It seems like the loudest voices in AI often fall into one of two groups. There are the boomers – the techno-optimists – who think that AI is going to bring us into an era of untold prosperity. And then there are the doomers, who think there’s a good chance AI is going to lead to the end of humanity as we know it.

    While these two camps are, in many ways, completely at odds with one another, they do have one thing in common: they both buy into the hype of artificial intelligence.

    But when you dig deeper into these systems, it becomes apparent that both of these visions – the utopian one and the doomy one – are based on some pretty tenuous assumptions.

    Kate Crawford has been trying to understand how AI systems are built for more than a decade. She’s the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.

    Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn’t lead to utopia, or take over the world, it is transforming the planet – by depleting its natural resources, exploiting workers, and sucking up our personal data. And that’s something we need to be paying attention to.

    Mentioned:

    “ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine” by Joseph Weizenbaum

    “Microsoft, OpenAI plan $100 billion data-center project, media report says,” Reuters

    “Meta ‘discussed buying publisher Simon & Schuster to train AI’” by Ella Creamer

    “Google pauses Gemini AI image generation of people after racial ‘inaccuracies’” by Kelvin Chan and Matt O’Brien

    “OpenAI and Apple announce partnership,” OpenAI

    Fairwork

    “New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms” by Fairwork

    “The Work of Copyright Law in the Age of Generative AI” by Kate Crawford and Jason Schultz

    “Generative AI’s environmental costs are soaring – and mostly secret” by Kate Crawford

    “Artificial intelligence guzzles billions of liters of water” by Manuel G. Pascual

    “S.3732 – Artificial Intelligence Environmental Impacts Act of 2024”

    “Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation” by Peter Greim, A. A. Solomon, Christian Breyer

    “Calculating Empires” by Kate Crawford and Vladan Joler

    Further Reading:

    “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence” by Kate Crawford

    “Excavating AI” by Kate Crawford and Trevor Paglen

    “Understanding the work of dataset creators” from Knowing Machines

    “Should We Treat Data as Labor? Moving beyond ‘Free’” by I. Arrieta-Ibarra et al.

  • Think about the last time you felt let down by the health care system. You probably don’t have to go back far. In wealthy countries around the world, medical systems that were once robust are now crumbling. Doctors and nurses, tasked with an ever-expanding range of responsibilities, are busier than ever, which means they have less and less time for patients. In the United States, the average doctor’s appointment lasts seven minutes. In South Korea, it’s only two.

    Without sufficient time and attention, patients are suffering. There are 12 million significant misdiagnoses in the US every year, and 800,000 of those result in death or disability. (While the same kind of data isn’t available in Canada, similar trends are almost certainly happening here as well).

    Eric Topol says medicine has become decidedly inhuman – and the consequences have been disastrous. Topol is a cardiologist and one of the most widely cited medical researchers in the world. In his latest book, Deep Medicine, he argues that the best way to make health care human again is to embrace the inhuman, in the form of artificial intelligence.

    Mentioned:

    “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again” by Eric Topol

    “The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations” by H. Singh, A. Meyer, E. Thomas

    “Burden of serious harms from diagnostic error in the USA” by David Newman-Toker, et al.

    “How Expert Clinicians Intuitively Recognize a Medical Diagnosis” by J. Brush Jr, J. Sherbino, G. Norman

    “A Randomized Controlled Study of Art Observation Training to Improve Medical Student Ophthalmology Skills” by Jaclyn Gurwin, et al.

    “Abridge becomes Epic’s First Pal, bringing generative AI to more providers and patients, including those at Emory Healthcare”

    “Why Doctors Should Organize” by Eric Topol

    “How This Rural Health System Is Outdoing Silicon Valley” by Erika Fry

    Further Reading:

    "The Importance Of Being" by Abraham Verghese

  • Earlier this year, Elon Musk’s company Neuralink successfully installed one of its brain implants in a 29-year-old quadriplegic man named Noland Arbaugh. The device changed Arbaugh’s life. He no longer needs a mouth stylus to control his computer or play video games. Instead, he can use his mind.

    The brain-computer interface that Arbaugh uses is part of an emerging field known as neurotechnology that promises to reshape the way we live. A wide range of AI-empowered neurotechnologies may allow disabled people like Arbaugh to regain independence, or make it possible to erase traumatic memories in patients suffering from PTSD.

    But it doesn’t take a great leap to envision how these technologies could be abused as well. Law enforcement agencies in the United Arab Emirates have used neurotechnology to read the minds of criminal suspects and convict them based on what they’ve found. And corporations are developing ways to advertise to potential customers in their dreams. Remarkably, both of these things appear to be legal, as there are virtually no laws explicitly governing neurotechnology.

    All of which makes Nita Farahany’s work incredibly timely. Farahany is a professor of law and philosophy at Duke University and the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.

    Farahany isn’t fatalistic about neurotech – in fact, she uses some of it herself. But she is adamant that we need to start developing laws and guardrails as soon as possible, because it may not be long before governments, employers and corporations have access to our brains.

    Mentioned:

    “PRIME Study Progress Update – User Experience,” Neuralink

    “Paralysed man walks using device that reconnects brain with muscles,” The Guardian

    “Cognitive Warfare,” NATO Allied Command Transformation (ACT)

    “The Ethics of Neurotechnology: UNESCO appoints international expert group to prepare a new global standard,” UNESCO

  • When Eugenia Kuyda saw Her for the first time – the 2013 film about a man who falls in love with his virtual assistant – it didn’t read as science fiction. That’s because she was developing a remarkably similar technology: an AI chatbot that could function as a close friend, or even a romantic partner.

    That idea would eventually become the basis for Replika, Kuyda’s AI startup. Today, Replika has millions of active users – that’s millions of people who have AI friends, AI siblings and AI partners.

    When I first heard about the idea behind Replika, I thought it sounded kind of dystopian. I envisioned a world where we’d rather spend time with our AI friends than our real ones. But that’s not the world Kuyda is trying to build. In fact, she thinks chatbots will actually make people more social, not less, and that the cure for our technologically exacerbated loneliness might just be more technology.

    Mentioned:

    “ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine” by Joseph Weizenbaum

    “elizabot.js”, implemented by Norbert Landsteiner

    “Speak, Memory” by Casey Newton (The Verge)

    “Creating a safe Replika experience” by Replika

    “The Year of Magical Thinking” by Joan Didion

    Additional Reading:

    The Globe & Mail: “They fell in love with the Replika AI chatbot. A policy update left them heartbroken”

    “Loneliness and suicide mitigation for students using GPT3-enabled chatbots” by Maples, Cerit, Vishwanath, & Pea

    “Learning from intelligent social agents as social and intellectual mirrors” by Maples, Pea, Markowitz

  • In the last few years, artificial intelligence has gone from a novelty to perhaps the most influential technology we’ve ever seen. The people building AI are convinced that it will eradicate disease, turbocharge productivity, and solve climate change. It feels like we’re on the cusp of a profound societal transformation. And yet, I can’t shake the feeling we’ve been here before. Fifteen years ago, there was a similar wave of optimism around social media: it was going to connect the world, catalyze social movements and spur innovation. It may have done some of these things. But it also made us lonelier, angrier, and occasionally detached from reality.

    Few people understand this trajectory better than Maria Ressa. Ressa is a Filipino journalist, and the CEO of a news organization called Rappler. Like many people, she was once a fervent believer in the power of social media. Then she saw how it could be abused. In 2016, she reported on how Rodrigo Duterte, then president of the Philippines, had weaponized Facebook in the election he’d just won. After publishing those stories, Ressa became a target herself, and her inbox was flooded with death threats. In 2021, she won the Nobel Peace Prize.

    I wanted this to be our first episode because I think, as novel as AI is, it has undoubtedly been shaped by the technologies, the business models, and the CEOs that came before it. And Ressa thinks we’re about to repeat the mistakes we made with social media all over again.

    Mentioned:

    “How to Stand Up to a Dictator” by Maria Ressa

    “A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism” by Thompson et al.

    Rappler’s Matrix Protocol Chat App: Rappler Communities

    “Democracy Report 2023: Defiance in the Face of Autocratization” by V-Dem

    “The Foundation Model Transparency Index” by Stanford HAI (Human-Centered Artificial Intelligence)

    “All the ways Trump’s campaign was aided by Facebook, ranked by importance” by Philip Bump (The Washington Post)

    “Our Epidemic of Loneliness and Isolation” by U.S. Surgeon General Dr. Vivek H. Murthy

  • We are living in an age of breakthroughs propelled by advances in artificial intelligence. Technologies that were once the realm of science fiction will become our reality: robot best friends, bespoke gene editing, brain implants that make us smarter.

    Every other Tuesday, Taylor Owen sits down with someone shaping this rapidly approaching future.

    The first two episodes will be released on May 7th. Subscribe now so you don’t miss an episode.