Episodes
-
Margrethe Vestager has spent the past decade standing up to Silicon Valley. As the EU's Competition Commissioner, she's waged landmark legal battles against tech giants like Meta, Microsoft and Amazon. Her two latest wins will cost Apple and Google billions of dollars.
With her decade-long tenure as one of the world's most powerful antitrust watchdogs coming to an end, Vestager has turned her attention to AI. She spearheaded the EU's AI Act, the first and, so far, the most ambitious piece of AI legislation in the world.
But the clock is ticking – both on her term and on the global race to govern AI, which Vestager says we have "very little time" to get right.
Mentioned:
The EU Artificial Intelligence Act
"Dutch scandal serves as a warning for Europe over risks of using algorithms" by Melissa Heikkilä
"Belgian man dies by suicide following exchanges with chatbot" by Lauren Walker
The Digital Services Act
The Digital Markets Act
General Data Protection Regulation (GDPR)
"The future of European competitiveness" by Mario Draghi
"Governing AI for Humanity: Final Report" by the United Nations Secretary-General's High-level Advisory Body
The Artificial Intelligence and Data Act (AIDA)
Further Reading:
"Apple, Google must pay billions in back taxes and fines, E.U. court rules" by Ellen Francis and Cat Zakrzewski
"OpenAI Lobbied the E.U. to Water Down AI Regulation" by Billy Perrigo
"The total eclipse of Margrethe Vestager" by Samuel Stolton
"Digital Empires: The Global Battle to Regulate Technology" by Anu Bradford
"The Brussels Effect: How the European Union Rules the World" by Anu Bradford
-
We're off this week, so we're bringing you an episode from our Globe and Mail sister show Lately.
That creeping feeling that everything online is getting worse has a name: "enshittification," a term for the slow degradation of our experience on digital platforms. The enshittification cycle is why you now have to wade through slop to find anything useful on Google, and why your charger is different from your BFF's.
According to Cory Doctorow, the man who coined the memorable moniker, this digital decay isn't inevitable. It's a symptom of corporate under-regulation and monopoly – practices being challenged in courts around the world, like the US Department of Justice's antitrust suit against Google.
Cory Doctorow is a British-Canadian journalist, blogger and author of Chokepoint Capitalism, as well as speculative fiction works like The Lost Cause and the new novella Spill.
Every Friday, Lately takes a deep dive into the big, defining trends in business and tech that are reshaping our every day. It's hosted by Vass Bednar.
Machines Like Us will be back in two weeks.
-
The tech lobby has quietly turned Silicon Valley into the most powerful political operation in America.
Pro-crypto donors are now responsible for almost half of all corporate donations this election. Elon Musk has gone from an occasional online troll to, as one of our guests calls him, "MAGA's Minister of Propaganda." And for the first time, the once reliably blue Silicon Valley seems to be shifting to the right. What does all this mean for the upcoming election?
To help us better understand this moment, we spoke with three of the most prominent tech writers in the U.S. Charles Duhigg (author of the bestseller Supercommunicators) has a recent piece in the New Yorker called "Silicon Valley, the New Lobbying Monster." Charlie Warzel is a staff writer at the Atlantic, and Nitasha Tiku is a tech culture reporter at the Washington Post.
Mentioned:
"Silicon Valley, the New Lobbying Monster" by Charles Duhigg
"Big Crypto, Big Spending: Crypto Corporations Spend an Unprecedented $119 Million Influencing Elections" by Rick Claypool via Public Citizen
"I'm Running Out of Ways to Explain How Bad This Is" by Charlie Warzel
"Elon Musk Has Reached a New Low" by Charlie Warzel
"The movement to diversify Silicon Valley is crumbling amid attacks on DEI" by Naomi Nix, Cat Zakrzewski and Nitasha Tiku
"The Techno-Optimist Manifesto" by Marc Andreessen
"Trump Vs. Biden: Tech Policy," The Ben & Marc Show
"The MAGA Aesthetic Is AI Slop" by Charlie Warzel
Further Reading:
"Biden's FTC took on big tech, big pharma and more. What antitrust legacy will Biden leave behind?" by Paige Sutherland and Meghna Chakrabarti
"Inside the Harris campaign's blitz to win back Silicon Valley" by Cat Zakrzewski, Nitasha Tiku and Elizabeth Dwoskin
"The Little Tech Agenda" by Marc Andreessen and Ben Horowitz
"Silicon Valley had Harris's back for decades. Will she return the favor?" by Cristiano Lima-Strong and Cat Zakrzewski
"SEC's Gensler turns tide against crypto in courts" by Declan Harty
"Trump vs. Harris is dividing Silicon Valley into feuding political camps" by Trisha Thadani, Elizabeth Dwoskin, Nitasha Tiku and Gerrit De Vynck
"Inside the powerful Peter Thiel network that anointed JD Vance" by Elizabeth Dwoskin, Cat Zakrzewski, Nitasha Tiku and Josh Dawsey
-
What kind of future are we building for ourselves? In some ways, that's the central question of this show.
It's also a central question of speculative fiction. And one that few people have tried to answer as thoughtfully – and as poetically – as Emily St. John Mandel.
Mandel is one of Canada's great writers. She's the author of six award-winning novels, the most recent of which is Sea of Tranquility – a story about a future where we have moon colonies and time-travelling detectives. But Mandel might be best known for Station Eleven, which was made into a big HBO miniseries in 2021. In Station Eleven, Mandel envisioned a very different future. One where a pandemic has wiped out nearly everyone on the planet, and the world has returned to a pre-industrial state. In other words, a world without technology.
I think speculative fiction carries tremendous power. In fact, I think that AI is ultimately an act of speculation. The AI we have chosen to build, and our visions of what AI could become, have been shaped by acts of imagination.
So I wanted to speak to someone who has made a career imagining other worlds, and thinking about how humans will fit into them.
Mentioned:
"Last Night in Montreal" by Emily St. John Mandel
"Station Eleven" by Emily St. John Mandel
The Nobel Prize in Literature 2014 – Lecture by Patrick Modiano
"The Glass Hotel" by Emily St. John Mandel
"Sea of Tranquility" by Emily St. John Mandel
Summary of the 2023 WGA MBA, Writers Guild of America
Her (2013)
"The Handmaid's Tale" by Margaret Atwood
"Shell Game" by Evan Ratliff
Replika
Further Reading:
"Can AI Companions Cure Loneliness?" Machines Like Us
"Yoshua Bengio Doesn't Think We're Ready for Superhuman AI. We're Building It Anyway." Machines Like Us
"The Road" by Cormac McCarthy
-
A couple of weeks ago, I was at this splashy AI conference in Montreal called All In. It was – how should I say this – a bit over the top. There were smoke machines, thumping dance music, food trucks. It was a far cry from the quiet research labs where AI was developed.
While I remain skeptical of the promise of artificial intelligence, this conference made it clear that the industry is, well, all in. The stage was filled with startup founders promising that AI was going to revolutionize the way we work, and government officials saying AI was going to supercharge the economy.
And then there was Yoshua Bengio.
Bengio is one of AI's pioneering figures. In 2018, he and two colleagues won the Turing Award – the closest thing computer science has to a Nobel Prize – for their work on deep learning. In 2022, he was the most cited computer scientist in the world. It wouldn't be hyperbolic to suggest that AI as we know it today might not exist without Yoshua Bengio.
But in the last couple of years, Bengio has had an epiphany of sorts. And he now believes that, left unchecked, AI has the potential to wipe out humanity. So these days, he's dedicated himself to AI safety. He's a professor at the University of Montreal and the founder of MILA, the Quebec Artificial Intelligence Institute.
And he was at this big AI conference too, amidst all these Silicon Valley types, pleading with the industry to slow down before it's too late.
Mentioned:
"Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks" by Yoshua Bengio
"Deep Learning" by Yann LeCun, Yoshua Bengio, Geoffrey Hinton
"Computing Machinery and Intelligence" by Alan Turing
"International Scientific Report on the Safety of Advanced AI"
"Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?" by R. Ren et al.
"SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act"
Further Reading:
"'Deep Learning' Guru Reveals the Future of AI" by Cade Metz
"Montréal Declaration for a Responsible Development of Artificial Intelligence"
"This A.I. Subculture's Motto: Go, Go, Go" by Kevin Roose
"Reasoning through arguments against taking AI safety seriously" by Yoshua Bengio
-
In 2015, 195 countries gathered in Paris to discuss how to address the climate crisis. Although there was plenty they couldn't agree on, there was one point of near-absolute consensus: if the planet becomes 2°C hotter than it was before industrialization, the effects will be catastrophic. Despite that consensus, we have continued barrelling toward that 2°C threshold. And while the world is finally paying attention to climate change, the pace of our action is radically out of step with the severity of the problem.
What is becoming increasingly clear is that just cutting our emissions – by switching to clean energy or driving electric cars – will not be sufficient. We will also need some bold technological solutions if we want to maintain some semblance of life as we know it.
Luckily, everything is on the table. Grinding entire mountains into powder and dumping them into oceans. Sucking carbon directly out of the air and burying it underground. Spraying millions of tons of sulphur dioxide directly into the atmosphere.
Gwynne Dyer has spent the past four years interviewing the world's leading climate scientists about the moonshots that could save the planet. Dyer is a journalist and historian who has written a dozen books over his career, and has become one of Canada's most trusted commentators on war and geopolitics.
But his latest book, Intervention Earth, is about the battle to save the planet.
Like any reporting on the climate, it's inevitably a little depressing. But with this book Dyer has also given us a different way of thinking about the climate crisis – and maybe even a road map for how technology could help us avoid our own destruction.
Mentioned:
"Intervention Earth: Life-Saving Ideas from the World's Climate Engineers" by Gwynne Dyer
"Scientists warn Earth warming faster than expected – due to reduction in ship pollution" by Nicole Mortillaro
"Global warming in the pipeline" by James Hansen, et al.
"Albedo Enhancement by Stratospheric Sulfur Injections: A Contribution to Resolve a Policy Dilemma?" by Paul Crutzen
Further Reading:
Interview with Hans Joachim Schellnhuber and Gwynne Dyer
-
For nearly a year now, the world has been transfixed – and horrified – by what's happening in the Gaza Strip. Yet for all the media coverage, far less seems to be known about how this war is actually being fought. And the how of this conflict, and its enormous human toll, might end up being its most enduring legacy.
In April, the Israeli magazine +972 published a story describing how Israel was using an AI system called Lavender to target potential enemies for air strikes, sometimes with a margin of error as high as 10 per cent.
I remember reading that story back in the spring and being shocked, not that such tools existed, but that they were already being used at this scale on the battlefield. P.W. Singer was less surprised. Singer is one of the world's foremost experts on the future of warfare. He's a strategist at the think tank New America, a professor of practice at Arizona State University, and a consultant for everyone from the US military to the FBI.
So if anyone can help us understand the black box of autonomous weaponry and AI warfare, it's P.W. Singer.
Mentioned:
"'The Gospel': how Israel uses AI to select bombing targets in Gaza" by Harry Davies, Bethan McKernan, and Dan Sabbagh
"'Lavender': The AI machine directing Israel's bombing spree in Gaza" by Yuval Abraham
"Ghost Fleet: A Novel of the Next World War" by P. W. Singer and August Cole
Further Reading:
"Burn-In: A Novel of the Real Robotic Revolution" by P. W. Singer and August Cole
"The AI revolution is already here" by P. W. Singer
"Humans must be held responsible for decisions AI weapons make" in The Asahi Shimbun
"Useful Fiction"
-
Things do not look good for journalism right now. This year, Bell Media, VICE, and the CBC all announced significant layoffs. In the US, there were cuts at the Washington Post, the LA Times, Vox and NPR – to name just a few. A recent study from Northwestern University found that an average of two and a half American newspapers closed down every single week in 2023 (up from two a week the year before).
One of the central reasons for this is that the advertising model that has supported journalism for more than a century has collapsed. Simply put, Google and Meta have built a better advertising machine, and they've crippled journalism's business model in the process.
It wasn't always obvious this was going to happen. Fifteen or twenty years ago, a lot of publishers were actually making deals with social media companies, thinking they were going to lead to bigger audiences and more clicks.
But these turned out to be Faustian bargains. The journalism industry took a nosedive, while Google and Meta became two of the most profitable companies in the world.
And now we might be doing it all over again with a new wave of tech companies like OpenAI. Over the past several years, OpenAI, operating in a kind of legal grey area, has trained its models on news content it hasn't paid for. While some news outlets, like the New York Times, have chosen to sue OpenAI for copyright infringement, many publishers (including The Atlantic, the Financial Times, and News Corp) have elected to sign deals with OpenAI instead.
Julia Angwin has been worried about the thorny relationship between big tech and journalism for years. She's written a book about MySpace, documented the rise of big tech, and won a Pulitzer for her tech reporting with the Wall Street Journal.
She was also one of the few people warning publishers the first time around that making deals with social media companies maybe wasn't the best idea.
Now, she's ringing the alarm again, this time as a New York Times contributing opinion writer and the CEO of a journalism startup called Proof News that is preoccupied with the question of how to get people reliable information in the age of AI.
Mentioned:
"Stealing MySpace: The Battle to Control the Most Popular Website in America" by Julia Angwin
"What They Know" WSJ series by Julia Angwin
"The Bad News About the News" by Robert G. Kaiser
"The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work" by Michael M. Grynbaum and Ryan Mac
"Seeking Reliable Election Information? Don't Trust AI" by Julia Angwin, Alondra Nelson, Rina Palta
Further Reading:
"Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance" by Julia Angwin
"A Letter From Our Founder" by Julia Angwin
-
Last year, the venture capitalist Marc Andreessen published a document he called "The Techno-Optimist Manifesto." In it, he argued that "everything good is downstream of growth," that government regulation is bad, and that the only way to achieve real progress is through technology.
Of course, Silicon Valley has always been driven by libertarian sensibilities and an optimistic view of technology. But the radical techno-optimism of people like Andreessen, and billionaire entrepreneurs like Peter Thiel and Elon Musk, has morphed into something more extreme. In their view, technology and government are always at odds with one another.
But if that's true, then how do you explain someone like Audrey Tang?
Tang, who, until May of this year, was Taiwan's first Minister of Digital Affairs, is unabashedly optimistic about technology. But she's also a fervent believer in the power of democratic government.
To many in Silicon Valley, this is an oxymoron. But Tang doesn't see it that way. To her, technology and government are – and have always been – symbiotic.
So I wanted to ask her what a technologically enabled democracy might look like – and she has plenty of ideas. At times, our conversation got a little bit wonky. But ultimately, this is a conversation about a better, more inclusive form of democracy. And why she thinks technology will get us there.
Just a quick note: we recorded this interview a couple of months ago, while Tang was still the Minister of Digital Affairs.
Mentioned:
"vTaiwan"
"Polis"
"Plurality: The Future of Collaborative Technology and Democracy" by E. Glen Weyl, Audrey Tang and ⿻ Community
"Collective Constitutional AI: Aligning a Language Model with Public Input," Anthropic
Further Reading:
"The simple but ingenious system Taiwan uses to crowdsource its laws" by Chris Horton
"How Taiwan's Unlikely Digital Minister Hacked the Pandemic" by Andrew Leonard
-
If you listened to our last couple of episodes, you'll have heard some pretty skeptical takes on AI. But if you look at the stock market right now, you won't see any trace of that skepticism. Since the launch of ChatGPT in late 2022, the chip company NVIDIA, whose chips are used in the majority of AI systems, has seen its stock shoot up by 700%. A month ago, that briefly made it the most valuable company in the world, with a market cap of more than $3.3 trillion.
And it's not just chip companies. The S&P 500 (the index that tracks the 500 largest companies in the U.S.) is at an all-time high this year, in no small part because of the sheen of AI. And here in Canada, a new report from Microsoft claims that generative AI will add $187 billion to the domestic economy by 2030.
As wild as these numbers are, they may just be the tip of the iceberg. Some researchers argue that AI will completely revolutionize our economy, leading to per capita growth rates of 30%. In case those numbers mean absolutely nothing to you, 25 years of 30% growth would leave us roughly 700 times richer than we are now. It's hard to imagine what that world would look like – or how the average person fits into it.
Luckily, Rana Foroohar has given this some thought. Foroohar is a global business columnist and an associate editor at The Financial Times. I wanted to have her on the show to help me work through what these wild predictions really mean and, most importantly, whether or not she thinks they'll come to fruition.
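For readers who want to sanity-check the compounding behind that growth claim, here is a quick back-of-envelope calculation (my own sketch, not from the episode):

```python
# Back-of-envelope check of the claim above:
# compounding 30% annual per-capita growth over 25 years.
growth_rate = 0.30
years = 25
multiplier = (1 + growth_rate) ** years
print(f"{multiplier:.0f}x")  # prints "706x" - roughly 700-fold, i.e. on the order of a thousand-fold
```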
Mentioned:
"Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity" by Daron Acemoglu and Simon Johnson (2023)
"Manias, Panics, and Crashes: A History of Financial Crises" by Charles P. Kindleberger (1978)
"Irrational Exuberance" by Robert J. Shiller (2016)
"Gen AI: Too much spend, too little benefit?" by Goldman Sachs Research (2024)
"Workers could be the ones to regulate AI" by Rana Foroohar (Financial Times, 2023)
"The Financial Times and OpenAI strike content licensing deal" (Financial Times, 2024)
"Is AI about to kill what's left of journalism?" by Rana Foroohar (Financial Times, 2024)
"Deaths of Despair and the Future of Capitalism" by Anne Case and Angus Deaton (2020)
"The China Shock: Learning from Labor Market Adjustment to Large Changes in Trade" by David H. Autor, David Dorn & Gordon H. Hanson (2016)
Further Reading:
"Beware AI euphoria" by Rana Foroohar (Financial Times, 2024)
"AlphaGo" by Google DeepMind (2020)
-
Douglas Rushkoff has spent the last thirty years studying how digital technologies have shaped our world. The renowned media theorist is the author of twenty books, the host of the Team Human podcast, and a professor of Media Theory and Digital Economics at City University of New York. But when I sat down with him, he didn't seem all that excited to be talking about AI. Instead, he suggested – I think only half jokingly – that he'd rather be talking about the new reboot of Dexter.
Rushkoff's lack of enthusiasm around AI may stem from the fact that he doesn't see it as the ground-shifting technology that some do. Rather, he sees generative artificial intelligence as just the latest in a long line of communication technologies – more akin to radio or television than fire or electricity.
But while he may not believe that artificial intelligence is going to bring about some kind of techno-utopia, he does think its impact will be significant. So eventually we did talk about AI. And we ended up having an incredibly lively conversation about whether computers can create real art, how the "California ideology" has shaped artificial intelligence, and why it's not too late to ensure that technology is enabling human flourishing – not eroding it.
Mentioned:
"Cyberia" by Douglas Rushkoff
"The Original WIRED Manifesto" by Louis Rossetto
"The Long Boom: A History of the Future, 1980–2020" by Peter Schwartz and Peter Leyden
"Survival of the Richest: Escape Fantasies of the Tech Billionaires" by Douglas Rushkoff
"Artificial Creativity: How AI teaches us to distinguish between humans, art, and industry" by Douglas Rushkoff
"Empirical Science Began as a Domination Fantasy" by Douglas Rushkoff
"A Declaration of the Independence of Cyberspace" by John Perry Barlow
"The Californian Ideology" by Richard Barbrook and Andy Cameron
"Can AI Bring Humanity Back to Health Care?" Machines Like Us Episode 5
Further Reading:
"The Medium is the Massage: An Inventory of Effects" by Marshall McLuhan
"Technopoly: The Surrender of Culture to Technology" by Neil Postman
"Amusing Ourselves to Death" by Neil Postman
-
It seems like the loudest voices in AI often fall into one of two groups. There are the boomers – the techno-optimists – who think that AI is going to bring us into an era of untold prosperity. And then there are the doomers, who think there's a good chance AI is going to lead to the end of humanity as we know it.
While these two camps are, in many ways, completely at odds with one another, they do share one thing in common: they both buy into the hype of artificial intelligence.
But when you dig deeper into these systems, it becomes apparent that both of these visions – the utopian one and the doomy one – are based on some pretty tenuous assumptions.
Kate Crawford has been trying to understand how AI systems are built for more than a decade. She's the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.
Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn't lead to utopia, or take over the world, it is transforming the planet – by depleting its natural resources, exploiting workers, and sucking up our personal data. And that's something we need to be paying attention to.
Mentioned:
"ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine" by Joseph Weizenbaum
"Microsoft, OpenAI plan $100 billion data-center project, media report says," Reuters
"Meta 'discussed buying publisher Simon & Schuster to train AI'" by Ella Creamer
"Google pauses Gemini AI image generation of people after racial 'inaccuracies'" by Kelvin Chan and Matt O'Brien
"OpenAI and Apple announce partnership," OpenAI
Fairwork
"New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms" by Fairwork
"The Work of Copyright Law in the Age of Generative AI" by Kate Crawford, Jason Schultz
"Generative AI's environmental costs are soaring – and mostly secret" by Kate Crawford
"Artificial intelligence guzzles billions of liters of water" by Manuel G. Pascual
"S.3732 – Artificial Intelligence Environmental Impacts Act of 2024"
"Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation" by Peter Greim, A. A. Solomon, Christian Breyer
"Calculating Empires" by Kate Crawford and Vladan Joler
Further Reading:
"Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence" by Kate Crawford
"Excavating AI" by Kate Crawford and Trevor Paglen
"Understanding the work of dataset creators" from Knowing Machines
"Should We Treat Data as Labor? Moving beyond 'Free'" by I. Arrieta-Ibarra et al.
-
Think about the last time you felt let down by the health care system. You probably don't have to go back far. In wealthy countries around the world, medical systems that were once robust are now crumbling. Doctors and nurses, tasked with an ever-expanding range of responsibilities, are busier than ever, which means they have less and less time for patients. In the United States, the average doctor's appointment lasts seven minutes. In South Korea, it's only two.
Without sufficient time and attention, patients are suffering. There are 12 million significant misdiagnoses in the US every year, and 800,000 of those result in death or disability. (While the same kind of data isn't available in Canada, similar trends are almost certainly happening here as well.)
Eric Topol says medicine has become decidedly inhuman – and the consequences have been disastrous. Topol is a cardiologist and one of the most widely cited medical researchers in the world. In his latest book, Deep Medicine, he argues that the best way to make health care human again is to embrace the inhuman, in the form of artificial intelligence.
Mentioned:
"Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again" by Eric Topol
"The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations" by H. Singh, A. Meyer, E. Thomas
"Burden of serious harms from diagnostic error in the USA" by David Newman-Toker, et al.
"How Expert Clinicians Intuitively Recognize a Medical Diagnosis" by J. Brush Jr, J. Sherbino, G. Norman
"A Randomized Controlled Study of Art Observation Training to Improve Medical Student Ophthalmology Skills" by Jaclyn Gurwin, et al.
"Abridge becomes Epic's First Pal, bringing generative AI to more providers and patients, including those at Emory Healthcare"
"Why Doctors Should Organize" by Eric Topol
"How This Rural Health System Is Outdoing Silicon Valley" by Erika Fry
Further Reading:
"The Importance of Being" by Abraham Verghese
-
Earlier this year, Elon Musk's company Neuralink successfully installed one of its brain implants in a 29-year-old quadriplegic man named Noland Arbaugh. The device changed Arbaugh's life. He no longer needs a mouth stylus to control his computer or play video games. Instead, he can use his mind.
The brain-computer interface that Arbaugh uses is part of an emerging field known as neurotechnology that promises to reshape the way we live. A wide range of AI-empowered neurotechnologies may allow disabled people like Arbaugh to regain independence, or give us the ability to erase traumatic memories in patients suffering from PTSD.
But it doesn't take great leaps to envision how these technologies could be abused as well. Law enforcement agencies in the United Arab Emirates have used neurotechnology to read the minds of criminal suspects, and convict them based on what they've found. And corporations are developing ways to advertise to potential customers in their dreams. Remarkably, both of these things appear to be legal, as there are virtually no laws explicitly governing neurotechnology.
All of which makes Nita Farahany's work incredibly timely. Farahany is a professor of law and philosophy at Duke University and the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.
Farahany isn't fatalistic about neurotech – in fact, she uses some of it herself. But she is adamant that we need to start developing laws and guardrails as soon as possible, because it may not be long before governments, employers and corporations have access to our brains.
Mentioned:
"PRIME Study Progress Update – User Experience," Neuralink
"Paralysed man walks using device that reconnects brain with muscles," The Guardian
Cognitive Warfare – NATO's ACT
The Ethics of Neurotechnology: UNESCO appoints international expert group to prepare a new global standard
-
When Eugenia Kuyda saw Her for the first time – the 2013 film about a man who falls in love with his virtual assistant – it didn't read as science fiction. That's because she was developing a remarkably similar technology: an AI chatbot that could function as a close friend, or even a romantic partner.
That idea would eventually become the basis for Replika, Kuyda's AI startup. Today, Replika has millions of active users – that's millions of people who have AI friends, AI siblings and AI partners.
When I first heard about the idea behind Replika, I thought it sounded kind of dystopian. I envisioned a world where we'd rather spend time with our AI friends than our real ones. But that's not the world Kuyda is trying to build. In fact, she thinks chatbots will actually make people more social, not less, and that the cure for our technologically exacerbated loneliness might just be more technology.
Mentioned:
"ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine" by Joseph Weizenbaum
"elizabot.js," implemented by Norbert Landsteiner
"Speak, Memory" by Casey Newton (The Verge)
"Creating a safe Replika experience" by Replika
"The Year of Magical Thinking" by Joan Didion
Additional Reading:
The Globe & Mail: "They fell in love with the Replika AI chatbot. A policy update left them heartbroken"
"Loneliness and suicide mitigation for students using GPT3-enabled chatbots" by Maples, Cerit, Vishwanath, & Pea
"Learning from intelligent social agents as social and intellectual mirrors" by Maples, Pea, Markowitz
-
In the last few years, artificial intelligence has gone from a novelty to perhaps the most influential technology we've ever seen. The people building AI are convinced that it will eradicate disease, turbocharge productivity, and solve climate change. It feels like we're on the cusp of a profound societal transformation. And yet, I can't shake the feeling we've been here before. Fifteen years ago, there was a similar wave of optimism around social media: it was going to connect the world, catalyze social movements and spur innovation. It may have done some of these things. But it also made us lonelier, angrier, and occasionally detached from reality.
Few people understand this trajectory better than Maria Ressa. Ressa is a Filipino journalist, and the CEO of a news organization called Rappler. Like many people, she was once a fervent believer in the power of social media. Then she saw how it could be abused. In 2016, she reported on how Rodrigo Duterte, then president of the Philippines, had weaponized Facebook in the election he'd just won. After publishing those stories, Ressa became a target herself, and her inbox was flooded with death threats. In 2021, she won the Nobel Peace Prize.
I wanted this to be our first episode because I think, as novel as AI is, it has undoubtedly been shaped by the technologies, the business models, and the CEOs that came before it. And Ressa thinks we're about to repeat the mistakes we made with social media all over again.
Mentioned:
"How to Stand Up to a Dictator" by Maria Ressa
"A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism" by Thompson et al.
Rappler's Matrix Protocol Chat App: Rappler Communities
"Democracy Report 2023: Defiance in the Face of Autocratization" by V-Dem
"The Foundation Model Transparency Index" by Stanford HAI (Human-Centered Artificial Intelligence)
"All the ways Trump's campaign was aided by Facebook, ranked by importance" by Philip Bump (The Washington Post)
"Our Epidemic of Loneliness and Isolation" by U.S. Surgeon General Dr. Vivek H. Murthy
-
We are living in an age of breakthroughs propelled by advances in artificial intelligence. Technologies that were once the realm of science fiction will become our reality: robot best friends, bespoke gene editing, brain implants that make us smarter.
Every other Tuesday, Taylor Owen sits down with someone shaping this rapidly approaching future.
The first two episodes will be released on May 7th. Subscribe now so you don’t miss an episode.