CW: This episode features discussion of suicide and sexual abuse.
In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.ai. The question now is: what's next?
Megan has filed a major new lawsuit against Character.ai in Florida, which could force the company, and potentially the entire AI industry, to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT’s Policy Director.
RECOMMENDED MEDIA
Further reading on Sewell’s story
Laurie Segall’s interview with Megan Garcia
The full complaint filed by Megan against Character.AI
Further reading on suicide bots
Further reading on Noam Shazeer and Daniel De Freitas’ relationship with Google
The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use
Organizations mentioned:
The Tech Justice Law Project
The Social Media Victims Law Center
Mothers Against Media Addiction
Parents SOS
Parents Together
Common Sense Media
RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.
Corrections:
Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users under 17, not under 18.
Meetali referred to Section 230 as providing “full scope immunity” to internet companies; however, Congress has passed subsequent laws that carve out exceptions to that immunity for criminal acts such as sex trafficking and intellectual property theft.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X.
-
Content Warning: This episode contains references to suicide, self-harm, and sexual abuse.
Megan Garcia lost her son Sewell to suicide after he was abused and manipulated by AI chatbots for months. Now, she’s suing the company that made those chatbots. On today’s episode of Your Undivided Attention, Aza sits down with journalist Laurie Segall, who's been following this case for months. Plus, Laurie’s full interview with Megan on her new show, Dear Tomorrow.
Aza and Laurie discuss the profound implications of Sewell’s story on the rollout of AI. Social media began the race to the bottom of the brain stem and left our society addicted, distracted, and polarized. Generative AI is set to supercharge that race, taking advantage of the human need for intimacy and connection amidst a widespread loneliness epidemic. Unless we set down guardrails on this technology now, Sewell’s story may be a tragic sign of things to come, but it also presents an opportunity to prevent further harms moving forward.
If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
The first episode of Dear Tomorrow, from Mostly Human Media
The CHT Framework for Incentivizing Responsible AI Development
Further reading on Sewell’s case
Character.ai’s “About Us” page
Further reading on the addictive properties of AI
RECOMMENDED YUA EPISODES
AI Is Moving Fast. We Need Laws that Will Too.
This Moment in AI: How We Got Here and Where We’re Going
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
The AI Dilemma
-
Social media disinformation did enormous damage to our shared idea of reality. Now, the rise of generative AI has unleashed a flood of high-quality synthetic media into the digital ecosystem. As a result, it's more difficult than ever to tell what’s real and what’s not, a problem with profound implications for the health of our society and democracy. So how do we fix this critical issue?
As it turns out, there’s a whole ecosystem of people working to answer that question. One is computer scientist Oren Etzioni, the CEO of TrueMedia.org, a free, non-partisan, non-profit tool that can detect AI-generated content with a high degree of accuracy. Oren joins the show this week to talk about the problem of deepfakes and disinformation and what he sees as the best solutions.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
TrueMedia.org
Further reading on the deepfaked image of an explosion near the Pentagon
Further reading on the deepfaked robocall pretending to be President Biden
Further reading on the election deepfake in Slovakia
Further reading on the President Obama lip-syncing deepfake from 2017
One of several deepfake quizzes from the New York Times. Test yourself!
The Partnership on AI
C2PA
Witness.org
Truepic
RECOMMENDED YUA EPISODES
‘We Have to Get It Right’: Gary Marcus On Untamed AI
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
Synthetic Humanity: AI & What’s At Stake
CLARIFICATION: Oren said that the largest social media platforms “don’t see a responsibility to let the public know this was manipulated by AI.” Meta has made a public commitment to flagging AI-generated or -manipulated content, whereas other platforms, like TikTok and Snapchat, rely on users to flag it.
-
Historian Yuval Noah Harari says that we are at a critical turning point. One in which AI’s ability to generate cultural artifacts threatens humanity’s role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, ‘alien AI agents’?
In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity’s AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.
This episode was recorded live at the Commonwealth Club World Affairs of California.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan
The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza
Further reading on the Stanford Marshmallow Experiment
Further reading on AlphaGo’s “move 37”
Further Reading on Social.AI
RECOMMENDED YUA EPISODES
This Moment in AI: How We Got Here and Where We’re Going
The Tech We Need for 21st Century Democracy with Divya Siddarth
Synthetic Humanity: AI & What’s At Stake
The AI Dilemma
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
-
It’s a confusing moment in AI. Depending on who you ask, we’re either on the fast track to AI that’s smarter than most humans, or the technology is about to hit a wall. Gary Marcus is in the latter camp. He’s a cognitive psychologist and computer scientist who built his own successful AI start-up. But he’s also been called AI’s loudest critic.
On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right… which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us.
The bottom line: No matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
Link to Gary’s book: Taming Silicon Valley: How We Can Ensure That AI Works for Us
Further reading on the deepfake of the CEO of India's National Stock Exchange
Further reading on the deepfake of an explosion near the Pentagon.
The study Gary cited on AI and false memories.
Footage from Gary and Sam Altman’s Senate testimony.
RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
No One is Immune to AI Harms with Dr. Joy Buolamwini
Correction: Gary mistakenly listed the reliability of GPS systems as 98%. The federal government’s standard for GPS reliability is 95%.
-
AI is moving fast. And as companies race to roll out newer, more capable models, with little regard for safety, the downstream risks of those models become harder and harder to counter. On this week’s episode of Your Undivided Attention, CHT’s policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
The CHT Framework for Incentivizing Responsible AI Development
Further Reading on Air Canada’s Chatbot Fiasco
Further Reading on the Elon Musk Deep Fake Scams
The Full Text of SB1047, California’s AI Regulation Bill
Further reading on SB1047
RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Can We Govern AI? with Marietje Schaake
A First Step Toward AI Regulation with Tom Wheeler
Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.
-
[This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that another harmful “AI” is on the rise: Artificial Intimacy, which is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.
RECOMMENDED MEDIA
Mating in Captivity by Esther Perel
Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire
The State of Affairs by Esther Perel
Esther takes a look at modern relationships through the lens of infidelity
Where Should We Begin? with Esther Perel
Listen in as real couples in search of help bare the raw and profound details of their stories
How’s Work? with Esther Perel
Esther’s podcast that focuses on the hard conversations we're afraid to have at work
Lars and the Real Girl (2007)
A young man strikes up an unconventional relationship with a doll he finds on the internet
Her (2013)
In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need
RECOMMENDED YUA EPISODES
Big Food, Big Tech and Big AI with Michael Moss
The AI Dilemma
The Three Rules of Humane Tech
Digital Democracy is Within Reach with Audrey Tang
CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
-
Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn’t true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O’Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody’s book on the history of lobbying.
The Code: Silicon Valley and the Remaking of America - Margaret’s book on the historical relationship between Silicon Valley and Capitol Hill
More information on the Google antitrust ruling
More Information on KOSPA
More information on the SOPA/PIPA internet blackout
Detailed breakdown of Internet lobbying from Open Secrets
RECOMMENDED YUA EPISODES
U.S. Senators Grilled Social Media CEOs. Will Anything Change?
Can We Govern AI? with Marietje Schaake
The Race to Cooperation with David Sloan Wilson
CORRECTION: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn’t verify the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.
The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.
-
It’s been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what’s happened since then, as funding, research, and public interest in AI have exploded, and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
The AI Dilemma: Tristan and Aza’s talk on the catastrophic risks posed by AI.
Info Sheet on KOSPA: More information on KOSPA from FairPlay.
Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI.
AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva.
Using AlphaFold in the Fight Against Plastic Pollution: More information on Google’s use of AlphaFold to create an enzyme to break down plastics.
Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey.
RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Can We Govern AI? with Marietje Schaake
The Three Rules of Humane Tech
The AI Dilemma
Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.
The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.
-
AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
Sculpting Evolution: Information on Esvelt’s lab at MIT.
SecureDNA: Esvelt’s free platform to provide safeguards for DNA synthesis.
The Framework for Nucleic Acid Synthesis Screening: The Biden admin’s suggested guidelines for DNA synthesis regulation.
Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei’s testimony to Congress.
The AlphaFold Protein Structure Database
RECOMMENDED YUA EPISODES
U.S. Senators Grilled Social Media CEOs. Will Anything Change?
Big Food, Big Tech and Big AI with Michael Moss
The AI Dilemma
Clarification: President Biden’s executive order only applies to labs that receive funding from the federal government, not state governments.
-
Will AI ever start to think by itself? If it did, how would we know, and what would it mean?
In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
Frankenstein by Mary Shelley
A free, plain text version of the Shelley’s classic of gothic literature.
OpenAI’s GPT4o Demo
A video from OpenAI demonstrating GPT4o’s remarkable ability to mimic human sentience.
You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills
The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma.
What It’s Like to Be a Bat
Thomas Nagel’s essay on the nature of consciousness.
Are You Living in a Computer Simulation?
Philosopher Nick Bostrom’s essay on the simulation hypothesis.
Anthropic’s Golden Gate Claude
A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.
RECOMMENDED YUA EPISODES
Esther Perel on Artificial Intimacy
Talking With Animals... Using AI
Synthetic Humanity: AI & What’s At Stake
-
Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?
In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence
Petra’s newly published book on the rollout of high risk tech at the border.
Bots at the Gate
A report co-authored by Petra about Canada’s use of AI technology in their immigration process.
Technological Testing Grounds
A report authored by Petra about the use of experimental technology in EU border enforcement.
Startup Pitched Tasing Migrants from Drones, Video Reveals
An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.
The UNHCR
Information about the global refugee crisis from the UN.
RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
Can We Govern AI? With Marietje Schaake
CLARIFICATION:
The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.
-
This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.
The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.
RECOMMENDED MEDIA
The Right to Warn Open Letter
My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter
Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.
RECOMMENDED YUA EPISODES
A First Step Toward AI Regulation with Tom Wheeler
Spotlight on AI: What Would It Take For This to Go Well?
Big Food, Big Tech and Big AI with Michael Moss
Can We Govern AI? With Marietje Schaake
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
-
Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.
RECOMMENDED MEDIA
Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.
Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.
The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.
The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.
AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza: An investigation into the use of AI targeting systems by the IDF.
RECOMMENDED YUA EPISODES
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Can We Govern AI? with Marietje Schaake
Big Food, Big Tech and Big AI with Michael Moss
The Invisible Cyber-War with Nicole Perlroth
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
-
Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?
RECOMMENDED MEDIA
Power and Progress by Daron Acemoglu and Simon Johnson
Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world
Can we Have Pro-Worker AI?
Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path
Rethinking Capitalism: In Conversation with Daron Acemoglu
The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives
RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
The Tech We Need for 21st Century Democracy
Can We Govern AI?
An Alternative to Silicon Valley Unicorns
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
-
Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.
This episode was recorded live at the San Francisco Commonwealth Club.
Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.
Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.
-
Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy.
Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.
RECOMMENDED MEDIA
Chip War: The Fight For the World’s Most Critical Technology by Chris Miller
To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips
Gordon Moore Biography & Facts
Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023
AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster
Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production
RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Protecting Our Freedom of Thought with Nita Farahany
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
-
What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more.
RECOMMENDED MEDIA
Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page
This academic paper addresses tough questions for Americans: Who governs? Who really rules?
Recursive Public
Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance
A Strong Democracy is a Digital Democracy
Audrey Tang’s 2019 op-ed for The New York Times
The Frontiers of Digital Democracy
Nathan Gardels interviews Audrey Tang in Noema
RECOMMENDED YUA EPISODES
Digital Democracy is Within Reach with Audrey Tang
The Tech We Need for 21st Century Democracy with Divya Siddarth
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI Dilemma
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
-
Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?
Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.
Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.
RECOMMENDED MEDIA
Get Media Savvy
Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families
The Power of One by Frances Haugen
The inside story of Frances’s quest to bring transparency and accountability to Big Tech
RECOMMENDED YUA EPISODES
Real Social Media Solutions, Now with Frances Haugen
A Conversation with Facebook Whistleblower Frances Haugen
Are the Kids Alright?
Social Media Victims Lawyer Up with Laura Marquez-Garrett
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
-
Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.
Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.
RECOMMENDED MEDIA
Revenge Porn: The Cyberwar Against Women
In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn
The Cult of the Constitution
In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism
Fake Explicit Taylor Swift Images Swamp Social Media
Calls to protect women and crack down on the platforms and technology that spread such images have been reignited
RECOMMENDED YUA EPISODES
No One is Immune to AI Harms
Esther Perel on Artificial Intimacy
Social Media Victims Lawyer Up
The AI Dilemma
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_