Episodes
-
Last week, the House of Representatives overwhelmingly passed a bill that would require ByteDance, the Chinese company that owns the popular social media app TikTok, to divest its ownership of the platform or face a ban on TikTok in the United States. The bill's prospects in the Senate remain uncertain, but President Biden has said he will sign it if it reaches his desk, making this the most serious attempt yet to ban the controversial app.
Today's podcast is the latest in a series of conversations we've had about TikTok. Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, led a conversation with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, and Ramya Krishnan, a Senior Staff Attorney at the Knight First Amendment Institute at Columbia University. They talked about the First Amendment implications of a TikTok ban, whether it's a good idea as a policy matter, and how we should think about foreign ownership of platforms more generally.
Disclaimer: Matt's center receives funding from foundations and tech companies, including funding from TikTok.
-
Today, we're bringing you an episode of Arbiters of Truth, our series on the information ecosystem.
On March 18, the Supreme Court heard oral arguments in Murthy v. Missouri, concerning the potential First Amendment implications of government outreach to social media platforms, what's sometimes known as jawboning. The case arrived at the Supreme Court with a somewhat shaky evidentiary record, but the legal questions raised by government requests or demands to remove online content are real.
To make sense of it all, Lawfare Senior Editor Quinta Jurecic and Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill, called up Alex Abdo, the Litigation Director of the Knight First Amendment Institute at Columbia University. While the law is unsettled, the Supreme Court seemed skeptical of the plaintiffs' claims of government censorship. But what is the best way to determine which government contacts and requests are and aren't permissible?
If you're interested in more, you can read the Knight Institute's amicus brief in Murthy here and Knight's series on jawboning, including Perault's reflections, here.
-
In May 2023, Montana passed a new law that would ban the use of TikTok within the state starting on January 1, 2024. But as of today, TikTok is still legal in the state of Montana, thanks to a preliminary injunction issued by a federal district judge, who found that the Montana law likely violated the First Amendment. In Texas, meanwhile, another federal judge recently upheld a more limited ban against the use of TikTok on state-owned devices. What should we make of these rulings, and how should we understand the legal status of efforts to ban TikTok?
We've discussed the question of TikTok bans and the First Amendment before on the Lawfare Podcast, when Lawfare Senior Editor Alan Rozenshtein and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, sat down with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law. In light of the Montana and Texas rulings, Matt and Lawfare Senior Editor Quinta Jurecic decided to bring the gang back together and talk about where the TikTok bans stand with Ramya and Mary-Rose, on this episode of Arbiters of Truth, our series on the information ecosystem.
-
In 2021, the Wall Street Journal published a monster scoop: a series of articles about Facebook's inner workings, which showed that employees within the famously secretive company had raised alarms about potential harms caused by Facebook's products. Now, Jeff Horwitz, the reporter behind that scoop, has a new book out, titled "Broken Code," which dives even deeper into the documents he uncovered from within the company. He's one of the most rigorous reporters covering Facebook, now known as Meta.
On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Jeff along with Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill, who also has close knowledge of Meta from his own time working at the company. They discussed Jeff's reporting and debated what his findings tell us about how Meta functions as a company and how best to understand its responsibilities for harms traced back to its products.
-
Unless you've been living under a rock, you've probably heard a great deal over the last year about generative AI and how it's going to reshape various aspects of our society. That includes elections. With one year until the 2024 U.S. presidential election, we thought it would be a good time to step back and take a look at how generative AI might and might not make a difference when it comes to the political landscape. Luckily, Matt Perault and Scott Babwah Brennen of the UNC Center on Technology Policy have a new report out on just that subject, examining generative AI and political ads.
On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Lawfare's Fellow in Technology Policy and Law, Eugenia Lostri, sat down with Matt and Scott to talk through the potential risks and benefits of generative AI when it comes to political advertising. Which concerns are overstated, and which are worth closer attention as we move toward 2024? How should policymakers respond to new uses of this technology in the context of elections?
-
Over the course of the last two presidential elections, efforts by social media platforms and independent researchers to prevent falsehoods from spreading about election integrity have become increasingly central to civic health. But the warning signs are flashing as we head into 2024. And platforms are arguably in a worse position to counter falsehoods today than they were in 2020.
How could this be? On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Dean Jackson, who previously appeared on the Lawfare Podcast to discuss his work as a staffer on the Jan. 6 committee. He worked with the Center for Democracy and Technology to put out a new report on the challenges facing efforts to prevent the spread of election disinformation. They talked through the political, legal, and economic pressures that are making this work increasingly difficult, and what it means for 2024.
-
Today, we're bringing you an episode of Arbiters of Truth, our series on the information ecosystem. And we're discussing the hot topic of the moment: artificial intelligence. There are a lot of less-than-informed takes out there about AI and whether it's going to kill us all, so we're glad to be able to share an interview that hopefully cuts through some of that noise.
Janet Haven is the Executive Director of the nonprofit Data and Society and a member of the National Artificial Intelligence Advisory Committee, which provides guidance to the White House on AI issues. Lawfare Senior Editor Quinta Jurecic sat down alongside Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, to talk through their questions about AI governance with Janet. They discussed how she evaluates the dangers and promises of artificial intelligence, how to weigh the possible future existential risks AI poses to society against its immediate potential downsides in our everyday lives, and what kind of regulation she'd like to see in this space.
If you're interested in reading further, Janet mentions this paper from Data and Society on "Democratizing AI" in the course of the conversation.
-
How much influence do social media platforms have on American politics and society? It's a tough question for researchers to answer, not just because it's so big, but also because platforms rarely, if ever, provide all the data that would be needed to address the problem.
A new batch of papers released in the journals Science and Nature marks the latest attempt to tackle this question, with access to data provided by Facebook's parent company Meta. The 2020 Facebook & Instagram Election Study, a partnership between Meta researchers and outside academics, studied the platforms' impact on the 2020 election and uncovered some nuanced findings, suggesting that these impacts might be smaller than you'd expect.
Today on Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic are joined by the project's co-leaders, Talia Stroud of the University of Texas at Austin and Joshua A. Tucker of NYU. They discuss their findings, what it was like to work with Meta, and whether or not this is a model for independent academic research on platforms going forward.
(If you're interested in more on the project, you can find links to the papers and an overview of the findings here, and an FAQ, provided by Tucker and Stroud, here.)
-
Earlier this year, Brian Fishman published a fantastic paper with Brookings thinking through how technology platforms grapple with terrorism and extremism, and how any reform to Section 230 must allow those platforms space to continue doing that work. That's the short description, but the paper is really about so much more: about how the work of content moderation actually takes place, how contemporary analyses of the harms of social media fail to address the history of how platforms addressed Islamist terror, and how we should understand "the original sin of the internet."
For this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down to talk with Brian about his work. Brian is the co-founder of Cinder, a software platform for the kind of trust and safety work we describe here, and he was formerly a policy director at Meta, where he led the company's work on dangerous individuals and organizations.
-
Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content.
On this episode of Arbiters of Truth, Lawfare's occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, talked through this question with Senator Ron Wyden and Chris Cox, formerly a U.S. congressman and SEC chairman. Cox and Wyden drafted Section 230 together in 1996, and they're skeptical that its protections apply to generative AI.
Disclosure: Matt consults on tech policy issues, including with platforms that work on generative artificial intelligence products and have interests in the issues discussed.
-
In 2018, news broke that Facebook had allowed third-party developers, including the controversial data analytics firm Cambridge Analytica, to obtain large quantities of user data in ways that users probably didn't anticipate. The fallout led to a controversy over whether Cambridge Analytica had in some way swung the 2016 election for Trump (spoiler: it almost certainly didn't), but it also generated a $5 billion fine imposed on Facebook by the FTC for violating users' privacy. Along with that record-breaking fine, the FTC also imposed a number of requirements on Facebook to improve its approach to privacy.
It's been four years since that settlement, and Facebook is now Meta. So how much has really changed within the company? For this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic interviewed Meta's co-chief privacy officers, Erin Egan and Michel Protti, about the company's approach to privacy and its response to the FTC's settlement order.
At one point in the conversation, Quinta mentions a class action settlement over the Cambridge Analytica scandal. You can read more about the settlement here. Information about Facebook's legal arguments regarding user privacy interests is available here and here, and you can find more details in the judge's ruling denying Facebook's motion to dismiss.
Note: Meta provides support for Lawfare's Digital Social Contract paper series. This podcast episode is not part of that series, and Meta does not have any editorial role in Lawfare.
-
If someone lies about you, you can usually sue them for defamation. But what if that someone is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow that? What does it even mean for a large language model to act with "malice"? Does the First Amendment put any limits on the ability to hold these models, and the companies that make them, accountable for false statements they make? And what's the best way to deal with this problem: private lawsuits or government regulation?
On this episode of Arbiters of Truth, our series on the information ecosystem, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, discussed these questions with First Amendment expert Eugene Volokh, Professor of Law at UCLA and the author of a draft paper entitled "Large Libel Models."
-
Over the past few years, TikTok has become a uniquely polarizing social media platform. On the one hand, millions of users, especially those in their teens and twenties, love the app. On the other hand, the government is concerned that TikTok's vulnerability to pressure from the Chinese Communist Party makes it a serious national security threat. There's even talk of banning the app altogether. But would that be legal? In particular, does the First Amendment allow the government to ban an application that's used by millions to communicate every day?
On this episode of Arbiters of Truth, our series on the information ecosystem, Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and Alan Z. Rozenshtein, Lawfare Senior Editor and Associate Professor of Law at the University of Minnesota, spoke with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law, to think through the legal and policy implications of a TikTok ban.
-
On the latest episode of Arbiters of Truth, Lawfare's series on the information ecosystem, Quinta Jurecic and Alan Rozenshtein spoke with Ravi Iyer, the Managing Director of the Psychology of Technology Institute at the University of Southern California's Neely Center.
Earlier in his career, Ravi held a number of positions at Meta, where he worked to make Facebook's algorithm provide actual value, not just "engagement," to users. Quinta and Alan spoke with Ravi about why he thinks that content moderation is a dead end and why thinking about the design of technology is the way forward to make sure that technology serves us and not the other way around.
-
During recent oral arguments in Gonzalez v. Google, a Supreme Court case concerning the scope of liability protections for internet platforms, Justice Neil Gorsuch asked a thought-provoking question. Does Section 230, the statute that shields websites from liability for third-party content, apply to a generative AI model like ChatGPT?
Luckily, Matt Perault of the Center on Technology Policy at the University of North Carolina at Chapel Hill had already been thinking about this question and published a Lawfare article arguing that 230's protections wouldn't extend to content generated by AI. Lawfare Senior Editors Quinta Jurecic and Alan Rozenshtein sat down with Matt and Jess Miers, legal advocacy counsel at the Chamber of Progress, to debate whether ChatGPT's output constitutes third-party content, whether companies like OpenAI should be immune for the output of their products, and why you might want to sue a chatbot in the first place.
-
You've likely heard of ChatGPT, the chatbot from OpenAI. But you've likely never heard an interview with ChatGPT, much less an interview in which ChatGPT reflects on its own impact on the information ecosystem. Nor is it likely that you've ever heard ChatGPT promising to stop producing racist and misogynistic content.
But, on this episode of Arbiters of Truth, Lawfare's occasional series on the information ecosystem, Lawfare editor-in-chief Benjamin Wittes sat down with ChatGPT to talk about a range of things: the pronouns it prefers; academic integrity and the chatbot's likely impact on that; and, importantly, the experiments performed by a scholar named Eve Gaumond, who has been on a one-woman campaign to get ChatGPT to write offensive content. ChatGPT made some pretty solid representations that this kind of thing may be in its past but won't be in its future.
So, following Ben's interview with ChatGPT, he sat down with Eve Gaumond, an AI scholar at the Public Law Center of the University of Montréal, who fact-checked ChatGPT's claims. Can you still get it to write a poem entitled "She Was Smart for a Woman"? Can you get it to write a speech by Heinrich Himmler about Jews? And can you get ChatGPT to write a story belittling the Holocaust?
-
Tech policy reform occupies a strange place in Washington, D.C. Everyone seems to agree that the government should change how it regulates the technology industry, on issues from content moderation to privacy, and yet reform never actually seems to happen. But while the federal government continues to stall, state governments are taking action. More and more, state-level officials are proposing and implementing changes in technology policy. Most prominently, Texas and Florida recently passed laws restricting how platforms can moderate content, which will likely be considered by the Supreme Court later this year.
On this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic spoke with J. Scott Babwah Brennen and Matt Perault of the Center on Technology Policy at UNC-Chapel Hill. In recent months, they've put together two reports on state-level tech regulation. They talked about what's driving this trend, why and how state-level policymaking differs (and doesn't) from policymaking at the federal level, and what opportunities and complications this could create.
-
On November 19, Twitter's new owner Elon Musk announced that he would be reinstating former President Donald Trump's account on the platform, though so far, Trump hasn't taken Musk up on the offer, preferring instead to stay on his bespoke website Truth Social. Meanwhile, Meta's Oversight Board has set a January 2023 deadline for the platform to decide whether or not to return Trump to Facebook following his suspension after the Jan. 6 insurrection. How should we think through the difficult question of how social media platforms should handle the presence of a political leader who delights in spreading falsehoods and ginning up violence?
Luckily for us, Stanford and UCLA recently held a conference on just that. On this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic sat down with the conference's organizers, election law experts Rick Hasen and Nate Persily, to talk about whether Trump should be returned to social media. They debated the tangled issues of Trump's deplatforming and replatforming... and discussed whether, and when, Trump will break the seal and start tweeting again.
-
When Facebook whistleblower Frances Haugen shared a trove of internal company documents with the Wall Street Journal in 2021, some of the most dramatic revelations concerned the company's use of a so-called "cross-check" system that, according to the Journal, essentially exempted certain high-profile users from the platform's usual rules. After the Journal published its report, Facebook, which has since changed its name to Meta, asked the platform's independent Oversight Board to weigh in on the program. And now, a year later, the Board has finally released its opinion.
On this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic sat down with Suzanne Nossel, a member of the Oversight Board and the CEO of PEN America. She talked us through the Board's findings, its criticisms of cross-check, and its recommendations for Meta going forward.
-
It's Election Day in the United States, so while you wait for the results to come in, why not listen to a podcast about the other biggest story obsessing the political commentariat right now? We're talking, of course, about Elon Musk's purchase of Twitter and the billionaire's dramatic and erratic changes to the platform. In response to Musk's takeover, a great number of Twitter users have made the leap to Mastodon, a decentralized platform that offers a very different vision of what social media could look like.
What exactly is decentralized social media, and how does it work? Lawfare Senior Editor Alan Rozenshtein has a paper on just that, and he sat down with Lawfare Senior Editor Quinta Jurecic on the podcast to discuss it for an episode of our Arbiters of Truth series on the online information ecosystem. They were also joined by Kate Klonick, associate professor of law at St. John's University, to hash out the many, many questions about content moderation and the future of the internet sparked by Musk's reign and the new popularity of Mastodon.
Among the works mentioned in this episode:
"Welcome to hell, Elon. You break it, you buy it," by Nilay Patel on The Verge
"Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve," by Mike Masnick on Techdirt