Episodes
-
Plus, “Circuit Breakers” for AI systems, and updates on China's AI industry.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Supreme Court Decision Could Limit Federal Ability to Regulate AI
In a recent decision, the Supreme Court overruled the 1984 precedent Chevron v. Natural Resources Defense Council. In this story, we discuss the decision's implications for regulating AI.
Chevron allowed agencies to flexibly apply expertise when regulating. The “Chevron doctrine” had required courts to defer to a federal agency's interpretation of an ambiguous statute, so long as that interpretation was reasonable. Its elimination curtails federal agencies’ ability to regulate, including, as this article from LawAI explains, their ability to regulate AI.
The Chevron doctrine expanded federal agencies’ ability to regulate in at least two ways. First, agencies could draw on their technical expertise to interpret ambiguous statutes [...]
---
Outline:
(00:16) Supreme Court Decision Could Limit Federal Ability to Regulate AI
(02:18) “Circuit Breakers” for AI Systems
(04:45) Updates on China's AI Industry
(07:32) Links
---
First published: July 9th, 2024
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-38-supreme-court
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
-
US Launches Antitrust Investigations
The U.S. Government has launched antitrust investigations into Nvidia, OpenAI, and Microsoft. The U.S. Department of Justice (DOJ) and Federal Trade Commission (FTC) have agreed to investigate potential antitrust violations by the three companies, the New York Times reported. The DOJ will lead the investigation into Nvidia while the FTC will focus on OpenAI and Microsoft.
Antitrust investigations are conducted by government agencies to determine whether companies are engaging in anticompetitive practices that may harm consumers and stifle competition.
Nvidia investigated for GPU dominance. The New York Times reports that concerns have been raised about Nvidia's dominance in the GPU market, “including how the company's software locks [...]
---
Outline:
(00:10) US Launches Antitrust Investigations
(02:58) Recent Criticisms of OpenAI and Anthropic
(05:40) Situational Awareness
(09:14) Links
---
First published: June 18th, 2024
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-37-us-launches
-
Voluntary Commitments are Insufficient
AI companies agree to RSPs in Seoul. Following the second global AI summit, held in Seoul, the UK and Republic of Korea governments announced that 16 major technology organizations, including Amazon, Google, Meta, Microsoft, OpenAI, and xAI, have agreed to a new set of Frontier AI Safety Commitments.
Some commitments from the agreement include:
Assessing risks posed by AI models and systems throughout the AI lifecycle.
Setting thresholds for severe risks, defining when a model or system would pose intolerable risk if not adequately mitigated.
Keeping risks within defined thresholds, such as by modifying system behaviors and implementing robust security controls.
Potentially halting development or deployment if risks cannot be sufficiently mitigated.
These commitments [...]
---
Outline:
(00:03) Voluntary Commitments are Insufficient
(02:45) Senate AI Policy Roadmap
(05:18) Chapter 1: Overview of Catastrophic Risks
(07:56) Links
---
First published: May 30th, 2024
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-35-voluntary
-
OpenAI and Google Announce New Multimodal Models
In the current paradigm of AI development, there are long delays between the release of successive models. Progress is largely driven by increases in computing power, and training models with more computing power requires building large new data centers.
More than a year after the release of GPT-4, OpenAI has yet to release GPT-4.5 or GPT-5, which would presumably be trained on 10x or 100x more compute than GPT-4, respectively. These models might be released over the next year or two, and could represent large spikes in AI capabilities.
But OpenAI did announce a new model last week, called GPT-4o. The “o” stands for “omni,” referring to the fact that the model can use text, images, videos [...]
---
Outline:
(00:03) OpenAI and Google Announce New Multimodal Models
(02:36) The Surge in AI Lobbying
(05:29) How Should Copyright Law Apply to AI Training Data?
(10:10) Links
---
First published: May 16th, 2024
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-35-lobbying
-
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute
In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute. But reporting from Politico shows that these commitments have fallen through.
OpenAI, Anthropic, and Meta have all failed to share their models with the UK AISI before deployment. Only Google DeepMind, headquartered in London, has given pre-deployment access to UK AISI.
Anthropic released the most powerful publicly available language model, Claude 3, without any window for pre-release testing by the UK AISI. When asked for comment, Anthropic co-founder Jack Clark said, “Pre-deployment testing is a nice idea but very difficult to implement.”
When asked about their concerns with pre-deployment testing [...]
---
Outline:
(00:03) AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute
(02:17) New Bipartisan AI Policy Proposals in the US Senate
(06:35) Military AI in Israel and the US
(11:44) New Online Course on AI Safety from CAIS
(12:38) Links
---
First published: May 1st, 2024
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-34-new-military
-
This week, we cover:
Consolidation in the corporate AI landscape, as smaller startups join forces with larger funders.
Several countries have announced new investments in AI, including Singapore, Canada, and Saudi Arabia.
Congress's budget for 2024 provides some but not all of the requested funding for AI policy. The White House's 2025 proposal makes more ambitious requests for AI funding.
How will AI affect biological weapons risk? We reexamine this question in light of new experiments from RAND, OpenAI, and others.
AI Startups Seek Support From Large Financial Backers
As AI development demands ever-increasing compute resources, only well-resourced developers can compete at the frontier. In practice, this means that AI startups must either partner with the world's [...]
---
Outline:
(00:45) AI Startups Seek Support From Large Financial Backers
(03:47) National AI Investments
(05:16) Federal Spending on AI
(08:35) An Updated Assessment of AI and Biorisk
(15:35) $250K in Prizes: SafeBench Competition Announcement
(16:08) Links
---
First published: April 11th, 2024
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-33-reassessing
-
Measuring and Reducing Hazardous Knowledge
The recent White House Executive Order on Artificial Intelligence highlights risks of LLMs in facilitating the development of bioweapons, chemical weapons, and cyberweapons.
To help measure these dangerous capabilities, CAIS has partnered with Scale AI to create WMDP: the Weapons of Mass Destruction Proxy, an open source benchmark with more than 4,000 multiple choice questions that serve as proxies for hazardous knowledge across biology, chemistry, and cyber.
This benchmark not only helps the world understand the relative dual-use capabilities of different LLMs, but it also creates a path forward for model builders to remove harmful information from their models through machine unlearning techniques.
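Concretely, scoring a model on a multiple-choice benchmark like WMDP reduces to accuracy over the question set. A minimal sketch follows; the question schema and the toy "model" are illustrative assumptions, not WMDP's actual format:

```python
# Minimal sketch of multiple-choice benchmark scoring. The question
# schema and the toy answer function below are illustrative stand-ins,
# not the WMDP benchmark's actual data format or evaluation harness.

def accuracy(questions, answer_fn):
    """Fraction of questions where the chosen option matches the answer key."""
    correct = sum(
        1 for q in questions
        if answer_fn(q["question"], q["choices"]) == q["answer"]
    )
    return correct / len(questions)

# Toy data: two four-way questions with known answer positions.
toy_questions = [
    {"question": "Q1", "choices": ["a", "b", "c", "d"], "answer": "b"},
    {"question": "Q2", "choices": ["a", "b", "c", "d"], "answer": "c"},
]

def always_second_choice(question, choices):
    # Stand-in for a real model call; always picks the second option.
    return choices[1]

print(accuracy(toy_questions, always_second_choice))  # 0.5
```

A real evaluation would replace `always_second_choice` with a call to the model under test; unlearning methods aim to push accuracy on hazardous questions toward chance (25% for four options) without degrading general capabilities.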
Measuring hazardous knowledge in bio, chem, and cyber. Current evaluations of dangerous AI capabilities have [...]
---
Outline:
(00:03) Measuring and Reducing Hazardous Knowledge
(04:35) Language models are getting better at forecasting
(07:51) Proposals for Private Regulatory Markets
(14:25) Links
---
First published: March 7th, 2024
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-32-measuring
-
This week, we’ll discuss:
A newly proposed AI bill in California, which would require frontier AI developers to adopt safety and security protocols and would clarify that developers bear legal liability if their AI systems cause unreasonable risks or critical harms to public safety.
Precedents for AI governance from healthcare and biosecurity.
The EU AI Act and job opportunities at their enforcement agency, the AI Office.
A New Bill on AI Policy in California
Several leading AI companies have public plans for how they’ll invest in safety and security as they develop more dangerous AI systems. A new bill in California's state legislature would codify this practice as a legal requirement, and clarify the legal liability faced by developers [...]
---
Outline:
(00:33) A New Bill on AI Policy in California
(04:38) Precedents for AI Policy: Healthcare and Biosecurity
(07:56) Enforcing the EU AI Act
(08:55) Links
---
First published: February 21st, 2024
Source: https://newsletter.safe.ai/p/aisn-31-a-new-ai-policy-bill-in-california
-
Compute Investments Continue To Grow
Pausing AI development has been proposed as a policy for ensuring safety. For example, an open letter last year from the Future of Life Institute called for a six-month pause on training AI systems more powerful than GPT-4.
But one interesting fact about frontier AI development is that it comes with natural pauses that can last many months or years. After releasing a frontier model, it takes time for AI developers to construct new compute clusters with larger numbers of more advanced computer chips. The supply of compute is currently unable to keep up with demand, meaning some AI developers cannot buy enough chips for their needs.
This explains why OpenAI was reportedly limited by GPUs last year. [...]
---
Outline:
(00:06) Compute Investments Continue To Grow
(03:48) Developments in Military AI
(07:19) Japan and Singapore Support AI Safety
(08:57) Links
---
First published: January 24th, 2024
Source: https://newsletter.safe.ai/p/aisn-30-investments-in-compute-and
-
A Provisional Agreement on the EU AI Act
On December 8th, the EU Parliament, Council, and Commission reached a provisional agreement on the EU AI Act. The agreement regulates the deployment of AI in high risk applications such as hiring and credit pricing, and it bans private companies from building and deploying AI for unacceptable applications such as social credit scoring and individualized predictive policing.
Despite lobbying by some AI startups against regulation of foundation models, the agreement contains risk assessment and mitigation requirements for all general-purpose AI systems. Specific requirements apply to AI systems trained with >10^25 FLOP, such as Google's Gemini and OpenAI's GPT-4.
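To put the 10^25 FLOP cutoff in perspective, total training compute is commonly approximated as C ≈ 6·N·D, where N is the parameter count and D is the number of training tokens. The run sizes below are illustrative assumptions, not disclosed figures for any real system:

```python
# Rough training-compute estimate using the common C ≈ 6 * N * D
# approximation (N = parameter count, D = training tokens), compared
# against the EU AI Act's provisional 1e25 FLOP threshold. The run
# sizes below are illustrative assumptions, not disclosed figures.

EU_THRESHOLD_FLOP = 1e25

def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6 * n_params * n_tokens

# A 7B-parameter model on 2T tokens vs. a hypothetical 1T-parameter
# model on 10T tokens.
for n_params, n_tokens in [(7e9, 2e12), (1e12, 1e13)]:
    c = training_flop(n_params, n_tokens)
    print(f"{n_params:.0e} params, {n_tokens:.0e} tokens -> "
          f"{c:.1e} FLOP (over threshold: {c > EU_THRESHOLD_FLOP})")
```

Under this approximation, today's open-weight models sit well below the line, while a frontier-scale run an order of magnitude or two larger crosses it, which is roughly where Gemini and GPT-4 are believed to sit.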
Minimum basic transparency requirements for all GPAI. The provisional agreement regulates foundation models — using the [...]
---
Outline:
(00:06) A Provisional Agreement on the EU AI Act
(04:55) Questions about Research Standards in AI Safety
(06:48) The New York Times sues OpenAI and Microsoft for Copyright Infringement
(10:34) Links
---
First published: January 4th, 2024
Source: https://newsletter.safe.ai/p/aisn-29-progress-on-the-eu-ai-act
-
This week we’re looking closely at AI legislative efforts in the United States, including:
Senator Schumer's AI Insight Forum
The Blumenthal-Hawley framework for AI governance
Agencies proposed to govern digital platforms
State and local laws against AI surveillance
The National AI Research Resource (NAIRR)
Senator Schumer's AI Insight Forum
The CEOs of more than a dozen major AI companies gathered in Washington on Wednesday for a hearing with the Senate. Organized by Democratic Majority Leader Chuck Schumer and a bipartisan group of Senators, this was the first of many hearings in their AI Insight Forum.
After the hearing, Senator Schumer said, “I asked everyone in the room, ‘Is government needed to play a role in regulating AI?’ and [...]
---
Outline:
(00:30) Senator Schumer's AI Insight Forum
(01:20) The Blumenthal-Hawley Framework
(03:09) Agencies Proposed to Govern Digital Platforms
(04:46) Deepfakes and Watermarking Legislation
(06:12) State and Local Laws Against AI Surveillance
(06:52) National AI Research Resource (NAIRR)
(08:18) Links
---
First published: September 19th, 2023
Source: https://newsletter.safe.ai/p/the-landscape-of-us-ai-legislation
-
As 2023 comes to a close, we want to thank you for your continued support for AI safety. This has been a big year for AI and for the Center for AI Safety. In this special-edition newsletter, we highlight some of our most important projects from the year. Thank you for being part of our community and our work.
Center for AI Safety's 2023 Year in Review
The Center for AI Safety (CAIS) is on a mission to reduce societal-scale risks from AI. We believe this requires research and regulation. These both need to happen quickly (due to unknown timelines on AI progress) and in tandem (because either one is insufficient on its own). To achieve this, we pursue three pillars of work: research, field-building, and advocacy.
Research
CAIS conducts both technical and conceptual research on AI safety. We pursue multiple overlapping strategies which can be layered together [...]
---
Outline:
(00:27) Center for AI Safety's 2023 Year in Review
(00:56) Research
(03:37) Field-Building
(07:35) Advocacy
(10:04) Looking Ahead
(10:23) Support Our Work
---
First published: December 21st, 2023
Source: https://newsletter.safe.ai/p/aisn-28-center-for-ai-safety-2023
-
Defensive Accelerationism
Vitalik Buterin, the creator of Ethereum, recently wrote an essay on the risks and opportunities of AI and other technologies. He responds to Marc Andreessen's manifesto on techno-optimism and the growth of the effective accelerationism (e/acc) movement, and offers a more nuanced perspective.
Technology is often great for humanity, the essay argues, but AI could be an exception to that rule. Rather than giving governments control of AI so they can protect us, Buterin argues that we should build defensive technologies that provide security against catastrophic risks in a decentralized society. Cybersecurity, biosecurity, resilient physical infrastructure, and a robust information ecosystem are some of the technologies Buterin believes we should build to protect ourselves from AI risks.
Technology has risks, but [...]
---
Outline:
(00:06) Defensive Accelerationism
(03:55) Retrospective on the OpenAI Board Saga
(07:58) Klobuchar and Thune's “light-touch” Senate bill
(10:23) Links
---
First published: December 7th, 2023
Source: https://newsletter.safe.ai/p/aisn-27-defensive-accelerationism
-
Also, Results From the UK Summit, and New Releases From OpenAI and xAI.
This week's key stories include:
The UK, US, and Singapore have announced national AI safety institutions.
The UK AI Safety Summit concluded with a consensus statement, the creation of an expert panel to study AI risks, and a commitment to meet again in six months.
xAI, OpenAI, and a new Chinese startup released new models this week.
UK, US, and Singapore Establish National AI Safety Institutions
Before regulating a new technology, governments often need time to gather information and consider their policy options. But during that time, the technology may diffuse through society, making it more difficult for governments to intervene. This process, termed the Collingridge Dilemma, is a fundamental challenge in technology policy.
But recently [...]
---
Outline:
(00:36) UK, US, and Singapore Establish National AI Safety Institutions
(03:53) UK Summit Ends with Consensus Statement and Future Commitments
(05:39) New Models From xAI, OpenAI, and a New Chinese Startup
(09:28) Links
---
First published: November 15th, 2023
Source: https://newsletter.safe.ai/p/national-institutions-for-ai-safety