Episodes
-
In the fall of 2023, the US Bipartisan Senate AI Working Group held a series of AI Insight Forums with global leaders. Participants included leaders of major AI labs and tech companies, organizations adopting and implementing AI across the wider economy, union leaders, academics, advocacy groups, and civil society organizations. This document, released on May 15, 2024, is the culmination of those discussions. It provides a roadmap that US policy is likely to follow as the US Senate begins to craft legislation.
Original text:
https://www.politico.com/f/?id=0000018f-79a9-d62d-ab9f-f9af975d0000
Author(s):
Majority Leader Chuck Schumer, Senator Mike Rounds, Senator Martin Heinrich, and Senator Todd Young
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
In this paper from CSET, Ben Buchanan outlines a framework for understanding the inputs that power machine learning. Called "the AI Triad", it focuses on three inputs: algorithms, data, and compute.
Original text:
https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf
Author(s):
Ben Buchanan
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This paper explores the under-discussed strategies of adaptation and resilience to mitigate the risks of advanced AI systems. The authors present arguments supporting the need for societal AI adaptation, create a framework for adaptation, offer examples of adapting to AI risks, outline the concept of resilience, and provide concrete recommendations for policymakers.
Original text:
https://drive.google.com/file/d/1k3uqK0dR9hVyG20-eBkR75_eYP2efolS/view?usp=sharing
Author(s):
Jamie Bernardi, Gabriel Mukobi, Hilary Greaves, Lennart Heim, and Markus Anderljung
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This document from the OECD is split into two sections: principles for responsible stewardship of trustworthy AI, and national policies and international co-operation for trustworthy AI. To date, 43 governments around the world have agreed to adhere to the document. While originally adopted in 2019, the document was updated in 2024, and those updates are reflected in this version.
Original text:
https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
Author(s):
The Organisation for Economic Co-operation and Development
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This summary of UNESCO's Recommendation on the Ethics of AI outlines four core values, ten core principles, and eleven actionable policies for responsible AI governance. The full text was agreed to by all 193 UNESCO member states.
Original text:
https://unesdoc.unesco.org/ark:/48223/pf0000385082
Author(s):
The United Nations Educational, Scientific and Cultural Organization
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This statement was released by the UK Government as part of their Global AI Safety Summit from November 2023. It notes that frontier models pose unique risks and calls for international cooperation, finding that "many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation." It was signed by multiple governments, including the US, EU, India, and China.
Original text:
https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
Author(s):
UK Government
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This report by the UK's Department for Science, Innovation and Technology outlines a regulatory framework for UK AI policy. Per the report, "AI is helping to make our jobs safer and more satisfying, conserve our wildlife and fight climate change, and make our public services more efficient. Not only do we need to plan for the capabilities and uses of the AI systems we have today, but we must also prepare for a near future where the most powerful systems are broadly accessible and significantly more capable."
Original text: https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#executive-summary
Author(s): UK Department for Science, Innovation and Technology
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This report from the Carnegie Endowment for International Peace summarizes Chinese AI policy as of mid-2023. It also analyzes the factors motivating Chinese AI governance. We're providing a more structured analysis of Chinese AI policy than of other governments' policies because we expect learners will be less familiar with the Chinese policy process.
Original text:
https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117
Author(s):
Matt Sheehan
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This primer by the Future of Life Institute highlights core elements of the EU AI Act. It includes a high-level summary alongside explanations of the restrictions on prohibited AI systems, high-risk AI systems, and general-purpose AI.
Original text:
https://artificialintelligenceact.eu/high-level-summary/
Author(s):
The Future of Life Institute
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This fact sheet from The White House summarizes President Biden's AI Executive Order from October 2023. The President's AI EO represents the most aggressive approach to date from the US executive branch on AI policy.
Original text:
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
Author(s):
The White House
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This high-level overview by CISA summarizes major US policies on AI at the federal level. Important items worth further investigation include Executive Order 14110, the voluntary commitments, the AI Bill of Rights, and Executive Order 13859.
Original text:
https://www.cisa.gov/ai/recent-efforts
Author(s):
The US Cybersecurity and Infrastructure Security Agency
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This annual report from Stanford's Institute for Human-Centered AI (HAI) tracks AI governance actions and broader trends in policies and legislation by governments around the world in 2023. It includes a summary of major policy actions taken by different governments, as well as analyses of regulatory trends, the volume of AI legislation, and the different focus areas governments are prioritizing in their interventions.
Original text:
https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report-2024_Chapter_7.pdf
Authors:
Nestor Maslej et al.
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This report by the Center for Security and Emerging Technology first analyzes the tensions and tradeoffs between three strategic technology and national security goals: driving technological innovation, impeding adversaries’ progress, and promoting safe deployment. It then identifies different direct and enabling policy levers, assessing each based on the tradeoffs they make.
While this document is designed for US policymakers, most of its findings are broadly applicable.
Original text:
https://cset.georgetown.edu/wp-content/uploads/The-Policy-Playbook.pdf
Authors:
Jack Corrigan, Melissa Flagg, and Dewey Murdick
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This report from the Centre for Emerging Technology and Security and the Centre for Long-Term Resilience identifies different policy levers as they apply to different stages of the AI lifecycle. It splits the AI lifecycle into three stages: design, training, and testing; deployment and usage; and longer-term deployment and diffusion. It also introduces a risk mitigation hierarchy that ranks different approaches in decreasing order of preference, arguing that “policy interventions will be most effective if they intervene at the point in the lifecycle where risk first arises.”
While this document is designed for UK policymakers, most of its findings are broadly applicable.
Original text:
https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf
Authors:
Ardi Janjeva, Nikhil Mulani, Rosamund Powell, Jess Whittlestone, and Shahar Avin
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This report by the Nuclear Threat Initiative focuses on how AI's integration into the biosciences could advance biotechnology while also posing potentially catastrophic biosecurity risks. It’s included as a core resource this week because the assigned pages offer a valuable case study of an under-discussed lever for AI risk mitigation: building resilience.
Resilience in a risk reduction context is defined by the UN as “the ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions through risk management.” As you’re reading, consider other areas where policymakers might be able to build a more resilient society to mitigate AI risk.
Original text:
https://www.nti.org/wp-content/uploads/2023/10/NTIBIO_AI_FINAL.pdf
Authors:
Sarah R. Carter, Nicole E. Wheeler, Sabrina Chwalek, Christopher R. Isaac, and Jaime Yassif
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
To address the risk of rogue AIs, we’ll have to align them. In this article, Adam Jones of BlueDot Impact introduces the concept of aligning AIs, defining alignment as “making AI systems try to do what their creators intend them to do.”
Original text:
https://aisafetyfundamentals.com/blog/what-is-ai-alignment/
Author:
Adam Jones
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This excerpt from CAIS’s AI Safety, Ethics, and Society textbook provides a deep dive into the CAIS resource from session three, focusing specifically on the challenges of controlling advanced AI systems.
Original text:
https://www.aisafetybook.com/textbook/1-5
Author:
The Center for AI Safety
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This article from the Center for AI Safety provides an overview of ways that advanced AI could cause catastrophe. It groups catastrophic risks into four categories: malicious use, AI race, organizational risk, and rogue AIs. The article is a summary of a longer paper by the same authors.
Original text:
https://www.safe.ai/ai-risk
Authors:
Dan Hendrycks, Thomas Woodside, and Mantas Mazeika
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This report from the UK’s Government Office for Science envisions five potential risk scenarios from frontier AI. Each scenario includes information on the AI system’s capabilities, ownership and access, safety, level and distribution of use, and geopolitical context. It identifies key policy issues for each scenario and concludes with an overview of existential risk. If you have extra time, we’d recommend you read the entire document.
Original text:
https://assets.publishing.service.gov.uk/media/653bc393d10f3500139a6ac5/future-risks-of-frontier-ai-annex-a.pdf
Author:
The UK Government Office for Science
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website. -
This resource, written by Adam Jones at BlueDot Impact, provides a comprehensive overview of the existing and anticipated risks of AI. As you're going through the reading, consider what different futures might look like should different combinations of risks materialize.
Original text:
https://aisafetyfundamentals.com/blog/ai-risks/
Author:
Adam Jones
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.