Episodes
-
In this post, Larks argues that the proposal to have AI firms promise to donate a large fraction of their profits if they become extremely profitable would primarily benefit the management of those firms, giving managers an incentive to move fast, aggravating race dynamics and in turn increasing existential risk.
https://forum.effectivealtruism.org/posts/ewroS7tsqhTsstJ44/a-windfall-clause-for-ceo-could-worsen-ai-race-dynamics
-
This is Otto Barten's summary of 'The effectiveness of AI existential risk communication to the American and Dutch public' by Alexia Georgiadis. In this paper, Georgiadis measures changes in participants' awareness of AGI risks after exposure to various media interventions.
Summary: https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk
Original paper: https://existentialriskobservatory.org/papers_and_reports/The_Effectiveness_of_AI_Existential_Risk_Communication_to_the_American_and_Dutch_Public.pdf
Note: Some tables in the summary have been omitted in this audio version.
-
Carl Shulman & Elliott Thornley argue that the goal of longtermists should be to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis rather than arguments that stress the overwhelming importance of the future.
https://philpapers.org/archive/SHUHMS.pdf
Note: Tables, notes and references in the original article have been omitted.
-
"The field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy."
https://forum.effectivealtruism.org/posts/HCuoMQj4Y5iAZpWGH/advice-on-communicating-in-and-around-the-biosecurity-policy
Note: Some footnotes in the original article have been omitted.
-
The Global Priorities Institute has released Hayden Wilkinson's presentation on global priorities research. (The talk was given in mid-September last year but remained unlisted until now.)
https://globalprioritiesinstitute.org/hayden-wilkinson-global-priorities-research-why-how-and-what-have-we-learned/
-
New rules around gain-of-function research make progress in striking a balance between scientific reward and catastrophic risk.
https://www.vox.com/future-perfect/2023/2/1/23580528/gain-of-function-virology-covid-monkeypox-catastrophic-risk-pandemic-lab-accident
-
"One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough."
https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html