Episodes
-
Max Smeets is a Senior Researcher at ETH Zurich's Center for Security Studies and Co-Director of Virtual Routes.
You can find links and a transcript at www.hearthisidea.com/episodes/smeets
In this episode we talk about:
- The different types of cyber operations that a nation state might launch
- How international norms formed around what kind of cyber attacks are “allowed”
- The challenges that even elite cyber forces face
- What capabilities future AI systems would need to drastically change the space

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Tom Kalil is the CEO of Renaissance Philanthropy.
He also served in the White House under two presidents (Obama and Clinton), where he helped establish incentive prizes in government through challenge.gov, in addition to dozens of science and tech programs. More recently, Tom served as the Chief Innovation Officer at Schmidt Futures, where he helped launch Convergent Research.
Matt Clancy is an economist and a research fellow at Open Philanthropy. He writes ‘New Things Under the Sun’, which is a living literature review on academic research about science and innovation.
We talked about:
- What is ‘influence without authority’?
- Should public funders sponsor more innovation prizes?
- Can policy entrepreneurship be taught formally?
- Why isn't ultra-wealthy philanthropy much more ambitious?
- What's the optimistic case for increasing US state capacity?
- What was it like being principal staffer to Gordon Moore?
- What is Renaissance Philanthropy?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
-
Dr Cynthia Schuck-Paim is the Scientific Director of the Welfare Footprint Project, a scientific effort to quantify animal welfare to inform practice, policy, investing and purchasing decisions.
You can find links and a transcript at www.hearthisidea.com/episodes/schuck.
We discuss:
- How to begin thinking about quantifying animal experiences in a cross-comparable way
- Whether the ability to feel pain is unique to big brained animals, or more widespread in the tree of life
- How fish farming compares to poultry and livestock farming
- How worried to be about bird flu zoonosis
- Whether different animal species experience time differently
- Whether positive experiences like joy could make life worth living for some farmed animals
- How animal welfare advocates can learn from anti-corruption nonprofits

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
-
Dan Williams is a Lecturer in Philosophy at the University of Sussex and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.
You can find links and a transcript at www.hearthisidea.com/episodes/williams.
We discuss:
- If reasoning is so useful, why are we so bad at it?
- Do some bad ideas really work like ‘mind viruses’?
- Is the ‘luxury beliefs’ concept useful?
- What's up with the idea of a ‘marketplace for ideas’? Are people shopping for new beliefs, or to rationalise their existing attitudes?
- How dangerous is misinformation, really? Can we ‘vaccinate’ or ‘inoculate’ against it?
- Will AI help us form more accurate beliefs, or will it persuade more people of unhinged ideas?
- Does fact-checking work?
- Under transformative AI, should we worry more about the suppression or the proliferation of counter-establishment ideas?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
-
Tamay Besiroglu is a researcher working at the intersection of economics and AI. He is currently the Associate Director of Epoch AI, a research institute investigating key trends and questions that will shape the trajectory and governance of AI.
You can find links and a transcript at www.hearthisidea.com/episodes/besiroglu
In this episode we talked about the prospect of explosive economic growth from AI. We talk about:
- The argument for explosive growth from ‘increasing returns to scale’
- Does AI need to be able to automate R&D to cause rapid growth?
- Which theories of growth best explain the Industrial Revolution, and what do they predict from AI?
- What happens to human incomes under near-total job automation?
- Are regulations likely to slow down frontier AI progress enough to prevent this? Might AI go the way of nuclear power?
- Will AI hit resource or power limits before explosive growth? Won't it run out of data first?
- Why aren't academic economists more interested in the prospect of explosive growth, if indeed it is so plausible?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
-
Jacob Trefethen oversees Open Philanthropy’s science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge.
You can find links and a transcript at www.hearthisidea.com/episodes/trefethen
In this episode we talked about research and development (R&D) for global health. We talk about:
- Life-saving health technologies which probably won't exist in 5 years (without a concerted effort) — like a widely available TB vaccine, and bugs which stop malaria spreading
- How R&D for neglected diseases works:
  - How much does the world spend on it?
  - How do drugs for neglected diseases go from design to distribution?
- No-brainer policy ideas for speeding up global health R&D
- Comparing health R&D to public health interventions (like bed nets)
- Comparing the social returns to frontier R&D (‘Progress Studies’) to global health R&D
- Why is there no GiveWell-equivalent for global health R&D?
- Won't AI do all the R&D for us soon?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Elizabeth Seger is the Director of Technology Policy at Demos, a cross-party UK think tank with a program on trustworthy AI.
You can find links and a transcript at www.hearthisidea.com/episodes/seger.
In this episode we talked about the risks and benefits of open source AI models. We talk about:
- What ‘open source’ really means
- What is (and isn’t) open about ‘open source’ AI models
- How open source weights and code are useful for AI safety research
- How and when the costs of open sourcing frontier model weights might outweigh the benefits
- Analogies to ‘open sourcing nuclear designs’ and the open science movement

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Note that this episode was recorded before the release of Meta’s Llama 3.1 family of models. Note also that in the episode Elizabeth referenced an older version of the definition maintained by OSI (roughly version 0.0.3). The current OSI definition (0.0.8) now does a much better job of delineating between different model components.
-
Joe Carlsmith is a writer, researcher, and philosopher. He works as a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and holds a doctorate in philosophy from the University of Oxford.
You can find links and a transcript at www.hearthisidea.com/episodes/carlsmith
In this episode we talked about a report Joe recently authored, titled ‘Scheming AIs: Will AIs fake alignment during training in order to get power?’. The report “examines whether advanced AIs that perform well in training will be doing so in order to gain power later”; a behaviour Carlsmith calls scheming.
We talk about:
- Distinguishing ways AI systems can be deceptive and misaligned
- Why powerful AI systems might acquire goals that go beyond what they’re trained to do, and how those goals could lead to scheming
- Why scheming goals might perform better (or worse) in training than less worrying goals
- The ‘counting argument’ for scheming AI
- Why goals that lead to scheming might be simpler than the goals we intend
- Things Joe is still confused about, and research project ideas

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside. His main interests include connections between empirical psychology and philosophy of mind, as well as the nature of belief. His book The Weirdness of the World can be found here.
We talk about:
- The possibility of digital consciousness
- Policy ideas for avoiding major moral mistakes around digital consciousness
- Prospects for the science of consciousness, and why we likely won't have clear answers in time
- Why introspection is much less reliable than most people think
- How and why we invent false stories about our own choices without realising
- What randomly sampling people's experiences reveals about what we're doing with most of our attention
- The possibility of 'overlapping minds'
- How and why our actions might have infinite effects, both good and bad
- Whether it would be good news to learn that our actions have infinite effects, or that the universe is infinite in extent
- The best science fiction on digital minds and AI

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Sonia Ben Ouagrham-Gormley is an associate professor at George Mason University and Deputy Director of their Biodefence Programme.
In this episode we talk about:
- Where the belief that 'bioweapons are easy to make' came from, and why it has been difficult to change
- Why transferring tacit knowledge is so difficult, and the particular challenges that rogue actors face
- What Sonia makes of the AI-Bio risk discourse, and what types of advances in technology would cause her concern

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
In this bonus episode we are sharing an episode by another podcast: How I Learned To Love Shrimp. It is co-hosted by Amy Odene and James Ozden, who together are "showcasing innovative and impactful ways to help animals".
In this interview they speak to David Coman-Hidy, who is the former President of The Humane League, one of the largest farm animal advocacy organisations in the world. He now works as a Partner at Sharpen Strategy, where he coaches animal advocacy organisations.
-
Michelle Lavery is a Program Associate with Open Philanthropy’s Farm Animal Welfare team, with a focus on the science and study of animal behaviour & welfare.
You can see more links and a full transcript at hearthisidea.com/episodes/lavery
In this episode we talk about:
- How do scientists study animal emotions in the first place? How is a "science" of animal emotion even feasible?
- When is it useful to anthropomorphise animals to understand them?
- How can you study the preferences of animals? How can you measure the “strength” of preferences?
- How do farmed animal welfare advocates relate to animal welfare science? Are their perceptions fair?
- How can listeners get involved with the study of animal emotions?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Dr Richard Bruns is a Senior Scholar at the Johns Hopkins Center for Health Security, and before that was a Senior Economist at the US Food and Drug Administration (the FDA).
In this episode we talk about the importance of indoor air quality (IAQ), and how to improve it. Including:
- Estimating the DALY cost of unclean indoor air from pathogens and particulate matter
- How much pandemic risk could be reduced from improving IAQ?
- How economists convert health losses into dollar figures — and how not to put a price on life
- Key interventions to improve IAQ:
  - Air filtration
  - Germicidal UV light (especially Far-UVC light)
- Barriers to adoption, including UV smog and the empirical studies needed most
- National and state-level policy changes to get these interventions adopted widely

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Saloni Dattani is a Researcher at Our World in Data, and a founder & editor at the online magazine Works in Progress. She holds a PhD in psychiatric genetics from King’s College London.
You can see more links and a full transcript at hearthisidea.com/episodes/dattani.
In this episode we talk about:
- The history of malaria and attempts to eradicate it
- The role of DDT and insecticide spraying campaigns — and why they were scaled down
- Why we didn’t get a malaria vaccine sooner
- What comes after vaccine discovery — rolling out the RTS,S vaccine
- New funding models to accelerate similar life-saving research, like vaccines for TB and HIV
- Why so much global health data is missing, and why that matters
- How the ‘million deaths study’ revealed that about 50,000 deaths per year from snakebites in India went uncounted by health agencies

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Liv Boeree is a former poker champion turned science communicator and podcaster, with a background in astrophysics. In 2014, she founded the nonprofit Raising for Effective Giving, which has raised more than $14 million for effective charities. Before retiring from professional poker in 2019, Liv was the Female Player of the Year for three years running. Currently she hosts the Win-Win podcast (you’ll enjoy it if you enjoy this podcast).
You can see more links and a full transcript at hearthisidea.com/episodes/boeree.
In this episode we talk about:
- Is the ‘poker mindset’ valuable? Is it learnable?
- How and why to bet on your beliefs — and whether there are outcomes you shouldn’t make bets on
- Would cities be better without public advertisements?
- What is Moloch, and why is it a useful abstraction?
- How do we escape multipolar traps?
- Why might advanced AI (not) act like profit-seeking companies?
- What’s so important about complexity? What is complexity, for that matter?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Jon Y is the creator of the Asianometry YouTube channel and accompanying newsletter. He describes his channel as making "video essays on business, economics, and history. Sometimes about Asia, but not always."
You can see more links and a full transcript at hearthisidea.com/episodes/asianometry
In this episode we talk about:
- Compute trends driving recent progress in Artificial Intelligence;
- The semiconductor supply chain and its geopolitics;
- The buzz around LK-99 and superconductivity.

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Steven Teles is a Professor of Political Science at Johns Hopkins University and a Senior Fellow at the Niskanen Center. His work focuses on American politics, and he has written several books on topics such as elite politics, the judiciary, and mass incarceration.
You can see more links and a full transcript at hearthisidea.com/teles
In this episode we talk about:
- The rise of the conservative legal movement;
- How ideas can come to be entrenched in American politics;
- Challenges in building a new academic field like "law and economics";
- The limitations of doing quantitative evaluations of advocacy groups.

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
-
Guive Assadi is a Research Scholar at the Center for the Governance of AI. Guive’s research focuses on the conceptual clarification of, and prioritisation among, potential risks posed by emerging technologies. He holds a master’s in history from Cambridge University, and a bachelor’s from UC Berkeley.
In this episode, we discuss Guive's paper, Will Humanity Choose Its Future?.
- What is an 'evolutionary future', and would it count as an existential catastrophe?
- How did the agricultural revolution deliver a world which few people would have chosen?
- What does it mean to say that we are living in the dreamtime? Will it last?
- What competitive pressures in the future could drive the world to undesired outcomes?
  - Digital minds
  - Space settlement
- What measures could prevent an evolutionary future, and allow humanity to more deliberately choose its future?
  - World government
  - Strong global coordination
  - Defensive advantage
- Should this all make us more or less hopeful about humanity's future?
- Ideas for further research

Guive's recommended reading:
- Rationalist Explanations for War by James D. Fearon
- Meditations on Moloch by Scott Alexander
- The Age of Em by Robin Hanson
- What is a Singleton? by Nick Bostrom

Other key links:
- Will Humanity Choose Its Future? by Guive Assadi
- Colder Wars by Gwern
- The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter by Joseph Henrich (and a review by Scott Alexander)
-
Michael Cohen is a DPhil student at the University of Oxford with Mike Osborne. He will be starting a postdoc with Professor Stuart Russell at UC Berkeley, with the Center for Human-Compatible AI. His research considers the expected behaviour of generally intelligent artificial agents, with a view to designing agents that we can expect to behave safely.
You can see more links and a full transcript at www.hearthisidea.com/episodes/cohen.
We discuss:
- What is reinforcement learning, and how is it different from supervised and unsupervised learning?
- Michael's recently co-authored paper titled 'Advanced artificial agents intervene in the provision of reward'
- Why might it be hard to convey what we really want to RL learners — even when we know exactly what we want?
- Why might advanced RL systems tamper with their sources of input, and why could this be very bad?
- What assumptions need to hold for this "input tampering" outcome?
- Is reward really the optimisation target? Do models "get reward"?
- What's wrong with the analogy between RL systems and evolution?

Key links:
- Michael's personal website
- 'Advanced artificial agents intervene in the provision of reward' by Michael K. Cohen, Marcus Hutter, and Michael A. Osborne
- 'Pessimism About Unknown Unknowns Inspires Conservatism' by Michael Cohen and Marcus Hutter
- 'Intelligence and Unambitiousness Using Algorithmic Information Theory' by Michael Cohen, Badri Vallambi, and Marcus Hutter
- 'Quantilizers: A Safer Alternative to Maximizers for Limited Optimization' by Jessica Taylor
- 'RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning' by Marc Rigter, Bruno Lacerda, and Nick Hawes
- Season 40 of Survivor
-
Katja Grace is a researcher and writer. She runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of artificial intelligence (AI). Katja blogs primarily at worldspiritsockpuppet, and indirectly at Meteuphoric, Worldly Positions, LessWrong and the EA Forum.
We discuss:
- What is AI Impacts working on?
- Counterarguments to the basic AI x-risk case
- Reasons to doubt that superhuman AI systems will be strongly goal-directed
- Reasons to doubt that if goal-directed superhuman AI systems are built, their goals will be bad by human lights
- Aren't deep learning systems fairly good at understanding our 'true' intentions?
- Reasons to doubt that (misaligned) superhuman AI would overpower humanity
- The case for slowing down AI
- Is AI really an arms race?
- Are there examples from history of valuable technologies being limited or slowed down?
- What does Katja think about the recent open letter on pausing giant AI experiments?
- Why read George Saunders?

Key links:
- World Spirit Sock Puppet (Katja's main blog)
- Counterarguments to the basic AI x-risk case
- Let's think about slowing down AI
- We don't trade with ants
- Thank You, Esther Forbes (George Saunders)

You can see more links and a full transcript at hearthisidea.com/episodes/grace.