Episodes

  • In our final episode of the year, we explore Project Astra, a research prototype that previews future capabilities of a universal AI assistant able to understand the world around you. Host Hannah Fry is joined by Greg Wayne, Director in Research at Google DeepMind. They discuss the inspiration behind the prototype, its current strengths and limitations, and potential future use cases. Hannah even gets the chance to put Project Astra's multilingual skills to the test.

    Further reading / listening:

    Gemini 2.0
    Project Astra
    Decoding Google Gemini with Jeff Dean
    Gaming, Goats & General Intelligence with Frederic Besse

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Editor: Rami Tzabar, TellTale Studios
    Commissioner & Producer: Emma Yousif
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Bernardo Resende
    Audio Engineer: Perry Rogantin
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • In this episode, Hannah is joined by Oriol Vinyals, VP of Research and Gemini co-lead. They discuss the evolution of agents from single-task models to more general-purpose models capable of broader applications, like Gemini. Vinyals guides Hannah through the two-step process behind multimodal models: pre-training (imitation learning) and post-training (reinforcement learning). They discuss the complexities of scaling and the importance of innovation in architecture and training processes. They close with a quick whirlwind tour of some of the new agentic capabilities recently released by Google DeepMind.

    Note: To see all of the full-length demos, including unedited versions, and other videos related to Gemini 2.0, head to YouTube.

    Further reading/watching:

    Gemini 2.0
    Decoding Google Gemini with Jeff Dean
    Gaming, Goats & General Intelligence with Frederic Besse

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry

    Series Producer: Dan Hardoon

    Editor: Rami Tzabar, TellTale Studios

    Commissioner & Producer: Emma Yousif

    Music composition: Eleni Shaw

    Camera Director and Video Editor: Bernardo Resende

    Audio Engineer: Perry Rogantin

    Video Studio Production: Nicholas Duke

    Video Editor: Bilal Merhi

    Video Production Design: James Barton

    Visual Identity and Design: Eleanor Tomlinson

    Commissioned by Google DeepMind

    Subscribe to our YouTube channel

    Find us on X

    Follow us on Instagram

    Add us on LinkedIn

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     


  • There is broad consensus across the tech industry, governments and society, that as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are current frameworks presented by government leaders headed in the right direction?

    Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasises the importance of a nuanced approach to regulation, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction.

    Further reading/watching:

    AI Principles: https://ai.google/responsibility/principles/
    Frontier Model Forum: https://blog.google/outreach-initiatives/public-policy/google-microsoft-openai-anthropic-frontier-model-forum/
    Ethics of AI assistants with Iason Gabriel: https://youtu.be/aaZc-as-soA?si=0ThbYY30FlO31kKQ

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Editor: Rami Tzabar, TellTale Studios
    Commissioner & Producer: Emma Yousif
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Bernardo Resende
    Audio Engineer: Perry Rogantin
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • NotebookLM is a research assistant powered by Gemini that draws on expertise from storytelling to present information in an engaging way. It allows users to upload their own documents and generate insights, explanations, and—more recently—podcasts. This feature, also known as audio overviews, has captured the imagination of millions of people worldwide, who have created thousands of engaging podcasts ranging from personal narratives to educational explainers using source materials like CVs, personal journals, sales decks, and more.

    Join Raiza Martin and Steven Johnson from Google Labs, Google’s testing ground for products, as they guide host Hannah Fry through the technical advancements that have made NotebookLM possible. In this episode they'll explore what it means to be interesting, the challenges of generating natural-sounding speech, as well as exciting new modalities on the horizon.

    Further reading:

    Try NotebookLM here
    Read about the speech generation technology behind Audio Overviews: https://deepmind.google/discover/blog/pushing-the-frontiers-of-audio-generation/

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry

    Series Producer: Dan Hardoon

    Editor: Rami Tzabar, TellTale Studios

    Commissioner & Producer: Emma Yousif

    Music composition: Eleni Shaw

    Camera Director and Video Editor: Daniel Lazard

    Audio Engineer: Perry Rogantin

    Video Studio Production: Nicholas Duke

    Video Editor: Alex Baro Cayetano, Daniel Lazard

    Video Production Design: James Barton

    Visual Identity and Design: Eleanor Tomlinson

    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Join Professor Hannah Fry at the AI for Science Forum for a fascinating conversation with Google DeepMind CEO Demis Hassabis. They explore how AI is revolutionizing scientific discovery, delving into topics like the nuclear pore complex, plastic-eating enzymes, quantum computing, and the surprising power of Turing machines. The episode also features a special 'ask me anything' session with Nobel Laureates Sir Paul Nurse, Jennifer Doudna, and John Jumper, who answer audience questions about the future of AI in science.

    Watch the episode here, and catch up on all of the sessions from the AI for Science Forum here.

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.

    In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.

    Timecodes:

    00:00 Intro
    01:13 Definition of AI assistants
    04:05 A utopic view
    06:25 Iason’s background
    07:45 The Ethics of Advanced AI Assistants paper
    13:06 Anthropomorphism
    14:07 Turing perspective
    15:25 Anthropomorphism continued
    20:02 The value alignment question
    24:54 Deception
    27:07 Deployed at scale
    28:32 Agentic inequality
    31:02 Unfair outcomes
    34:10 Coordinated systems
    37:10 A new paradigm
    38:23 Tetradic value alignment
    41:10 The future
    42:41 Reflections from Hannah

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Editor: Rami Tzabar, TellTale Studios
    Commissioner & Producer: Emma Yousif
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Daniel Lazard
    Audio Engineer: Perry Rogantin
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Production support: Mo Dawoud
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • How human should an AI tutor be? What does ‘good’ teaching look like? Will AI lead in the classroom, or take a back seat to human instruction? Will everyone have their own personalized AI tutor? Join research lead Irina Jurenka and Professor Hannah Fry as they explore the complicated yet exciting world of AI in education.

    Further reading:

    Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Editor: Rami Tzabar, TellTale Studios
    Commissioner & Producer: Emma Yousif
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Daniel Lazard
    Audio Engineer: Perry Rogantin
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Production support: Mo Dawoud
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Research at Google DeepMind, to discuss AI’s impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, materials science, weather forecasting, and mathematics to better understand how AI can enhance our scientific understanding of the world.

    Further reading:

    Millions of new materials discovered with deep learning
    GraphCast: AI model for faster and more accurate global weather forecasting
    AlphaFold: A breakthrough unfolds (S2, E1)
    AlphaGeometry: An Olympiad-level AI system for geometry
    AI achieves silver-medal standard solving International Mathematical Olympiad problems

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Editor: Rami Tzabar, TellTale Studios
    Commissioner & Producer: Emma Yousif
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Tommy Bruce
    Audio Engineer: Perry Rogantin
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Production support: Mo Dawoud
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Games are a very good training ground for agents. Think about it: perfectly packaged, neatly constrained environments where agents can run wild, work out the rules for themselves, and learn how to handle autonomy. In this episode, Research Engineering Team Lead Frederic Besse joins Hannah to discuss important research like SIMA (Scalable Instructable Multiworld Agent) and what we can expect from future agents that can understand and safely carry out a wide range of tasks, online and in the real world.

    Further reading:

    SIMA
    RT-X & RT-2
    Interactive Agents

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Editor: Rami Tzabar, TellTale Studios
    Commissioner & Producer: Emma Yousif
    Production support: Mo Dawoud
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Tommy Bruce
    Audio Engineer: Perry Rogantin
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Professor Hannah Fry is joined by Jeff Dean, one of the most legendary figures in computer science and chief scientist of Google DeepMind and Google Research. Jeff was instrumental to the field in the late 1990s, writing the code that helped transform Google from a small startup into the multinational company it is today. Hannah and Jeff discuss it all - from the early days of Google and neural networks, to the long-term potential of multimodal models like Gemini.

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Editor: Rami Tzabar, TellTale Studios
    Commissioner & Producer: Emma Yousif
    Production support: Mo Dawoud
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Tommy Bruce
    Audio Engineer: Perry Rogantin
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

    For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Editor: Rami Tzabar, TellTale Studios
    Commissioner & Producer: Emma Yousif
    Production support: Mo Dawoud
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Tommy Bruce
    Audio Engineer: Perry Rogantin
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Professor Hannah Fry is joined by Google DeepMind's senior research director Douglas Eck to explore AI's capacity for true creativity. They delve into the complexities of defining creativity, the challenges of AI-generated content and attribution, and whether AI can help us to connect with each other in new and meaningful ways.

    Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.

    Further reading:

    Veo
    Imagen
    SynthID
    An update on web publisher controls (Google-Extended)

    Social channels to follow for new content:

    Instagram
    X
    LinkedIn

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Series Editor: Rami Tzabar, TellTale Studios
    Commissioner and Producer: Emma Yousif
    Production support: Mo Dawoud
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Tommy Bruce
    Audio Engineer: Darren Carikas
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • It has been a few years since Google DeepMind CEO and co-founder, Demis Hassabis, and Professor Hannah Fry caught up.

    In that time, the world has caught on to artificial intelligence—in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as ‘unreasonably effective’, and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models.

    Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what he hopes for as we move closer towards artificial general intelligence.

    Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.

    Further reading:

    Gemini
    Project Astra
    Google I/O 2024
    Scaling Language Models: Methods, Analysis & Insights from Training Gopher
    LaMDA: our breakthrough conversation technology

    Social channels to follow for new content:

    Instagram
    X
    LinkedIn

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry
    Series Producer: Dan Hardoon
    Series Editor: Rami Tzabar, TellTale Studios
    Commissioner and Producer: Emma Yousif
    Production support: Mo Dawoud
    Music composition: Eleni Shaw
    Camera Director and Video Editor: Tommy Bruce
    Audio Engineer: Darren Carikas
    Video Studio Production: Nicholas Duke
    Video Editor: Bilal Merhi
    Video Production Design: James Barton
    Visual Identity and Design: Eleanor Tomlinson
    Commissioned by Google DeepMind

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI, and discloses Hawking’s parting message.

    For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

    Interviewee: DeepMind co-founder and CEO, Demis Hassabis

    Credits

    Presenter: Hannah Fry

    Series Producer: Dan Hardoon

    Production support: Jill Achineku

    Sound design: Emma Barnaby

    Music composition: Eleni Shaw

    Sound Engineer: Nigel Appleton

    Editor: David Prest

    Commissioned by DeepMind

    Thank you to everyone who made this season possible!

    Further reading:

    DeepMind, The Podcast: https://deepmind.com/blog/article/welcome-to-the-deepmind-podcast

    DeepMind’s Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw

    Riemann hypothesis, Wikipedia: https://en.wikipedia.org/wiki/Riemann_hypothesis

    Using AI to accelerate scientific discovery by Demis Hassabis, Kendrew Lecture 2021: https://www.youtube.com/watch?v=sm-VkgVX-2o

    Protein Folding & the Next Technological Revolution by Demis Hassabis, Bloomberg: https://www.youtube.com/watch?v=vhd4ENh5ON4

    The Algorithm, MIT Technology Review: https://forms.technologyreview.com/newsletters/ai-the-algorithm/

    Machine learning resources, The Royal Society: https://royalsociety.org/topics-policy/education-skills/teacher-resources-and-opportunities/resources-for-teachers/resources-machine-learning/

    How to get empowered, not overpowered, by AI, TED: https://www.youtube.com/watch?v=2LRwvU6gEbA

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • AI needs to benefit everyone, not just those who build it. But fulfilling this promise requires careful thought before new technologies are built and released into the world. In this episode, Hannah delves into some of the most pressing and difficult ethical and social questions surrounding AI today. She explores complex issues like racial and gender bias and the misuse of AI technologies, and hears why diversity and representation is vital for building technology that works for all.

    For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

    Interviewees: DeepMind’s Sasha Brown, William Isaac, Shakir Mohamed, Kevin McKee & Obum Ekeke

    Credits

    Presenter: Hannah Fry

    Series Producer: Dan Hardoon

    Production support: Jill Achineku

    Sound design: Emma Barnaby

    Music composition: Eleni Shaw

    Sound Engineer: Nigel Appleton

    Editor: David Prest

    Commissioned by DeepMind

    Thank you to everyone who made this season possible!

    Further reading:

    What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias, The Verge: https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias

    Tuskegee Syphilis Study, Wikipedia: https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study

    Ethics & Society, DeepMind: https://deepmind.com/about/ethics-and-society

    Row over AI that 'identifies gay faces', BBC: https://www.bbc.co.uk/news/technology-41188560

    The Trevor Project: https://www.thetrevorproject.org/

    AI takes root, helping farmers identify diseased plants, Google: https://www.blog.google/technology/ai/ai-takes-root-helping-farmers-identity-diseased-plants/

    How Can You Use Technology to Support a Culture of Inclusion and Diversity?, myHRfuture: https://www.myhrfuture.com/blog/2019/7/16/how-can-you-use-technology-to-support-a-culture-of-inclusion-and-diversity

    Scholarships at DeepMind: https://www.deepmind.com/scholarships

    AI, Ain’t I a Woman? Joy Buolamwini, YouTube: https://www.youtube.com/watch?v=QxuyfWoVV98

    How to be Human in the Age of the Machine, Hannah Fry: https://royalsociety.org/grants-schemes-awards/book-prizes/science-book-prize/2018/hello-world/

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • AI doesn’t just exist in the lab, it’s already solving a range of problems in the real world. In this episode, Hannah encounters a realistic recreation of her voice by WaveNet, the voice synthesising system that powers the Google Assistant and helps people with speech difficulties and illnesses regain their voices. Hannah also discovers how ‘deepfake’ technology can be used to improve weather forecasting and how DeepMind researchers are collaborating with Liverpool Football Club, aiming to take sports to the next level.

    For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

    Interviewees: DeepMind’s Demis Hassabis, Raia Hadsell, Karl Tuyls, Zach Gleicher & Jackson Broshear; Niall Robinson of the UK Met Office

    Credits

    Presenter: Hannah Fry

    Series Producer: Dan Hardoon

    Production support: Jill Achineku

    Sound design: Emma Barnaby

    Music composition: Eleni Shaw

    Sound Engineer: Nigel Appleton

    Editor: David Prest

    Commissioned by DeepMind

    Thank you to everyone who made this season possible!

    Further reading:

    A generative model for raw audio, DeepMind: https://deepmind.com/blog/article/wavenet-generative-model-raw-audio

    WaveNet case study, DeepMind: https://deepmind.com/research/case-studies/wavenet

    Using WaveNet technology to reunite speech-impaired users with their original voices, DeepMind: https://deepmind.com/blog/article/Using-WaveNet-technology-to-reunite-speech-impaired-users-with-their-original-voices

    Project Euphonia, Google Research: https://sites.research.google/euphonia/about/

    Nowcasting the next hour of rain, DeepMind: https://deepmind.com/blog/article/nowcasting

    Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai

    Advancing sports analytics through AI, DeepMind: https://deepmind.com/blog/article/advancing-sports-analytics-through-ai

    Met Office: https://www.metoffice.gov.uk/

    The village ‘washed on to the map’, BBC: https://www.bbc.co.uk/news/uk-england-cornwall-28523053

    Michael Fish got the storm of 1987 wrong, Sky News: https://news.sky.com/story/michael-fish-got-the-storm-of-1987-wrong-but-modern-supercomputers-may-have-missed-it-too-11076659

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Step inside DeepMind's laboratories and you'll find researchers studying DNA to understand the mysteries of life, seeking new ways to use nuclear energy, or putting AI to the test in mind-bending areas of maths. In this episode, Hannah meets Pushmeet Kohli, the head of science at DeepMind, to understand how AI is accelerating scientific progress. Listeners also join Hannah on a [virtual] safari in the Serengeti in East Africa to find out how researchers are using AI to conserve wildlife in one of the world’s most spectacular ecosystems.

    For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

    Interviewees: DeepMind’s Demis Hassabis, Pushmeet Kohli & Sarah Jane Dunn; Meredith Palmer of Princeton University

    Credits

    Presenter: Hannah Fry

    Series Producer: Dan Hardoon

    Production support: Jill Achineku

    Sound design: Emma Barnaby

    Music composition: Eleni Shaw

    Sound Engineer: Nigel Appleton

    Editor: David Prest

    Commissioned by DeepMind

    Thank you to everyone who made this season possible!

    Further reading:

    Using AI for scientific discovery, DeepMind: https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery

    DeepMind’s Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw

    The AI revolution in scientific research, The Royal Society: https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf

    DOE Explains...Tokamaks, Office of Science: https://www.energy.gov/science/doe-explainstokamaks

    How AI Accidentally Learned Ecology by Playing StarCraft, Discover: https://www.discovermagazine.com/technology/how-ai-accidentally-learned-ecology-by-playing-starcraft

    Google AI can identify wildlife from trap-camera footage, VentureBeat: https://venturebeat.com/2019/12/17/googles-ai-can-identify-wildlife-from-trap-camera-footage-with-up-to-98-6-accuracy/

    Snapshot Serengeti, Zooniverse: https://www.zooniverse.org/projects/zooniverse/snapshot-serengeti

    The Human Genome Project, National Human Genome Research Institute: https://www.genome.gov/human-genome-project

    Exploring the beauty of pure mathematics in novel ways, DeepMind: https://deepmind.com/blog/article/exploring-the-beauty-of-pure-mathematics-in-novel-ways

    Predicting gene expression with AI, DeepMind: https://deepmind.com/blog/article/enformer

    Using machine learning to accelerate ecological research, DeepMind: https://deepmind.com/blog/article/using-machine-learning-to-accelerate-ecological-research

    Accelerating fusion science through learned plasma control, DeepMind: https://deepmind.com/blog/article/Accelerating-fusion-science-through-learned-plasma-control

    Simulating matter on the quantum scale with AI, DeepMind: https://deepmind.com/blog/article/Simulating-matter-on-the-quantum-scale-with-AI

    How AI is helping the natural sciences, Nature: https://www.nature.com/articles/d41586-021-02762-6

    Inside DeepMind's epic mission to solve science's trickiest problem, WIRED: https://www.wired.co.uk/article/deepmind-protein-folding

    How Artificial Intelligence Is Changing Science, Quanta: https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/

    Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes. 

     

  • Hannah meets DeepMind co-founder and chief scientist Shane Legg, the man who coined the phrase ‘artificial general intelligence’, and explores how it might be built. Why does Shane think AGI is possible? When will it be realised? And what could it look like? Hannah also explores a simple theory of using trial and error to reach AGI and takes a deep dive into MuZero, an AI system which mastered complex board games from chess to Go, and is now generalising to solve a range of important tasks in the real world.

    For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

    Interviewees: DeepMind’s Shane Legg, Doina Precup, Dave Silver & Jackson Broshear

    Credits

    Presenter: Hannah Fry

    Series Producer: Dan Hardoon

    Production support: Jill Achineku

    Sound design: Emma Barnaby

    Music composition: Eleni Shaw

    Sound Engineer: Nigel Appleton

    Editor: David Prest

    Commissioned by DeepMind

    Thank you to everyone who made this season possible!

    Further reading:

    Real-world challenges for AGI, DeepMind: https://deepmind.com/blog/article/real-world-challenges-for-agi

    An executive primer on artificial general intelligence, McKinsey: https://www.mckinsey.com/business-functions/operations/our-insights/an-executive-primer-on-artificial-general-intelligence

    Mastering Go, chess, shogi and Atari without rules, DeepMind: https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules

    What is AGI?, Medium: https://medium.com/intuitionmachine/what-is-agi-99cdb671c88e

    A Definition of Machine Intelligence by Shane Legg, arXiv: https://arxiv.org/abs/0712.3329

    Reward is enough by David Silver, ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0004370221000862


  • Do you need a body to have intelligence? And can one exist without the other? Hannah takes listeners behind the scenes of DeepMind's robotics lab in London where she meets robots that are trying to independently learn new skills, and explores why physical intelligence is a necessary part of intelligence. Along the way, she finds out how researchers trained their robots at home during lockdown, uncovers why so many robotics demonstrations are faking it, and what it takes to train a robotic football team.

    For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

    Interviewees: DeepMind’s Raia Hadsell, Viorica Patraucean, Jan Humplik, Akhil Raju & Doina Precup

    Credits

    Presenter: Hannah Fry

    Series Producer: Dan Hardoon

    Production support: Jill Achineku

    Sound design: Emma Barnaby

    Music composition: Eleni Shaw

    Sound Engineer: Nigel Appleton

    Editor: David Prest

    Commissioned by DeepMind

    Thank you to everyone who made this season possible!

    Further reading:

    Stacking our way to more general robots, DeepMind: https://deepmind.com/blog/article/stacking-our-way-to-more-general-robots

    Researchers Propose Physical AI As Key To Lifelike Robots, Forbes: https://www.forbes.com/sites/simonchandler/2020/11/11/researchers-propose-physical-ai-as-key-to-lifelike-robots/

    The robots going where no human can, BBC: https://www.bbc.co.uk/news/av/technology-41584738

    The Robot Assault On Fukushima, WIRED: https://www.wired.com/story/fukushima-robot-cleanup/

    Leaps, Bounds, and Backflips, Boston Dynamics: http://blog.bostondynamics.com/atlas-leaps-bounds-and-backflips

    Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai


  • Cooperation is at the heart of our society. Inventing the railway, giving birth to the Renaissance, and creating the Covid-19 vaccine all required people to combine efforts. But cooperation is so much more: it governs our education systems, healthcare, and food production. In this episode, Hannah meets the researchers working on cooperative AI and hears about their work and its influences, ranging from the famous American psychologist (and pigeon trainer) B.F. Skinner to the strategic board game Diplomacy.

    For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected]

    Interviewees: DeepMind’s Thore Graepel, Kevin McKee, Doina Precup & Laura Weidinger

    Credits

    Presenter: Hannah Fry

    Series Producer: Dan Hardoon

    Production support: Jill Achineku

    Sound design: Emma Barnaby

    Music composition: Eleni Shaw

    Sound Engineer: Nigel Appleton

    Editor: David Prest

    Commissioned by DeepMind

    Thank you to everyone who made this season possible!

    Further reading:

    Machines must learn to find common ground, Nature: https://www.nature.com/articles/d41586-021-01170-0

    Introduction to Reinforcement Learning, DeepMind: https://www.youtube.com/watch?v=2pWv7GOvuf0

    B.F. Skinner, Wikipedia: https://en.wikipedia.org/wiki/B._F._Skinner

    The Tragedy of the Commons, Wikipedia: https://en.wikipedia.org/wiki/Tragedy_of_the_commons

    Staving Off The Ultimate Tragedy Of The Commons, Forbes: https://www.forbes.com/sites/georgebradt/2021/11/02/staving-off-the-ultimate-tragedy-of-the-commons-by-making-better-complex-decisions-cooperatively-in-glasgow/

    Understanding Agent Cooperation, DeepMind: https://deepmind.com/blog/article/understanding-agent-cooperation

    The emergence of complex cooperative agents, DeepMind: https://deepmind.com/blog/article/capture-the-flag-science
