Episodes

  • Episode 138

    I spoke with Meredith Morris about:

    * The intersection of AI and HCI and why we need more cross-pollination between AI and adjacent fields

    * Disability studies and AI

    * Generative ghosts and technological determinism

    * Developing a useful definition of AGI

    I didn’t get to record an intro for this episode since I’ve been sick.

    Enjoy!

    Meredith is Director for Human-AI Interaction Research at Google DeepMind and an Affiliate Professor in the Paul G. Allen School of Computer Science & Engineering and the Information School at the University of Washington, where she participates in the dub research consortium. Her work spans human-computer interaction (HCI), human-centered AI, human-AI interaction, computer-supported cooperative work (CSCW), social computing, and accessibility. She has been recognized as an ACM Fellow and a member of the ACM SIGCHI Academy for her contributions to HCI.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Meredith’s influences and earlier work

    * (03:00) Distinctions between AI and HCI

    * (05:56) Maturity of fields and cross-disciplinary work

    * (09:03) Technology and ends

    * (10:37) Unique aspects of Meredith’s research direction

    * (12:55) Forms of knowledge production in interdisciplinary work

    * (14:08) Disability, Bias, and AI

    * (18:32) LaMPost and using LMs for writing

    * (20:12) Accessibility approaches for dyslexia

    * (22:15) Awareness of AI and perceptions of autonomy

    * (24:43) The software model of personhood

    * (28:07) Notions of intelligence, normative visions and disability studies

    * (32:41) Disability categories and learning systems

    * (37:24) Bringing more perspectives into CS research and re-defining what counts as CS research

    * (39:36) Training interdisciplinary researchers, blurring boundaries in academia and industry

    * (43:25) Generative Agents and public imagination

    * (45:13) The state of ML conferences, the need for more cross-pollination

    * (46:42) Prestige in conferences, the move towards more cross-disciplinary work

    * (48:52) Joon Park Appreciation

    * (49:51) Training interdisciplinary researchers

    * (53:20) Generative Ghosts and technological determinism

    * (57:06) Examples of generative ghosts and clones, relationships to agentic systems

    * (1:00:39) Reasons for wanting generative ghosts

    * (1:02:25) Questions of consent for generative clones and ghosts

    * (1:05:01) Labor involved in maintaining generative ghosts, psychological tolls

    * (1:06:25) Potential religious and spiritual significance of generative systems

    * (1:10:19) Anthropomorphization

    * (1:12:14) User experience and cognitive biases

    * (1:15:24) Levels of AGI

    * (1:16:13) Defining AGI

    * (1:23:20) World models and AGI

    * (1:26:16) Metacognitive abilities in AGI

    * (1:30:06) Towards Bidirectional Human-AI Alignment

    * (1:30:55) Pluralistic value alignment

    * (1:32:43) Meredith’s perspective on deploying AI systems

    * (1:36:09) Meredith’s advice for younger interdisciplinary researchers

    Links:

    * Meredith’s homepage, Twitter, and Google Scholar

    * Papers

    * Mediating Group Dynamics through Tabletop Interface Design

    * SearchTogether: An Interface for Collaborative Web Search

    * AI and Accessibility: A Discussion of Ethical Considerations

    * Disability, Bias, and AI

    * LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia

    * Generative Ghosts

    * Levels of AGI



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 137

    I spoke with Davidad Dalrymple about:

    * His perspectives on AI risk

    * ARIA (the UK’s Advanced Research and Invention Agency) and its Safeguarded AI Programme

    Enjoy—and let me know what you think!

    Davidad is a Programme Director at ARIA. He was most recently a Research Fellow in technical AI safety at Oxford. He co-invented the top-40 cryptocurrency Filecoin, led an international neuroscience collaboration, and was a senior software engineer at Twitter and multiple startups.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:36) Calibration and optimism about breakthroughs

    * (03:35) Calibration and AGI timelines, effects of AGI on humanity

    * (07:10) Davidad’s thoughts on the Orthogonality Thesis

    * (10:30) Understanding how our current direction relates to AGI and breakthroughs

    * (13:33) What Davidad thinks is needed for AGI

    * (17:00) Extracting knowledge

    * (19:01) Cyber-physical systems and modeling frameworks

    * (20:00) Continuities between Davidad’s earlier work and ARIA

    * (22:56) Path dependence in technology, race dynamics

    * (26:40) More on Davidad’s perspective on what might go wrong with AGI

    * (28:57) Vulnerable world, interconnectedness of computers and control

    * (34:52) Formal verification and world modeling, Open Agency Architecture

    * (35:25) The Semantic Sufficiency Hypothesis

    * (39:31) Challenges for modeling

    * (43:44) The Deontic Sufficiency Hypothesis and mathematical formalization

    * (49:25) Oversimplification and quantitative knowledge

    * (53:42) Collective deliberation in expressing values for AI

    * (55:56) ARIA’s Safeguarded AI Programme

    * (59:40) Anthropic’s ASL levels

    * (1:03:12) Guaranteed Safe AI

    * (1:03:38) AI risk and (in)accurate world models

    * (1:09:59) Levels of safety specifications for world models and verifiers — steps to achieve high safety

    * (1:12:00) Davidad’s portfolio research approach and funding at ARIA

    * (1:15:46) Earlier concerns about ARIA — Davidad’s perspective

    * (1:19:26) Where to find more information on ARIA and the Safeguarded AI Programme

    * (1:20:44) Outro

    Links:

    * Davidad’s Twitter

    * ARIA homepage

    * Safeguarded AI Programme

    * Papers

    * Guaranteed Safe AI

    * Davidad’s Open Agency Architecture for Safe Transformative AI

    * Dioptics: a Common Generalization of Open Games and Gradient-Based Learners (2019)

    * Asynchronous Logic Automata (2008)



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 136

    I spoke with Clive Thompson about:

    * How he writes

    * Writing about the climate and biking across the US

    * Technology culture and persistent debates in AI

    * Poetry

    Enjoy—and let me know what you think!

    Clive is a journalist who writes about science and technology. He is a contributing writer for Wired magazine, and is currently writing his next book about micromobility and cycling across the US.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (01:07) Clive’s life as a Tarantino movie

    * (03:07) Boring life and interesting art, life as material for art

    * (10:25) Cycling across the US — Clive’s new book on mobility and decarbonization

    * (15:07) Turning inward in writing

    * (27:21) Including personal experience in writing

    * (31:53) Personal and less personal writing

    * (36:08) Conveying uncertainty and the “voice from nowhere” in traditional journalism

    * (41:10) Finding the natural end of a piece

    * (1:02:10) Writing routine

    * (1:05:08) Theories of change in Clive’s writing

    * (1:12:33) How Clive saw things before the rest of us

    * (1:27:00) Automation in software engineering

    * (1:31:40) The anthropology of coders, poetry as a framework

    * (1:43:50) Proust discourse

    * (1:45:00) Technology culture in NYC + interaction between the tech world and other worlds

    * (1:50:30) Technological developments Clive wants to see happen (free ideas)

    * (2:01:11) Clive’s argument for memorizing poetry

    * (2:09:24) How Clive finds poetry

    * (2:18:03) Clive’s pursuit of freelance writing and making compromises

    * (2:27:25) Outro

    Links:

    * Clive’s Twitter and website

    * Selected writing

    * The Attack of the Incredible Grading Machine (Lingua Franca, 1999)

    * The Know-It-All Machine (Lingua Franca, 2001)

    * How to teach AI some common sense (Wired, 2018)

    * Blogs to Riches (NY Mag, 2006)

    * Clive vs. Jonathan Franzen on whether the internet is good for writing (The Chronicle of Higher Education, 2013)

    * The Minecraft Generation (New York Times, 2016)

    * What AI College Exam Proctors are Really Teaching Our Kids (Wired, 2020)

    * Companies Don’t Need to Be Creepy to Make Money (Wired, 2021)

    * Is Sucking Carbon Out of the Air the Solution to Our Climate Crisis? (Mother Jones, 2021)

    * AI Shouldn’t Compete with Workers—It Should Supercharge Them (Wired, 2022)

    * Back to BASIC—the Most Consequential Programming Language in the History of Computing (Wired, 2024)



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 136

    I spoke with Judy Fan about:

    * Our use of physical artifacts for sensemaking

    * Why cognitive tools can be a double-edged sword

    * Her approach to scientific inquiry and how that approach has developed

    Enjoy—and let me know what you think!

    Judy is Assistant Professor of Psychology at Stanford and director of the Cognitive Tools Lab. Her lab employs converging approaches from cognitive science, computational neuroscience, and artificial intelligence to reverse engineer the human cognitive toolkit, especially how people use physical representations of thought — such as sketches and prototypes — to learn, communicate, and solve problems.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:49) Throughlines and discontinuities in Judy’s research

    * (06:26) “Meaning” in Judy’s research

    * (08:05) Production and consumption of artifacts

    * (13:03) Explanatory questions, why we develop visual artifacts, science as a social enterprise

    * (15:46) Unifying principles

    * (17:45) “Hard limits” to knowledge and optimism

    * (21:47) Tensions in different fields’ forms of sensemaking and establishing truth claims

    * (30:55) Dichotomies and carving up the space of possible hypotheses, conceptual tools

    * (33:22) Cognitive tools and projectivism, simplified models vs. nature

    * (40:28) Scientific training and science as process and habit

    * (45:51) Developing mental clarity about hypotheses

    * (51:45) Clarifying and expressing ideas

    * (1:03:21) Cognitive tools as double-edged

    * (1:14:21) Historical and social embeddedness of tools

    * (1:18:34) How cognitive tools impact our imagination

    * (1:23:30) Normative commitments and the role of cognitive science outside the academy

    * (1:32:31) Outro

    Links:

    * Judy’s Twitter and lab page

    * Selected papers (there are lots!)

    * Overviews

    * Drawing as a versatile cognitive tool (2023)

    * Using games to understand the mind (2024)

    * Socially intelligent machines that learn from humans and help humans learn (2024)

    * Research papers

    * Communicating design intent using drawing and text (2024)

    * Creating ad hoc graphical representations of number (2024)

    * Visual resemblance and interaction history jointly constrain pictorial meaning (2023)

    * Explanatory drawings prioritize functional properties at the expense of visual fidelity (2023)

    * SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction (2023)

    * Parallel developmental changes in children’s production and recognition of line drawings of visual concepts (2023)

    * Learning to communicate about shared procedural abstractions (2021)

    * Visual communication of object concepts at different levels of abstraction (2021)

    * Relating visual production and recognition of objects in the human visual cortex (2020)

    * Collabdraw: an environment for collaborative sketching with an artificial agent (2019)

    * Pragmatic inference and visual abstraction enable contextual flexibility in visual communication (2019)

    * Common object representations for visual production and recognition (2018)



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 135

    I spoke with L. M. Sacasas about:

    * His writing and intellectual influences

    * The value of asking hard questions about technology and our relationship to it

    * What happens when we decide to outsource skills and competency

    * Evolving notions of what it means to be human and questions about how to live a good life

    Enjoy—and let me know what you think!

    Michael is Executive Director of the Christian Study Center of Gainesville, Florida and author of The Convivial Society, a newsletter about technology and society.

    He does some of the best writing on technology I’ve had the pleasure to read, and I highly recommend his newsletter.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (01:12) On podcasts as a medium

    * (06:12) Michael’s writing

    * (12:38) Michael’s intellectual influences, contingency

    * (18:48) Moral seriousness

    * (22:00) Michael’s ambitions for his work

    * (26:17) The value of asking the right questions (about technology)

    * (34:18) Technology use and the “natural” pace of human life

    * (46:40) Outsourcing of skills and competency, engagement with others

    * (55:33) Inevitability narratives and technological determinism, the “Borg Complex”

    * (1:05:10) Notions of what it is to be human, embodiment

    * (1:12:37) Higher cognition vs. the body, dichotomies

    * (1:22:10) The body as a starting point for philosophy, questions about the adoption of new technologies

    * (1:30:01) Enthusiasm about technology and the cultural milieu

    * (1:35:30) Projectivism, desire for knowledge about and control of the world

    * (1:41:22) Positive visions for the future

    * (1:47:11) Outro

    Links:

    * Michael’s Substack: The Convivial Society and his book, The Frailest Thing: Ten Years of Thinking about the Meaning of Technology

    * Michael’s Twitter

    * Essays

    * Humanist Technology Criticism

    * What Does the Critic Love?

    * The Ambling Mind

    * Waste Your Time, Your Life May Depend On It

    * The Work of Art

    * The Stuff of (a Well-Lived) Life



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 134

    I spoke with Pete Wolfendale about:

    * The flaws in longtermist thinking

    * Selections from his new book, The Revenge of Reason

    * Metaphysics

    * What philosophy has to say about reason and AI

    Enjoy—and let me know what you think!

    Pete is an independent philosopher based in Newcastle. Dr. Wolfendale received both his undergraduate degree and his Ph.D. in Philosophy from the University of Warwick. His Ph.D. thesis offered a re-examination of the Heideggerian Seinsfrage, arguing that Heideggerian scholarship has failed to do full justice to its philosophical significance, and supplemented the shortcomings in Heidegger’s thought about Being with an alternative formulation of the question. He is the author of Object-Oriented Philosophy: The Noumenon's New Clothes and The Revenge of Reason. His blog is Deontologistics.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (01:30) Pete’s experience with (para-)academia, incentive structures

    * (10:00) Progress in philosophy and the analytic tradition

    * (17:57) Thinking through metaphysical questions

    * (26:46) Philosophy of science, uncovering categorical properties vs. dispositions

    * (31:55) Structure of thought and the world, epistemological excess

    * (49:31) What reason is, relation to language models, semantic fragmentation of AGI

    * (1:00:55) Neural net interpretability and intervention

    * (1:08:16) World models, architecture and behavior of AI systems

    * (1:12:35) Language acquisition in humans and LMs

    * (1:15:30) Pretraining vs. evolution

    * (1:16:50) Technological determinism

    * (1:18:19) Pete’s thinking on e/acc

    * (1:27:45) Prometheanism vs. e/acc

    * (1:29:39) The Weight of Forever — Pete’s critique of What We Owe the Future

    * (1:30:15) Our rich deontological language and longtermism’s limits

    * (1:43:33) Longtermism and the opacity of desire

    * (1:44:41) Longtermism’s historical narrative and technological determinism, theories of power

    * (1:48:10) The “posthuman” condition, language and techno-linguistic infrastructure

    * (2:00:15) Type-checking and universal infrastructure

    * (2:09:23) Multitudes and selfhood

    * (2:21:12) Definitions of the self and (non-)circularity

    * (2:32:55) Freedom and aesthetics, aesthetic exploration and selfhood

    * (2:52:46) Outro

    Links:

    * Pete’s blog and Twitter

    * Book: The Revenge of Reason

    * Writings / References

    * The Weight of Forever

    * On Neorationalism

    * So, Accelerationism, what’s that all about?



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 133

    I spoke with Peter Lee about:

    * His early work on compiler generation, metacircularity, and type theory

    * Paradoxical problems

    * GPT-4’s impact, Microsoft’s “Sparks of AGI” paper, and responses and criticism

    Enjoy—and let me know what you think!

    Peter is President of Microsoft Research, where he leads the organization and incubates new research-powered products and lines of business in areas such as artificial intelligence, computing foundations, health, and life sciences. Before joining Microsoft in 2010, he was at DARPA, where he established a new technology office that created operational capabilities in machine learning, data science, and computational social science. Prior to that, he was a professor and the head of the computer science department at Carnegie Mellon University. Peter is a member of the National Academy of Medicine and serves on the boards of the Allen Institute for Artificial Intelligence, the Brotman Baty Institute for Precision Medicine, and the Kaiser Permanente Bernard J. Tyson School of Medicine. He served on President Obama’s Commission on Enhancing National Cybersecurity and has testified before both the US House Science and Technology Committee and the US Senate Commerce Committee. With Carey Goldberg and Dr. Isaac Kohane, he is the coauthor of the best-selling book “The AI Revolution in Medicine: GPT-4 and Beyond.” In 2024, Peter was named by Time magazine as one of the 100 most influential people in health and life sciences.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:50) Basic vs. applied research

    * (05:20) Theory and practice in computing

    * (10:28) Traditional denotational semantics and semantics engineering in modern-day systems

    * (16:47) Beauty and practicality

    * (20:40) Metacircularity in the polymorphic lambda calculus: research directions

    * (24:31) Understanding the nature of difficulties with metacircularity

    * (26:30) Difficulties with reflection, classic paradoxes

    * (31:02) Sparks of AGI

    * (31:41) Reproducibility

    * (38:04) Confirming and disconfirming theories, foundational work

    * (42:00) Back and forth between commitments and experimentation

    * (51:01) Dealing with responsibility

    * (56:30) Peter’s picture of AGI

    * (1:01:38) Outro

    Links:

    * Peter’s Twitter, LinkedIn, and Microsoft Research pages

    * Papers and references

    * The automatic generation of realistic compilers from high-level semantic descriptions

    * Metacircularity in the polymorphic lambda calculus

    * A Fresh Look at Combinator Graph Reduction

    * Sparks of AGI

    * Re-envisioning DARPA

    * Fundamental Research in Engineering



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 132

    I spoke with Manuel and Lenore Blum about:

    * Their early influences and mentors

    * The Conscious Turing Machine and what theoretical computer science can tell us about consciousness

    Enjoy—and let me know what you think!

    Manuel is a pioneer in the field of theoretical computer science and the winner of the 1995 Turing Award in recognition of his contributions to the foundations of computational complexity theory and its applications to cryptography and program checking, a mathematical approach to writing programs that check their work. He worked as a professor of computer science at the University of California, Berkeley until 2001. From 2001 to 2018, he was the Bruce Nelson Professor of Computer Science at Carnegie Mellon University.

    Lenore is a Distinguished Career Professor of Computer Science, Emeritus at Carnegie Mellon University and former Professor-in-Residence in EECS at UC Berkeley. She is president of the Association for Mathematical Consciousness Science and a newly elected member of the American Academy of Arts and Sciences. Lenore is internationally recognized for her work in increasing the participation of girls and women in Science, Technology, Engineering, and Math (STEM) fields. She was a founder of the Association for Women in Mathematics, and founding Co-Director (with Nancy Kreinberg) of the Math/Science Network and its Expanding Your Horizons conferences for middle- and high-school girls.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (03:09) Manuel’s interest in consciousness

    * (05:55) More of the story — from memorization to derivation

    * (11:15) Warren McCulloch’s mentorship

    * (14:00) McCulloch’s anti-Freudianism

    * (15:57) More on McCulloch’s influence

    * (27:10) On McCulloch and telling stories

    * (32:35) The Conscious Turing Machine (CTM)

    * (33:55) A last word on McCulloch

    * (35:20) Components of the CTM

    * (39:55) Advantages of the CTM model

    * (50:20) The problem of free will

    * (52:20) On pain

    * (1:01:10) Brainish / CTM’s multimodal inner language, language and thinking

    * (1:13:55) The CTM’s lack of a “central executive”

    * (1:18:10) Empiricism and a self, tournaments in the CTM

    * (1:26:30) Mental causation

    * (1:36:20) Expertise and the CTM model, role of TCS

    * (1:46:30) Dreams and dream experience

    * (1:50:15) Disentangling components of experience from multimodal language

    * (1:56:10) CTM Robot, meaning and symbols, embodiment and consciousness

    * (2:00:35) AGI, CTM and AI processors, capabilities

    * (2:09:30) CTM implications, potential worries

    * (2:17:15) Advice for younger (computer) scientists

    * (2:22:57) Outro

    Links:

    * Manuel’s homepage

    * Lenore’s homepage; find Lenore on Twitter (https://x.com/blumlenore) and LinkedIn (https://www.linkedin.com/in/lenore-blum-1a47224)

    * Articles

    * “The ‘Accidental Activist’ Who Changed the Face of Mathematics” — Ben Brubaker’s Q&A with Lenore

    * “How this Turing-Award-winning researcher became a legendary academic advisor” — Sheon Han’s profile of Manuel

    * Papers (Manuel and Lenore)

    * AI Consciousness is Inevitable: A Theoretical Computer Science Perspective

    * A Theory of Consciousness from a Theoretical Computer Science Perspective: Insights from the Conscious Turing Machine

    * A Theoretical Computer Science Perspective on Consciousness and Artificial General Intelligence

    * References (McCulloch)

    * Embodiments of Mind

    * Rebel Genius



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 131

    I spoke with Professor Kevin Dorst about:

    * Subjective Bayesianism and epistemology foundations

    * What happens when you’re uncertain about your evidence

    * Why it’s rational for people to polarize on political matters

    Enjoy—and let me know what you think!

    Kevin is an Associate Professor in the Department of Linguistics and Philosophy at MIT. He works at the border between philosophy and social science, focusing on rationality.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (01:15) When do Bayesians need theorems?

    * (05:52) Foundations of epistemology, metaethics, formal models, error theory

    * (09:35) Extreme views and error theory, arguing for/against opposing positions

    * (13:35) Changing focuses in philosophy — pragmatic pressures

    * (19:00) Kevin’s goals through his research and work

    * (25:10) Structural factors in coming to certain (political) beliefs

    * (30:30) Acknowledging limited resources, heuristics, imperfect rationality

    * (32:51) Hindsight Bias is Not a Bias

    * (33:30) The argument

    * (35:15) On eating cereal and symmetric properties of evidence

    * (39:45) Colloquial notions of hindsight bias, time and evidential support

    * (42:45) An example

    * (48:02) Higher-order uncertainty

    * (48:30) Explicitly modeling higher-order uncertainty

    * (52:50) Another example (spoons)

    * (54:55) Game theory, iterated knowledge, even higher order uncertainty

    * (58:00) Uncertainty and philosophy of mind

    * (1:01:20) Higher-order evidence about reliability and rationality

    * (1:06:45) Being Rational and Being Wrong

    * (1:09:00) Setup on calibration and overconfidence

    * (1:12:30) The need for average rational credence — normative judgments about confidence and realism/anti-realism

    * (1:15:25) Quasi-realism about average rational credence?

    * (1:19:00) Classic epistemological paradoxes/problems — lottery paradox, epistemic luck

    * (1:25:05) Deference in rational belief formation, uniqueness and permissivism

    * (1:39:50) Rational Polarization

    * (1:40:00) Setup

    * (1:37:05) Epistemic nihilism, expanded confidence akrasia

    * (1:40:55) Ambiguous evidence and confidence akrasia

    * (1:46:25) Ambiguity in understanding and notions of rational belief

    * (1:50:00) Claims about rational sensitivity — what stories we can tell given evidence

    * (1:54:00) Evidence vs presentation of evidence

    * (2:01:20) ChatGPT and the case for human irrationality

    * (2:02:00) Is ChatGPT replicating human biases?

    * (2:05:15) Simple instruction tuning and an alternate story

    * (2:10:22) Kevin’s aspirations with his work

    * (2:15:13) Outro

    Links:

    * Professor Dorst’s homepage and Twitter

    * Papers

    * Modest Epistemology

    * Hedden: Hindsight bias is not a bias

    * Higher-order evidence + (Almost) all evidence is higher-order evidence

    * Being Rational and Being Wrong

    * Rational Polarization

    * ChatGPT and human irrationality



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 130

    I spoke with David Pfau about:

    * Spectral learning and ML

    * Learning to disentangle manifolds and (projective) representation theory

    * Deep learning for computational quantum mechanics

    * Picking and pursuing research problems and directions

    David’s work is really (times k for some very large value of k) interesting—I’ve been inspired to descend a number of rabbit holes because of it.

    (if you listen to this episode, you might become as cool as this guy)

    While I’m at it — I’m still hovering around 40 ratings on Apple Podcasts. It’d mean a lot if you’d consider helping me bump that up!

    Enjoy—and let me know what you think!

    David is a staff research scientist at Google DeepMind. He is also a visiting professor at Imperial College London in the Department of Physics, where he supervises work on applications of deep learning to computational quantum mechanics. His research interests span artificial intelligence, machine learning and scientific computing.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:52) David Pfau the “critic”

    * (02:05) Scientific applications of deep learning — David’s interests

    * (04:57) Brain / neural network analogies

    * (09:40) Modern ML systems and theories of the brain

    * (14:19) Desirable properties of theories

    * (18:07) Spectral Inference Networks

    * (19:15) Connections to FermiNet / computational physics, a series of papers

    * (33:52) Deep slow feature analysis — interpretability and findings on eigenfunctions

    * (39:07) Following up on eigenfunctions (there are indeed only so many hours in a day; I have been asking the Substack people if they can ship 40-hour days, but I don’t think they’ve gotten to it yet)

    * (42:17) Power iteration and intuitions

    * (45:23) Projective representation theory

    * (46:00) ???

    * (46:54) Geomancer and learning to decompose a manifold from data

    * (47:45) we consider the question of whether you will spend 90 more minutes of this podcast episode (there are not 90 more minutes left in this podcast episode, but there could have been)

    * (1:08:47) Learning embeddings

    * (1:11:12) The “unexpected emergent property” of Geomancer

    * (1:14:43) Learned embeddings and disentangling and preservation of topology

    * n/b I still haven’t managed to do this in colab because I keep crashing my instance when I use s3o4d :(

    * (1:21:07) What’s missing from the ~ current (deep learning) paradigm ~

    * (1:29:04) LLMs as swiss-army knives

    * (1:32:05) RL and human learning — TD learning in the brain

    * (1:37:43) Models that cover the Pareto Front (image below)

    * (1:46:54) AI accelerators and doubling down on transformers

    * (1:48:27) On Slow Research — chasing big questions and what makes problems attractive

    * (1:53:50) Future work on Geomancer

    * (1:55:35) Finding balance in pursuing interesting and lucrative work

    * (2:00:40) Outro

    Links:

    * Papers

    * Natural Quantum Monte Carlo Computation of Excited States (2023)

    * Making sense of raw input (2021)

    * Integrable Nonparametric Flows (2020)

    * Disentangling by Subspace Diffusion (2020)

    * Ab initio solution of the many-electron Schrödinger equation with deep neural networks (2020)

    * Spectral Inference Networks (2018)

    * Connecting GANs and Actor-Critic Methods (2016)

    * Learning Structure in Time Series for Neuroscience and Beyond (2015, dissertation)

    * Robust learning of low-dimensional dynamics from large neural ensembles (2013)

    * Probabilistic Deterministic Infinite Automata (2010)

    * Other

    * On Slow Research

    * “I just want to put this out here so that no one ever says ‘we can just get around the data limitations of LLMs with self-play’ ever again.”



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 129

    I spoke with Dan Hart and Michelle Michael about:

    * Developing NSWEduChat, an AI-powered chatbot designed and delivered by the NSW Department of Education for students and teachers.

    * The challenges in effectively teaching students as technology develops

    * Understanding and defining the importance of the classroom

    Enjoy—and let me know what you think!

    Dan Hart is Head of AI, and Michelle Michael is Director of Educational Support and Rural Initiatives at the New South Wales (NSW) Department of Education.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:48) How NSWEduChat came to be, educational principles for AI use

    * (02:37) Educational environment in New South Wales

    * (04:41) How educators have adapted to new challenges for teaching and assessment

    * (07:47) Considering technology advancement while teaching and assessing students

    * (12:14) Educating teachers and students about how to use AI tools

    * (15:03) AI in the classroom and enabling teachers

    * (19:44) Product-first thinking for educational AI

    * (22:15) Red teaming and testing

    * (24:02) Benchmarking, chatbots as an assistant

    * (26:35) The importance of the classroom

    * (28:10) Media coverage and hype

    * (30:35) Measurement and the benchmarking process/methodology

    * (34:50) Principles for how chatbots should interact with students

    * (44:29) Producing good educational outcomes at scale

    * (46:41) Operating with speed and effectiveness while implementing governance

    * (49:03) How the experience of building technologies evolves

    * (51:45) Identifying good technologists and educators for development and use

    * (55:07) Teaching standards and how AI impacts teachers

    * (57:01) How technologists incorporate teaching standards and expertise in their work

    * (1:00:03) NSWEduChat model details

    * (1:02:55) Value alignment for NSWEduChat

    * (1:05:40) Practicing caution in filtering chatbot responses

    * (1:07:35) Equity and personalized instruction — how NSWEduChat can help

    * (1:10:19) Helping students become “the students they could be”

    * (1:13:39) Outro

    Links:

    * NSWEduChat

    * Guardian article on NSWEduChat



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 129

    I spoke with Kristin Lauter about:

    * Elliptic curve cryptography and homomorphic encryption

    * Standardizing cryptographic protocols

    * Machine Learning on encrypted data

    * Attacking post-quantum cryptography with AI

    Enjoy—and let me know what you think!

    Kristin is Senior Director of FAIR Labs North America (2022–present), based in Seattle. Her current research areas are AI4Crypto and Private AI. She joined FAIR (Facebook AI Research) in 2021, after 22 years at Microsoft Research (MSR), where she was Partner Research Manager on the senior leadership team of MSR Redmond. Before joining Microsoft in 1999, she was Hildebrandt Assistant Professor of Mathematics at the University of Michigan (1996–1999). She has been an Affiliate Professor of Mathematics at the University of Washington since 2008, and received her BA (1990), MS (1991), and PhD (1996) in Mathematics from the University of Chicago. She is best known for her work on Elliptic Curve Cryptography, Supersingular Isogeny Graphs in Cryptography, Homomorphic Encryption (SEALcrypto.org), Private AI, and AI4Crypto. She served as President of the Association for Women in Mathematics from 2015 to 2017 and on the Council of the American Mathematical Society from 2014 to 2017.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (01:10) Llama 3 and encrypted data — where do we want to be?

    * (04:20) Tradeoffs: individual privacy vs. aggregated value in e.g. social media forums

    * (07:48) Kristin’s shift in views on privacy

    * (09:40) Earlier work on elliptic curve cryptography — applications and theory

    * (10:50) Inspirations from algebra, number theory, and algebraic geometry

    * (15:40) On algebra vs. analysis and on clear thinking

    * (18:38) Elliptic curve cryptography and security, algorithms and concrete running time

    * (21:31) Cryptographic protocols and setting standards

    * (26:36) Supersingular isogeny graphs (and higher-dimensional supersingular isogeny graphs)

    * (32:26) Hard problems for cryptography and finding new problems

    * (36:42) Guaranteeing security for cryptographic protocols and mathematical foundations

    * (40:15) Private AI: Crypto-Nets / running neural nets on homomorphically encrypted data

    * (42:10) Polynomial approximations, activation functions, and expressivity

    * (44:32) Scaling up, Llama 2 inference on encrypted data

    * (46:10) Transitioning between MSR and FAIR, industry research

    * (52:45) An efficient algorithm for integer lattice reduction (AI4Crypto)

    * (56:23) Local minima, convergence and limit guarantees, scaling

    * (58:27) SALSA: Attacking Lattice Cryptography with Transformers

    * (58:38) Learning With Errors (LWE) vs. standard ML assumptions

    * (1:02:25) Powers of small primes and faster learning

    * (1:04:35) LWE and linear regression on a torus

    * (1:07:30) Secret recovery algorithms and transformer accuracy

    * (1:09:10) Interpretability / encoding information about secrets

    * (1:09:45) Future work / scaling up

    * (1:12:08) Reflections on working as a mathematician among technologists

    Links:

    * Kristin’s Meta, Wikipedia, Google Scholar, and Twitter pages

    * Papers and sources mentioned/referenced:

    * The Advantages of Elliptic Curve Cryptography for Wireless Security (2004)

    * Cryptographic Hash Functions from Expander Graphs (2007, introducing Supersingular Isogeny Graphs)

    * Families of Ramanujan Graphs and Quaternion Algebras (2008 — the higher-dimensional analogues of Supersingular Isogeny Graphs)

    * Cryptographic Cloud Storage (2010)

    * Can homomorphic encryption be practical? (2011)

    * ML Confidential: Machine Learning on Encrypted Data (2012)

    * CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy (2016)

    * A community effort to protect genomic data sharing, collaboration and outsourcing (2017)

    * The Homomorphic Encryption Standard (2022)

    * Private AI: Machine Learning on Encrypted Data (2022)

    * SALSA: Attacking Lattice Cryptography with Transformers (2022)

    * SalsaPicante: A Machine Learning Attack on LWE with Binary Secrets

    * SALSA VERDE: a machine learning attack on LWE with sparse small secrets

    * Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors

    * The cool and the cruel: separating hard parts of LWE secrets

    * An efficient algorithm for integer lattice reduction (2023)



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 128

    I spoke with Sergiy Nesterenko about:

    * Developing an automated system for designing PCBs

    * Difficulties in human and automated PCB design

    * Building a startup at the intersection of different areas of expertise

    By the way — I hit 40 ratings on Apple Podcasts (and am at 66 on Spotify). It’d mean a lot (really, a lot) if you’d consider leaving a rating or a review. I read everything, and it’s very heartening and helpful to hear what you think.

    Enjoy, and let me know what you think!

    Sergiy is founder and CEO of Quilter. He spent five years at SpaceX developing radiation-hardened avionics for the second stages of the Falcon 9 and Falcon Heavy rockets, before discovering a big problem: designing printed circuit boards for all the electronics in these rockets was tedious, manual, and error-prone. So in 2019, he founded Quilter to build the next generation of AI-powered tooling for electrical engineers.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)

    Reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:45) Quilter origins and difficulties in designing PCBs

    * (04:12) PCBs and schematic implementations

    * (06:40) Iteration cycles and simulations

    * (08:35) Octilinear traces and first-principles design for PCBs

    * (12:38) The design space of PCBs

    * (15:27) Benchmarks for PCB design

    * (20:05) RL and PCB design

    * (22:48) PCB details, track widths

    * (25:09) Board functionality and aesthetics

    * (27:53) PCB designers and automation

    * (30:24) Quilter as a compiler

    * (33:56) Gluing social worlds and bringing together expertise

    * (36:00) Process knowledge vs. first-principles thinking

    * (42:05) Example boards

    * (44:45) Auto-routers for PCBs

    * (48:43) Difficulties for scaling to larger boards

    * (50:42) Customers and skepticism

    * (53:42) On experiencing negative feedback

    * (56:42) Maintaining stamina while building Quilter

    * (1:00:00) Endgame for Quilter and future directions

    * (1:03:24) Outro

    Links:

    * Quilter homepage

    * Other pages/features mentioned:

    * Thin-to-thick traces

    * Octilinear trace routing

    * Comment from Tom Fleet



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 127

    I spoke with Christopher Thi Nguyen about:

    * How we lose control of our values

    * The tradeoffs of legibility, aggregation, and simplification

    * Gamification and its risks

    Enjoy—and let me know what you think!

    As of July 2020, C. Thi Nguyen is Associate Professor of Philosophy at the University of Utah. His research focuses on how social structures and technology can shape our rationality and our agency. He has published on trust, expertise, group agency, community art, cultural appropriation, aesthetic value, echo chambers, moral outrage porn, and games. He received his PhD from UCLA. Once, he was a food writer for the Los Angeles Times.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)

    Reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (01:10) The ubiquity of James C. Scott

    * (06:03) Legibility and measurement

    * (12:50) Value capture, classes and measurement

    * (17:30) Political value choice in ML

    * (23:30) Why value collapse happens

    * (33:00) Blackburn, “Hume and Thick Connexions” — projectivism and legibility

    * (36:20) Heuristics and decision-making

    * (40:08) Institutional classification systems

    * (46:55) Back to Hume

    * (48:27) Epistemic arms races, stepping outside our conceptual architectures

    * (56:40) The “what to do” question

    * (1:04:00) Gamification, aesthetic engagement

    * (1:14:51) Echo chambers and defining utility

    * (1:22:10) Progress, AGI millenarianism

    * (disclaimer: I don’t know what’s going to happen with the world, either.)

    * (1:26:04) Parting visions

    * (1:30:02) Outro

    Links:

    * Christopher’s Twitter and homepage

    * Games: Agency as Art

    * Papers referenced

    * Transparency is Surveillance

    * Games and the art of agency

    * Autonomy and Aesthetic Engagement

    * Art as a Shelter from Science

    * Value Capture

    * Hostile Epistemology

    * Hume and Thick Connexions (Simon Blackburn)



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 126

    I spoke with Vivek Natarajan about:

    * Improving access to medical knowledge with AI

    * How an LLM for medicine should behave

    * Aspects of training Med-PaLM and AMIE

    * How to facilitate appropriate amounts of trust in users of medical AI systems

    Vivek Natarajan is a Research Scientist at Google Health AI, advancing biomedical AI to help scale world-class healthcare to everyone. Vivek is particularly interested in building large language models and multimodal foundation models for biomedical applications, and leads the Google Brain moonshot behind Med-PaLM, Google's flagship medical large language model. Med-PaLM has been featured in Scientific American, The Economist, STAT News, CNBC, Forbes, and New Scientist, among others.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)

    Reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:35) The concept of an “AI doctor”

    * (06:54) Accessibility to medical expertise

    * (10:31) Enabling doctors to do better/different work

    * (14:35) Med-PaLM

    * (15:30) Instruction tuning, desirable traits in LLMs for medicine

    * (23:41) Axes for evaluation of medical QA systems

    * (30:03) Medical LLMs and scientific consensus

    * (35:32) Demographic data and patient interventions

    * (40:14) Data contamination in Med-PaLM

    * (42:45) Grounded claims about capabilities

    * (45:48) Building trust

    * (50:54) Genetic Discovery enabled by a LLM

    * (51:33) Novel hypotheses in genetic discovery

    * (57:10) Levels of abstraction for hypotheses

    * (1:01:10) Directions for continued progress

    * (1:03:05) Conversational Diagnostic AI

    * (1:03:30) Objective Structured Clinical Examination as an evaluative framework

    * (1:09:08) Relative importance of different types of data

    * (1:13:52) Self-play — conversational dispositions and handling patients

    * (1:16:41) Chain of reasoning and information retention

    * (1:20:00) Performance in different areas of medical expertise

    * (1:22:35) Towards accurate differential diagnosis

    * (1:31:40) Feedback mechanisms and expertise, disagreement among clinicians

    * (1:35:26) Studying trust, user interfaces

    * (1:38:08) Self-trust in using medical AI models

    * (1:41:39) UI for medical AI systems

    * (1:43:50) Model reasoning in complex scenarios

    * (1:46:33) Prompting

    * (1:48:41) Future outlooks

    * (1:54:53) Outro

    Links:

    * Vivek’s Twitter and homepage

    * Papers

    * Towards Expert-Level Medical Question Answering with LLMs (2023)

    * LLMs encode clinical knowledge (2023)

    * Towards Generalist Biomedical AI (2024)

    * AMIE

    * Genetic Discovery enabled by a LLM (2023)



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 125

    False universalism freaks me out. It doesn’t freak me out as a first principle because of epistemic violence; it freaks me out because it works.

    I spoke with Professor Thomas Mullaney about:

    * Telling stories about your work and balancing what feels meaningful with practical realities

    * Destabilizing our understandings of the technologies we feel familiar with, and the work of researching the history of the Chinese typewriter

    * The personal nature of research

    The Chinese Typewriter and The Chinese Computer are two of the best books I’ve read in a very long time. And they’re not just good and interesting, but important to read, for the history they tell and the ideas and arguments they present—I can’t recommend them and Professor Mullaney’s other work enough.

    Tom is Professor of History and Professor of East Asian Languages and Cultures, by courtesy. He is also the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow. He is the author or lead editor of 8 books, including The Chinese Computer, The Chinese Typewriter (winner of the Fairbank prize), Your Computer is on Fire, and Coming to Terms with the Nation: Ethnic Classification in Modern China.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)

    Reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (01:00) “In Their Own Words” interview: on telling stories about your work

    * (07:42) Clashing narratives and authenticity/inauthenticity in pursuing your work

    * (15:48) Why Professor Mullaney pursued studying the Chinese typewriter

    * (18:20) Worldmaking, transforming the physical world to fit our descriptive models

    * (30:07) Internal and illegible continuities/coherence in work

    * (31:45) The role of a “self”

    * (43:06) The 2008 Beijing Olympics and false (alphabetical) universalism, projectivism

    * (1:04:23) “Kicking the ladder” and the personal nature of research

    * (1:18:07) The “Technolinguistic Chinese Exclusion Act” — the situatedness of historians in their work

    * (1:33:00) Is the Chinese typewriter project finished? / on the resolution of problems

    * (1:43:35) Outro

    Links:

    * Professor Mullaney’s homepage and Twitter

    * In Their Own Words: Thomas Mullaney

    * Books

    * The Chinese Computer: A Global History of the Information Age

    * The Chinese Typewriter: A History

    * Coming to Terms with the Nation: Ethnic Classification in Modern China



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 124

    You may think you’re doing a priori reasoning, but actually you’re just over-generalizing from your current experience of technology.

    I spoke with Professor Seth Lazar about:

    * Why managing near-term and long-term risks isn’t always zero-sum

    * How to think through axioms and systems in political philosophy

    * Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI

    Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.

    Reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:54) Ad read — MLOps conference

    * (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation

    * (03:53) Attention allocation as an independent good (or bad)

    * (08:22) Axioms in political philosophy

    * (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust

    * (15:05) AI safety / catastrophic risk concerns

    * (22:10) Superintelligence arguments, reasoning about technology

    * (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?

    * (35:55) GPT-2, model weights, related debates

    * (39:11) Power and economics—coordination problems, company incentives

    * (50:42) Morality tales, relationship between safety and capabilities

    * (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy

    * (1:02:28) What is a feasibility horizon?

    * (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter

    * (1:14:25) Sociotechnical lenses, narrowly technical solutions

    * (1:19:47) Experiments for responsibly integrating AI systems into society

    * (1:26:53) Helpful/honest/harmless and antagonistic AI systems

    * (1:33:35) Managing incentives conducive to developing technology in the public interest

    * (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia

    * (1:46:54) How we can help legitimize and support interdisciplinary work

    * (1:50:07) Outro

    Links:

    * Seth’s Linktree and Twitter

    * Resources

    * Attention, moral skill, and algorithmic recommendation

    * Catastrophic AI Risk slides



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 123

    I spoke with Suhail Doshi about:

    * Why benchmarks aren’t prepared for tomorrow’s AI models

    * How he thinks about artists in a world with advanced AI tools

    * Building a unified computer vision model that can generate, edit, and understand pixels.

    Suhail is a software engineer and entrepreneur known for founding Mixpanel, Mighty Computing, and Playground AI (they’re hiring!).

    Reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:54) Ad read — MLOps conference

    * (01:30) Suhail is *not* in pivot hell but he *is* all-in on 50% AI-generated music

    * (03:45) AI and music, similarities to Playground

    * (07:50) Skill vs. creative capacity in art

    * (12:43) What we look for in music and art

    * (15:30) Enabling creative expression

    * (18:22) Building a unified computer vision model, underinvestment in computer vision

    * (23:14) Enhancing the aesthetic quality of images: color and contrast, benchmarks vs user desires

    * (29:05) “Benchmarks are not prepared for how powerful these models will become”

    * (31:56) Personalized models and personalized benchmarks

    * (36:39) Engaging users and benchmark development

    * (39:27) What a foundation model for graphics requires

    * (45:33) Text-to-image is insufficient

    * (46:38) DALL-E 2 and Imagen comparisons, FID

    * (49:40) Compositionality

    * (50:37) Why Playground focuses on images vs. 3D, video, etc.

    * (54:11) Open source and Playground’s strategy

    * (57:18) When to stop open-sourcing?

    * (1:03:38) Suhail’s thoughts on AGI discourse

    * (1:07:56) Outro

    Links:

    * Playground homepage

    * Suhail on Twitter



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 122

    I spoke with Azeem Azhar about:

    * The speed of progress in AI

    * Historical context for some of the terminology we use and how we think about technology

    * What we might want our future to look like

    Azeem is an entrepreneur, investor, and adviser. He is the creator of Exponential View, a global platform for in-depth technology analysis, and the host of the Bloomberg Original series Exponentially.

    Reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (00:32) Ad read — MLOps conference

    * (01:05) Problematizing the term “exponential”

    * (07:35) Moore’s Law as social contract, speed of technological growth and impedances

    * (14:45) Academic incentives, interdisciplinary work, rational agents and historical context

    * (21:24) Monolithic scaling

    * (26:38) Investment in scaling

    * (31:22) On Sam Altman

    * (36:25) Uses of “AGI,” “intelligence”

    * (41:32) Historical context for terminology

    * (48:58) AI and teaching

    * (53:51) On the technology-human divide

    * (1:06:26) New technologies and the futures we want

    * (1:10:50) Inevitability narratives

    * (1:17:01) Rationality and objectivity

    * (1:21:13) Cultural affordances and intellectual history

    * (1:26:15) Centralized and decentralized AI systems

    * (1:32:54) Instruction tuning and helpful/honest/harmless

    * (1:39:18) Azeem’s future outlook

    * (1:46:15) Outro

    Links:

    * Azeem’s website and Twitter

    * Exponential View



    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • Episode 121

    I spoke with Professor David Thorstad about:

    * The practical difficulties of doing interdisciplinary work

    * Why theories of human rationality should account for boundedness, heuristics, and other cognitive limitations

    * Why EA epistemics suck (ok, it’s a little more nuanced than that)

    Professor Thorstad is an Assistant Professor of Philosophy at Vanderbilt University, a Senior Research Affiliate at the Global Priorities Institute at Oxford, and a Research Affiliate at the MINT Lab at Australian National University. One strand of his research asks how cognitively limited agents should decide what to do and believe. A second strand asks how altruists should use limited funds to do good effectively.

    Reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter

    Outline:

    * (00:00) Intro

    * (01:15) David’s interest in rationality

    * (02:45) David’s crisis of confidence, models abstracted from psychology

    * (05:00) Blending formal models with studies of the mind

    * (06:25) Interaction between academic communities

    * (08:24) Recognition of and incentives for interdisciplinary work

    * (09:40) Movement towards interdisciplinary work

    * (12:10) The Standard Picture of rationality

    * (14:11) Why the Standard Picture was attractive

    * (16:30) Violations of and rebellion against the Standard Picture

    * (19:32) Mistakes made by critics of the Standard Picture

    * (22:35) Other competing programs vs Standard Picture

    * (26:27) Characterizing Bounded Rationality

    * (27:00) A worry: faculties criticizing themselves

    * (29:28) Self-improving critique and longtermism

    * (30:25) Central claims in bounded rationality and controversies

    * (32:33) Heuristics and formal theorizing

    * (35:02) Violations of Standard Picture, vindicatory epistemology

    * (37:03) The Reason Responsive Consequentialist View (RRCV)

    * (38:30) Objective and subjective pictures

    * (41:35) Reason responsiveness

    * (43:37) There are no epistemic norms of inquiry

    * (44:00) Norms vs reasons

    * (45:15) Arguments against epistemic nihilism for belief

    * (47:30) Norms and self-delusion

    * (49:55) Difficulty of holding beliefs for pragmatic reasons

    * (50:50) The Gibbardian picture, inquiry as an action

    * (52:15) Thinking how to act and thinking how to live — the power of inquiry

    * (53:55) Overthinking and conducting inquiry

    * (56:30) Is thinking how to inquire an all-things-considered matter?

    * (58:00) Arguments for the RRCV

    * (1:00:40) Deciding on minimal criteria for the view, stereotyping

    * (1:02:15) Eliminating stereotypes from the theory

    * (1:04:20) Theory construction in epistemology and moral intuition

    * (1:08:20) Refusing theories for moral reasons and disciplinary boundaries

    * (1:10:30) The argument from minimal criteria, evaluating against competing views

    * (1:13:45) Comparing to other theories

    * (1:15:00) The explanatory argument

    * (1:17:53) Parfit and Railton, norms of friendship vs utility

    * (1:20:00) Should you call out your friend for being a womanizer?

    * (1:22:00) Vindicatory Epistemology

    * (1:23:05) Panglossianism and meliorative epistemology

    * (1:24:42) Heuristics and recognition-driven investigation

    * (1:26:33) Rational inquiry leading to irrational beliefs — metacognitive processing

    * (1:29:08) Stakes of inquiry and costs of metacognitive processing

    * (1:30:00) When agents are incoherent, focuses on inquiry

    * (1:32:05) Indirect normative assessment and its consequences

    * (1:37:47) Against the Singularity Hypothesis

    * (1:39:00) Superintelligence and the ontological argument

    * (1:41:50) Hardware growth and general intelligence growth, AGI definitions

    * (1:43:55) Difficulties in arguing for hyperbolic growth

    * (1:46:07) Chalmers and the proportionality argument

    * (1:47:53) Arguments for/against diminishing growth, research productivity, Moore’s Law

    * (1:50:08) On progress studies

    * (1:52:40) Improving research productivity and technology growth

    * (1:54:00) Mistakes in the moral mathematics of existential risk, longtermist epistemics

    * (1:55:30) Cumulative and per-unit risk

    * (1:57:37) Back and forth with longtermists, time of perils

    * (1:59:05) Background risk — risks we can and can’t intervene on, total existential risk

    * (2:00:56) The case for longtermism is inflated

    * (2:01:40) Epistemic humility and longtermism

    * (2:03:15) Knowledge production — reliable sources, blog posts vs peer review

    * (2:04:50) Compounding potential errors in knowledge

    * (2:06:38) Group deliberation dynamics, academic consensus

    * (2:08:30) The scope of longtermism

    * (2:08:30) Money in effective altruism and processes of inquiry

    * (2:10:15) Swamping longtermist options

    * (2:12:00) Washing out arguments and justified belief

    * (2:13:50) The difficulty of long-term forecasting and interventions

    * (2:15:50) Theory of change in the bounded rationality program

    * (2:18:45) Outro

    Links:

    * David’s homepage and Twitter and blog

    * Papers mentioned/read

    * Bounded rationality and inquiry

    * Why bounded rationality (in epistemology)?

    * Against the newer evidentialists

    * The accuracy-coherence tradeoff in cognition

    * There are no epistemic norms of inquiry

    * Permissive metaepistemology

    * Global priorities and effective altruism

    * What David likes about EA

    * Against the singularity hypothesis (+ blog posts)

    * Three mistakes in the moral mathematics of existential risk (+ blog posts)

    * The scope of longtermism

    * Epistemics



    Get full access to The Gradient at thegradientpub.substack.com/subscribe