Episodes

  • Check out my short video series about what's missing in AI and Neuroscience.

    Support the show to get full episodes and join the Discord community.

    Large language models, now often called "foundation models" and based on the transformer architecture, are the models du jour in AI. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.

    Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words.

    Emily M. Bender is a computational linguist at University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more.

    EvLab.
    Emily's website.
    Twitter: @ev_fedorenko; @emilymbender.
    Related papers:
    Language and thought are not the same thing: Evidence from neuroimaging and neurological patients. (Fedorenko)
    The neural architecture of language: Integrative modeling converges on predictive processing. (Fedorenko)
    On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender)
    Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. (Bender)

    0:00 - Intro
    4:35 - Language and cognition
    15:38 - Grasping for meaning
    21:32 - Are large language models producing language?
    23:09 - Next-word prediction in brains and models
    32:09 - Interface between language and thought
    35:18 - Studying language in nonhuman animals
    41:54 - Do we understand language enough?
    45:51 - What do language models need?
    51:45 - Are LLMs teaching us about language?
    54:56 - Is meaning necessary, and does it matter how we learn language?
    1:00:04 - Is our biology important for language?
    1:04:59 - Future outlook

  • Support the show to get full episodes and join the Discord community.

    Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics.

    Rodolphe's website.
    Related papers:
    Spiking Control Systems.
    Control Across Scales by Positive and Negative Feedback.
    Neuromorphic control. (arXiv version)
    Related episodes:
    BI 130 Eve Marder: Modulation of Networks
    BI 119 Henry Yin: The Crisis in Neuroscience

    0:00 - Intro
    4:38 - Control engineer
    9:52 - Control vs. dynamical systems
    13:34 - Building vs. understanding
    17:38 - Mixed feedback signals
    26:00 - Robustness
    28:28 - Eve Marder
    32:00 - Loneliness
    37:35 - Across levels
    44:04 - Neuromorphics and neuromodulation
    52:15 - Barrier to adopting neuromorphics
    54:40 - Deep learning influence
    58:04 - Beyond energy efficiency
    1:02:02 - Deep learning for neuro
    1:14:15 - Role of philosophy
    1:16:43 - Doing it right

  • Support the show to get full episodes and join the Discord community.

    Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and on how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that by building systems that possess the right handful of faculties, and putting those systems together so they can cooperate in a general and flexible manner, we will arrive at cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world.

    Cameron's Website.
    Twitter: @cameronjbuckner.
    Related papers:
    Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks.
    A Forward-Looking Theory of Content.
    Other sources Cameron mentions:
    Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus).
    Radical Empiricism and Machine Learning Research (Judea Pearl).
    Fodor's guide to the Humean mind (Tamás Demeter).

    0:00 - Intro
    4:55 - Interpreting old philosophy
    8:26 - AI and philosophy
    17:00 - Empiricism vs. rationalism
    27:09 - Domain-general faculties
    33:10 - Faculty psychology
    40:28 - New faculties?
    46:11 - Human faculties
    51:15 - Cognitive architectures
    56:26 - Language
    1:01:40 - Beyond dichotomous thinking
    1:04:08 - Lower-level faculties
    1:10:16 - Animal cognition
    1:14:31 - A Forward-Looking Theory of Content

  • Support the show to get full episodes and join the Discord community.

    Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics and string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on combinatorial threshold-linear networks (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model's allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.
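
    To give a flavor of how simple these models are, here is a minimal toy sketch (my own code, not Carina's) of a combinatorial threshold-linear network: the weight matrix is determined entirely by a directed graph, and the dynamics are a plain threshold-linear ODE. The parameter values (eps=0.25, delta=0.5, b=1) are the illustrative ones commonly used in this literature, and the 3-cycle graph is the classic example whose three units take turns being active in a limit cycle.

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Build a CTLN weight matrix from a directed graph.
    adj[i, j] = 1 means there is an edge j -> i.
    Edges get weight -1 + eps, non-edges -1 - delta, diagonal 0."""
    W = np.where(adj > 0, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, x0, b=1.0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = -x + relu(W @ x + b)."""
    x = x0.astype(float)
    traj = np.empty((steps, len(x)))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + b))
        traj[t] = x
    return traj

# 3-cycle graph (0 -> 1 -> 2 -> 0); asymmetric initial condition
# so the trajectory falls onto the limit cycle rather than the
# symmetric fixed point.
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])
traj = simulate(ctln_weights(adj), x0=np.array([0.0, 0.0, 0.1]))
```

This is only a sketch of the model class, not of any specific result from the episode; the point is that the entire "program" of the network is the graph.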

    Carina's website.
    The Mathematical Neuroscience Lab.
    Related papers:
    A major obstacle impeding progress in brain science is the lack of beautiful models.
    What can topology tell us about the neural code?
    Predicting neural network dynamics via graphical analysis.

    0:00 - Intro
    4:25 - Background: Physics and math to study brains
    20:45 - Beautiful and ugly models
    35:40 - Topology
    43:14 - Topology in hippocampal navigation
    56:04 - Topology vs. dynamical systems theory
    59:10 - Combinatorial threshold-linear networks
    1:25:26 - How much more math do we need to invent?

  • Support the show to get full episodes and join the Discord community.

    Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers on the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking Propositions, by Davida Teller, are a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong Inference, by John Platt, is the scientific method on steroids - a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, compare the relatively small models he employs with today's huge deep learning models, cover many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading, in order, the two review papers we discuss as well. One was written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other about two years ago (Accumulators, Neurons, and Response Time).

    Schall Lab.
    Twitter: @LabSchall.
    Related papers:
    Linking Propositions.
    Strong Inference.
    On Building a Bridge Between Brain and Behavior.
    Accumulators, Neurons, and Response Time.

    0:00 - Intro
    6:51 - Neurophysiology old and new
    14:50 - Linking propositions
    24:18 - Psychology working with neurophysiology
    35:40 - Neuron doctrine, population doctrine
    40:28 - Strong Inference and deep learning
    46:37 - Model mimicry
    51:56 - Scientific fads
    57:07 - Current projects
    1:06:38 - On leaving academia
    1:13:51 - How academia has changed for better and worse

  • Support the show to get full episodes and join the Discord community.

    Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use to represent a wide variety of cognitive functions. We also discuss some of the ways Marc is incorporating this mathematical operation in deep learning nets to improve their ability to handle information at different time scales.
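
    The compressed-timeline idea can be sketched numerically. Below is a toy version (my own sketch, not Marc's code): a bank of leaky integrators with log-spaced rate constants s holds a running Laplace transform of the input's history, and a crude approximate inverse - here just a first difference across neighboring s-channels - yields "time cell"-like units whose peaks tile the past logarithmically, with blur growing in proportion to how long ago the event happened.

```python
import numpy as np

# Bank of leaky integrators: dF_i/dt = -s_i * F_i + input(t).
# After an impulse at t = 0, F_i(t) = exp(-s_i * t), i.e. the
# Laplace transform of the input history evaluated at s_i.
s = np.geomspace(0.1, 10.0, 40)   # log-spaced rate constants
dt, T = 0.001, 30.0
times = np.arange(0.0, T, dt)

inp = np.zeros_like(times)
inp[0] = 1.0 / dt                  # unit impulse at t = 0

F = np.zeros_like(s)
record = np.empty((len(times), len(s)))
for ti in range(len(times)):
    F = F + dt * (-s * F + inp[ti])
    record[ti] = F

# Crude inverse Laplace transform: differencing adjacent s-channels
# gives units tuned to a past time tau* ~ 1/s. Peaks are log-spaced
# and widen with delay - the "spreading out" of older memories.
timecells = record[:, :-1] - record[:, 1:]
peak_times = times[np.argmax(timecells, axis=0)]
```

This is only an illustration of the flavor of the framework (the real models use a proper Post-approximation inverse and continuous input, not a single impulse and a first difference).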

    Theoretical Cognitive Neuroscience Lab.
    Related papers:
    Memory as perception of the past: Compressed time in mind and brain.
    Formal models of memory based on temporally-varying representations.
    Cognitive computation using neural representations of time and space in the Laplace domain.
    Time as a continuous dimension in natural and artificial networks.
    DeepSITH: Efficient learning via decomposition of what and when across time scales.

    0:00 - Intro
    4:57 - Main idea: Laplace transforms
    12:00 - Time cells
    20:08 - Laplace, compression, and time cells
    25:34 - Everywhere in the brain
    29:28 - Episodic memory
    35:11 - Randy Gallistel's memory idea
    40:37 - Adding Laplace to deep nets
    48:04 - Reinforcement learning
    1:00:52 - Brad Wyble Q: What gets filtered out?
    1:05:38 - Replay and complementary learning systems
    1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki
    1:15:10 - Obstacles

  • Support the show to get full episodes and join the Discord community.

    Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers - and thus different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input - neither, one, or both - the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals with feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.
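
    The mode logic can be caricatured in a few lines. This is a deliberately crude rule-based sketch of the coincidence-detection idea, not a biophysical model, and the threshold `thr` is an arbitrary illustrative parameter of mine:

```python
def output_mode(basal_drive, apical_drive, thr=1.0):
    """Toy layer-5 pyramidal cell: which compartments receive
    suprathreshold input determines the output mode."""
    basal = basal_drive >= thr    # feedforward-like input near the soma
    apical = apical_drive >= thr  # feedback-like input up in layer 1
    if basal and apical:
        return "burst spiking"    # coincidence amplifies the output
    if basal:
        return "regular spiking"
    return "silent"               # apical input alone: little or no output
```

For example, `output_mode(2.0, 2.0)` returns "burst spiking", the coincidence case that Matthew links to pairing sensory signals with contextual feedback.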

    Larkum Lab.
    Twitter: @mattlark.
    Related papers:
    Cellular Mechanisms of Conscious Processing.
    Perirhinal input to neocortical layer 1 controls learning. (bioRxiv link: https://www.biorxiv.org/content/10.1101/713883v1)
    Are dendrites conceptually useful?
    Memories off the top of your head.
    Do Action Potentials Cause Consciousness?
    Blake Richards's episode discussing back-propagation in the brain (based on Matthew's experiments)

    0:00 - Intro
    5:31 - Background: Dendrites
    23:20 - Cortical neuron bodies vs. branches
    25:47 - Theories of cortex
    30:49 - Feedforward and feedback hierarchy
    37:40 - Dendritic integration hypothesis
    44:32 - DIT vs. other consciousness theories
    51:30 - Mac Shine Q1
    1:04:38 - Are dendrites conceptually useful?
    1:09:15 - Insights from implementation level
    1:24:44 - How detailed to model?
    1:28:15 - Do action potentials cause consciousness?
    1:40:33 - Mac Shine Q2

  • Support the show to get full episodes and join the Discord community.

    Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.

    Brian's website: The Mathematical Brain.
    Twitter: @b_butterworth.
    The book: Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds.

    0:00 - Intro
    3:19 - Why counting?
    5:31 - Dyscalculia
    12:06 - Dyslexia
    19:12 - Counting
    26:37 - Origins of counting vs. language
    34:48 - Counting vs. higher math
    46:46 - Counting some things and not others
    53:33 - How to test counting
    1:03:30 - How does the brain count?
    1:13:10 - Are numbers real?

  • Support the show to get full episodes and join the Discord community.

    Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including the study of minds, brains, and intelligence - our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.

    Michel's website.
    Alex's Lab: The Behavior of Organisms Laboratory.
    Twitter: @behaviOrganisms (Alex)
    Related papers:
    The Blind Spot of Neuroscience.
    The Life of Behavior.
    A Clash of Umwelts.
    Related events:
    The Future Scientist (a conversation series)

    0:00 - Intro
    4:32 - The Blind Spot
    15:53 - Phenomenology and interpretation
    22:51 - Personal stories: appreciating phenomenology
    37:42 - Quantum physics example
    47:16 - Scientific explanation vs. phenomenological description
    59:39 - How can phenomenology and science complement each other?
    1:08:22 - Neurophenomenology
    1:17:34 - Use of language
    1:25:46 - Mutual constraints

  • Support the show to get full episodes and join the Discord community.

    Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else" - glial cells, and in particular astrocytes - has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to ask how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and - Elena's favorite current hypothesis - their integrative role in negative feedback control.

    Elena's website.
    Twitter: @elenagalea1
    Related papers:
    A roadmap to integrate astrocytes into Systems Neuroscience.
    Elena recommended this paper: Biological feedback control—Respect the loops.

    0:00 - Intro
    5:23 - The changing story of astrocytes
    14:58 - Astrocyte research lags neuroscience
    19:45 - Types of astrocytes
    23:06 - Astrocytes vs. neurons
    26:08 - Computational roles of astrocytes
    35:45 - Feedback control
    43:37 - Energy efficiency
    46:25 - Current technology
    52:58 - Computational astroscience
    1:10:57 - Do names for things matter?

  • Support the show to get full episodes and join the Discord community.

    Srini is Emeritus Professor at the Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to gracefully land. These abilities are largely governed by control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.

    Srini's Website.
    Related papers:
    Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics.

    0:00 - Intro
    3:34 - Background
    8:20 - Bee experiments
    14:30 - Bee flight and navigation
    28:05 - Landing
    33:06 - Umwelt and perception
    37:26 - Bee-inspired aerial robotics
    49:10 - Motion camouflage
    51:52 - Cognition in bees
    1:03:10 - Small vs. big brains
    1:06:42 - Pain in bees
    1:12:50 - Subjective experience
    1:15:25 - Deep learning
    1:23:00 - Path forward

  • Support the show to get full episodes and join the Discord community.

    Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App which is freely available via his lab. We also discuss much of his work on memory and learning in general and specifically related to sleep, like reactivating specific memories during sleep to improve learning.

    Ken's Cognitive Neuroscience Laboratory.
    Twitter: @kap101.
    The Lucid Dreaming App.
    Related papers:
    Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better.
    Does memory reactivation during sleep support generalization at the cost of memory specifics?
    Real-time dialogue between experimenters and dreamers during REM sleep.

    0:00 - Intro
    2:48 - Background and types of memory
    14:44 - Consciousness and memory
    23:32 - Phases of sleep and wakefulness
    28:19 - Sleep, memory, and learning
    33:50 - Targeted memory reactivation
    48:34 - Problem solving during sleep
    51:50 - 2-way communication with lucid dreamers
    1:01:43 - Confounds to the paradigm
    1:04:50 - Limitations and future studies
    1:09:35 - Lucid dreaming app
    1:13:47 - How sleep can inform AI
    1:20:18 - Advice for students

  • Announcement:

    I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here.

    Support the show to get full episodes and join the Discord community.

    Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex - signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist", and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.

    The Fiete Lab.
    Related papers:
    A structured scaffold underlies activity in the hippocampus.
    Attractor and integrator networks in the brain.

    0:00 - Intro
    3:36 - "Neurophysicist"
    9:30 - Bottom-up vs. top-down
    15:57 - Tool scavenging
    18:21 - Cognitive maps and hippocampus
    22:40 - Hopfield networks
    27:56 - Internal scaffold
    38:42 - Place cells
    43:44 - Grid cells
    54:22 - Grid cells encoding place cells
    59:39 - Scaffold model: stacked Hopfield networks
    1:05:39 - Attractor landscapes
    1:09:22 - Landscapes across scales
    1:12:27 - Dimensionality of landscapes

  • Support the show to get full episodes and join the Discord community.

    Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs".

    Neural Circuits Laboratory.
    Twitter: Sri: @srikipedia; Jie: @neuro_Mei.
    Related papers:
    Informing deep neural networks by multiscale principles of neuromodulatory systems.

    0:00 - Intro
    3:10 - Background
    9:19 - Bottom-up vs. top-down
    14:42 - Levels of abstraction
    22:46 - Biological neuromodulation
    33:18 - Inventing neuromodulators
    41:10 - How far along are we?
    53:31 - Multiple realizability
    1:09:40 - Modeling dendrites
    1:15:24 - Across-species neuromodulation

  • Support the show to get full episodes and join the Discord community.

    Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons, and its connections and neurophysiology are well understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.

    The Marder Lab.
    Twitter: @MarderLab.
    Related to our conversation:
    Understanding Brains: Details, Intuition, and Big Data.
    Emerging principles governing the operation of neural networks (Eve mentions this regarding "building blocks" of neural networks).

    0:00 - Intro
    3:58 - Background
    8:00 - Levels of ambiguity
    9:47 - Stomatogastric nervous system
    17:13 - Structure vs. function
    26:08 - Role of theory
    34:56 - Technology vs. understanding
    38:25 - Higher cognitive function
    44:35 - Adaptability, resilience, evolution
    50:23 - Climate change
    56:11 - Deep learning
    57:12 - Dynamical systems

  • Support the show to get full episodes and join the Discord community.

    Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world.

    Patryk's homepage.
    Twitter: @paklnet.
    Related papers:
    Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network.

    0:00 - Intro
    2:22 - Patryk's background
    8:37 - Importance of diverse skills
    16:14 - What is intelligence?
    20:34 - Important brain principles
    22:36 - Learning from the real world
    35:09 - Language models
    42:51 - AI contribution to neuroscience
    48:22 - Criteria for "real" AI
    53:11 - Neuroscience for AI
    1:01:20 - What can we ignore about brains?
    1:11:45 - Advice to past self

  • Support the show to get full episodes and join the Discord community.

    Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.

    Hakwan's lab: Consciousness and Metacognition Lab.
    Twitter: @hakwanlau.
    Book: In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience.

    0:00 - Intro
    4:37 - In Consciousness We Trust
    12:19 - Too many consciousness theories?
    19:26 - Philosophy and neuroscience of consciousness
    29:00 - Local vs. global theories
    31:20 - Perceptual reality monitoring and GANs
    42:43 - Functions of consciousness
    47:17 - Mental quality space
    56:44 - Cognitive maps
    1:06:28 - Performance capacity confounds
    1:12:28 - Blindsight
    1:19:11 - Philosophy vs. empirical work

  • Support the show to get full episodes and join the Discord community.

    Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: BI 126 Randy Gallistel: Where Is the Engram?

    Ryan Lab.
    Twitter: @TJRyan_77.
    Related papers:
    Engram cell connectivity: an evolving substrate for information storage.
    Forgetting as a form of adaptive engram cell plasticity.
    Memory and Instinct as a Continuum of Information Storage in The Cognitive Neurosciences.
    The Bandwagon by Claude Shannon.

    0:00 - Intro
    4:05 - Response to Randy Gallistel
    10:45 - Computation in the brain
    14:52 - Instinct and memory
    19:37 - Dynamics of memory
    21:55 - Wiring vs. connection strength plasticity
    24:16 - Changing one's mind
    33:09 - Optogenetics and memory experiments
    47:24 - Forgetting as learning
    1:06:35 - Folk psychological terms
    1:08:49 - Memory becoming instinct
    1:21:49 - Instinct across the lifetime
    1:25:52 - Boundaries of memories
    1:28:52 - Subjective experience of memory
    1:31:58 - Interdisciplinary research
    1:37:32 - Communicating science

  • Support the show to get full episodes and join the Discord community.

    Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We also talk about some research and theoretical work since then that supports his views.

    Randy's Rutgers website.
    Book: Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience.
    Related papers:
    The theoretical RNA paper Randy mentions: An RNA-based theory of natural universal computation.
    Evidence for intracellular engram in cerebellum: Memory trace and timing mechanism localized to cerebellar Purkinje cells.
    The exchange between Randy and John Lisman.
    The blog post Randy mentions about universal function approximation: The Truth About the [Not So] Universal Approximation Theorem.

    0:00 - Intro
    6:50 - Cognitive science vs. computational neuroscience
    13:23 - Brain as computing device
    15:45 - Noam Chomsky's influence
    17:58 - Memory must be stored within cells
    30:58 - Theoretical support for the idea
    34:15 - Cerebellum evidence supporting the idea
    40:56 - What is the write mechanism?
    51:11 - Thoughts on deep learning
    1:00:02 - Multiple memory mechanisms?
    1:10:56 - The role of plasticity
    1:12:06 - Trying to convince molecular biologists

  • Support the show to get full episodes and join the Discord community.

    Doris, Tony, and Blake are the organizers for this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems (NAISys), at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.

    From Neuroscience to Artificially Intelligent Systems (NAISys).
    Doris: @doristsao. Tsao Lab. Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons.
    Tony: @TonyZador. Zador Lab. A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains.
    Blake: @tyrell_turing. The Learning in Neural Circuits Lab. The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning.

    0:00 - Intro
    4:16 - Tony Zador
    5:38 - Doris Tsao
    10:44 - Blake Richards
    15:46 - Deductive, inductive, abductive inference
    16:32 - NAISys
    33:09 - Evolution, development, learning
    38:23 - Learning: plasticity vs. dynamical structures
    54:13 - Different kinds of understanding
    1:03:05 - Do we understand evolution well enough?
    1:04:03 - Neuro-AI fad?
    1:06:26 - Are your problems bigger or smaller now?