Episodes

  • Support the show to get full episodes and join the Discord community.

    Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.

    She largely argues that when we try to understand something complex, like the brain, using models, math, and analogies, for example, we should keep in mind that these are all ways of simplifying and abstracting away details to give us something we actually can understand. And when we do science, every tool we use, every perspective we bring, every way we try to attack a problem is both necessary to do the science and a limit on the interpretations we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.

    Mazviita's University of Edinburgh page.
    The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.
    Previous Brain Inspired episodes:
    BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality
    BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind

    0:00 - Intro
    5:28 - Neuroscience to philosophy
    13:39 - Big themes of the book
    27:44 - Simplifying by mathematics
    32:19 - Simplifying by reduction
    42:55 - Simplification by analogy
    46:33 - Technology precedes science
    55:04 - Theory, technology, and understanding
    58:04 - Cross-disciplinary progress
    58:45 - Complex vs. simple(r) systems
    1:08:07 - Is science bound to study stability?
    1:13:20 - 4E for philosophy but not neuroscience?
    1:28:50 - ANNs as models
    1:38:38 - Study of mind

  • Support the show to get full episodes and join the Discord community.

    As some of you know, I recently got back into the research world, and in particular I work in Eric Yttris' lab at Carnegie Mellon University.

    Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. And to study that, he uses tools like optogenetics, neuronal recordings, and stimulation while mice perform certain tasks or, in my case, while they freely behave, wandering around an enclosed space.

    We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We talk about the valid question, "What is a behavior?", and lots more.

    Yttri Lab

    Twitter: @YttriLab
    Related papers:
    Opponent and bidirectional control of movement velocity in the basal ganglia.
    B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors.

    0:00 - Intro
    2:36 - Eric's background
    14:47 - Different animal models
    17:59 - ANNs as models for animal brains
    24:34 - Main question
    25:43 - How circuits produce appropriate behaviors
    26:10 - Cerebellum
    27:49 - What do motor cortex and basal ganglia do?
    49:12 - Neuroethology
    1:06:09 - What is a behavior?
    1:11:18 - Categorize behavior (B-SOiD)
    1:22:01 - Real behavior vs. ANNs
    1:33:09 - Best era in neuroscience

  • Support the show to get full episodes and join the Discord community.

    Peter Stratton is a research scientist at Queensland University of Technology.

    I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, Dan Goodman.

    What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation that brains employ and that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", but it's between that and the "implementation level", I'd say, because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?

    Peter's website.
    Related papers:
    Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?
    Making a Spiking Net Work: Robust brain-like unsupervised machine learning.
    Global segregation of cortical activity and metastable dynamics.
    Unlocking neural complexity with a robotic key

    0:00 - Intro
    3:50 - AI background, neuroscience principles
    8:00 - Overall view of modern AI
    14:14 - Moravec's paradox and robotics
    20:50 - Understanding movement to understand cognition
    30:01 - How close are we to understanding brains/minds?
    32:17 - Pete's goal
    34:43 - Principles from neuroscience to build AI
    42:39 - Levels of abstraction and implementation
    49:57 - Mental disorders and robustness
    55:58 - Function vs. implementation
    1:04:04 - Spiking networks
    1:07:57 - The roadmap
    1:19:10 - AGI
    1:23:48 - The terms AGI and AI
    1:26:12 - Consciousness

  • Support the show to get full episodes and join the Discord community.

    You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.

    All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, among the thousand ways ANNs differ from brains, is that they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious, because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick.

    We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains.

    So what does it mean that modern neural networks disregard spiking altogether?

    Maybe spiking really isn't important to process and transmit information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking and SNNs and a host of other topics.

    Neural Reckoning Group.
    Twitter: @neuralreckoning.
    Related papers:
    Neural heterogeneity promotes robust learning.
    Dynamics of specialization in neural modules under resource constraints.
    Multimodal units fuse-then-accumulate evidence across channels.
    Visualizing a joint future of neuroscience and neuromorphic engineering.

    0:00 - Intro
    3:47 - Why spiking neural networks, and a mathematical background
    13:16 - Efficiency
    17:36 - Machine learning for neuroscience
    19:38 - Why not jump ship from SNNs?
    23:35 - Hard and easy tasks
    29:20 - How brains and nets learn
    32:50 - Exploratory vs. theory-driven science
    37:32 - Static vs. dynamic
    39:06 - Heterogeneity
    46:01 - Unifying principles vs. a hodgepodge
    50:37 - Sparsity
    58:05 - Specialization and modularity
    1:00:51 - Naturalistic experiments
    1:03:41 - Projects for SNN research
    1:05:09 - The right level of abstraction
    1:07:58 - Obstacles to progress
    1:12:30 - Levels of explanation
    1:14:51 - What has AI taught neuroscience?
    1:22:06 - How has neuroscience helped AI?

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately. Things like

    Whether brains actually reorganize after damage
    The role of brain plasticity in general
    The path toward and the path not toward understanding higher cognition
    How to fix motor problems after strokes
    AGI
    Functionalism, consciousness, and much more.

    Relevant links:

    John's Lab.
    Twitter: @blamlab
    Related papers:
    What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.
    Against cortical reorganisation.
    Other episodes with John:
    BI 025 John Krakauer: Understanding Cognition
    BI 077 David and John Krakauer: Part 1
    BI 078 David and John Krakauer: Part 2
    BI 113 David Barack and John Krakauer: Two Views On Cognition

    Time stamps:
    0:00 - Intro
    2:07 - It's a podcast episode!
    6:47 - Stroke and Sherrington neuroscience
    19:26 - Thinking vs. moving, representations
    34:15 - What's special about humans?
    56:35 - Does cortical reorganization happen?
    1:14:08 - Current era in neuroscience

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. In countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

    Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through I think all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination.

    The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.

    Twitter: @maxsbennett
    Book: A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

    0:00 - Intro
    5:26 - Why evolution is important
    7:22 - Maclean's triune brain
    14:59 - Breakthrough 1: Steering
    29:06 - Fish intelligence
    40:38 - Breakthrough 3: Mentalizing
    52:44 - How could we improve the human brain?
    1:00:44 - What is intelligence?
    1:13:50 - Breakthrough 5: Speaking

  • Support the show to get full episodes and join the Discord community.

    Welcome to another special panel discussion episode.

    I was recently invited to moderate a discussion amongst six people at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before, on episode 103. Ken helps me introduce the meetup and panel discussion for a few minutes. The goal in general was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount those obstacles, and so on.

    There isn't video of the event, just audio, and because we were all sharing microphones and they were being passed around, you'll hear some microphone-type noise along the way - but I did my best to optimize the audio quality, and it turned out mostly quite listenable, I believe.

    Aspirational Neuroscience
    Panelists:
    Anton Arkhipov, Allen Institute for Brain Science. @AntonSArkhipov
    Konrad Kording, University of Pennsylvania. @KordingLab
    Tomás Ryan, Trinity College Dublin. @TJRyan_77
    Srinivas Turaga, Janelia Research Campus.
    Dong Song, University of Southern California. @dongsong
    Zhihao Zheng, Princeton University. @zhihaozheng

    0:00 - Intro
    1:45 - Ken Hayworth
    14:09 - Panel Discussion

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, that we should be tolerant of and welcome a variety of theoretical and conceptual frameworks, and methods, and goals, when doing science. Pluralism is kind of a buzz word right now in my little neuroscience world, but it's an old and well-trodden notion... many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case, she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, etc.

    We discuss a wide range of topics, but also some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. So, there are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more.

    Laura's page at the Center for the Philosophy of Science at the University of Pittsburgh.

    Facing the Fringe.

    Garcia's reflections on his troubles: Tilting at the Paper Mills of Academe

    0:00 - Intro
    3:57 - What is fringe?
    10:14 - What makes a theory fringe?
    14:31 - Fringe to mainstream
    17:23 - Garcia effect
    28:17 - Fringe to mainstream: other examples
    32:38 - Fringe and consciousness
    33:19 - Word meanings change over time
    40:24 - Pseudoscience
    43:25 - How fringe becomes mainstream
    47:19 - More fringe characteristics
    50:06 - Pluralism as a solution
    54:02 - Progress
    1:01:39 - Encyclopedia of theories
    1:09:20 - When to reject a theory
    1:20:07 - How fringe becomes fringe
    1:22:50 - Marginalization
    1:27:53 - Recipe for fringe theorist

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, and how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when you're performing a task versus when you're just sort of going about your day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions.

    Eric's website.
    Related papers:
    Predictive learning as a network mechanism for extracting low-dimensional latent space representations.
    A scale-dependent measure of system dimensionality.
    From lazy to rich to exclusive task representations in neural networks and neural codes.
    Feedback through graph motifs relates structure and function in complex networks.

    0:00 - Intro
    4:15 - Reflecting on the rise of dynamical systems in neuroscience
    11:15 - DST view on macro scale
    15:56 - Intuitions
    22:07 - Eric's approach
    31:13 - Are brains more or less impressive to you now?
    38:45 - Why is dimensionality important?
    50:03 - High-D in Low-D
    54:14 - Dynamical motifs
    1:14:56 - Theory for its own sake
    1:18:43 - Rich vs. lazy learning
    1:22:58 - Latent variables
    1:26:58 - What assumptions give you most pause?

  • Support the show to get full episodes and join the Discord community.

    I was recently invited to moderate a panel at the annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was at a satellite workshop at the conference called How can machine learning be used to generate insights and theories in neuroscience? Below are the panelists. I hope you enjoy the discussion!

    Program: How can machine learning be used to generate insights and theories in neuroscience?
    Panelists:
    Katrin Franke. Lab website. Twitter: @kfrankelab.
    Ralf Haefner. Haefner lab. Twitter: @haefnerlab.
    Martin Hebart. Hebart Lab. Twitter: @martin_hebart.
    Johannes Jaeger. Yogi's website. Twitter: @yoginho.
    Fred Wolf. Fred's university webpage.

    Organizers:

    Alexander Ecker | University of Göttingen, Germany
    Fabian Sinz | University of Göttingen, Germany
    Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany

  • Support the show to get full episodes and join the Discord community.

    David Poeppel runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.

    David has been on the podcast a few times... once by himself, and again with Gyorgy Buzsaki.

    Poeppel lab
    Twitter: @davidpoeppel.
    Related papers:
    We don’t know how the brain stores anything, let alone words.
    Memory in humans and deep language models: Linking hypotheses for model augmentation.
    The neural ingredients for a language of thought are available.

    0:00 - Intro
    11:17 - Across levels
    14:59 - Nature of memory
    24:12 - Using the right tools for the right question
    35:46 - LLMs, what they need, how they've shaped David's thoughts
    44:55 - Across levels
    54:07 - Speed of progress
    1:02:21 - Neuroethology and mental illness - Patreon
    1:24:42 - Language of Thought

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been on the podcast before, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book Free Agents: How Evolution Gave Us Free Will. The book is written very well and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take-home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity of kinds of agency, the richness of our agency, evolved as organisms became more complex.

    We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more.

    Kevin's website.
    Twitter: @WiringtheBrain
    Book: Free Agents: How Evolution Gave Us Free Will

    4:27 - From Innate to Free Agents
    9:14 - Thinking of the whole organism
    15:11 - Who the book is for
    19:49 - What bothers Kevin
    27:00 - Indeterminacy
    30:08 - How it all began
    33:08 - How indeterminacy helps
    43:58 - Libet's free will experiments
    50:36 - Creativity
    59:16 - Selves, subjective experience, agency, and free will
    1:10:04 - Levels of agency and free will
    1:20:38 - How much free will can we have?
    1:28:03 - Hierarchy of mind constraints
    1:36:39 - Artificial agents and free will
    1:42:57 - Next book?

  • Check out my free video series about what's missing in AI and Neuroscience

    Support the show to get full episodes and join the Discord community.

    Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool.

    In this episode, we discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another billiard ball. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of systems. Much of Alicia's book describes the wide range of types of constraints we should be paying attention to, and how they interact and mutually influence each other. I highly recommend the book, and you may want to read it before, during, and after our conversation. That's partly because, if you're like me, the concepts she discusses still aren't comfortable to think about in the way we're used to thinking about how things interact. Thinking across levels of organization turns out to be hard. You might also want her book handy because, hang on to your hats, we jump around a lot among those concepts. Context Changes Everything comes about 25 years after her previous classic, Dynamics In Action, which we also discuss and which I also recommend if you want more of a primer to her newer, more expansive work. Alicia's work touches on all things complex, from self-organizing systems like whirlpools, to ecologies, businesses, societies, and of course minds and brains.

    Book: Context Changes Everything: How Constraints Create Coherence

    0:00 - Intro
    3:37 - 25 years thinking about constraints
    8:45 - Dynamics in Action and eliminativism
    13:08 - Efficient and other kinds of causation
    19:04 - Complexity via context independent and dependent constraints
    25:53 - Enabling and limiting constraints
    30:55 - Across scales
    36:32 - Temporal constraints
    42:58 - A constraint cookbook?
    52:12 - Constraints in a mechanistic worldview
    53:42 - How to explain using constraints
    56:22 - Concepts and multiple realizability
    59:00 - Kevin Mitchell question
    1:08:07 - Mac Shine question
    1:19:07 - 4E
    1:21:38 - Dimensionality across levels
    1:27:26 - AI and constraints
    1:33:08 - AI and life

  • Support the show to get full episodes and join the Discord community.

    In the intro, I mention the Bernstein conference workshop I'll participate in, called How can machine learning be used to generate insights and theories in neuroscience?. Follow that link to learn more, and register for the conference here. Hope to see you there in late September in Berlin!

    Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal is to use the models to better understand natural visual intelligence, and to use what we know about natural visual intelligence to help build systems that better emulate biological organisms. We discuss some of the visual abilities of the chicks and what he's found using convolutional neural networks. Beyond vision, we discuss his work studying the development of collective behavior, which compares chicks to a model that uses CNNs, reinforcement learning, and an intrinsic curiosity reward function. All of this informs the age-old nature (nativist) vs. nurture (empiricist) debates, which Justin believes should give way to embracing both nature and nurture.

    Wood lab.
    Related papers:
    Controlled-rearing studies of newborn chicks and deep neural networks.
    Development of collective behavior in newborn artificial agents.
    A newborn embodied Turing test for view-invariant object recognition.
    Justin mentions these papers:
    Untangling invariant object recognition (Dicarlo & Cox 2007)

    0:00 - Intro
    5:39 - Origins of Justin's current research
    11:17 - Controlled rearing approach
    21:52 - Comparing newborns and AI models
    24:11 - Nativism vs. empiricism
    28:15 - CNNs and early visual cognition
    29:35 - Smoothness and slowness
    50:05 - Early biological development
    53:27 - Naturalistic vs. highly controlled
    56:30 - Collective behavior in animals and machines
    1:02:34 - Curiosity and critical periods
    1:09:05 - Controlled rearing vs. other developmental studies
    1:13:25 - Breaking natural rules
    1:16:33 - Deep RL collective behavior
    1:23:16 - Bottom-up and top-down

  • Support the show to get full episodes and join the Discord community.

    David Glanzman runs his lab at UCLA, where he's also a distinguished professor. David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons. So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and it is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old, discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including the challenges of trying to get funded for it, and so on.

    David's Faculty Page.
    Related papers:
    The central importance of nuclear mechanisms in the storage of memory.
    David mentions Arc and virus-like transmission:
    The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer.
    Structure of an Arc-ane virus-like capsid.
    David mentions many of the ideas from the Pushing the Boundaries: Neuroscience, Cognition, and Life Symposium.
    Related episodes:
    BI 126 Randy Gallistel: Where Is the Engram?
    BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.

    We discuss that, along with his love for developing open data sets that anyone can use; the dance he dances between bottom-up, data-driven approaches in this big data era, traditional experimental approaches, and top-down, theory-driven approaches; how early language learning in children differs from LLM learning; and Mike's rational speech act model of language use, which considers the intentions, or pragmatics, of speakers and listeners in dialogue.

    Language & Cognition Lab
    Twitter: @mcxfrank.
    I mentioned Mike's tweet thread about saying LLMs "have" cognitive functions.
    Related papers:
    Pragmatic language interpretation as probabilistic inference.
    Toward a “Standard Model” of Early Language Learning.
    The pervasive role of pragmatics in early language.
    The Structure of Developmental Variation in Early Childhood.
    Relational reasoning and generalization using non-symbolic neural networks.
    Unsupervised neural network models of the ventral visual stream.

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.

    Ali's website.
    Twitter: @mohebial

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains.

    Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast, but they are a kind of abstract structure in the space of possible neural population activity - the neural dynamics. And that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time.

    One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near some other words. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition. We talk about modeling in general and what models do and don't tell us, and much more.

    Andrea's website.
    Twitter: @andrea_e_martin.
    Related papers:
    A Compositional Neural Architecture for Language
    An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions
    Neural dynamics differentially encode phrases and sentences during spoken language comprehension
    Hierarchical structure in language and action: A formal comparison
    Andrea mentions this book: The Geometry of Biological Time.

  • Check out my free video series about what's missing in AI and Neuroscience

    Support the show to get full episodes and join the Discord community.

    This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art potentially change our scientific attitudes and perspectives?

    Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives.

    This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion!

    AWARE: Glimpses of Consciousness
    Umbrella Films

    0:00 - Intro
    19:42 - Mechanistic reductionism
    45:33 - Changing views during lifetime
    53:49 - Did making the film alter your views?
    57:49 - ChatGPT
    1:04:20 - Materialist assumption
    1:11:00 - Science of consciousness
    1:20:49 - Transhumanism
    1:32:01 - Integrity
    1:36:19 - Aesthetics
    1:39:50 - Response to the film

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology, and Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, doing many varieties of important signal transformation before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks need to favor as well moving forward.

    Poirazi Lab
    Twitter: @YiotaPoirazi.
    Related papers:
    Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks.
    Illuminating dendritic function with computational models.
    Introducing the Dendrify framework for incorporating dendrites to spiking neural networks.
    Pyramidal Neuron as Two-Layer Neural Network

    0:00 - Intro
    3:04 - Yiota's background
    6:40 - Artificial networks and dendrites
    9:24 - Dendrites special sauce?
    14:50 - Where are we in understanding dendrite function?
    20:29 - Algorithms, plasticity, and brains
    29:00 - Functional unit of the brain
    42:43 - Engrams
    51:03 - Dendrites and nonlinearity
    54:51 - Spiking neural networks
    56:02 - Best level of biological detail
    57:52 - Dendrify
    1:05:41 - Experimental work
    1:10:58 - Dendrites across species and development
    1:16:50 - Career reflection
    1:17:57 - Evolution of Yiota's thinking