Brain Inspired show


Summary: Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.


Podcasts:

 BI 140 Jeff Schall: Decisions and Eye Movements | File Type: audio/mpeg | Duration: 01:20:22

Check out my short video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers on the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement systems of the brain and in mathematical psychology models; in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Davida Teller's "Linking Propositions" is a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. John Platt's "Strong Inference" is the scientific method on steroids: a way to make our scientific practice as productive and efficient as possible. We discuss both of these topics in the context of Jeff's eye movement and decision-making research. We also discuss how neurophysiology has changed over the past 30 years, compare the relatively small models he employs with today's huge deep learning models, and cover many of his current projects. If you want to learn more about Jeff's work and approach, I recommend reading, in order, the two review papers we discuss: one written 20 years ago (On Building a Bridge Between Brain and Behavior), the other about two years ago (Accumulators, Neurons, and Response Time).
Schall Lab.
Twitter: @LabSchall.
Related papers:
Linking Propositions.
Strong Inference.
On Building a Bridge Between Brain and Behavior.
Accumulators, Neurons, and Response Time.
0:00 - Intro 6:51 - Neurophysiology old and new 14:50 - Linking propositions 24:18 - Psychology working with neurophysiology 35:40 - Neuron doctrine, population doctrine 40:28 - Strong Inference and deep learning 46:37 - Model mimicry 51:56 - Scientific fads 57:07 - Current projects 1:06:38 - On leaving academia 1:13:51 - How academia has changed for better and worse
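The accumulator framework from "Accumulators, Neurons, and Response Time" can be sketched in a few lines of code: evidence for each candidate saccade target races toward a threshold, the winning accumulator determines where to look, and the crossing time determines when. A minimal sketch; the drift rates, noise level, and threshold here are illustrative numbers, not values fitted to any data:

```python
import numpy as np

rng = np.random.default_rng(1)

def race_trial(drifts, threshold=1.0, noise=0.1, dt=0.001):
    """One trial of a stochastic accumulator race: evidence for each
    target grows at its own drift rate plus noise until one accumulator
    reaches threshold. The winner is WHERE to look; the crossing time
    is WHEN (the response time)."""
    x = np.zeros(len(drifts))
    t = 0.0
    while x.max() < threshold:
        x += np.asarray(drifts) * dt + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x, 0.0)  # firing rates can't go negative
        t += dt
    return int(np.argmax(x)), t  # (chosen target, response time)

# Stronger evidence for target 0 means it usually wins the race
choices, rts = zip(*(race_trial([1.0, 0.3]) for _ in range(100)))
```

Raising the threshold in this toy model slows responses but makes the choice more reliable: the familiar speed-accuracy tradeoff.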

 BI 139 Marc Howard: Compressed Time and Memory | File Type: audio/mpeg | Duration: 01:20:11

Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log scale: as memories get older, their representations "spread out" in time. This kind of representation appears ubiquitous in the brain, suggesting it may be a canonical computation the brain uses across a wide variety of cognitive functions. We also discuss some of the ways Marc is incorporating this mathematical operation into deep learning networks to improve their ability to handle information at different time scales.
Theoretical Cognitive Neuroscience Lab.
Related papers:
Memory as perception of the past: Compressed time in mind and brain.
Formal models of memory based on temporally-varying representations.
Cognitive computation using neural representations of time and space in the Laplace domain.
Time as a continuous dimension in natural and artificial networks.
DeepSITH: Efficient learning via decomposition of what and when across time scales.
0:00 - Intro 4:57 - Main idea: Laplace transforms 12:00 - Time cells 20:08 - Laplace, compression, and time cells 25:34 - Everywhere in the brain 29:28 - Episodic memory 35:11 - Randy Gallistel's memory idea 40:37 - Adding Laplace to deep nets 48:04 - Reinforcement learning 1:00:52 - Brad Wyble Q: What gets filtered out? 1:05:38 - Replay and complementary learning systems 1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki 1:15:10 - Obstacles
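The Laplace-transform idea can be made concrete with a toy model: a bank of leaky integrators, each with its own decay rate s, maintains the Laplace transform of the input's history, and Post's approximate inversion formula reads out a blurred, log-compressed timeline of the past. A minimal sketch; the grid of decay rates, the single pulse input, and the inversion order k are illustrative choices, not the lab's actual parameters:

```python
import math
import numpy as np

def laplace_memory(pulse_time, t_max, s, dt=0.001):
    """Each unit integrates dF/dt = -s*F + f(t). After a brief unit-area
    pulse, F(s) ~ exp(-s * elapsed): the Laplace transform of the input
    history, held as a pattern of activity across the bank."""
    F = np.zeros_like(s)
    pulse_step = round(pulse_time / dt)
    for step in range(round(t_max / dt)):
        f_t = 1.0 / dt if step == pulse_step else 0.0  # unit-area pulse
        F += dt * (-s * F + f_t)
    return F

def invert(s, F, k=4):
    """Post's inversion formula: f~(tau) = (-1)^k/k! * s^(k+1) * d^k F/ds^k,
    read out at tau = k/s. The estimate blurs as tau grows, giving a
    log-compressed representation of the past."""
    dF = F.copy()
    for _ in range(k):
        dF = np.gradient(dF, s)  # numerical derivative on the s grid
    return k / s, ((-1) ** k / math.factorial(k)) * s ** (k + 1) * dF

s = np.logspace(-1, 1.3, 200)                       # spectrum of decay rates
F = laplace_memory(pulse_time=1.0, t_max=4.0, s=s)  # pulse 3 s in the past
tau, f_est = invert(s, F)
```

For a single pulse, the reconstruction peaks near the true elapsed time (shifted by a known factor of k/(k+1)), and its width grows with elapsed time: older events are represented more coarsely, which is the log compression discussed in the episode.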

 BI 138 Matthew Larkum: The Dendrite Hypothesis | File Type: audio/mpeg | Duration: 01:51:42

Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers, and thus from different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input (neither, one, or both), the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals with feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.
Larkum Lab.
Twitter: @mattlark.
Related papers:
Cellular Mechanisms of Conscious Processing.
Perirhinal input to neocortical layer 1 controls learning. (bioRxiv link: https://www.biorxiv.org/content/10.1101/713883v1)
Are dendrites conceptually useful?
Memories off the top of your head.
Do Action Potentials Cause Consciousness?
Blake Richards' episode discussing back-propagation in the brain (based on Matthew's experiments).
0:00 - Intro 5:31 - Background: Dendrites 23:20 - Cortical neuron bodies vs. branches 25:47 - Theories of cortex 30:49 - Feedforward and feedback hierarchy 37:40 - Dendritic integration hypothesis 44:32 - DIT vs. other consciousness theories 51:30 - Mac Shine Q1 1:04:38 - Are dendrites conceptually useful? 1:09:15 - Insights from implementation level 1:24:44 - How detailed to model? 1:28:15 - Do action potentials cause consciousness? 1:40:33 - Mac Shine Q2
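The three firing modes described above can be caricatured as a coincidence-detection rule: basal (feedforward) drive alone produces regular spiking, while coincident apical (feedback) drive triggers a dendritic calcium spike and switches the output to bursting. A deliberately minimal sketch; the threshold values are illustrative, not measured ones:

```python
def l5_pyramidal_mode(basal_drive: float, apical_drive: float,
                      soma_threshold: float = 1.0,
                      calcium_threshold: float = 1.0) -> str:
    """Toy coincidence detector for a layer-5 pyramidal neuron.
    Basal dendrites carry feedforward, sensory-like input; apical
    dendrites up in layer 1 carry feedback, context-like input."""
    if basal_drive >= soma_threshold and apical_drive >= calcium_threshold:
        # A somatic spike back-propagating into the apical tuft, paired
        # with apical input, can trigger a dendritic Ca2+ spike: bursting.
        return "burst spiking"
    if basal_drive >= soma_threshold:
        return "regular spiking"  # feedforward drive alone
    return "silent"  # apical input alone rarely fires the soma
```

The point of the toy rule is only that the neuron's output mode reports which combination of layers is driving it, which is what lets a single cell pair sensory evidence with context.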

 BI 137 Brian Butterworth: Can Fish Count? | File Type: audio/mpeg | Duration: 01:17:49

Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes counting and numerical abilities across many different species, suggesting that our ability to count is evolutionarily ancient, since it is shared by so many diverse species. We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.
Brian's website: The Mathematical Brain.
Twitter: @b_butterworth.
The book: Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds.
0:00 - Intro 3:19 - Why Counting? 5:31 - Dyscalculia 12:06 - Dyslexia 19:12 - Counting 26:37 - Origins of counting vs. language 34:48 - Counting vs. higher math 46:46 - Counting some things and not others 53:33 - How to test counting 1:03:30 - How does the brain count? 1:13:10 - Are numbers real?

 BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology | File Type: audio/mpeg | Duration: 01:34:12

Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including in the study of minds, brains, and intelligence: our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.
Michel's website.
Alex's lab: The Behavior of Organisms Laboratory.
Twitter: @behaviOrganisms (Alex).
Related papers:
The Blind Spot of Neuroscience.
The Life of Behavior.
A Clash of Umwelts.
Related events:
The Future Scientist (a conversation series).
0:00 - Intro 4:32 - The Blind Spot 15:53 - Phenomenology and interpretation 22:51 - Personal stories: appreciating phenomenology 37:42 - Quantum physics example 47:16 - Scientific explanation vs. phenomenological description 59:39 - How can phenomenology and science complement each other? 1:08:22 - Neurophenomenology 1:17:34 - Use of language 1:25:46 - Mutual constraints

 BI 135 Elena Galea: The Stars of the Brain | File Type: audio/mpeg | Duration: 01:17:25

Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology has only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to study how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understanding astrocyte function in cognition, disease, homeostasis, and, her favorite current hypothesis, their integrative role in negative feedback control.
Elena's website.
Twitter: @elenagalea1.
Related papers:
A roadmap to integrate astrocytes into Systems Neuroscience.
Elena recommended this paper: Biological feedback control—Respect the loops.
0:00 - Intro 5:23 - The changing story of astrocytes 14:58 - Astrocyte research lags neuroscience 19:45 - Types of astrocytes 23:06 - Astrocytes vs neurons 26:08 - Computational roles of astrocytes 35:45 - Feedback control 43:37 - Energy efficiency 46:25 - Current technology 52:58 - Computational astroscience 1:10:57 - Do names for things matter

 BI 134 Mandyam Srinivasan: Bee Flight and Cognition | File Type: audio/mpeg | Duration: 01:26:17

Srini is Emeritus Professor at the Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to gracefully land. These abilities are largely governed by control systems that balance incoming perceptual signals against internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.
Srini's website.
Related papers:
Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics.
0:00 - Intro 3:34 - Background 8:20 - Bee experiments 14:30 - Bee flight and navigation 28:05 - Landing 33:06 - Umwelt and perception 37:26 - Bee-inspired aerial robotics 49:10 - Motion camouflage 51:52 - Cognition in bees 1:03:10 - Small vs. big brains 1:06:42 - Pain in bees 1:12:50 - Subjective experience 1:15:25 - Deep learning 1:23:00 - Path forward
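One control law from Srini's landing experiments is simple enough to simulate: hold the perceived optic flow (roughly, forward speed divided by height) at a constant set-point while descending. Speed then falls in proportion to height, so the bee arrives at the ground with near-zero speed without ever measuring speed or height separately. A sketch with made-up numbers for the set-point and descent ratio:

```python
def grazing_landing(h0=10.0, flow_setpoint=2.0, descent_ratio=0.3,
                    dt=0.01, steps=1000):
    """Bee-style landing controller: pick forward speed v so the optic
    flow v/h stays at the set-point, and descend at a fixed fraction of
    forward speed. Height then decays exponentially toward the ground,
    and touchdown speed approaches zero automatically."""
    h = h0
    for _ in range(steps):
        v = flow_setpoint * h        # keeps perceived flow v/h constant
        h -= descent_ratio * v * dt  # descent coupled to forward speed
    return h, v

h_final, v_final = grazing_landing()  # near-zero height and speed
```

The elegance the episode highlights is that a single perceptual quantity, image motion, closes the whole control loop.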

 BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep | File Type: audio/mpeg | Duration: 01:29:14

Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App, which is freely available via his lab. We also discuss much of his work on memory and learning in general, and specifically in relation to sleep, like reactivating specific memories during sleep to improve learning.
Ken's Cognitive Neuroscience Laboratory.
Twitter: @kap101.
The Lucid Dreaming App.
Related papers:
Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better.
Does memory reactivation during sleep support generalization at the cost of memory specifics?
Real-time dialogue between experimenters and dreamers during REM sleep.
0:00 - Intro 2:48 - Background and types of memory 14:44 - Consciousness and memory 23:32 - Phases of sleep and wakefulness 28:19 - Sleep, memory, and learning 33:50 - Targeted memory reactivation 48:34 - Problem solving during sleep 51:50 - 2-way communication with lucid dreamers 1:01:43 - Confounds to the paradigm 1:04:50 - Limitations and future studies 1:09:35 - Lucid dreaming app 1:13:47 - How sleep can inform AI 1:20:18 - Advice for students

 BI 132 Ila Fiete: A Grid Scaffold for Memory | File Type: audio/mpeg | Duration: 01:17:20

Announcement: I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here. Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex: signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist, her approach as a "neurophysicist," and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.
The Fiete Lab.
Related papers:
A structured scaffold underlies activity in the hippocampus.
Attractor and integrator networks in the brain.
0:00 - Intro 3:36 - "Neurophysicist" 9:30 - Bottom-up vs. top-down 15:57 - Tool scavenging 18:21 - Cognitive maps and hippocampus 22:40 - Hopfield networks 27:56 - Internal scaffold 38:42 - Place cells 43:44 - Grid cells 54:22 - Grid cells encoding place cells 59:39 - Scaffold model: stacked Hopfield networks 1:05:39 - Attractor landscapes 1:09:22 - Landscapes across scales 1:12:27 - Dimensionality of landscapes
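The "pinning" idea can be illustrated with a toy hetero-associative network: fixed, pre-structured scaffold states (standing in for grid-cell attractor states) are bound by Hebbian learning to whatever content arrives from "neocortex"; later, cueing with a noisy version of a scaffold state retrieves its pinned content. A minimal sketch, with random +/-1 patterns standing in for both populations; none of the sizes or noise levels come from the actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scaffold, n_sensory, n_memories = 200, 300, 5

# Fixed internal scaffold states (stand-ins for entorhinal grid states)
scaffold = rng.choice([-1, 1], size=(n_memories, n_scaffold))
# External "neocortical" content to be remembered
sensory = rng.choice([-1, 1], size=(n_memories, n_sensory))

# Hebbian "pinning": associate each sensory pattern with a scaffold state,
# as hippocampal place cells might bind cortical input to the grid scaffold
W = sensory.T @ scaffold / n_scaffold

# Recall: cue with a corrupted scaffold state 2 (10% of units flipped)
flip = np.where(rng.random(n_scaffold) < 0.1, -1, 1)
recalled = np.sign(W @ (scaffold[2] * flip))
overlap = float((recalled * sensory[2]).mean())  # ~1.0 means clean recall
```

In the full proposal the scaffold is not random but inherits the rigid structure of the grid-cell network; random patterns are used here only to keep the sketch short.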

 BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs | File Type: audio/mpeg | Duration: 01:26:52

Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs."
Neural Circuits Laboratory.
Twitter: Sri: @srikipedia; Jie: @neuro_Mei.
Related papers:
Informing deep neural networks by multiscale principles of neuromodulatory systems.
0:00 - Intro 3:10 - Background 9:19 - Bottom-up vs. top-down 14:42 - Levels of abstraction 22:46 - Biological neuromodulation 33:18 - Inventing neuromodulators 41:10 - How far along are we? 53:31 - Multiple realizability 1:09:40 - Modeling dendrites 1:15:24 - Across-species neuromodulation

 BI 130 Eve Marder: Modulation of Networks | File Type: audio/mpeg | Duration: 01:00:56

Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons, and its connections and neurophysiology are well understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.
The Marder Lab.
Twitter: @MarderLab.
Related to our conversation:
Understanding Brains: Details, Intuition, and Big Data.
Emerging principles governing the operation of neural networks (Eve mentions this regarding "building blocks" of neural networks).
0:00 - Intro 3:58 - Background 8:00 - Levels of ambiguity 9:47 - Stomatogastric nervous system 17:13 - Structure vs. function 26:08 - Role of theory 34:56 - Technology vs. understanding 38:25 - Higher cognitive function 44:35 - Adaptability, resilience, evolution 50:23 - Climate change 56:11 - Deep learning 57:12 - Dynamical systems

 BI 129 Patryk Laurent: Learning from the Real World | File Type: audio/mpeg | Duration: 01:21:01

Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resulting perspective on what's needed to move forward in AI, including which principles of brain processing matter more and which matter less. We also discuss his own work using some of those principles to help deep learning generalize, to better capture how humans behave in and perceive the world.
Patryk's homepage.
Twitter: @paklnet.
Related papers:
Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network.
0:00 - Intro 2:22 - Patryk's background 8:37 - Importance of diverse skills 16:14 - What is intelligence? 20:34 - Important brain principles 22:36 - Learning from the real world 35:09 - Language models 42:51 - AI contribution to neuroscience 48:22 - Criteria for "real" AI 53:11 - Neuroscience for AI 1:01:20 - What can we ignore about brains? 1:11:45 - Advice to past self

 BI 128 Hakwan Lau: In Consciousness We Trust | File Type: audio/mpeg | Duration: 01:25:40

Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges in empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.
Hakwan's lab: Consciousness and Metacognition Lab.
Twitter: @hakwanlau.
Book: In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience.
0:00 - Intro 4:37 - In Consciousness We Trust 12:19 - Too many consciousness theories? 19:26 - Philosophy and neuroscience of consciousness 29:00 - Local vs. global theories 31:20 - Perceptual reality monitoring and GANs 42:43 - Functions of consciousness 47:17 - Mental quality space 56:44 - Cognitive maps 1:06:28 - Performance capacity confounds 1:12:28 - Blindsight 1:19:11 - Philosophy vs. empirical work

 BI 127 Tomás Ryan: Memory, Instinct, and Forgetting | File Type: audio/mpeg | Duration: 01:42:39

Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition into instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: BI 126 Randy Gallistel: Where Is the Engram?
Ryan Lab.
Twitter: @TJRyan_77.
Related papers:
Engram cell connectivity: an evolving substrate for information storage.
Forgetting as a form of adaptive engram cell plasticity.
Memory and Instinct as a Continuum of Information Storage in The Cognitive Neurosciences.
The Bandwagon by Claude Shannon.
0:00 - Intro 4:05 - Response to Randy Gallistel 10:45 - Computation in the brain 14:52 - Instinct and memory 19:37 - Dynamics of memory 21:55 - Wiring vs. connection strength plasticity 24:16 - Changing one's mind 33:09 - Optogenetics and memory experiments 47:24 - Forgetting as learning 1:06:35 - Folk psychological terms 1:08:49 - Memory becoming instinct 1:21:49 - Instinct across the lifetime 1:25:52 - Boundaries of memories 1:28:52 - Subjective experience of memory 1:31:58 - Interdisciplinary research 1:37:32 - Communicating science

 BI 126 Randy Gallistel: Where Is the Engram? | File Type: audio/mpeg | Duration: 01:19:57

Randy and I discuss his long-standing interest in how the brain stores information in order to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA or RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience. We also talk about some research and theoretical work since then that supports his views.
Randy's Rutgers website.
Book: Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience.
Related papers:
The theoretical RNA paper Randy mentions: An RNA-based theory of natural universal computation.
Evidence for an intracellular engram in the cerebellum: Memory trace and timing mechanism localized to cerebellar Purkinje cells.
The exchange between Randy and John Lisman.
The blog post Randy mentions about universal function approximation: The Truth About the [Not So] Universal Approximation Theorem.
0:00 - Intro 6:50 - Cognitive science vs. computational neuroscience 13:23 - Brain as computing device 15:45 - Noam Chomsky's influence 17:58 - Memory must be stored within cells 30:58 - Theoretical support for the idea 34:15 - Cerebellum evidence supporting the idea 40:56 - What is the write mechanism? 51:11 - Thoughts on deep learning 1:00:02 - Multiple memory mechanisms? 1:10:56 - The role of plasticity 1:12:06 - Trying to convince molecular biologists
