Brain Inspired

Summary: Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress toward understanding intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and much more. The podcast is not produced for a general audience; instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Podcasts:

 BI 091 Carsen Stringer: Understanding 40,000 Neurons | File Type: audio/mpeg | Duration: 01:28:19

Carsen and I discuss how she uses two-photon calcium imaging data from over 10,000 neurons to understand how such large neural populations process information. We talk about the tools she builds and uses to analyze the data, and the high-dimensional structure of neural activity her lab found, which seems to allow efficient and robust information processing. We also talk about how these findings may help build better deep learning networks, and Carsen's thoughts on how to improve diversity, inclusivity, and equality in neuroscience research labs. Guest question from Matt Smith. (A toy sketch of the dimensionality analysis appears after the show notes.)

Related:
Stringer Lab
Twitter: @computingnature

The papers we discuss or mention:
High-dimensional geometry of population responses in visual cortex
Spontaneous behaviors drive multidimensional, brain-wide population activity

Timestamps:
0:00 - Intro
5:51 - Recording >10k neurons
8:51 - 2-photon calcium imaging
14:56 - Balancing scientific questions and tools
21:16 - Unsupervised learning tools and rastermap
26:14 - Manifolds
32:13 - Matt Smith question
37:06 - Dimensionality of neural activity
58:51 - Future plans
1:00:30 - What can AI learn from this?
1:13:26 - Diversity, inclusivity, equality
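
The dimensionality result above concerns how variance spreads across principal components of population activity. Below is a minimal sketch of that style of analysis, assuming a generic neurons-by-timepoints matrix filled with placeholder random data; the published result uses a cross-validated estimator on real recordings, so treat this purely as an illustration of fitting a power-law exponent to a PCA variance spectrum.

```python
import numpy as np

# Minimal sketch of a PCA variance-spectrum analysis (placeholder random data,
# not the paper's cross-validated estimator). Stringer et al. report that the
# variance across PCs of visual cortical activity decays roughly as a power
# law, variance_n ~ n**(-alpha).
rng = np.random.default_rng(0)
activity = rng.standard_normal((1000, 5000))     # neurons x timepoints

centered = activity - activity.mean(axis=1, keepdims=True)
cov = centered @ centered.T / centered.shape[1]  # neuron-by-neuron covariance
variances = np.linalg.eigvalsh(cov)[::-1]        # PC variances, descending
variances /= variances.sum()

# Fit the decay exponent over a mid-range of PCs on log-log axes.
n = np.arange(1, variances.size + 1)
slope, _ = np.polyfit(np.log(n[10:500]), np.log(variances[10:500]), 1)
print(f"estimated power-law exponent alpha ~ {-slope:.2f}")
```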

 BI 090 Chris Eliasmith: Building the Human Brain | File Type: audio/mpeg | Duration: 01:38:57

Chris and I discuss Spaun (Semantic Pointer Architecture Unified Network), his large-scale model of the human brain, as detailed in his book How to Build a Brain. We talk about his philosophical approach, how Spaun compares to Randy O'Reilly's Leabra networks, and Applied Brain Research, the company Chris co-founded, plus guest questions from Brad Aimone, Steve Potter, and Randy O'Reilly. (A toy Nengo example appears after the show notes.)

Related:
Chris's website
Applied Brain Research
The book: How to Build a Brain
Nengo (you can run Spaun)
Paper summary of Spaun: A large-scale model of the functioning brain

Some takeaways:
Spaun is an embodied, fully functional cognitive architecture, with one eye for task instructions and an arm for responses.
Chris uses elements from symbolic, connectionist, and dynamical-systems approaches in cognitive science.
The Neural Engineering Framework (NEF) is how functions get instantiated in spiking neural networks.
The Semantic Pointer Architecture (SPA) is how representations are stored and transformed - i.e., the symbolic-like cognitive processing.

Timestamps:
0:00 - Intro
2:29 - Sense of awe
6:20 - Large-scale models
9:24 - Descriptive pragmatism
15:43 - Asking better questions
22:48 - Brad Aimone question: Neural Engineering Framework
29:07 - Engineering to build vs. understand
32:12 - Why is the AI world not interested in brains/minds?
37:09 - Steve Potter neuromorphics question
44:51 - Spaun
49:33 - Semantic Pointer Architecture
56:04 - Representations
58:21 - Randy O'Reilly question 1
1:07:33 - Randy O'Reilly question 2
1:10:31 - Spaun vs. Leabra
1:32:43 - How would Chris start over?
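
For readers new to the NEF, here is a minimal sketch using the open-source Nengo library linked above: it represents a sinusoidal input in a population of spiking neurons and asks the NEF solver for decoding weights that compute the square of the represented value. The network size, function, and signal are illustrative choices, not anything taken from Spaun.

```python
import numpy as np
import nengo

# Toy NEF example (illustrative only, not Spaun): represent a scalar in a
# spiking ensemble and decode a nonlinear function of it.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)   # spiking LIF population
    out = nengo.Node(size_in=1)
    nengo.Connection(stim, ens)
    # The NEF solves for linear decoders so this connection computes x**2.
    nengo.Connection(ens, out, function=lambda x: x ** 2)
    probe = nengo.Probe(out, synapse=0.01)              # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)
print(sim.data[probe][-5:])  # should approximate sin(2*pi*t)**2 near t = 1
```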

 BI 089 Matt Smith: Drifting Cognition | File Type: audio/mpeg | Duration: 01:26:52

Matt and I discuss how cognition and behavior drift over the course of minutes and hours, and how global brain activity drifts with them. How does the brain continue to produce steady perception and action in the midst of such drift? We also talk about how to think about variability in neural activity: how much of it is noise, and how much is hidden but important activity? Finally, we discuss the effect of recording more and more neurons simultaneously and collecting bigger and bigger datasets, plus guest questions from Adam Snyder and Patrick Mayo.

Related:
Smith Lab
Twitter: @SmithLabNeuro
Slow drift of neural activity as a signature of impulsivity in macaque visual and prefrontal cortex
Artwork by Melissa Neely

Take-home points:
The "noise" in the variability of neural activity is likely just activity devoted to processing other things.
Recording lots of neurons simultaneously helps resolve how much of the variability is noise and how much information a population of neurons carries.
There's a neural signature of the behavioral "slow drift" of our internal cognitive state.
That neural signature is global, and it's an open question how the brain compensates for it to produce steady perception and action.

Timestamps:
0:00 - Intro
4:35 - Adam Snyder question
15:26 - Multi-electrode recordings
17:48 - What is noise in the brain?
23:55 - How many neurons is enough?
27:43 - Patrick Mayo question
33:17 - Slow drift
54:10 - Impulsivity
57:32 - How does drift happen?
59:49 - Relation to AI
1:06:58 - What AI and neuro can teach each other
1:10:02 - Ecologically valid behavior
1:14:39 - Brain mechanisms vs. mind
1:17:36 - Levels of description
1:21:14 - Hard things to make in AI
1:22:48 - Best scientific moment

 BI 088 Randy O’Reilly: Simulating the Human Brain | File Type: audio/mpeg | Duration: 01:39:08

Randy and I discuss his Leabra cognitive architecture, which aims to simulate the human brain, plus his current theory of how a loop between cortical regions and the thalamus could implement predictive learning and thereby explain how we learn from so few examples. We also discuss what Randy thinks is the next big thing neuroscience can contribute to AI (thanks to a guest question from Anna Schapiro), and much more. (A toy sketch of the predictive-learning idea appears after the show notes.)

Related:
Computational Cognitive Neuroscience Laboratory

The papers we discuss or mention:
The Leabra Cognitive Architecture: How to Play 20 Principles with Nature and Win!
Deep Predictive Learning in Neocortex and Pulvinar
Unraveling the Mysteries of Motivation

His YouTube series detailing the theory and workings of Leabra: Computational Cognitive Neuroscience
The free textbook: Computational Cognitive Neuroscience

A few take-home points:
Leabra has been a slow, incremental project, inspired in part by Allen Newell's suggested approach.
Randy began by developing a learning algorithm that incorporated both kinds of biological learning (error-driven and associative).
Leabra's core is three brain areas - frontal cortex, parietal cortex, and hippocampus - and it has grown from there.
There's a constant balance between biological realism and computational feasibility.
It's important that a cognitive architecture address multiple levels: micro-scale, macro-scale, mechanisms, functions, and so on.
Deep predictive learning is a possible brain mechanism whereby predictions from higher-layer cortex precede input from lower-layer cortex in the thalamus, where an error is computed and used to drive learning.
Randy believes our metacognitive ability to know what we do and don't know is a key next function to build into AI.

Timestamps:
0:00 - Intro
3:54 - Skip intro
6:20 - Being in awe
18:57 - How current AI can inform neuro
21:56 - Anna Schapiro question: how current neuro can inform AI
29:20 - Learned vs. innate cognition
33:43 - Leabra
38:33 - Developing Leabra
40:30 - Macroscale
42:33 - Thalamus as microscale
43:22 - Thalamocortical circuitry
47:25 - Deep predictive learning
56:18 - Deep predictive learning vs. backprop
1:01:56 - 10 Hz learning cycle
1:04:58 - Better theory vs. more data
1:08:59 - Leabra vs. Spaun
1:13:59 - Biological realism
1:21:54 - Bottom-up inspiration
1:27:26 - Biggest mistake in Leabra
1:32:14 - AI consciousness
1:34:45 - How would Randy begin again?
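
As an intuition pump for the deep-predictive-learning point above, here is a minimal sketch assuming a made-up linear "world" and a delta-rule update; it is not Leabra's actual mechanism, only the bare idea that a top-down prediction is compared with the bottom-up input at a thalamic relay and the mismatch drives learning.

```python
import numpy as np

# Toy sketch of the deep-predictive-learning idea (not Leabra's actual
# implementation): a higher layer predicts the upcoming lower-layer input at
# a thalamic relay (pulvinar); the prediction error drives a local update.
rng = np.random.default_rng(0)
n_lower, n_higher = 20, 10
W_world = rng.standard_normal((n_lower, n_higher))  # hidden "true" mapping
W_pred = np.zeros((n_lower, n_higher))              # learned top-down weights
lr = 0.01

for step in range(5000):
    higher = rng.standard_normal(n_higher)  # placeholder higher-layer state
    sensed = W_world @ higher               # actual bottom-up input
    predicted = W_pred @ higher             # top-down prediction sent ahead
    error = sensed - predicted              # mismatch computed at the relay
    W_pred += lr * np.outer(error, higher)  # delta-rule update from the error

print(np.abs(W_pred - W_world).max())       # shrinks as predictions improve
```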

 BI 087 Dileep George: Cloning for Cognitive Maps | File Type: audio/mpeg | Duration: 01:23:00

When a waiter hands me the bill, how do I know whether to pay it myself or let my date pay? In this episode, Dileep gives a progress update on his company, Vicarious, since his last appearance, and we talk broadly about his experience running Vicarious to develop AGI and robotics. Then we turn to his latest brain-inspired AI efforts, which use cloned structured probabilistic graphical models to develop an account of how the hippocampus builds a model of the world and represents our cognitive maps in different contexts, so we can simulate possible outcomes to choose how to act. Special guest questions from Brad Love (episode 70: How We Learn Concepts). (A toy sketch of the cloning idea appears after the show notes.)

Related:
Vicarious website - Dileep's AGI robotics company
Twitter: @dileeplearning

Papers we discuss:
Learning cognitive maps as structured graphs for vicarious evaluation
A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model
Probabilistic graphical models
Hierarchical temporal memory

Timestamps:
0:00 - Intro
3:00 - Skip intro
4:00 - Previous Dileep episode
10:22 - Is brain-inspired AI over-hyped?
14:38 - Competition in the robotics field
15:53 - Vicarious robotics
22:12 - Choosing what product to make
28:13 - Running a startup
30:52 - Old brain vs. new brain
37:53 - Learning cognitive maps as structured graphs
41:59 - Graphical models
47:10 - Cloning and merging, hippocampus
53:36 - Brad Love question 1
1:00:39 - Brad Love question 2
1:02:41 - Task examples
1:11:56 - What does hippocampus do?
1:14:14 - Intro to thalamic cortical microcircuit
1:15:21 - What AI folks think of brains
1:16:57 - Which levels inform which levels
1:20:02 - Advice for an AI startup
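
To make the "cloning" idea concrete, here is a minimal structural sketch with invented sizes and random transitions: every observation gets several hidden-state clones with deterministic emissions, so the same observation occurring in different contexts can be assigned to different latent states. The paper learns such models with EM; this only builds the shape of one.

```python
import numpy as np

# Structure-only sketch of a cloned HMM for cognitive maps (not the paper's
# learning algorithm): each observation has several hidden-state "clones",
# and each clone deterministically emits its observation, so transition
# context can disambiguate identical observations.
n_obs = 5      # distinct observations (e.g., room appearances)
n_clones = 4   # clones per observation
n_states = n_obs * n_clones

# Deterministic emission: clone k of observation o always emits o.
emission = np.zeros((n_states, n_obs))
for o in range(n_obs):
    emission[o * n_clones:(o + 1) * n_clones, o] = 1.0

# Random row-stochastic transitions over clones; EM training would shape
# these so that different contexts activate different clones.
rng = np.random.default_rng(0)
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)

print(emission.shape, T.shape)  # (20, 5) (20, 20)
```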

 BI 086 Ken Stanley: Open-Endedness | File Type: audio/mpeg | Duration: 01:35:43

Ken and I discuss open-endedness: the pursuit of ambitious goals by seeking novelty and interesting products instead of advancing directly toward defined objectives. We talk about evolution as a prime example of an open-ended system that has produced astounding organisms, Ken relates how open-endedness could help advance artificial intelligence and neuroscience, we discuss a range of topics related to the general concept of open-endedness, and Ken takes a couple of questions from Stefan Leijnen and Melanie Mitchell. (A toy novelty-search sketch appears after the show notes.)

Related:
Ken's website
Twitter: @kenneth0stanley
The book: Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth Stanley and Joel Lehman

Papers:
Evolving Neural Networks Through Augmenting Topologies (2002)
Minimal Criterion Coevolution: A New Approach to Open-Ended Search

Some key takeaways:
Many of the best inventions were not the result of trying to achieve a specific objective.
Open-endedness is the pursuit of ambitious advances without a clearly defined objective.
Evolution is a quintessential example of an open-ended process: it produces a vast array of complex beings by searching the space of possible organisms, constrained by the environment, survival, and reproduction.
Perhaps the key to developing artificial general intelligence is following an open-ended path rather than pursuing objectives (solving the same old benchmark tasks, etc.).

Timestamps:
0:00 - Intro
3:46 - Skip intro
4:30 - Evolution as an open-ended process
8:25 - Why Greatness Cannot Be Planned
20:46 - Open-endedness in AI
29:35 - Constraints vs. objectives
36:26 - The adjacent possible
41:22 - Serendipity
44:33 - Stefan Leijnen question
53:11 - Melanie Mitchell question
1:00:32 - Efficiency
1:02:13 - Gentle Earth
1:05:25 - Learning vs. evolution
1:10:53 - AGI
1:14:06 - Neuroscience, AI, and open-endedness
1:26:06 - OpenAI
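
Since novelty search is the algorithm most associated with this line of work, here is a minimal sketch assuming a toy 2-D behavior space in which a genome's behavior is simply itself: candidates are scored by their mean distance to the k nearest neighbors in an archive of past behaviors, and the most novel child is kept, with no task objective anywhere. The original implementation ran on top of NEAT; everything here is a simplification.

```python
import numpy as np

# Minimal novelty-search sketch (illustrative, not the original NEAT-based
# implementation): selection favors behaviors far from everything seen so
# far, with no objective function at all.
rng = np.random.default_rng(0)
K = 5                                  # neighbors used in the novelty score

def novelty(behavior, archive, k=K):
    dists = np.sort([np.linalg.norm(behavior - b) for b in archive])
    return dists[:k].mean()            # mean distance to k nearest behaviors

parent = rng.random(2)                 # a "genome" whose behavior is itself
archive = [parent]
for gen in range(100):
    children = [parent + 0.1 * rng.standard_normal(2) for _ in range(10)]
    scores = [novelty(c, archive) for c in children]
    parent = children[int(np.argmax(scores))]  # keep the most novel child
    archive.append(parent)

print(len(archive), parent)            # the search spreads out, objective-free
```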

 BI 085 Ida Momennejad: Learning Representations | File Type: audio/mpeg | Duration: 01:43:41

Ida and I discuss the current landscape of reinforcement learning in both natural and artificial intelligence, and how the old story of two RL systems in the brain - model-free and model-based - is giving way to a more nuanced story in which the two systems constantly interact, and in which additional strategies between model-free and model-based drive our vast repertoire of habits and goal-directed behaviors. We discuss Ida's work on one of those "in-between" strategies, the successor representation, which maps onto brain activity and accounts for behavior. We also discuss her interesting background and how it affects her outlook and research pursuits, and the role philosophy has played and continues to play in her thinking. (A toy successor-representation sketch appears after the show notes.)

Related:
Ida's website
Twitter: @criticalneuro
A nice review of what we discuss: Learning Structures: Predictive Representations, Replay, and Generalization

Timestamps:
0:00 - Intro
4:50 - Skip intro
9:58 - Core way of thinking
19:58 - Disillusionment
27:22 - Role of philosophy
34:51 - Optimal individual learning strategy
39:28 - Microsoft job
44:48 - Field of reinforcement learning
51:18 - Learning vs. innate priors
59:47 - Incorporating other cognition into RL
1:08:24 - Evolution
1:12:46 - Model-free and model-based RL
1:19:02 - Successor representation
1:26:48 - Are we running all algorithms all the time?
1:28:38 - Heuristics and intuition
1:33:48 - Levels of analysis
1:37:28 - Consciousness
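
Here is a minimal sketch of the successor representation in a toy five-state ring environment: a TD rule learns M[s, s'], the expected discounted future occupancy of state s' starting from s, and values then factor as V = M @ R. That factorization is what places the SR "between" model-free and model-based: rewards can change without relearning the dynamics.

```python
import numpy as np

# Minimal successor-representation (SR) sketch: M[s, s'] estimates expected
# discounted future occupancy of s' from s, learned with a TD rule while
# wandering a 5-state ring. Values factor as V = M @ R.
n_states, gamma, alpha = 5, 0.95, 0.1
M = np.eye(n_states)                  # init: each state predicts only itself
rng = np.random.default_rng(0)

s = 0
for step in range(10000):
    # mostly step around the ring, occasionally teleport at random
    s_next = (s + 1) % n_states if rng.random() < 0.9 else int(rng.integers(n_states))
    onehot = np.eye(n_states)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])  # TD update on the SR
    s = s_next

R = np.array([0.0, 0.0, 0.0, 0.0, 1.0])  # reward only in the last state
print(np.round(M @ R, 2))                # state values, no dynamics relearning
```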

 BI 084 György Buzsáki and David Poeppel | File Type: audio/mpeg | Duration: 01:56:01

David, Gyuri, and I discuss the issues they argue for in their back-and-forth commentaries about the importance of neuroscience and psychology - the implementation level and the computational level - for advancing our understanding of brains and minds, and about the names we give to the things we study. Gyuri believes it's time we use what we know and discover about brain mechanisms to better describe the psychological concepts we invoke to explain minds; David believes the psychological concepts are constantly being refined and are just as valid as objects of study for understanding minds. They both agree these are important and enjoyable topics to debate. Also, special guest questions from Paul Cisek and John Krakauer.

Related:
Buzsáki Lab; Poeppel Lab
Twitter: @davidpoeppel

The papers we discuss or mention:
Calling Names by Christophe Bernard
The Brain–Cognitive Behavior Problem: A Retrospective by György Buzsáki
Against the Epistemological Primacy of the Hardware: The Brain from Inside Out, Turned Upside Down by David Poeppel

Books:
The Brain from Inside Out by György Buzsáki
The Cognitive Neurosciences (edited by David Poeppel et al.)

Timestamps:
0:00 - Intro
5:31 - Skip intro
8:42 - Gyuri and David summaries
25:45 - Guest questions
36:25 - Gyuri's new language
49:41 - Language and oscillations
53:52 - Do we know what cognitive functions we're looking for?
58:25 - Psychiatry
1:00:25 - Steve Grossberg approach
1:02:12 - Neuroethology
1:09:08 - AI as tabula rasa
1:17:40 - What's at stake?
1:36:20 - Will the space between neuroscience and psychology disappear?

 BI 083 Jane Wang: Evolving Altruism in AI | File Type: audio/mpeg | Duration: 01:13:16

Jane and I discuss the relationship between AI and neuroscience (cognitive science, etc.) from her perspective at DeepMind, after a career researching natural intelligence. We also talk about her meta-reinforcement learning work connecting deep reinforcement learning with known brain circuitry and processes, and finally about her recent work using evolutionary strategies to develop altruism and cooperation among agents in a multi-agent reinforcement learning environment. (A toy sketch of the meta-RL setup appears after the show notes.)

Related:
Jane's website
Twitter: @janexwang

The papers we discuss or mention:
Learning to reinforcement learn
Blog post with a link to the paper: Prefrontal cortex as a meta-reinforcement learning system
Deep Reinforcement Learning and its Neuroscientific Implications
Evolving Intrinsic Motivations for Altruistic Behavior

Books she recommended:
Human Compatible: AI and the Problem of Control, by Stuart Russell
Algorithms to Live By, by Brian Christian and Tom Griffiths

Timestamps:
0:00 - Intro
3:36 - Skip intro
4:45 - Transition to DeepMind
19:56 - Changing perspectives on neuroscience
24:49 - Is neuroscience useful for AI?
33:11 - Is deep learning hitting a wall?
35:57 - Meta-reinforcement learning
52:00 - Altruism in multi-agent RL
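
The core trick in "Learning to reinforcement learn" is the interface: at each step the agent observes its previous action and reward, so a recurrent policy trained across many tasks can implement exploration and exploitation in its hidden dynamics. Below is a minimal sketch of that interface on a two-armed bandit, with a random placeholder where the trained recurrent agent would go; the actual work trains an LSTM with deep RL, which this does not attempt.

```python
import numpy as np

# Sketch of the meta-RL interface (setup only; the policy is a random
# placeholder, not a trained recurrent network): each observation carries
# the previous action (one-hot) and the previous reward.
rng = np.random.default_rng(0)
n_arms, n_trials = 2, 100

def run_episode(policy):
    p = rng.random(n_arms)                # fresh bandit task each episode
    prev_a, prev_r, total = 0, 0.0, 0.0
    for _ in range(n_trials):
        obs = np.concatenate([np.eye(n_arms)[prev_a], [prev_r]])
        a = policy(obs)                   # a trained recurrent agent goes here
        r = float(rng.random() < p[a])    # Bernoulli payoff of the chosen arm
        prev_a, prev_r = a, r
        total += r
    return total

random_policy = lambda obs: int(rng.integers(n_arms))
print(np.mean([run_episode(random_policy) for _ in range(200)]))
```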

 BI 082 Steve Grossberg: Adaptive Resonance Theory | File Type: audio/mpeg | Duration: 02:15:38

Steve and I discuss his long and productive career as a theoretical neuroscientist. We cover his tried-and-true method: take a large body of psychological and behavioral findings, determine how they fit together and what's paradoxical about them, develop design principles, theories, and models from that body of data, and use experimental neuroscience to inform and confirm the models' predictions. We talk about his Adaptive Resonance Theory (ART), which describes how our brains are self-organizing, adaptive, and able to deal with changing environments. We also talk about his complementary-computing paradigm, which describes how two systems can complement each other to create emergent properties neither can create on its own, how the resonant states in ART support consciousness, his place in the history of both neuroscience and AI, and quite a bit more. (A toy ART-style sketch appears after the show notes.)

Related:
Steve's BU website

Some papers we discuss or mention (much more on his website):
Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world
Towards solving the Hard Problem of Consciousness: The varieties of brain resonances and the conscious experiences that they support
A Path Toward Explainable AI and Autonomous Adaptive Intelligence: Deep Learning, Adaptive Resonance, and Models of Perception, Emotion, and Action

Timestamps:
0:00 - Intro
5:48 - Skip intro
9:42 - Beginnings
18:40 - Modeling method
44:05 - Physics vs. neuroscience
54:50 - Historical credit for the Hopfield network
1:03:40 - Steve's upcoming book
1:08:24 - Being shy
1:11:21 - Stability-plasticity dilemma
1:14:10 - Adaptive Resonance Theory
1:18:25 - ART matching rule
1:21:35 - Consciousness as resonance
1:29:15 - Complementary computing
1:38:58 - Vigilance to re-orient
1:54:58 - Deep learning vs. ART
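
For a feel of the ART matching rule and vigilance, here is a heavily simplified ART1-style sketch for binary inputs, with invented parameters: categories compete bottom-up, the winner must match the input beyond the vigilance threshold to resonate and learn, and a failed match resets the winner and continues the search. Grossberg's models are continuous-time dynamical systems; this is only the algorithmic skeleton.

```python
import numpy as np

# Simplified ART1-style categorization of binary vectors (illustrative
# skeleton, not Grossberg's full dynamical model).
def art1(x, prototypes, rho=0.7, beta=1e-6):
    scores = [(x & w).sum() / (beta + w.sum()) for w in prototypes]
    for j in np.argsort(scores)[::-1]:         # search categories by choice score
        match = (x & prototypes[j]).sum() / x.sum()
        if match >= rho:                       # vigilance test passed: resonance
            prototypes[j] = x & prototypes[j]  # learn: intersect the prototype
            return j
        # vigilance failed: reset this category and try the next
    prototypes.append(x.copy())                # no resonance anywhere: new category
    return len(prototypes) - 1

protos = []
for pattern in ([1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 0, 1, 1]):
    print(art1(np.array(pattern), protos), [p.tolist() for p in protos])
```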

 BI 081 Pieter Roelfsema: Brain-propagation | File Type: audio/mpeg | Duration: 01:22:05

Pieter and I discuss his ongoing quest to figure out how the brain implements learning that solves the credit assignment problem, the way backpropagation does for artificial neural networks. We also talk about his work to understand how we perceive individual objects in a crowded scene, his neurophysiological recordings in support of the global neuronal workspace hypothesis of consciousness, and the visual prosthetic device he's developing to cure blindness by directly stimulating early visual cortex. (A toy backpropagation sketch appears after the show notes.)

Related:
Pieter's lab website
Twitter: @Pieters_Tweet
His startup to cure blindness: Phosphoenix
Talk: Seeing and thinking with your visual brain

The papers we discuss or mention:
Control of synaptic plasticity in deep cortical networks
A Biologically Plausible Learning Rule for Deep Learning in the Brain
Conscious Processing and the Global Neuronal Workspace Hypothesis

Pieter's neuro-origin book inspiration (like so many others): Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter.
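
To ground the credit-assignment framing, here is a tiny generic backpropagation example in numpy, with made-up data and layer sizes: the chain rule carries the output error backward so every hidden weight receives its share of the blame. This is textbook backprop, not the biologically plausible learning rule Pieter works on; his question is how the brain could achieve the same assignment with its own machinery.

```python
import numpy as np

# Tiny generic backprop example (textbook credit assignment, not Roelfsema's
# biologically plausible rule): 3 inputs -> 8 tanh hidden units -> sigmoid.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X[:, :1] * X[:, 1:2] > 0).astype(float)  # toy XOR-of-signs target

W1 = rng.standard_normal((3, 8)) * 0.5
W2 = rng.standard_normal((8, 1)) * 0.5
lr = 0.5
for epoch in range(2000):
    h = np.tanh(X @ W1)                  # forward: hidden layer
    out = 1 / (1 + np.exp(-(h @ W2)))    # forward: sigmoid output
    d_out = (out - y) / len(X)           # cross-entropy gradient at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # error propagated back to hidden units
    W2 -= lr * h.T @ d_out               # each weight updated by its own blame
    W1 -= lr * X.T @ d_h

print(f"training accuracy: {((out > 0.5) == y).mean():.2f}")
```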

 BI 080 Daeyeol Lee: Birth of Intelligence | File Type: audio/mpeg | Duration: 01:31:09

Daeyeol and I discuss his book Birth of Intelligence: From RNA to Artificial Intelligence, which argues that intelligence is a function of, and inseparable from, life, bound by self-replication and evolution. The book covers a ton of neuroscience related to decision making and learning, though we focused on a few theoretical frameworks and ideas, like division of labor and principal-agent relationships, to understand how our brains and minds are related to our genes, how AI is related to humans (for now), metacognition, consciousness, and much more.

Related:
Lee Lab for Learning and Decision Making
Twitter: @daeyeol_lee
Daeyeol's side passion, creating music
His book: Birth of Intelligence: From RNA to Artificial Intelligence

 BI 079 Romain Brette: The Coding Brain Metaphor | File Type: audio/mpeg | Duration: 01:19:04

Romain and I discuss his theoretical and philosophical work examining how neuroscientists rampantly misuse the word "code" when making claims about information processing in brains. We talk about the coding metaphor, various notions of information, the different roles and facets of mental representation, perceptual invariance, subjective physics, process versus substance metaphysics, and the experience of writing a Behavioral and Brain Sciences article (spoiler: it's a demanding yet rewarding experience).

Related:
Romain's website
Twitter: @RomainBrette

The papers we discuss or mention:
Philosophy of the spike: rate-based vs. spike-based theories of the brain
Is coding a relevant metaphor for the brain? (bioRxiv link)
Subjective physics

Related works:
The Ecological Approach to Visual Perception by James Gibson
Why Red Doesn't Sound Like a Bell by J. Kevin O'Regan

 BI 078 David and John Krakauer: Part 2 | File Type: audio/mpeg | Duration: 01:14:37

In this second part of our conversation, David, John, and I continue to discuss the role of complexity science in the study of intelligence, brains, and minds. We also get into functionalism and multiple realizability, dynamical-systems explanations, the role of time in thinking, and more. Be sure to listen to the first part, which lays the foundation for what we discuss in this episode.

Notes:
David's page at the Santa Fe Institute
John's BLAM lab website
Follow SFI on Twitter: @sfiscience
BLAM on Twitter: @blamlab

Related Krakauer stuff:
At the limits of thought (an Aeon article by David)
Complex Time: Cognitive Regime Shift II - When/Why/How the Brain Breaks (a video conversation with both John and David)
Complexity Podcast

Books mentioned:
Worlds Hidden in Plain Sight: The Evolving Idea of Complexity at the Santa Fe Institute, ed. David Krakauer
Understanding Scientific Understanding by Henk de Regt
The Idea of the Brain by Matthew Cobb
New Dark Age: Technology and the End of the Future by James Bridle
The River of Consciousness by Oliver Sacks

 BI 077 David and John Krakauer: Part 1 | File Type: audio/mpeg | Duration: 01:33:04

David, John, and I discuss the role of complexity science in the study of intelligence. In this first part, we talk about complexity itself, its role in neuroscience, emergence and levels of explanation, understanding, epistemology and ontology, and really quite a bit more.

Notes:
David's page at the Santa Fe Institute
John's BLAM lab website
Follow SFI on Twitter: @sfiscience
BLAM on Twitter: @blamlab

Related Krakauer stuff:
At the limits of thought (an Aeon article by David)
Complex Time: Cognitive Regime Shift II - When/Why/How the Brain Breaks (a video conversation with both John and David)
Complexity Podcast

Books mentioned:
Worlds Hidden in Plain Sight: The Evolving Idea of Complexity at the Santa Fe Institute, ed. David Krakauer
Understanding Scientific Understanding by Henk de Regt
The Idea of the Brain by Matthew Cobb
New Dark Age: Technology and the End of the Future by James Bridle
The River of Consciousness by Oliver Sacks
