Brain Inspired

Summary: Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress toward understanding intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future holds. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and much more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Podcasts:

 BI 061 Jörn Diedrichsen and Niko Kriegeskorte: Brain Representations | File Type: audio/mpeg | Duration: 01:29:17

Jörn, Niko, and I continue the discussion of mental representation from the last episode with Michael Rescorla, then we discuss their review paper, Peeling the Onion of Brain Representations, about different ways to extract and understand what information is represented in measured brain activity patterns. Show notes: Jörn's lab website. Niko's lab website. Jörn on twitter: @DiedrichsenLab. Niko on twitter: @KriegeskorteLab. The papers we discuss or mention: Peeling the Onion of Brain Representations. Annual Review of Neuroscience, 2019. Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis. PLoS Computational Biology, 2017.

 BI 060 Michael Rescorla: Mind as Representation Machine | File Type: audio/mpeg | Duration: 01:36:03

Michael and I discuss the philosophy and a bit of the history of mental representation, including the computational theory of mind and the language of thought hypothesis, how science and philosophy interact, how representation relates to computation in brains and machines, and levels of computational explanation, and we discuss some examples of representational approaches to mental processes like Bayesian modeling. Show notes: Michael's website (with links to a ton of his publications). Science and philosophy: Why science needs philosophy by Laplane et al., 2019. Why Cognitive Science Needs Philosophy and Vice Versa by Paul Thagard, 2009. Some of Michael's papers/articles we discuss or mention: The Computational Theory of Mind. Levels of Computational Explanation. Computational Modeling of the Mind: What Role for Mental Representation? From Ockham to Turing --- and Back Again. Talks: Predictive coding “debate” with Michael and a few other folks. An overview and history of the philosophy of representation. Books we mentioned: The Structure of Scientific Revolutions by Thomas Kuhn. Memory and the Computational Brain by Randy Gallistel and Adam King. Representation in Cognitive Science by Nicholas Shea. Types and Tokens: On Abstract Objects by Linda Wetzel. Probabilistic Robotics by Thrun, Burgard, and Fox.

 BI 059 Wolfgang Maass: How Do Brains Compute? | File Type: audio/mpeg | Duration: 01:00:06

In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general, principles of brain computation he finds promising for implementing better network models, and we quickly overview some of his recent work on using these principles to build models with biologically plausible learning mechanisms, a spiking network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing. Wolfgang's website. Advice to a Young Investigator (the source of the quote at the beginning of the episode) by Santiago Ramón y Cajal. Papers we discuss or mention: Searching for principles of brain computation. Brain Computation: A Computer Science Perspective. Long short-term memory and learning-to-learn in networks of spiking neurons. A solution to the learning dilemma for recurrent networks of spiking neurons. Reservoirs learn to learn. Talks that cover some of these topics: Computation in Networks of Neurons in the Brain I. Computation in Networks of Neurons in the Brain II.

 BI 058 Wolfgang Maass: Computing Brains and Spiking Nets | File Type: audio/mpeg | Duration: 00:55:10

In this first part of our conversation (here's the second part), Wolfgang and I discuss the state of theoretical and computational neuroscience, and how experimental results in neuroscience should guide theories and models to understand and explain how brains compute. We also discuss brain-machine interfaces, neuromorphics, and more. In the next part (here), we discuss principles of brain processing to inform and constrain theories of computations, and we briefly talk about some of his most recent work making spiking neural networks that incorporate some of these brain processing principles. Wolfgang's website. The book Wolfgang recommends: The Brain from Inside Out by György Buzsáki. Papers we discuss or mention: Searching for principles of brain computation. Brain Computation: A Computer Science Perspective. Long short-term memory and learning-to-learn in networks of spiking neurons. A solution to the learning dilemma for recurrent networks of spiking neurons. Reservoirs learn to learn. Talks that cover some of these topics: Computation in Networks of Neurons in the Brain I. Computation in Networks of Neurons in the Brain II.

 BI 057 Nicole Rust: Visual Memory and Novelty | File Type: audio/mpeg | Duration: 01:21:12

Nicole and I discuss how a signature for visual memory can be coded among the same population of neurons known to encode object identity, how the same coding scheme arises in convolutional neural networks trained to identify objects, and how neuroscience and machine learning (reinforcement learning) can join forces to understand how curiosity and novelty drive efficient learning. Check out Nicole’s Visual Memory Laboratory website. Follow her on twitter: @VisualMemoryLab. The papers we discuss or mention: Single-exposure visual memory judgments are reflected in inferotemporal cortex. Population response magnitude variation in inferotemporal cortex predicts image memorability. Visual novelty, curiosity, and intrinsic reward in machine learning and the brain. The work by Dan Yamins’s group that Nicole mentions: Local Aggregation for Unsupervised Learning of Visual Embeddings.

 BI 056 Tom Griffiths: The Limits of Cognition | File Type: audio/mpeg | Duration: 01:27:37

I speak with Tom Griffiths about his “resource-rational framework”, inspired by Herb Simon's bounded rationality and Stuart Russell’s bounded optimality concepts. The resource-rational framework illuminates how the constraints of optimizing our available cognition can help us understand what algorithms our brains use to get things done, and can serve as a bridge between Marr’s computational, algorithmic, and implementation levels of understanding. We also talk about cognitive prostheses, artificial general intelligence, consciousness, and more. Visit Tom's Computational Cognitive Science Lab. Check out his book with Brian Christian, Algorithms to Live By. Some of the papers we discuss or mention: Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic. Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Data on the Mind, the data repository we discussed briefly, and a paper that discusses it: Finding the traces of behavioral and cognitive processes in big data and naturally occurring datasets.

 BI 055 Thomas Naselaris: Seeing Versus Imagining | File Type: audio/mpeg | Duration: 01:26:18

Thomas and I talk about what happens in the brain’s visual system when you see something versus when you imagine it. He uses generative encoding and decoding models together with brain measurements like fMRI and EEG to test the nature of mental imagery. We also discuss the huge fMRI dataset of natural images he’s collected to infer models of the entire visual system, how we’ve still not tapped the full potential of fMRI, and more. Thomas's lab website. Papers we discuss or mention: Resolving Ambiguities of MVPA Using Explicit Models of Representation. Human brain activity during mental imagery exhibits signatures of inference in a hierarchical generative model.

 BI 054 Kanaka Rajan: How Do We Switch Behaviors? | File Type: audio/mpeg | Duration: 01:15:24

Kanaka and I discuss a few different ways she uses recurrent neural networks to understand how brains give rise to behaviors. We talk about her work showing how neural circuits transition from active to passive coping behavior in zebrafish, and how RNNs could be used to understand how we switch tasks in general and how we multi-task. Plus the usual fun speculation, advice, and more. Kanaka’s Google Scholar profile. Follow her on twitter: @rajankdr. Papers we discuss: Neuronal Dynamics Regulating Brain and Behavioral State Transitions. How to study the neural mechanisms of multiple tasks. Gilbert Strang's linear algebra video lectures, which Kanaka suggested.

 BI 053 Jon Brennan: Linguistics in Minds and Machines | File Type: audio/mpeg | Duration: 01:33:24

Jon and I discuss how our brains process the syntax and semantics of language. He uses linguistic knowledge at the level of sentences and words, neuro-computational models, and neural data like EEG and fMRI to figure out how we process and understand language while listening to the natural language found in everyday conversations and stories. I also get his take on the current state of natural language processing and other AI advances, and how linguistics, neurolinguistics, and AI can contribute to each other. Jon's Computational Neurolinguistics Lab. His personal website. The papers we discuss or mention: Hierarchical structure guides rapid linguistic predictions during naturalistic listening. Finding syntax in human encephalography with beam search.

 BI 052 Andrew Saxe: Deep Learning Theory | File Type: audio/mpeg | Duration: 01:25:48

Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e., deep network theory. We talk about what he’s learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it’s not), how semantics develop, and what it all might tell us about deep learning in brains. Show notes: Visit Andrew's website. The papers we discuss or mention: Are Efficient Deep Representations Learnable? A theory of memory replay and generalization performance in neural networks. A mathematical theory of semantic development in deep neural networks. A good talk: High-Dimensional Dynamics of Generalization Errors. A few recommended texts to dive deeper: Introduction to the Theory of Neural Computation. Statistical Mechanics of Learning. Theoretical Neuroscience.

 BI 051 Jess Hamrick: Mental Simulation and Construction | File Type: audio/mpeg | Duration: 01:28:21

Jess and I discuss construction using graph neural networks: she makes AI agents that build structures to solve tasks in a simulated blocks-and-glue world using graph neural networks and deep reinforcement learning. We also discuss her work modeling mental simulation in humans and how it could be implemented in machines, and plenty more. Show notes: Jess’s website. Follow her on twitter: @jhamrick. The papers we discuss or mention: Analogues of mental simulation and imagination in deep learning. Structured agents for physical construction. Relational inductive biases, deep learning, and graph networks. Build your own graph networks: open-source graph network library.

 BI 050 Kyle Dunovan: Academia to Industry | File Type: audio/mpeg | Duration: 01:36:02

Kyle and I talk about his work modeling the basal ganglia and its circuitry to control whether we take an action and how we select among alternative actions. We also reflect on his experiences in academia, the larger picture of what it’s like in graduate school and after (at least in a computational neuroscience program), why he left, what he’s doing now, and how it all fits together. Show notes: Kyle’s website. Follow him on twitter: @dunovank. Examples of his work on basal ganglia, decision-making, and control: Believer-Skeptic Meets Actor-Critic: Rethinking the Role of Basal Ganglia Pathways during Decision-Making and Reinforcement Learning. Reward-driven changes in striatal pathway competition shape evidence evaluation in decision-making. Errors in Action Timing and Inhibition Facilitate Learning by Tuning Distinct Mechanisms in the Underlying Decision Process. Mark Humphries’ article on Medium: Academia is the Alternative Career Path. For fun, a bit about the “free will” experiments of Benjamin Libet.

 BI 049 Phillip Alvelda: Trustworthy Brain Machines | File Type: audio/mpeg | Duration: 01:24:45

Phillip and I discuss his company Brainworks, which uses the latest neuroscience to build AI into its products. We talk about their first product, Ambient Biometrics, which measures vital signs using your smartphone's camera. We also dive into entrepreneurship in the AI startup world, ethical issues in AI and social media companies, his early days using neural networks at NASA, where he thinks this is all headed, and more. Show notes: His company, Brainworks. Follow Phillip on twitter: @alvelda. Here's a talk he gave: Building Synthetic Brains. A guest post on Rodney Brooks's blog: Pondering the Empathy Gap.

 BI 048 Liz Spelke: What Makes Us Special? | File Type: audio/mpeg | Duration: 01:24:37

Liz and I discuss her work on cognitive development, especially in infants, and what it can tell us about what makes human cognition different from other animals, what core cognitive abilities we’re born with, and how those abilities may form the foundation on which much of the rest of our cognition develops. We also talk about natural language as the potential key faculty that synthesizes our early core abilities into the many higher cognitive functions that make us unique as a species, the potential for AI to capitalize on what we know about cognition in infants, plus plenty more. Show notes: Visit Liz’s lab website. Related talks/lectures by Liz: The Power and Limits of Artificial Intelligence. A developmental perspective on brains, minds and machines. Visit the CCN conference website to learn more and see more talks.

 BI 047 David Poeppel: Wrong in Interesting Ways | File Type: audio/mpeg | Duration: 00:48:12

In this second part of our conversation (listen to the first part), David and I discuss his thoughts about current language and speech techniques in AI, the prospects of artificial general intelligence, the challenge of mapping the parts of linguistics onto the parts of neuroscience, the state of graduate training, and more. Visit David's lab website at NYU. He’s also a director at the Max Planck Institute for Empirical Aesthetics. Follow him on twitter: @davidpoeppel. Some of the papers we discuss or mention (lots more on his website): The cortical organization of speech processing. The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. A good talk: What Language Processing in the Brain Tells Us About the Structure of the Mind. Transformer model: How do Transformers Work in NLP? A Guide to the Latest State-of-the-Art Models. Attention Is All You Need.