![The AI Podcast show](https://d3dthqtvwic6y7.cloudfront.net/podcast-covers/000/068/756/small/the-ai-podcast.jpg)
Summary: Robots following coded instructions to complete a task? Old school. Robots learning to do things by watching how humans do them? That’s the future. At this year’s GPU Technology Conference, Stanford’s Animesh Garg and Marynel Vázquez shared their research in a talk on “Generalizable Autonomy for Robotic Mobility and Manipulation.” We caught up with them to learn more about generalizable autonomy — the idea that a robot should be able to observe human behavior and learn to imitate it in a way that applies to a variety of tasks and situations. Think of learning to cook by watching YouTube videos, or figuring out how to cross a crowded room.