
HAI Weekly Seminar with Jeannette Bohg - Scaffolding and Imitation Learning

October 21, 2020 - 10:00am to 11:00am
Virtual - Zoom

Abstract: Learning contact-rich, robotic manipulation skills is a challenging problem due to the high-dimensionality of the state and action space as well as uncertainty from noisy sensors and inaccurate motor control. In this talk, I want to show how two principles of human learning can be transferred to robots to combat these factors and achieve more robust manipulation in a variety of tasks.

The first principle is scaffolding. Humans actively exploit contact constraints in the environment, and by adopting a similar strategy, robots can also achieve more robust manipulation. In this talk, I will present an approach that enables a robot to autonomously modify its environment and thereby discover how to ease manipulation skill learning. Specifically, we provide the robot with fixtures that it can freely place within the environment. These fixtures impose hard constraints that limit the possible outcomes of the robot's actions; they thereby funnel uncertainty from perception and motor control and scaffold manipulation skill learning. We show that this form of scaffolding dramatically speeds up manipulation skill learning.

The second principle is learning from demonstrations through imitation. Humans have gradually developed language, mastered complex motor skills, and created and used sophisticated tools. The act of conceptualization is fundamental to these abilities because it allows humans to mentally represent, summarize, and abstract diverse knowledge and skills. By means of abstraction, concepts learned from a limited number of examples can be extended to a potentially infinite set of new and unanticipated situations, and they can be more easily taught to others by demonstration.

I will present work that gives robots the ability to acquire a variety of manipulation concepts that act as mental representations of verbs in a natural language instruction. We propose to use learning from human demonstrations of manipulation actions as recorded in large-scale video data sets that are annotated with natural language instructions. In extensive simulation experiments, we show that the policy learned in the proposed way can perform a large percentage of the 78 different manipulation tasks on which it was trained. We show that the policy generalizes over variations of the environment. We also show examples of successful generalization over novel but similar instructions. 
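The talk does not specify the model or training details; as a rough illustration of the general idea of language-conditioned imitation learning, a policy can be trained by behavioral cloning to map a state together with an instruction embedding to the demonstrated action. The sketch below is a minimal, hypothetical setup (toy data, a linear policy, and all names are illustrative, not from the presented work):

```python
import numpy as np

# Toy language-conditioned behavioral cloning sketch (hypothetical setup):
# each demonstration is (state, instruction embedding, expert action), and a
# linear policy maps the concatenated input to an action via least squares.

rng = np.random.default_rng(0)
STATE_DIM, INSTR_DIM, ACTION_DIM, N_DEMOS = 4, 3, 2, 500

# Hidden "expert" that the demonstrations come from
# (a stand-in for human demonstrations in annotated videos).
W_true = rng.normal(size=(STATE_DIM + INSTR_DIM, ACTION_DIM))

states = rng.normal(size=(N_DEMOS, STATE_DIM))
instructions = rng.normal(size=(N_DEMOS, INSTR_DIM))
inputs = np.concatenate([states, instructions], axis=1)
actions = inputs @ W_true + 0.01 * rng.normal(size=(N_DEMOS, ACTION_DIM))

# Behavioral cloning: fit the policy to imitate the demonstrated actions.
W_policy, *_ = np.linalg.lstsq(inputs, actions, rcond=None)

def policy(state, instruction_embedding):
    """Predict an action for a state under a given instruction embedding."""
    x = np.concatenate([state, instruction_embedding])
    return x @ W_policy

# The cloned policy should closely match the expert on a held-out input.
test_x = rng.normal(size=(STATE_DIM + INSTR_DIM,))
err = np.abs(test_x @ W_policy - test_x @ W_true).max()
```

In practice the policy would be a deep network and the instruction embedding would come from a language model, but the training signal is the same: minimize the gap between the policy's actions and the demonstrated ones.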

Biography: Bohg is a professor working on robotics in the Computer Science Department at Stanford University and is part of the Stanford AI Lab. She directs the Interactive Perception and Robot Learning Lab, and her research sits at the intersection of robotics, machine learning, and computer vision.

Previously, Bohg was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems; her favourite robot will always be Apollo. Before joining the MPI in 2012, she completed her PhD at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm, where her thesis proposed novel methods for multi-modal scene understanding for robotic grasping. She earned her undergraduate degree in Computer Science at the Technical University of Dresden and also studied Art and Technology at Chalmers in Gothenburg.

Event Sponsor: 
Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Contact Email: 
jactran@stanford.edu