Talks
Spring 2017

Deep Robotic Learning

Tuesday, January 24th, 2017, 3:30 pm – 5:00 pm


Location: Calvin Lab Auditorium

The problem of building an autonomous robot has traditionally been viewed as one of integration: connecting together modular components, each one designed to handle some portion of the perception and decision making process. For example, a vision system might be connected to a planner that might in turn provide commands to a low-level controller that drives the robot's motors. In this talk, I will discuss how ideas from deep learning can allow us to build robotic control mechanisms that combine both perception and control into a single system. This system can then be trained end-to-end on the task at hand, in effect allowing the entire robotic perception and control system to be learned. I will show how this end-to-end approach actually simplifies the perception and control problems, by allowing the perception and control mechanisms to adapt to one another and to the task. I will also present some recent work on scaling up deep robotic learning, and demonstrate results for learning grasping strategies that involve continuous feedback and hand-eye coordination using deep convolutional neural networks.
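To make the end-to-end framing concrete, the sketch below shows a minimal, hypothetical visuomotor policy: convolutional "perception" layers feed directly into fully connected "control" layers, and a single supervised (behavioral-cloning) loss on the motor commands back-propagates through both. This is only an illustration under assumed names and shapes (VisuomotorPolicy, num_actions, the dummy image/action batch), not the speaker's actual system or training procedure.

# Minimal illustrative sketch of an end-to-end visuomotor policy in PyTorch.
# All module names, layer sizes, and the behavioral-cloning setup are assumptions
# for illustration; they do not reproduce the speaker's method.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    """CNN perception layers feeding fully connected control layers,
    so both are optimized jointly by the same gradient signal."""
    def __init__(self, num_actions: int = 7):
        super().__init__()
        self.perception = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )
        self.control = nn.Sequential(
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, num_actions),  # e.g. joint torques or velocities
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.control(self.perception(image))

if __name__ == "__main__":
    policy = VisuomotorPolicy(num_actions=7)
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

    # Dummy batch standing in for (camera image, expert motor command) pairs.
    images = torch.randn(8, 3, 128, 128)
    expert_actions = torch.randn(8, 7)

    # One end-to-end training step: the loss on motor commands back-propagates
    # through the control layers and the convolutional perception layers alike.
    loss = nn.functional.mse_loss(policy(images), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"behavioral-cloning loss: {loss.item():.4f}")

The point of the sketch is that there is no hand-designed interface between the vision system and the controller: whatever intermediate representation the convolutional layers learn is whatever the control layers find most useful for the task.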

Attachment: Deep Robotic Learning (PDF, 9.69 MB)