Robert Babuška Plenary Lecture

Robert Babuška T1: To be announced, room: To be announced
Title: Learning Control in Robotics
Name: Robert Babuška
Affiliation: DCSC, Delft University of Technology, Netherlands

Abstract: Developments in robotics are coming at an accelerating rate, and robots are quickly being adopted for tasks in unstructured environments. In such settings, there is a need for effective learning control methods that can deal with the uncertainty in the robot's environment, handle new situations, and perform robustly under changing conditions. In addition, learning is essential for effective interaction with humans, so that robots can be trained by demonstration and become more responsive and cooperative.

There is a variety of learning control approaches, ranging from the acquisition of initial skills via teaching by a human, through transfer learning methods that enable the reuse of previously learned skills and behaviors, to learning from experience, which aims at the continual improvement of performance. The talk will focus on the last category, mainly addressing techniques based on reinforcement learning, a form of data-driven optimal control. We will cover sample-efficient algorithms suitable for real-time learning in continuous state and action spaces and show example applications in robot motion control. We will also address recently introduced learning methods based on deep neural networks and discuss their potential for robot control.
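To give a flavor of learning from experience as data-driven optimal control, the sketch below shows tabular Q-learning, the textbook form of reinforcement learning, on a hypothetical five-state corridor task. The environment, reward, and all parameter values are illustrative assumptions, not material from the talk; the talk itself concerns far richer continuous state and action spaces.

```python
import numpy as np

# Hypothetical example: a 5-state corridor; the agent starts at state 0 and
# receives a reward of 1 only on reaching the goal state 4.
N_STATES, ACTIONS = 5, (-1, +1)      # actions: step left / step right
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))  # Q-table: value of each (state, action)

for _ in range(200):                 # learn over repeated episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current Q-table, sometimes explore
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s, a) toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1)[:-1])  # greedy policy in the non-terminal states
```

After training, the greedy policy steps right (action index 1) in every non-terminal state. No dynamics model is used; the optimal behavior is recovered purely from observed transitions and rewards, which is the sense in which such methods act as data-driven optimal control.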

Biography: Robert Babuška received the M.Sc. degree (cum laude) in control engineering from the Czech Technical University in Prague in 1990, and the Ph.D. degree (cum laude) from the Delft University of Technology, the Netherlands, in 1997. He has had faculty appointments at the Czech Technical University in Prague and at the Delft University of Technology. Currently, he is a Professor of Intelligent Control and Robotics and the scientific director of the TU Delft Robotics Institute. His research interests include reinforcement learning, neural and fuzzy systems, identification and state estimation, nonlinear model-based and adaptive control, and dynamic multi-agent systems. He has been working on applications of these techniques in the fields of robotics, mechatronics, and aerospace.