Model-based contextual policy search for data-efficient generalization of robot skills

Kupcsik, A., Deisenroth, M. P., Peters, J., Loh, A. P., Vadakkepat, P. and Neumann, G. (2017) Model-based contextual policy search for data-efficient generalization of robot skills. Artificial Intelligence, 247, pp. 415-439. ISSN 0004-3702

Documents
Kupcsik_AIJ_2015.pdf - Whole Document (PDF, 5MB)
Item Type: Article
Item Status: Live Archive

Abstract

In robotics, lower-level controllers are typically used to make the robot solve a specific task in a fixed context. For example, the lower-level controller can encode a hitting movement while the context defines the target coordinates to hit. However, in many learning problems the context may change between task executions. To adapt the policy to a new context, we utilize a hierarchical approach by learning an upper-level policy that generalizes the lower-level controllers to new contexts. A common approach to learning such upper-level policies is to use policy search. However, the majority of current contextual policy search approaches are model-free and require a high number of interactions with the robot and its environment. Model-based approaches are known to significantly reduce the number of robot experiments; however, current model-based techniques cannot be applied straightforwardly to the problem of learning contextual upper-level policies. They rely on specific parametrizations of the policy and the reward function, which are often unrealistic in the contextual policy search formulation. In this paper, we propose a novel model-based contextual policy search algorithm that generalizes lower-level controllers and is data-efficient. Our approach is based on learned probabilistic forward models and information-theoretic policy search. Unlike current algorithms, our method does not require any assumption on the parametrization of the policy or the reward function. We show on complex simulated robotic tasks and in a real robot experiment that the proposed learning framework speeds up the learning process by up to two orders of magnitude in comparison to existing methods, while learning high-quality policies.
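To illustrate the hierarchical setup described above, the sketch below (not the authors' implementation) shows a Gaussian upper-level policy pi(w | s) = N(w | a + A s, Sigma) that maps a context s to the parameters w of a lower-level controller. The toy reward, the fixed temperature eta, and the reward-weighted maximum-likelihood update are simplifying assumptions standing in for the learned GP forward models and the information-theoretic (KL-bounded) update used in the paper.

```python
import numpy as np

# Minimal sketch of a contextual upper-level policy, assuming a Gaussian
# pi(w | s) = N(w | a + A s, Sigma) over lower-level controller parameters w.
# The update is reward-weighted maximum likelihood with a fixed temperature,
# a simplification of the information-theoretic update used in the paper.

rng = np.random.default_rng(0)

dim_s, dim_w = 2, 3            # context and controller-parameter dimensions
a = np.zeros(dim_w)            # context-independent policy bias
A = np.zeros((dim_w, dim_s))   # context-dependent gain
Sigma = np.eye(dim_w)          # exploration covariance
eta = 1.0                      # temperature (hypothetical fixed value)

def toy_reward(s, w):
    """Hypothetical reward: how well w matches a context-dependent target."""
    target = np.array([s[0], -s[1], s[0] + s[1]])
    return -np.sum((w - target) ** 2)

for it in range(50):
    # Sample contexts and controller parameters from the current policy.
    S = rng.uniform(-1.0, 1.0, size=(200, dim_s))
    W = np.array([rng.multivariate_normal(a + A @ s, Sigma) for s in S])

    # In the paper these rewards would be predicted with learned probabilistic
    # forward models (artificial samples); here we query the toy reward directly.
    R = np.array([toy_reward(s, w) for s, w in zip(S, W)])

    # Exponential reward weighting (simplified REPS-style weights).
    d = np.exp((R - R.max()) / eta)
    d /= d.sum()

    # Weighted linear regression of w on [1, s] gives the new mean policy.
    Phi = np.hstack([np.ones((len(S), 1)), S])
    D = np.diag(d)
    theta = np.linalg.solve(Phi.T @ D @ Phi + 1e-6 * np.eye(dim_s + 1),
                            Phi.T @ D @ W)
    a, A = theta[0], theta[1:].T

    # Weighted residual covariance sets the new exploration noise.
    resid = W - (a + S @ A.T)
    Sigma = resid.T @ D @ resid + 1e-6 * np.eye(dim_w)

print("learned context gain A:\n", A)
```

After a few iterations the mean policy a + A s tracks the context-dependent target, mirroring how the upper-level policy in the paper generalizes a lower-level controller across contexts; the paper's method additionally bounds the policy update with a KL constraint and evaluates candidate parameters on learned GP forward models rather than on the real system.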

Keywords: Policy Search, Multi-Task Learning, Model-Based, Gaussian Processes
Subjects: G Mathematical and Computer Sciences > G760 Machine Learning
H Engineering > H671 Robotics
Divisions: College of Science > School of Computer Science
ID Code: 25774
Deposited On: 17 Jan 2017 12:06