Learning Replanning Policies with Direct Policy Search

Brandherm, F., Peters, J., Neumann, G. and Akrour, R. (2019) Learning Replanning Policies with Direct Policy Search. IEEE Robotics and Automation Letters (RA-L), 4 (2), pp. 2196-2203. ISSN 2377-3766

Full content URL: http://doi.org/10.1109/LRA.2019.2901656

Documents
florian_ral_sub.pdf - Published PDF, 1 MB (restricted to repository staff only; request a copy)
08651517.pdf - Whole Document, 1 MB (available under a Creative Commons Attribution licence)
Item Type:Article
Item Status:Live Archive

Abstract

Direct policy search has been successful in learning challenging real-world robotic motor skills by learning open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts with varying initial configurations and goals. However, current state-of-the-art contextual policy search algorithms cannot adapt to changing, noisy context measurements, which are common characteristics of real-world robotic tasks. Planning a trajectory ahead based on an inaccurate context that may change during the motion often results in poor accuracy, especially in highly dynamic tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework for learning trajectory replanning policies via contextual policy search and demonstrate that they are safe for the robot, can be learned efficiently, and outperform non-replanning policies for problems with partially observable or perturbed contexts.
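As a rough illustration of the general idea only (not the authors' implementation, algorithm variant, or tasks), the sketch below learns a linear-Gaussian mapping from a noisy, re-measured 1-D context to movement-primitive parameters with a reward-weighted-regression update, and replans once when the context is re-measured mid-episode. The toy task, all names, and the specific update rule are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(K, k, sigma, replan=True):
    # One toy episode: the true target (context) is hidden, measured noisily
    # before the motion, and re-measured more accurately halfway through.
    target = rng.uniform(-1.0, 1.0)
    c0 = target + rng.normal(0.0, 0.3)            # noisy pre-motion measurement
    w0 = K * c0 + k + rng.normal(0.0, sigma)      # first plan's endpoint
    samples = [(c0, w0)]
    endpoint = w0
    if replan:
        c1 = target + rng.normal(0.0, 0.05)       # better mid-motion measurement
        w1 = K * c1 + k + rng.normal(0.0, sigma)  # replanned endpoint
        endpoint = 0.5 * w0 + 0.5 * w1            # second half corrects the plan
        samples.append((c1, w1))
    reward = -(endpoint - target) ** 2
    return samples, reward

def rwr_update(episodes, beta=5.0):
    # Reward-weighted regression: refit the linear context-to-parameter map
    # w = K*c + k, weighting each sample by its episode's exponentiated reward.
    rewards = np.array([r for _, r in episodes])
    weights = np.exp(beta * (rewards - rewards.max()))
    C, W, D = [], [], []
    for (samples, _), d in zip(episodes, weights):
        for c, w in samples:
            C.append(c); W.append(w); D.append(d)
    C, W, sw = np.array(C), np.array(W), np.sqrt(np.array(D))
    A = np.stack([C, np.ones_like(C)], axis=1)    # design matrix [c, 1]
    sol, *_ = np.linalg.lstsq(A * sw[:, None], W * sw, rcond=None)
    return sol[0], sol[1]

K, k, sigma = 0.0, 0.0, 0.3
for _ in range(30):
    episodes = [rollout(K, k, sigma) for _ in range(50)]
    K, k = rwr_update(episodes)
    sigma *= 0.95                                 # decay exploration noise
# On this toy problem the gain and offset typically move toward 1.0 and 0.0.
print("learned context gain and offset:", K, k)
```

In this simplified setting, the replanning policy is just the same context-conditioned policy queried again with the refined mid-motion measurement; the paper's contribution concerns how such replanning policies can be learned safely and efficiently on real robotic tasks.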

Keywords:Robotics, Machine Learning
Subjects:H Engineering > H671 Robotics
G Mathematical and Computer Sciences > G760 Machine Learning
Divisions:College of Science > School of Computer Science
ID Code:36284
Deposited On:24 Jun 2019 08:57