Latent space policy search for robotics

Luck, K. S., Neumann, G., Berger, E., Peters, J. and Amor, H. B. (2014) Latent space policy search for robotics. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 14-18 September 2014, Chicago, Illinois.

Documents
Luck_IROS_2014.pdf - Whole Document (PDF, 4MB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive

Abstract

Learning motor skills for robots is a hard task. In particular, a high number of degrees-of-freedom in the robot can pose serious challenges to existing reinforcement learning methods, since it leads to a high-dimensional search space. However, complex robots are often intrinsically redundant systems and, therefore, can be controlled using a latent manifold of much smaller dimensionality. In this paper, we present a novel policy search method that performs efficient reinforcement learning by uncovering the low-dimensional latent space of actuator redundancies. In contrast to previous attempts at combining reinforcement learning and dimensionality reduction, our approach does not perform dimensionality reduction as a preprocessing step but naturally combines it with policy search. Our evaluations show that the new approach outperforms existing algorithms for learning motor skills with high-dimensional robots.
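To make the core idea concrete, the sketch below shows generic episodic policy search where exploration happens in a low-dimensional latent space that is linearly mapped to the full parameter space. This is only an illustration of the latent-space principle under assumed settings (a fixed random map W, a toy quadratic reward, and arbitrary dimensions), not the algorithm proposed in the paper.

```python
# Illustrative sketch only: exploring in a low-dimensional latent space
# instead of the full high-dimensional parameter space.
# All names, dimensions, the map W, and the toy reward are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

D_FULL = 30   # assumed number of joint-space policy parameters
D_LAT = 3     # assumed dimensionality of the latent manifold

W = rng.standard_normal((D_FULL, D_LAT))   # placeholder latent-to-full map
target = rng.standard_normal(D_FULL)       # toy task: match a parameter vector

def reward(theta_full):
    """Toy reward: negative squared distance to the target parameters."""
    return -np.sum((theta_full - target) ** 2)

z_mean = np.zeros(D_LAT)   # search distribution lives in the latent space
sigma = 1.0                # fixed exploration noise (for simplicity)

for _ in range(200):
    # Sample exploration noise only in the latent space.
    Z = z_mean + sigma * rng.standard_normal((50, D_LAT))
    R = np.array([reward(W @ z) for z in Z])
    # Reward-weighted update of the latent mean (softmax weighting).
    w = np.exp((R - R.max()) / 10.0)
    z_mean = (w[:, None] * Z).sum(axis=0) / w.sum()

print("final reward:", reward(W @ z_mean))
```

In this toy setting the search distribution has only D_LAT parameters to adapt, which is what makes latent-space exploration attractive for highly redundant robots; the paper's contribution is to learn such a latent space jointly with the policy rather than fixing it in advance.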

Keywords: Policy Search, Dimensionality Reduction, Robotics
Subjects: G Mathematical and Computer Sciences > G760 Machine Learning
H Engineering > H671 Robotics
Divisions: College of Science > School of Computer Science
Related URLs:
ID Code: 25772
Deposited On: 02 Feb 2017 15:49
