Model-Free Trajectory-based Policy Optimization with Monotonic Improvement

Akrour, R. and Abdolmaleki, A. and Abdulsamad, H. and Peters, J. and Neumann, G. (2018) Model-Free Trajectory-based Policy Optimization with Monotonic Improvement. Journal of Machine Learning Research (JMLR), 19 (14). pp. 1-25. ISSN 1532-4435

Full content URL: http://jmlr.org/papers/v19/17-329.html

Documents
moto_jmlr18.pdf - Whole Document (PDF, 1MB)
Available under License Creative Commons Attribution 4.0 International.
Item Type: Article
Item Status: Live Archive

Abstract

Many recent trajectory optimization algorithms alternate between a linear approximation of the system dynamics around the mean trajectory and a conservative policy update. One way of constraining the policy change is to bound the Kullback-Leibler (KL) divergence between successive policies. These approaches have already demonstrated great experimental success on challenging problems such as end-to-end control of physical systems. However, they lack any improvement guarantee, as the linear approximation of the system dynamics can introduce a bias in the policy update and prevent convergence to the optimal policy. In this article, we propose a new model-free trajectory-based policy optimization algorithm with guaranteed monotonic improvement. Instead of a model of the system dynamics, the algorithm backpropagates a local, quadratic, and time-dependent Q-function learned from trajectory data. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics. We experimentally demonstrate on highly non-linear control tasks that our algorithm improves over approaches that linearize the system dynamics. To show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme and derive a lower bound on the change in policy return between successive iterations.
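
To illustrate the kind of update the abstract describes, the Python snippet below is a minimal sketch only (not the paper's algorithm): an exact KL-constrained update of a single Gaussian action distribution against a local concave quadratic Q-function Q(u) = -0.5 u^T A u + b^T u. All quantities (A, b, the KL bound epsilon, the old policy parameters) are hypothetical placeholders. The constrained optimum is proportional to the old policy times exp(Q(u)/eta), which stays Gaussian, and the Lagrange multiplier eta of the KL constraint is found by bisection.

import numpy as np

def kl_gaussian(mu_new, Sigma_new, mu_old, Sigma_old):
    """KL( N(mu_new, Sigma_new) || N(mu_old, Sigma_old) )."""
    d = mu_old.size
    P_old = np.linalg.inv(Sigma_old)
    diff = mu_new - mu_old
    return 0.5 * (np.trace(P_old @ Sigma_new) + diff @ P_old @ diff - d
                  + np.log(np.linalg.det(Sigma_old) / np.linalg.det(Sigma_new)))

def kl_constrained_update(mu_old, Sigma_old, A, b, epsilon,
                          eta_lo=1e-6, eta_hi=1e6):
    """Maximize E_pi[Q(u)], Q(u) = -0.5 u^T A u + b^T u (A positive semi-definite),
    subject to KL(pi_new || pi_old) <= epsilon.  The solution is
    pi_new(u) proportional to pi_old(u) * exp(Q(u)/eta); the KL of that
    closed-form Gaussian decreases monotonically in eta, so eta is found
    by bisection on a log scale."""
    P_old = np.linalg.inv(Sigma_old)

    def solve(eta):
        # Closed-form Gaussian obtained by adding the scaled quadratic Q
        # to the old policy's natural parameters.
        P_new = P_old + A / eta
        Sigma_new = np.linalg.inv(P_new)
        mu_new = Sigma_new @ (P_old @ mu_old + b / eta)
        return mu_new, Sigma_new

    for _ in range(100):  # bisection on the Lagrange multiplier eta
        eta = np.sqrt(eta_lo * eta_hi)
        mu_new, Sigma_new = solve(eta)
        if kl_gaussian(mu_new, Sigma_new, mu_old, Sigma_old) > epsilon:
            eta_lo = eta   # constraint violated: regularize more
        else:
            eta_hi = eta   # constraint satisfied: try a greedier update
    return solve(eta_hi)

# Hypothetical 2-D example: standard-normal old policy, concave quadratic Q,
# KL bound of 0.1.
mu0, Sigma0 = np.zeros(2), np.eye(2)
A, b = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([1.0, -0.5])
mu1, Sigma1 = kl_constrained_update(mu0, Sigma0, A, b, epsilon=0.1)
print(mu1, kl_gaussian(mu1, Sigma1, mu0, Sigma0))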

Keywords: reinforcement learning, policy search, trajectory optimization
Subjects: G Mathematical and Computer Sciences > G760 Machine Learning
Divisions: College of Science > School of Computer Science
ID Code: 32457
Deposited On: 21 Jun 2018 22:17
