Projections for Approximate Policy Iteration Algorithms

Akrour, R., Pajarinen, J., Neumann, G. and Peters, J. (2019) Projections for Approximate Policy Iteration Algorithms. In: Proceedings of the International Conference on Machine Learning (ICML), 9th–15th June 2019, California, USA.

Full content URL: http://proceedings.mlr.press/v97/akrour19a.html

Documents
Published PDF: papi.pdf (Whole Document, 3MB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive

Abstract

Approximate policy iteration is a class of reinforcement learning (RL) algorithms in which the policy is encoded using a function approximator; it has been especially prominent in RL with continuous action spaces. In this class of RL algorithms, ensuring that the policy return increases during a policy update often requires constraining the change in the action distribution. Several approximations exist in the literature for solving this constrained policy update problem. In this paper, we propose to improve on such solutions by introducing a set of projections that transform the constrained problem into an unconstrained one, which is then solved by standard gradient descent. Using these projections, we demonstrate empirically that our approach can improve both the policy update solution and the control over exploration of existing approximate policy iteration algorithms.
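
To give a rough sense of the idea, the following sketch (hypothetical code, not taken from the paper) projects a candidate diagonal-Gaussian policy back into a KL trust region around the old policy by interpolating its mean and log standard deviation, with the interpolation coefficient found by bisection. In the paper, such projections are composed with the policy parameterization so that standard gradient descent on unconstrained parameters always yields a feasible policy; the code below shows only the projection step itself, and the interpolation-plus-bisection rule is a simplified stand-in for the paper's actual projection operators. The names kl_diag_gauss and project_policy are invented for this illustration.

import numpy as np

def kl_diag_gauss(mu_q, std_q, mu_p, std_p):
    # KL( N(mu_q, diag(std_q^2)) || N(mu_p, diag(std_p^2)) ).
    return np.sum(np.log(std_p / std_q)
                  + (std_q**2 + (mu_q - mu_p)**2) / (2.0 * std_p**2)
                  - 0.5)

def project_policy(mu_old, std_old, mu_new, std_new, epsilon, tol=1e-8):
    # Pull a candidate Gaussian policy back into the trust region
    # KL(projected || old) <= epsilon by interpolating between the old
    # and candidate parameters (a simplified stand-in for the paper's
    # projections, which would be baked into the parameterization).
    def interp(alpha):
        mu = (1 - alpha) * mu_old + alpha * mu_new
        std = np.exp((1 - alpha) * np.log(std_old) + alpha * np.log(std_new))
        return mu, std

    # Candidate already satisfies the constraint: nothing to project.
    if kl_diag_gauss(mu_new, std_new, mu_old, std_old) <= epsilon:
        return mu_new, std_new

    # KL is 0 at alpha = 0; we assume it grows along the interpolation
    # path, so bisection finds the largest feasible alpha.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        mu, std = interp(mid)
        if kl_diag_gauss(mu, std, mu_old, std_old) <= epsilon:
            lo = mid
        else:
            hi = mid
    return interp(lo)

# Example: a large candidate update gets pulled back to the KL boundary.
mu_old, std_old = np.zeros(2), np.ones(2)
mu_new, std_new = np.array([2.0, -1.0]), np.array([0.3, 2.0])
mu_proj, std_proj = project_policy(mu_old, std_old, mu_new, std_new, epsilon=0.1)
print(kl_diag_gauss(mu_proj, std_proj, mu_old, std_old))  # ~ 0.1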

Keywords: Deep Reinforcement Learning
Subjects: G Mathematical and Computer Sciences > G760 Machine Learning
Divisions: College of Science > School of Computer Science
ID Code: 36285
Deposited On: 24 Jun 2019 09:09
