Model-free preference-based reinforcement learning

Wirth, C. and Fürnkranz, J. and Neumann, G. (2016) Model-free preference-based reinforcement learning. In: Thirtieth AAAI Conference on Artificial Intelligence, 12 - 17 February 2016, Phoenix, United States.

Documents
Wirth_AAAI2016.pdf - Whole Document (PDF, 254kB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive

Abstract

Specifying a numeric reward function for reinforcement learning typically requires a lot of hand-tuning from a human expert. In contrast, preference-based reinforcement learning (PBRL) uses only pairwise comparisons between trajectories as a feedback signal, which are often more intuitive to specify. Currently available approaches to PBRL for control problems with continuous state/action spaces require a known or estimated model, which is often not available and hard to learn. In this paper, we integrate preference-based estimation of the reward function into a model-free reinforcement learning (RL) algorithm, resulting in a model-free PBRL algorithm. Our new algorithm is based on Relative Entropy Policy Search (REPS), which enables us to utilize stochastic policies and to directly control the greediness of the policy update. REPS decreases the exploration of the policy slowly by limiting the relative entropy of the policy update, which ensures that the algorithm is provided with a versatile set of trajectories, and consequently with informative preferences. The preference-based estimation is computed with a sample-based Bayesian method, which can also estimate the uncertainty of the utility. Additionally, we compare against a linearly solvable approximation based on inverse RL. We show that both approaches perform favourably compared to the current state of the art. The overall result is an algorithm that can learn non-parametric continuous-action policies from a small number of preferences.
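The two ingredients described in the abstract can be sketched in a few lines: a sample-based Bayesian estimate of a trajectory utility from pairwise preferences, followed by an exponential reweighting of trajectories whose temperature plays the role of the relative-entropy (greediness) bound in REPS. This is a minimal illustrative sketch, not the paper's algorithm: the linear utility model, the Bradley-Terry preference likelihood, the importance-sampling posterior, and all variable names (`true_w`, `eta`, etc.) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trajectory tau is summarised by a feature vector
# phi(tau); the unknown utility is assumed linear, U(tau) = w . phi(tau).
true_w = np.array([1.0, -0.5])          # hidden "expert" utility weights
features = rng.normal(size=(20, 2))     # 20 sampled trajectories, 2 features

def pref_loglik(w, pairs, feats):
    """Bradley-Terry log-likelihood of observed preferences tau_i > tau_j."""
    ll = 0.0
    for i, j in pairs:
        diff = (feats[i] - feats[j]) @ w
        ll += -np.log1p(np.exp(-diff))  # log sigmoid(U_i - U_j)
    return ll

# Simulate noisy pairwise preferences drawn from the true utility.
pairs = []
for _ in range(30):
    i, j = rng.choice(len(features), size=2, replace=False)
    p = 1.0 / (1.0 + np.exp(-(features[i] - features[j]) @ true_w))
    pairs.append((i, j) if rng.random() < p else (j, i))

# Sample-based posterior over utility weights: draw candidates from a
# standard-normal prior and weight them by the preference likelihood
# (plain importance sampling). The spread of the weighted samples gives
# an uncertainty estimate for the utility.
cands = rng.normal(size=(5000, 2))
logw = np.array([pref_loglik(w, pairs, features) for w in cands])
post = np.exp(logw - logw.max())
post /= post.sum()

w_mean = post @ cands                    # posterior mean utility weights
w_var = post @ (cands - w_mean) ** 2     # per-dimension posterior variance

# REPS-style update (sketch): reweight the sampled trajectories by their
# exponentiated estimated utility; the temperature eta corresponds to the
# bound on the relative entropy of the policy update, so larger eta means
# a less greedy update and more exploration.
eta = 2.0
util = features @ w_mean
reps_w = np.exp((util - util.max()) / eta)
reps_w /= reps_w.sum()
```

In a full model-free loop, the reweighted trajectories would be used to fit the next stochastic policy, new trajectories would be sampled from it, and fresh preferences would be queried; the sketch only shows one such estimation-and-reweighting step.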

Keywords: Preference Learning, Reinforcement Learning
Subjects: G Mathematical and Computer Sciences > G760 Machine Learning
Divisions: College of Science > School of Computer Science
Related URLs:
ID Code: 25746
Deposited On: 17 Mar 2017 14:35
