A vision-guided parallel parking system for a mobile robot using approximate policy iteration

Shaker, Marwan, Duckett, Tom and Yue, Shigang (2010) A vision-guided parallel parking system for a mobile robot using approximate policy iteration. In: 11th Conference Towards Autonomous Robotic Systems (TAROS'2010), 31st August - 1st September 2010, Plymouth, Devon.

Full content URL: http://www.tech.plym.ac.uk/soc/staff/guidbugm/taro...

Documents
TAROS.pdf (PDF, 370kB)
Item Type:Conference or Workshop contribution (Paper)
Item Status:Live Archive

Abstract

Reinforcement Learning (RL) methods enable autonomous robots to learn skills from scratch by interacting with the environment. However, reinforcement learning can be very time-consuming. This paper focuses on accelerating the reinforcement learning process for a mobile robot in an unknown environment. The presented algorithm is based on approximate policy iteration with a continuous state space and a fixed number of actions. The action-value function is represented by a weighted combination of basis functions. Furthermore, a complexity analysis is provided to show that the implemented approach is guaranteed to converge to an optimal policy in less computational time. A parallel parking task is selected for testing purposes. In the experiments, the efficiency of the proposed approach is demonstrated and analyzed through a set of simulated and real robot experiments, with comparisons drawn against two well-known algorithms (Dyna-Q and Q-learning).
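The action-value representation the abstract describes can be sketched briefly: Q(s, a) is a linear combination of basis functions over a continuous state, with one block of features per action from a fixed, discrete action set. The following is a minimal illustrative sketch only; the function names, the radial-basis-function choice, and the one-dimensional state are assumptions, not the authors' actual implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): Q(s, a) as a
# weighted combination of basis functions, with a continuous state and a
# fixed number of discrete actions. The Gaussian (RBF) basis and all
# constants below are hypothetical choices for demonstration.

N_ACTIONS = 3                         # e.g. steer-left, straight, steer-right
CENTERS = np.linspace(-1.0, 1.0, 5)   # RBF centers over a 1-D state variable
SIGMA = 0.5                           # RBF width


def basis(state, action):
    """Block feature vector: RBF features of the state placed in the slot
    for the chosen action, zeros elsewhere (one block per action)."""
    rbf = np.exp(-((state - CENTERS) ** 2) / (2 * SIGMA ** 2))
    phi = np.zeros(N_ACTIONS * len(CENTERS))
    phi[action * len(CENTERS):(action + 1) * len(CENTERS)] = rbf
    return phi


def q_value(weights, state, action):
    """Q(s, a) = w . phi(s, a): a weighted combination of basis functions."""
    return weights @ basis(state, action)


def greedy_action(weights, state):
    """Policy-improvement step of policy iteration: the action that
    maximizes the approximate Q(s, a)."""
    return max(range(N_ACTIONS), key=lambda a: q_value(weights, state, a))
```

In approximate policy iteration, the weight vector would be fitted from sampled transitions at each policy-evaluation step, and `greedy_action` then defines the improved policy.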

Keywords:Reinforcement learning, approximate policy iteration, parallel parking
Subjects:H Engineering > H670 Robotics and Cybernetics
H Engineering > H671 Robotics
G Mathematical and Computer Sciences > G400 Computer Science
Divisions:College of Science > School of Computer Science
ID Code:3865
Deposited On:18 Jan 2011 22:19
