Continuous action reinforcement learning applied to vehicle suspension control

Howell, M. N., Frost, G. P., Gordon, T. J. and Wu, Q. H. (1997) Continuous action reinforcement learning applied to vehicle suspension control. Mechatronics, 7 (3). pp. 263-276. ISSN 0957-4158

Full text not available from this repository.

Item Type: Article
Item Status: Live Archive


A new reinforcement learning algorithm is introduced which can be applied over a continuous range of actions. The learning algorithm is reward-inaction based, with a set of probability density functions being used to determine the action set. An experimental study is presented, based on the control of a semi-active suspension system on a road-going, four-wheeled passenger vehicle. The control objective is to minimise the mean-square acceleration of the vehicle body, thus improving the ride isolation qualities of the vehicle. This represents a difficult class of learning problems, owing to the stochastic nature of the road input disturbance together with unknown high-order dynamics, sensor noise and the non-linear (semi-active) control actuators. The learning algorithm described here operates over a bounded continuous action set, is robust to high levels of noise and is ideally suited to operating in a parallel computing environment. © 1997 Elsevier Science Ltd.
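The abstract's core idea — a reward-inaction automaton that maintains a probability density function over a bounded continuous action set, sampling actions from the density and reinforcing it near rewarded actions — can be illustrated with a minimal sketch. This is not the paper's implementation; the discretised density, Gaussian reinforcement kernel, class name and all constants below are illustrative assumptions.

```python
import math
import random

class ContinuousActionAutomaton:
    """Hedged sketch of a continuous-action, reward-inaction learning
    automaton: a probability density over a bounded action interval is
    sampled for actions and reinforced only when a reward is received.
    All names and constants here are illustrative, not from the paper."""

    def __init__(self, low, high, bins=100, rate=0.3, width=0.1):
        self.low, self.high = low, high
        self.bins = bins
        self.rate = rate                         # learning-rate for density updates
        self.width = width * (high - low)        # spread of the reinforcement kernel
        # start from a uniform density over the bounded action set
        self.density = [1.0 / bins] * bins

    def _centers(self):
        step = (self.high - self.low) / self.bins
        return [self.low + (i + 0.5) * step for i in range(self.bins)]

    def sample(self):
        # draw an action from the current probability density (inverse CDF)
        r, acc = random.random(), 0.0
        for c, p in zip(self._centers(), self.density):
            acc += p
            if r <= acc:
                return c
        return self._centers()[-1]

    def update(self, action, reward):
        # reward-inaction: the density is left unchanged unless reward > 0
        if reward <= 0:
            return
        # add a Gaussian bump of probability mass around the rewarded action,
        # then renormalise so the density still integrates to one
        bump = [math.exp(-((c - action) / self.width) ** 2)
                for c in self._centers()]
        self.density = [p + self.rate * reward * b
                        for p, b in zip(self.density, bump)]
        total = sum(self.density)
        self.density = [p / total for p in self.density]
```

As a usage sketch, rewarding actions near a hypothetical optimum (standing in for a suspension-gain setting that minimises body acceleration) concentrates the density around it, even with noisy rewards — the robustness-to-noise property the abstract claims:

```python
random.seed(0)
automaton = ContinuousActionAutomaton(0.0, 1.0)
for _ in range(2000):
    a = automaton.sample()
    reward = 1.0 if abs(a - 0.7) < 0.1 else 0.0   # toy, noisy-free stand-in
    automaton.update(a, reward)
mean_action = sum(c * p for c, p in zip(automaton._centers(), automaton.density))
```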

Keywords: Acceleration control, Actuators, Closed loop control systems, Dynamics, Performance, Probability density function, Spurious signal noise, Vehicle suspensions, Reinforcement learning, Vehicle suspension control, Learning algorithms
Subjects: H Engineering > H660 Control Systems
H Engineering > H330 Automotive Engineering
Divisions: College of Science > School of Engineering
ID Code: 11688
Deposited On: 22 Aug 2013 15:52
