Associative reinforcement learning for discrete-time optimal control

Howell, M. N. and Gordon, Timothy (2000) Associative reinforcement learning for discrete-time optimal control. IEE Colloquium (Digest) (69). pp. 1-4. ISSN 0963-3308

Full content URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumbe...

Full text not available from this repository.

Item Type: Article
Item Status: Live Archive

Abstract

This paper investigates the application of associative reinforcement learning techniques to the optimal control of linear discrete-time dynamic systems. Associative reinforcement learning involves trial-and-error interaction with a dynamic system to determine the control actions that optimally achieve a desired performance index. The methodology can be applied either on-line or off-line, and in a model-based or model-free manner. Associative reinforcement learning techniques are applied to the linear quadratic regulator (LQR) control of discrete-time linear systems. Adaptive critic designs are implemented and their convergence speeds are compared across the different approaches. These methods can determine the optimal state and state/action value functions and the optimal policy without requiring a system model.
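The abstract describes finding the state/action (Q) value function and the optimal policy for an LQR problem without a system model. As an illustration of that general idea, and not of the paper's own implementation, the sketch below runs a Q-learning-style policy iteration for a discrete-time LQR problem: the quadratic Q-function of the current linear policy is fitted by least squares from simulated transitions, and the policy is then improved greedily. The system matrices, cost weights, exploration noise level, and iteration counts are all illustrative assumptions.

```python
import numpy as np

# Illustrative 2-state, 1-input discrete-time linear system and LQR cost
# weights; the paper does not specify a particular example.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)      # state cost weight
Rc = np.eye(1)      # control cost weight

n, m = B.shape
p = n + m           # dimension of the joint state/action vector z = [x; u]

def phi(z):
    """Quadratic features of z so that z' H z = theta . phi(z) for symmetric H."""
    outer = np.outer(z, z)
    feats = []
    for i in range(p):
        for j in range(i, p):
            # Off-diagonal entries appear twice in z' H z, so double them once here.
            feats.append(outer[i, j] * (1.0 if i == j else 2.0))
    return np.array(feats)

def unpack(theta):
    """Rebuild the symmetric matrix H from the packed parameter vector."""
    H = np.zeros((p, p))
    idx = 0
    for i in range(p):
        for j in range(i, p):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    return H

rng = np.random.default_rng(0)
K = np.zeros((m, n))          # initial stabilising policy u = -K x (open loop is stable here)

# Policy iteration: evaluate the Q-function of the current policy from
# simulated transitions (policy evaluation), then improve the policy greedily.
for it in range(10):
    Phi, targets = [], []
    x = rng.standard_normal(n)
    for k in range(400):
        u = -K @ x + 0.1 * rng.standard_normal(m)   # exploration noise
        cost = x @ Qc @ x + u @ Rc @ u
        x_next = A @ x + B @ u
        u_next = -K @ x_next                        # on-policy successor action
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, u_next])
        # Bellman equation for the fixed policy: z' H z = cost + z_next' H z_next
        Phi.append(phi(z) - phi(z_next))
        targets.append(cost)
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)
    H = unpack(theta)
    Huu = H[n:, n:]
    Hux = H[n:, :n]
    K = np.linalg.solve(Huu, Hux)                   # greedy policy improvement

print("learned feedback gain K:", K)
# The result can be checked against the Riccati-based LQR gain, e.g. via
# scipy.linalg.solve_discrete_are, which uses the model that this loop never needs.
```

Note that only sampled transitions (x, u, cost, x_next) are used: the matrices A and B appear in the simulation but never in the learning update, which is the model-free property the abstract refers to.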

Keywords: Algorithms, Discrete time control systems, Error analysis, Learning systems, Linear control systems, Neural networks, Problem solving, Riccati equations, Bellman equations, Control laws, Infinite time control, Reinforcement learning techniques, Optimal control systems
Subjects: H Engineering > H990 Engineering not elsewhere classified
H Engineering > H650 Systems Engineering
Divisions: College of Science > School of Engineering
ID Code: 11677
Deposited On: 04 Oct 2013 11:16
