Some practical aspects on incremental training of RBF network for robot behavior learning

Jun, Li and Duckett, Tom (2008) Some practical aspects on incremental training of RBF network for robot behavior learning. In: 7th World Congress on Intelligent Control and Automation, 2008. WCICA 2008, 25-27 June 2008, Chongqing.

Full text not available from this repository.

Item Type:Conference or Workshop contribution (Paper)
Item Status:Live Archive


The radial basis function (RBF) neural network with Gaussian activation function and least-mean-squares (LMS) learning algorithm is a popular function approximator, widely used in many applications due to its simplicity, robustness, optimal approximation, etc. In practice, however, making the RBF network (and other neural networks) work well can sometimes be more of an art than a science, especially concerning parameter selection and adjustment. In this paper, we address three issues, namely the normalization of raw sensory-motor data, the choice of receptive fields for the RBFs, and the adjustment of the learning rate when training the RBF network in incremental fashion for robot behavior learning, where the RBF network is used to map sensory inputs to motor outputs. Though these issues are less theoretical, they are more practical, and sometimes more crucial, for applying the RBF network to the problems at hand. We believe that awareness of these practical issues enables better use of the RBF network in real-world applications.

1 Introduction

The radial basis function (RBF) network [3, 16] has found a wide range of applications due to its simplicity, local learning, robustness, optimal approximation, etc. For example, in an autonomous robot control system, the RBF network can be applied to map sensory inputs directly to motor outputs [23, 21, 9, 15] in order to acquire the required behaviors. However, these successful applications give little description of how the parameters are chosen and adjusted, or why they are adjusted as they are for the applications of interest. In this paper, we address three practical aspects of incremental training of the RBF network for robot behavior learning: normalizing the raw sensor input, choosing the receptive fields of the RBFs, and adjusting the learning rate.
We restrict our investigation of these issues to the following situations. First of all, for simplicity of notation, consider a multi-input, single-output (MISO) system in which $x = [x_1, x_2, \ldots, x_m]^T$ is an $m$-dimensional input vector and $y$ is the scalar output. The RBF neural network is defined as:

$$\hat{y} = F(x) = \sum_{k=1}^{K} w_k \phi_k(x) + b, \qquad \phi_k(x) = e^{-\frac{1}{(\gamma\sigma_k)^2}\|x - \mu_k\|^2} \quad \text{for } k = 1, 2, \ldots, K, \tag{1}$$

where $w_k$ is the weight of the $k$-th Gaussian function $\phi_k(x)$, $\mu_k = [\mu_{k1}, \mu_{k2}, \ldots, \mu_{km}]^T$ is the $m$-dimensional position vector of the $k$-th radial basis function, and $\sigma_k$ is the receptive field of the $k$-th radial basis function. In addition, $K$ is the number of RBFs, $b$ is the bias, and $\gamma$ is the optimal factor introduced for optimising the receptive field $\sigma_k$, as in [20]. We assume that the number of RBFs $K$ can either be designated in advance before training, in which case clustering algorithms such as MacQueen's K-means or Kohonen's SOM [10] can be used to determine the position vectors $\mu_k$, or be obtained automatically in real time during training by using dynamically adaptive clustering algorithms such as GWR [14]. In both cases, the receptive field $\sigma_k$ can be determined by an empirical estimation method (see Section 3). We also assume that the RBF network's weights $w_k$ and bias $b$ are updated by the least mean squares (LMS) algorithm:

$$w_k \Leftarrow w_k + \eta_t (y_t - \hat{y}) \phi_k(x_t) \quad \text{for } k = 1, 2, \ldots, K, \qquad b \Leftarrow b + \eta_t (y_t - \hat{y}), \tag{2}$$
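As a concrete illustration of Eqs. (1) and (2), the following is a minimal sketch of the MISO RBF network with incremental LMS training. The centre positions $\mu_k$ are assumed to be fixed in advance (e.g. from K-means); the class name, the receptive fields $\sigma_k$, the factor $\gamma$, and the learning rate $\eta$ used below are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

class RBFNetwork:
    """Hypothetical sketch of the MISO RBF network of Eq. (1),
    trained incrementally with the LMS rule of Eq. (2)."""

    def __init__(self, centres, sigmas, gamma=1.0):
        self.mu = np.asarray(centres, dtype=float)   # (K, m) centre positions mu_k
        self.sigma = np.asarray(sigmas, dtype=float) # (K,) receptive fields sigma_k
        self.gamma = gamma                           # optimal factor gamma
        self.w = np.zeros(len(self.mu))              # weights w_k
        self.b = 0.0                                 # bias b

    def _phi(self, x):
        # Gaussian activations: exp(-||x - mu_k||^2 / (gamma * sigma_k)^2)
        d2 = np.sum((self.mu - x) ** 2, axis=1)
        return np.exp(-d2 / (self.gamma * self.sigma) ** 2)

    def predict(self, x):
        # Eq. (1): weighted sum of activations plus bias
        return float(self.w @ self._phi(x) + self.b)

    def lms_update(self, x, y, eta):
        # Eq. (2): one incremental LMS step on sample (x_t, y_t)
        phi = self._phi(x)
        err = y - self.predict(x)
        self.w += eta * err * phi
        self.b += eta * err
        return err
```

Each call to `lms_update` performs one incremental step, so the network can be trained online as sensor readings arrive, which matches the incremental-learning setting the paper considers.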

Subjects:H Engineering > H670 Robotics and Cybernetics
Divisions:College of Science > School of Computer Science
ID Code:12853
Deposited On:06 Jan 2014 09:17
