Yin, H. and Allinson, N. M. (1998) A self-organising mixture network for density modelling. In: The 1998 IEEE International Joint Conference on Neural Networks, 4-9 May 1998, Anchorage, USA.
Full content URL: http://dx.doi.org/10.1109/IJCNN.1998.687216
PDF: 00687216.pdf (Whole Document, 440kB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive
Abstract
A completely unsupervised mixture distribution network, namely the self-organising mixture network, is proposed for learning arbitrary density functions. The algorithm minimises the Kullback-Leibler information by means of stochastic approximation methods. The density functions are modelled as mixtures of parametric distributions such as Gaussian and Cauchy. The first layer of the network is similar to Kohonen's self-organising map (SOM), but with the parameters of the class conditional densities as the learning weights. The winning mechanism is based on maximum posterior probability, and the updating of weights can be limited to a small neighbourhood around the winner. The second layer accumulates the responses of these local nodes, weighted by the learned mixing parameters. The network is simple in structure and computation, yet yields fast and robust convergence. Experimental results are also presented.
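
The abstract describes the update scheme only at a high level; the following is a minimal sketch of that scheme in Python, assuming one-dimensional Gaussian components. The learning-rate and neighbourhood schedules, the constants, and the demo target density are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative settings, not taken from the paper ---
K = 10                           # nodes on a 1-D map lattice
mu = rng.uniform(-3.0, 3.0, K)   # Gaussian means (learning weights)
var = np.ones(K)                 # Gaussian variances
pi = np.ones(K) / K              # mixing weights
n_steps = 20000

def gauss(x, mu, var):
    """Gaussian pdf, elementwise over the nodes."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def sample():
    """Demo data: a hypothetical two-mode target density."""
    return rng.normal(-2.0, 0.5) if rng.random() < 0.4 else rng.normal(1.5, 1.0)

for t in range(n_steps):
    x = sample()
    alpha = 0.05 * (1.0 - t / n_steps)            # decaying learning rate
    width = max(2.0 * (1.0 - t / n_steps), 0.5)   # shrinking neighbourhood

    # Posterior responsibilities P(i|x) under the current mixture estimate.
    p = pi * gauss(x, mu, var)
    post = p / p.sum()

    # Winner selection by maximum posterior probability.
    v = int(np.argmax(post))

    # Neighbourhood function on the lattice, centred on the winner.
    h = np.exp(-((np.arange(K) - v) ** 2) / (2.0 * width ** 2))

    # Stochastic-approximation updates of the conditional densities,
    # weighted by posterior responsibility and neighbourhood strength.
    g = alpha * h * post
    d = x - mu
    mu += g * d
    var += g * (d ** 2 - var)

    # Mixing-weight update, renormalised to remain a distribution.
    pi += alpha * h * (post - pi)
    pi /= pi.sum()

# The learned density estimate is the mixture: sum_i pi_i * N(x; mu_i, var_i).
xs = np.linspace(-5, 5, 200)
density = (pi[:, None] * gauss(xs[None, :], mu[:, None], var[:, None])).sum(0)
```

As in the second layer described above, the final density estimate accumulates the node responses weighted by the mixing parameters; the neighbourhood-restricted, posterior-weighted updates play the role of the SOM-like first layer.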
Keywords: Gaussian distribution, convergence, probability, self-organising feature maps, unsupervised learning, algorithms, Bayesian methods, pattern classification
Subjects: G Mathematical and Computer Sciences > G730 Neural Computing
Divisions: College of Science > School of Computer Science
ID Code: 5085
Deposited On: 21 Apr 2012 07:35