Comparison of a Bayesian SOM with the EM algorithm for Gaussian mixtures

Yin, Hujun and Allinson, Nigel (1997) Comparison of a Bayesian SOM with the EM algorithm for Gaussian mixtures. In: Workshop on Self-Organising Maps (WSOM'97), 4-6 June 1997, Helsinki, Finland.

Official URL: http://users.ics.tkk.fi/wsom97/program.html

Abstract

A Bayesian SOM (BSOM) [8] is proposed and applied to the unsupervised learning of
Gaussian mixture distributions, and its performance is compared with that of the
expectation-maximisation (EM) algorithm. The BSOM is found to yield results as good as those
of the well-known EM algorithm, but with far fewer iterations and, more importantly, it can be
used as an on-line training method. The neighbourhood function and distance measures of the
traditional SOM [3] are replaced by the neuron's on-line estimated posterior probabilities,
which can be interpreted as a Bayesian inference of the neuron's opportunity to share in the
winning response and so to adapt to the input pattern. These posteriors, starting from uniform
priors, are gradually sharpened as more data samples become available, thereby improving the
estimation of the model parameters. Each neuron then converges to one component of the
mixture. Experimental results are compared with those of the EM algorithm.
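The idea described above, replacing the SOM's winner-take-all distance rule with each neuron's on-line estimated posterior, can be illustrated with a minimal sketch. The update rules, learning rate, and one-dimensional two-component setting below are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

def gauss_pdf(x, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bsom_online(data, means, variances, priors, lr=0.05):
    """Sketch of a posterior-weighted on-line update: each neuron's share
    of the adaptation is its Bayesian posterior given the current sample.
    (Illustrative update rules; lr is an assumed constant learning rate.)"""
    for x in data:
        # Posterior responsibility of each neuron for the sample x
        likes = [p * gauss_pdf(x, m, v) for p, m, v in zip(priors, means, variances)]
        total = sum(likes) or 1e-12
        posts = [l / total for l in likes]
        # On-line updates weighted by the posteriors, replacing the
        # neighbourhood function / distance measure of the classic SOM
        for k, r in enumerate(posts):
            means[k] += lr * r * (x - means[k])
            variances[k] += lr * r * ((x - means[k]) ** 2 - variances[k])
            priors[k] += lr * (r - priors[k])
    return means, variances, priors

random.seed(0)
# Synthetic two-component mixture: N(-2, 0.5) and N(3, 1.0), equal weights
data = [random.gauss(-2, 0.5 ** 0.5) if random.random() < 0.5 else random.gauss(3, 1.0)
        for _ in range(4000)]
means, variances, priors = bsom_online(data, [-1.0, 1.0], [1.0, 1.0], [0.5, 0.5])
```

Starting from uniform priors, each neuron's posterior sharpens as samples arrive and each neuron drifts toward one mixture component, mirroring the behaviour the abstract attributes to the BSOM.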

Item Type: Conference or Workshop Item (Paper)
Keywords: pattern classification, mixture of Gaussians, EM algorithm, BSOM algorithm
Subjects: G Mathematical and Computer Sciences > G400 Computer Science
Divisions: College of Science > School of Computer Science
ID Code: 5020
Deposited By: Tammie Farley
Deposited On: 12 Apr 2012 15:07
Last Modified: 13 Mar 2013 09:05
