Unsupervised learning
From Wikipedia, the free encyclopedia
Unsupervised learning is a method of machine learning in which a model is fit to observations. It is distinguished from supervised learning in that there are no a priori output labels. In unsupervised learning, a data set of input objects is gathered; the input objects are typically treated as a set of random variables, and a joint density model is then built for the data set.
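As a minimal sketch of what "building a joint density model" can mean, the following estimates a joint probability table for two discrete random variables by normalized counting over a small made-up data set (the data are purely illustrative):

```python
from collections import Counter

# Hypothetical data set of paired observations (x, y), each a discrete value.
data = [(0, 0), (0, 1), (0, 0), (1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]

# Build a joint probability model P(X, Y) by normalized counting:
# the probability of each pair is its relative frequency in the data.
counts = Counter(data)
n = len(data)
joint = {pair: c / n for pair, c in counts.items()}

print(joint[(0, 0)])  # (0, 0) occurs 3 times out of 8 -> 0.375
```

In practice the model is usually a parametric or latent-variable density (for example a mixture model) rather than a raw count table, but the principle is the same: the model assigns a probability to every configuration of the inputs.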
Unsupervised learning can be used in conjunction with Bayesian inference to produce conditional probabilities (i.e. supervised learning) for any of the random variables given the others.
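The conditioning step can be sketched directly from the definition of conditional probability, P(Y | X) = P(X, Y) / P(X), using the same kind of count-based joint table (the data set here is made up for illustration):

```python
from collections import Counter

# Hypothetical data set of paired observations (x, y).
data = [(0, 0), (0, 1), (0, 0), (1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]
n = len(data)
joint = {pair: c / n for pair, c in Counter(data).items()}

# Marginal P(X = x), obtained by summing the joint over all values of y.
def p_x(x):
    return sum(p for (xi, _), p in joint.items() if xi == x)

# Conditional P(Y = y | X = x) = P(X = x, Y = y) / P(X = x).
def p_y_given_x(y, x):
    return joint.get((x, y), 0.0) / p_x(x)

print(p_y_given_x(1, 1))  # 3/8 divided by 4/8 -> 0.75
```

Once the joint model is available, any variable can play the role of the "output": predicting Y from X this way is exactly the conditional-probability form of supervised learning mentioned above.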
Unsupervised learning is also useful for data compression: fundamentally, all data compression algorithms either explicitly or implicitly rely on a probability distribution over a set of inputs.
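The link to compression can be made concrete with Shannon's source-coding result: an optimal prefix code assigns roughly -log2 p(s) bits to a symbol of probability p(s), so the expected code length approaches the entropy of the distribution. The symbol probabilities below are an assumed (dyadic) example chosen so the lengths come out exact:

```python
import math

# Assumed symbol distribution over an input alphabet (illustrative only).
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# An optimal prefix code assigns about -log2 p(s) bits to symbol s;
# with dyadic probabilities these lengths are exact integers (1, 2, 3, 3).
code_lengths = {s: -math.log2(p) for s, p in probs.items()}

# Expected code length under the distribution, and the distribution's entropy.
expected = sum(p * code_lengths[s] for s, p in probs.items())
entropy = -sum(p * math.log2(p) for p in probs.values())

print(expected, entropy)  # 1.75 1.75 -- the code meets the entropy bound
```

A better probability model of the inputs therefore translates directly into shorter expected code lengths, which is the sense in which every compression algorithm relies on a distribution over its inputs.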
Another form of unsupervised learning is clustering, which need not be probabilistic. See also formal concept analysis.
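As one common non-probabilistic example of clustering (k-means, shown here on one-dimensional data; the data and parameters are made up for illustration):

```python
import random

# Minimal k-means sketch: alternate between assigning points to their
# nearest center and moving each center to the mean of its assignments.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize from random data points
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[j].append(p)
        # Update step: move each center to its cluster's mean
        # (keeping the old center if a cluster ends up empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
print(kmeans(data, 2))  # two well-separated groups: centers near 1.0 and 9.07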
Adaptive resonance theory (ART) allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART networks are also used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing. The first version of ART was "ART1", developed by Carpenter and Grossberg (1988).
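The role of the vigilance parameter can be sketched with a much-simplified ART1-style procedure on binary vectors (an illustrative reading of the algorithm, not a full implementation; the patterns, the vigilance value, and the choice-function constant `beta` are all assumptions for the demo). Each input either "resonates" with an existing category prototype, if their overlap covers at least a vigilance fraction of the input, or founds a new category:

```python
def overlap(a, b):
    """Number of positions where both binary vectors are 1."""
    return sum(ai & bi for ai, bi in zip(a, b))

def art1(inputs, vigilance, beta=0.5):
    """Simplified ART1-style clustering of binary vectors.

    `vigilance` in (0, 1]: higher values demand closer matches,
    so more (finer) categories are created.
    """
    prototypes = []  # learned category prototypes (binary vectors)
    labels = []
    for x in inputs:
        matched = None
        # Search categories in decreasing order of the choice function
        # |x AND w| / (beta + |w|).
        order = sorted(range(len(prototypes)),
                       key=lambda j: -overlap(x, prototypes[j])
                                     / (beta + sum(prototypes[j])))
        for j in order:
            # Vigilance test: the match must cover enough of the input.
            if overlap(x, prototypes[j]) / sum(x) >= vigilance:
                # Resonance: shrink the prototype toward the input
                # (componentwise AND, i.e. fast learning).
                prototypes[j] = [a & b for a, b in zip(x, prototypes[j])]
                matched = j
                break
        if matched is None:
            # No category passed the vigilance test: recruit a new one.
            prototypes.append(list(x))
            matched = len(prototypes) - 1
        labels.append(matched)
    return labels, prototypes

patterns = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
labels, protos = art1(patterns, vigilance=0.6)
print(labels)  # [0, 0, 1, 1]: two categories emerge at this vigilance
```

Raising the vigilance toward 1 makes the test harder to pass, so the same patterns split into more categories; this is how the number of clusters varies with the problem rather than being fixed in advance.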
Bibliography
- Geoffrey Hinton and Terrence J. Sejnowski (eds.) (1999). Unsupervised Learning: Foundations of Neural Computation. MIT Press. ISBN 0-262-58168-X. (This book focuses on unsupervised learning in neural networks.)