Abstract
In conventional competitive learning, each neuron compares the input vector to its own weight vector, and a winner is declared according to a deterministic distortion measure such as the Euclidean distance. Neurons whose weight vectors start far from the data points, however, may never win and hence never learn, which leads to a neuron underutilization problem. In this article we introduce a new stochastic competitive learning algorithm (SCoLA), in which the criterion for selecting the winning neuron combines a deterministic component and a stochastic component. The deterministic component is inversely proportional to the distance between the input vector and the weight vector, whereas the stochastic component is a zero-mean normal random variable whose variance decreases monotonically with the neuron's frequency of winning the competition. Neurons that win infrequently therefore retain high variance, and thus a better chance of winning future competitions. Simulation results demonstrate the effectiveness of the proposed stochastic competitive learning scheme: it achieves better neuron utilization than conventional competitive learning, yielding lower distortion in clustering and vector quantization applications.
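The winner-selection rule described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the abstract states only that the deterministic fitness is inversely proportional to distance and that the noise variance decreases monotonically with win frequency, so the specific variance schedule `sigma0 / sqrt(1 + wins)` and the learning-rate update are assumptions for the sake of a runnable example.

```python
import numpy as np

def scola_winner(x, weights, win_counts, sigma0=1.0, rng=None):
    """Select the winning neuron from a deterministic fitness
    (inverse distance to the input) plus zero-mean Gaussian noise
    whose variance shrinks as a neuron wins more often."""
    rng = np.random.default_rng() if rng is None else rng
    dists = np.linalg.norm(weights - x, axis=1)
    deterministic = 1.0 / (dists + 1e-12)  # inversely proportional to distance
    # Assumed schedule: variance decays as 1 / (1 + win count); the paper
    # only requires a monotonic decrease with winning frequency.
    sigma = sigma0 / np.sqrt(1.0 + win_counts)
    noise = rng.normal(0.0, sigma)         # one sample per neuron
    return int(np.argmax(deterministic + noise))

def scola_step(x, weights, win_counts, lr=0.1, **kw):
    """One competitive-learning update: move the winner toward x."""
    j = scola_winner(x, weights, win_counts, **kw)
    weights[j] += lr * (x - weights[j])
    win_counts[j] += 1
    return j
```

Because rarely winning neurons keep a large noise variance, even a neuron initialized far from the data occasionally draws a noise sample big enough to win, after which its weight vector is pulled toward the inputs; frequently winning neurons see their noise shrink, so selection becomes increasingly deterministic over time.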
Original language | English |
---|---|
Pages | 908-913 |
Number of pages | 6 |
Publication status | Published - 2001 |
Externally published | Yes |
Event | International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, United States, 15 Jul 2001 → 19 Jul 2001 |
Conference
Conference | International Joint Conference on Neural Networks (IJCNN'01) |
---|---|
Country/Territory | United States |
City | Washington, DC |
Period | 15/07/01 → 19/07/01 |