ANN – Self Organizing Neural Network (SONN) Learning Algorithm

Prerequisite: ANN – Self Organizing Neural Network (SONN)
Self-Organizing Neural Network Learning Algorithm:
Step 0:
  • Initialize the synaptic weights to random values in a specific interval, e.g. [-1, 1] or [0, 1].
  • Assign the topological neighborhood parameters (initial neighborhood radius).
  • Define the learning rate α (say, 0.1).
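
A minimal Step 0 sketch in Python/NumPy is shown below; the layer sizes, weight interval, starting radius, and learning-rate value are illustrative assumptions rather than values fixed by the algorithm.

import numpy as np

rng = np.random.default_rng(seed=0)

n_inputs = 3      # n: number of neurons in the input layer (assumed for illustration)
n_kohonen = 10    # m: number of neurons in the Kohonen layer (assumed for illustration)

# Step 0: synaptic weights drawn at random from [-1, 1], one row per Kohonen neuron
weights = rng.uniform(-1.0, 1.0, size=(n_kohonen, n_inputs))

# Topological neighborhood parameter (initial radius) and learning rate
radius = n_kohonen / 2.0   # assumed starting radius
alpha = 0.1                # learning rate, as suggested in Step 0

print(weights.shape, alpha, radius)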
Step 1:
  While the termination condition is false, perform Steps 2-8.
Step 2:
  For each randomly chosen input vector x from the set of training samples, perform Steps 3-5.
Step 3:
  Let w_{j_X} denote the synaptic weight vector of the winning neuron j_X for the input vector x. For each neuron j, the Euclidean distance between the pair of (n × 1) vectors x and w_j is calculated as

       d_j = \|\mathbf{x} - \mathbf{w}_j\| = \sqrt{\sum_{i=1}^{n} (x_i - w_{ij})^2}, \quad j = 1, 2, \dots, m

  This is the criterion for measuring the similarity between two vectors: every node (neuron) in the network is evaluated to determine whose weight vector most closely matches the input vector. (A code sketch covering Steps 3 and 4 follows Step 4.)
Step 4:
  Select the winning neuron j_X that best matches the input vector x, i.e., the neuron for which d_j is minimum:

       j_X(\mathbf{x}) = \arg\min_{j} \|\mathbf{x} - \mathbf{w}_j\|, \quad j = 1, 2, \dots, m

  where n is the number of neurons in the input layer and m is the number of neurons in the Kohonen layer. The winning node is generally termed the Best Matching Unit (BMU).
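
A small sketch of Steps 3 and 4, assuming the NumPy conventions of the Step 0 sketch (weights stored as an m × n matrix); the helper name find_bmu is hypothetical.

import numpy as np

def find_bmu(x, weights):
    """Return the index of the Best Matching Unit and all distances d_j.

    x       : input vector of shape (n,)
    weights : weight matrix of shape (m, n), one row per Kohonen neuron
    """
    # Step 3: Euclidean distance d_j = ||x - w_j|| for every neuron j
    distances = np.linalg.norm(weights - x, axis=1)
    # Step 4: winning neuron j_X = argmin_j d_j (the BMU)
    bmu = int(np.argmin(distances))
    return bmu, distances

# Illustrative usage
rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=(10, 3))
x = rng.uniform(-1.0, 1.0, size=3)
bmu, d = find_bmu(x, weights)
print("BMU:", bmu, "distance:", round(float(d[bmu]), 3))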
Step 5:
  Learning phase: update the synaptic weights. For all nodes j within the topological neighborhood of the winning neuron, and for every input index i:

       w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)

  where \Delta w_{ij}(p) is the weight correction at iteration p. The weight update is based on the competitive learning rule:

       \Delta w_{ij}(p) = \alpha \, [\,x_i - w_{ij}(p)\,] \ \text{if } j \in \Lambda_{j_X}(p), \qquad \Delta w_{ij}(p) = 0 \ \text{otherwise}

  where α is the learning rate and \Lambda_{j_X}(p) is the neighborhood function centered around the winner-takes-all neuron j_X at iteration p. Any neuron within the radius of the BMU is modified to make it more similar to the input vector.
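
Below is a sketch of the Step 5 update under the assumption that the Kohonen neurons are arranged on a 1-D line, so the neighborhood Λ contains every neuron whose index is within `radius` of the BMU; this layout and the helper name update_weights are assumptions made for illustration.

import numpy as np

def update_weights(x, weights, bmu, alpha, radius):
    """Competitive learning rule applied to all neurons in the BMU's neighborhood.

    Assumes a 1-D arrangement of Kohonen neurons, so the topological
    distance between neuron j and the BMU is simply |j - bmu|.
    """
    m = weights.shape[0]
    for j in range(m):
        if abs(j - bmu) <= radius:                    # j lies inside the neighborhood
            weights[j] += alpha * (x - weights[j])    # Δw_j = α (x - w_j)
    return weights

# Illustrative usage
rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=(10, 3))
x = rng.uniform(-1.0, 1.0, size=3)
weights = update_weights(x, weights, bmu=4, alpha=0.1, radius=2)
print(weights[3:6])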
Step 6:
  Update the learning rate α, for example with the geometric decay

       \alpha(p+1) = 0.5\,\alpha(p)

  (a code sketch for Steps 6 and 7 follows Step 7).
Step 7:
  At specified times, reduce the radius of the topological neighborhood around the BMU. As the clustering process progresses, the radius of the neighborhood around a cluster unit decreases accordingly.
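
Steps 6 and 7 only require that the learning rate and the neighborhood radius shrink as training proceeds; the geometric and exponential schedules below are common choices, shown here as assumptions rather than as the only valid ones.

import numpy as np

def decay_learning_rate(alpha, p, halve_every=100):
    """Step 6 (one possible schedule): halve the learning rate every `halve_every` iterations."""
    if p > 0 and p % halve_every == 0:
        alpha *= 0.5
    return alpha

def decay_radius(radius0, p, time_constant=1000.0):
    """Step 7 (one possible schedule): shrink the neighborhood radius exponentially."""
    return radius0 * np.exp(-p / time_constant)

# Illustrative usage
alpha, radius0 = 0.1, 5.0
for p in (0, 100, 200, 300):
    alpha = decay_learning_rate(alpha, p)
    print(p, round(alpha, 4), round(float(decay_radius(radius0, p)), 3))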
Step 8:
  Check for the termination condition (for example, a maximum number of iterations reached or a negligibly small change in the weights); if it is not met, continue from Step 2.
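
Putting Steps 0-8 together, the sketch below trains a small 1-D Kohonen layer end to end; the data set, layer size, decay schedules, and fixed-epoch stopping rule are all illustrative assumptions, and the function name train_sonn is hypothetical.

import numpy as np

def train_sonn(data, n_kohonen=10, alpha0=0.1, epochs=50, seed=0):
    """Sketch of the SONN/Kohonen learning loop for a 1-D layer of neurons."""
    rng = np.random.default_rng(seed)
    n_inputs = data.shape[1]
    # Step 0: random weights, initial neighborhood radius, learning rate
    weights = rng.uniform(-1.0, 1.0, size=(n_kohonen, n_inputs))
    radius0 = n_kohonen / 2.0
    time_constant = epochs / max(np.log(radius0), 1e-9)

    for epoch in range(epochs):                            # Step 1/8: fixed number of epochs (assumed stopping rule)
        radius = radius0 * np.exp(-epoch / time_constant)  # Step 7: shrinking neighborhood radius
        alpha = alpha0 * np.exp(-epoch / epochs)           # Step 6: decaying learning rate
        for x in rng.permutation(data):                    # Step 2: randomly ordered input vectors
            d = np.linalg.norm(weights - x, axis=1)        # Step 3: Euclidean distances
            bmu = int(np.argmin(d))                        # Step 4: Best Matching Unit
            for j in range(n_kohonen):                     # Step 5: competitive learning update
                if abs(j - bmu) <= radius:
                    weights[j] += alpha * (x - weights[j])
    return weights

# Illustrative usage on random 2-D data
rng = np.random.default_rng(1)
data = rng.uniform(-1.0, 1.0, size=(200, 2))
print(train_sonn(data)[:3])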
Example with iterations:
    1. The initial weight vectors w_j and an input vector x from the training set are given.
    2. We find the winning (best-matching) neuron j_X satisfying the minimum-distance Euclidean criterion:

       j_X = \arg\min_{j} \|\mathbf{x} - \mathbf{w}_j(p)\|, \quad j = 1, 2, \dots, m

    3. Neuron 3 is the winner, and its weight vector w_3 is updated following the competitive learning rule:

       \Delta \mathbf{w}_3(p) = \alpha \, [\,\mathbf{x} - \mathbf{w}_3(p)\,]

    4. The updated weight vector w_3 at iteration (p+1) is calculated as:

       \mathbf{w}_3(p+1) = \mathbf{w}_3(p) + \Delta \mathbf{w}_3(p)

    5. The weight vector w_3 of the winning neuron 3 becomes closer to the input vector x with each iteration.
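
To mirror the worked example, the snippet below runs one iteration with three 2-D weight vectors and one input vector; the numeric values are hypothetical, chosen only so that neuron 3 wins, and are not taken from the article.

import numpy as np

alpha = 0.1                                   # learning rate from Step 0

# Hypothetical initial weight vectors w_1, w_2, w_3 (rows) and input vector x
W = np.array([[0.9, 0.8],
              [0.6, 0.6],
              [0.1, 0.2]])
x = np.array([0.2, 0.1])

# Steps 3-4: minimum-distance Euclidean criterion
d = np.linalg.norm(W - x, axis=1)
winner = int(np.argmin(d))                    # index 2 -> neuron 3 is the winner
print("distances:", np.round(d, 3), "-> winner: neuron", winner + 1)

# Step 5: competitive learning rule (only the winner is updated in this example)
delta = alpha * (x - W[winner])
W[winner] = W[winner] + delta
print("delta w_3:", np.round(delta, 3))
print("updated w_3:", np.round(W[winner], 3))   # closer to x than [0.1, 0.2] was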

