Weight vector stabilization and Oja's rule

The easiest way to stabilize the weight vector is to normalize it after each update, so that its length is always kept equal to one (the sketch after the following list shows this two-step approach). In fact, in this kind of neural network we are not interested in the magnitude, but only in the direction, which is left unchanged by the normalization. However, there are two main reasons that discourage this approach:

  • It's non-local. To normalize the vector w, we need to know all of its components, and this isn't biologically plausible. A realistic synaptic weight model should be self-limiting, without requiring access to external information that may not be available.
  • The normalization must be performed after the correction has been applied, and hence requires a second iterative step.

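For comparison, this is how the explicit normalization would look in code. A minimal sketch, assuming a linear neuron y = wᵀx and a plain Hebbian update (the function name hebb_step_normalized and the parameter values are illustrative, not taken from the text):

```python
import numpy as np

def hebb_step_normalized(w, x, eta=0.01):
    """One Hebbian update followed by an explicit renormalization.

    The division by the norm requires every component of w (non-local)
    and is a second step performed after the correction itself.
    """
    y = np.dot(w, x)                 # linear neuron output
    w = w + eta * y * x              # standard Hebb's rule
    return w / np.linalg.norm(w)     # global renormalization: ||w|| = 1


rng = np.random.default_rng(0)
w = rng.normal(size=3)
for _ in range(1000):
    w = hebb_step_normalized(w, rng.normal(size=3))
print(np.linalg.norm(w))             # 1.0 by construction
```

The final division needs the whole vector w (hence the non-locality) and constitutes the second pass mentioned in the list above.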
In many machine learning contexts, these conditions are not limiting and explicit normalization can be freely adopted, but when it's necessary to work with neuroscientific models, it's better to look for other solutions. In a discrete form, we need to determine a correction term for the standard Hebb's rule, which takes the generic form:

\[ w_{t+1} = w_t + \eta \left( y_t x_t - f(w_t, x_t, y_t) \right) \]
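A minimal sketch of this discrete form, assuming a linear neuron and that the correction enters as a subtractive term inside the update (the name corrected_hebb_step is illustrative):

```python
import numpy as np

def corrected_hebb_step(w, x, eta, f):
    """Hebbian update with a generic, pluggable correction term f(w, x, y)."""
    y = np.dot(w, x)                           # linear neuron output
    return w + eta * (y * x - f(w, x, y))      # Hebbian term minus correction
```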
The f function can work both as a local and as a non-local normalizer. An example of the first type is Oja's rule:

\[ w_{t+1} = w_t + \eta \left( y_t x_t - \alpha\, y_t^2\, w_t \right) \]
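In code, Oja's rule amounts to choosing a correction proportional to y²w. A minimal sketch under the same assumptions as above (oja_step is an illustrative name):

```python
import numpy as np

def oja_step(w, x, eta=0.01, alpha=1.0):
    """One step of Oja's rule.

    The correction alpha * y**2 * w[i] for each weight depends only on the
    post-synaptic output y and on w[i] itself, so the rule is local.
    """
    y = np.dot(w, x)                               # linear neuron output
    return w + eta * (y * x - alpha * y**2 * w)    # Hebbian term minus local decay
```

Each weight w_i is updated using only x_i, the output y, and w_i itself, which is what makes the rule local.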
The α parameter is a positive number that controls the strength of the normalization. A non-rigorous proof of the stability of this rule can be obtained by considering the condition that the correction vanishes at convergence:

\[ \lim_{t \to \infty} \Delta w_t = 0 \quad \Rightarrow \quad y_t x_t = \alpha\, y_t^2\, w_t \]
The second expression implies that (multiplying both sides by w_tᵀ and recalling that the output is linear, y_t = w_tᵀ x_t):

\[ y_t \left( w_t^T x_t \right) = \alpha\, y_t^2 \left( w_t^T w_t \right) \quad \Rightarrow \quad y_t^2 = \alpha\, y_t^2\, \lVert w_t \rVert^2 \]
Therefore, when t → ∞, the magnitude of the weight correction becomes close to zero and the length of the weight vector w will approach a finite limit value:

\[ \lVert w \rVert \to \frac{1}{\sqrt{\alpha}} \]
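As a quick numerical check of this limit (the input distribution, α, and learning rate below are arbitrary choices, not taken from the text), iterating the rule should drive the norm of w towards 1/√α:

```python
import numpy as np

rng = np.random.default_rng(1)
# Zero-mean correlated Gaussian inputs
X = rng.multivariate_normal(np.zeros(2), [[2.0, 0.8], [0.8, 1.0]], size=5000)

alpha, eta = 4.0, 0.001
w = rng.normal(size=2)

for x in X:
    y = np.dot(w, x)
    w += eta * (y * x - alpha * y**2 * w)        # Oja's rule

print(np.linalg.norm(w), 1.0 / np.sqrt(alpha))   # ||w|| should approach 0.5
```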