Initializing with a zero-mean distribution

A better idea is to initialize the weights with small random values centered at zero. For this, we can draw values from a normal distribution with zero mean and unit variance, and then scale them by a small constant, such as 0.01.

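As a minimal sketch of this scheme (assuming NumPy and a fully connected layer; the function name and layer sizes here are illustrative, not from the original text):

```python
import numpy as np

def init_weights(n_in, n_out, scale=0.01, seed=None):
    """Zero-mean initialization: standard normal draws scaled by a small constant."""
    rng = np.random.default_rng(seed)
    # Samples from N(0, 1), then scaled so the weights are small but nonzero
    W = scale * rng.standard_normal((n_in, n_out))
    # Biases are commonly set to zero; random weights alone break the symmetry
    b = np.zeros(n_out)
    return W, b

W, b = init_weights(784, 256, scale=0.01, seed=0)
print(W.mean(), W.std())  # mean close to 0, standard deviation close to 0.01
```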
Doing this breaks the symmetry in the weights: because every weight is random and unique, the neurons compute different outputs in the forward pass and receive different gradients in the backward pass, so they update in distinct ways. This gives each neuron the chance to learn a different feature, and these features then work together as parts of one big neural network.

The only remaining concern is how small we set the scaling factor. If the weights are too small, the activations shrink at every layer, the gradients flowing back shrink with them, and the backpropagation updates become tiny, which can cause vanishing gradient problems in deeper networks.

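A quick sketch makes this concrete (a hypothetical 10-layer stack with tanh activations, chosen here only for illustration): the spread of the activations collapses toward zero as the signal passes through the layers, and the gradients collapse in the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 256))  # a batch of unit-variance inputs

for scale in (0.01, 0.001):
    h = x
    for layer in range(10):
        W = scale * rng.standard_normal((256, 256))
        h = np.tanh(h @ W)  # forward pass through one layer
    # The smaller the scale, the faster the signal dies out with depth
    print(f"scale={scale}: std of layer-10 activations = {h.std():.2e}")
```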
The following illustration shows one of the requirements for the weights (zero mean):
