Batch normalization

It is common practice in ML to scale and standardize the input data before feeding it to a model for training. For neural networks, too, scaling the inputs is a standard preprocessing step and has been shown to improve model performance. Can we apply the same trick to the data flowing into a hidden layer? Batch normalization is based on this idea: it normalizes the previous layer's activations by subtracting the mini-batch mean of the activations, μ, and dividing by the mini-batch standard deviation, σ. At prediction time, however, we may have only a single example, so the batch statistics μ and σ cannot be computed; they are replaced by running averages of the values collected during training.
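The following is a minimal sketch of this idea in NumPy, assuming 2-D activations of shape (batch_size, num_features); the class name, the eps constant, and the momentum parameter used for the running averages are illustrative choices, not part of the text above, and the learnable scale/shift parameters of full batch normalization are omitted for brevity.

```python
import numpy as np

class SimpleBatchNorm:
    """Illustrative batch normalization without learnable scale/shift."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        self.eps = eps                    # small constant for numerical stability (assumed)
        self.momentum = momentum          # weight of the current batch in the running stats (assumed)
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)

    def __call__(self, x, training=True):
        if training:
            mu = x.mean(axis=0)           # mini-batch mean per feature
            var = x.var(axis=0)           # mini-batch variance per feature
            # Accumulate running estimates for use at prediction time
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mu
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            # A single example has no batch statistics, so fall back
            # to the averages collected during training
            mu, var = self.running_mean, self.running_var
        return (x - mu) / np.sqrt(var + self.eps)

# Usage: normalize a training batch, then a single example at prediction time
bn = SimpleBatchNorm(num_features=4)
train_batch = np.random.randn(32, 4)
normalized_batch = bn(train_batch, training=True)
single_example = bn(np.random.randn(1, 4), training=False)
```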
