Inexact gradients

Most optimization algorithms assume that the exact gradient is known at a given point. In practice, however, we only have an estimate of it. How good is this estimate? In SGD, the batch size strongly influences the behavior of the stochastic-optimization algorithm, because it determines the variance of the gradient estimates: averaging per-sample gradients over a minibatch of size B gives an unbiased estimate of the full gradient, and the variance of that estimate shrinks roughly in proportion to 1/B.
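
To make this concrete, here is a minimal sketch (using NumPy on a synthetic least-squares problem; all names and values are illustrative assumptions, not from the original text) that estimates the gradient from minibatches of several sizes and compares each estimate against the full-batch gradient:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: loss_i(w) = 0.5 * (x_i @ w - y_i)^2
n, d = 10_000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=0.5, size=n)
w = np.zeros(d)  # the point at which we estimate the gradient

# The "exact" gradient, averaged over all n samples
full_grad = X.T @ (X @ w - y) / n

def minibatch_grad(batch_size):
    # Unbiased gradient estimate from a random minibatch
    idx = rng.choice(n, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size

for B in (1, 10, 100, 1000):
    grads = np.stack([minibatch_grad(B) for _ in range(500)])
    err = np.mean(np.sum((grads - full_grad) ** 2, axis=1))
    print(f"batch size {B:5d}: mean squared error of estimate = {err:.4f}")

Running this shows the estimation error shrinking roughly in proportion to 1/B as the batch size grows, at the cost of more computation per update.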

In summary, the main difficulties faced in neural network training can be addressed by the following four tricks (illustrated together in the sketch after this list):

  • Choosing a proper learning rate—possibly an adaptive learning rate for each parameter
  • Choosing a good batch size—the variance of the gradient estimates depends on it
  • Choosing a good initialization of the weights
  • Choosing the right activation functions for hidden layers
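
To show how these four choices appear together in practice, here is a minimal sketch, assuming PyTorch; the dataset, layer sizes, and hyperparameters are illustrative assumptions rather than recommendations:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical data: 1,000 samples with 20 features, 3 classes
X = torch.randn(1000, 20)
y = torch.randint(0, 3, (1000,))

# Trick 2: the batch size controls the variance of the gradient estimates
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

# Trick 4: ReLU activations in the hidden layers
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

# Trick 3: He (Kaiming) initialization, suited to ReLU units
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        nn.init.zeros_(m.bias)

# Trick 1: Adam maintains an adaptive learning rate for each parameter
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for xb, yb in loader:  # one epoch of minibatch training
    optimizer.zero_grad()
    loss_fn(model(xb), yb).backward()
    optimizer.step()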

Now, let's briefly discuss the various heuristics and strategies that made training DNNs practically feasible and continue to make deep learning a huge success.
