Optimizers in TensorFlow

From your high school math, you may recall that a function's first derivative is zero at its maxima and minima. The gradient descent algorithm builds on this principle: the coefficients (weights and biases) are adjusted iteratively in the direction opposite to the gradient of the loss function, so that the loss decreases toward a minimum. In regression, we use gradient descent to minimize the loss function and obtain the coefficients. In this recipe, you will learn how to use the gradient descent optimizer of TensorFlow and some of its variants.
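As a minimal sketch of the idea, the following snippet fits a toy linear model with TensorFlow's stochastic gradient descent optimizer (`tf.keras.optimizers.SGD`). The data, learning rate, and step count are illustrative choices, not values from the recipe:

```python
import numpy as np
import tensorflow as tf

# Toy data generated from y = 3x + 2 (illustrative target coefficients).
x = np.linspace(-1.0, 1.0, 50).astype("float32")
y = 3.0 * x + 2.0

# Trainable coefficients: weight and bias, both initialized to zero.
w = tf.Variable(0.0)
b = tf.Variable(0.0)

# Plain gradient descent; variants such as SGD(momentum=...) or Adam
# can be swapped in here without changing the training loop.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(200):
    with tf.GradientTape() as tape:
        # Mean squared error between predictions and targets.
        loss = tf.reduce_mean(tf.square(w * x + b - y))
    # Compute d(loss)/dw and d(loss)/db, then step against the gradient.
    grads = tape.gradient(loss, [w, b])
    opt.apply_gradients(zip(grads, [w, b]))

print(float(w), float(b))  # approaches 3 and 2
```

Each iteration moves `w` and `b` a small step against the gradient of the loss, which is exactly the update rule the recipe describes.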
