Optimizers

In the previous section, we explored various activation functions and observed that the ReLU activation function gives better results when the model is trained over a large number of epochs.

In this section, we will look at the impact of varying the optimizer on the scaled dataset, while keeping ReLU as the activation function.

The various optimizers and their corresponding accuracies on the test dataset when run for 10 epochs are as follows:

Optimizer      Test dataset accuracy
SGD            88%
RMSprop        98.44%
Adam           98.4%
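As a rough sketch of how such a comparison can be set up in Keras (the preprocessing steps, the 1,000-unit hidden layer, and the batch size are assumptions standing in for the pipeline built in the earlier sections), each optimizer is passed by name to model.compile and the same ReLU-based model is retrained for 10 epochs:

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

# Load and scale the MNIST data (assumed to match the earlier preprocessing).
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
y_train, y_test = to_categorical(y_train), to_categorical(y_test)

# Rebuild, compile, and fit the same ReLU-based model once per optimizer.
for optimizer in ['sgd', 'rmsprop', 'adam']:
    model = Sequential([
        Dense(1000, activation='relu', input_dim=784),
        Dense(10, activation='softmax'),
    ])
    model.compile(optimizer=optimizer,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=10, batch_size=32, verbose=0)
    _, accuracy = model.evaluate(x_test, y_test, verbose=0)
    print(optimizer, accuracy)

Because the only change between runs is the optimizer argument, any difference in test accuracy can be attributed to the choice of optimizer.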

Now that we have seen that the RMSprop and Adam optimizers perform better than the stochastic gradient descent optimizer, let's look at another parameter within an optimizer that can be modified to improve the model's accuracy: the learning rate.

The learning rate of an optimizer can be varied by specifying it as follows:
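A minimal sketch, assuming the Adam optimizer and a learning rate of 0.001 as illustrative values, and reusing the model object defined above:

from keras.optimizers import Adam

# The learning rate is set through the lr argument of the optimizer, and the
# optimizer object (rather than a string) is passed to compile().
# The value 0.001 is only an illustrative choice; model is assumed to be the
# ReLU network built earlier.
model.compile(optimizer=Adam(lr=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])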

In the preceding code snippet, lr represents the learning rate. Typical learning rate values range between 0.001 and 0.1.

On the MNIST dataset, the accuracy did not improve further when we changed the learning rate; however, a lower learning rate typically requires more epochs to reach the same accuracy.
