Preventing Overfitting with Regularization

In the previous chapters, we learned about building a neural network, evaluating the results in TensorBoard, and varying the hyperparameters of the neural network model to improve its accuracy.

While hyperparameters in general help improve the accuracy of a model, certain configurations of hyperparameters cause the model to overfit the training data: the model fits the training data well but fails to generalize to the test data.

A key technique that can help us avoid overfitting while generalizing to an unseen dataset is regularization. Some of the key regularization techniques are as follows:

  • L2 regularization
  • L1 regularization
  • Dropout
  • Scaling
  • Batch normalization
  • Weight initialization
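
Before diving into each technique, here is a minimal sketch (with hypothetical names, not taken from the book) of the core idea behind L2 regularization: adding a penalty proportional to the squared weight pulls the learned weight toward zero during gradient descent, which discourages large weights that can memorize the training data.

```python
# Hypothetical illustration: gradient descent on a single weight, with and
# without an L2 penalty. The data loss is the quadratic (w - 3)^2; the
# L2-regularized loss adds lam * w^2, shrinking the optimum toward zero.
def fit_weight(lam, lr=0.1, steps=200):
    w = 10.0  # arbitrary starting weight
    for _ in range(steps):
        grad = 2 * (w - 3.0)   # gradient of the data loss
        grad += 2 * lam * w    # gradient of the L2 penalty lam * w^2
        w -= lr * grad
    return w

w_plain = fit_weight(lam=0.0)  # converges to 3.0, the unregularized optimum
w_reg = fit_weight(lam=0.5)    # shrunk toward zero: 3 / (1 + 0.5) = 2.0
```

The regularized solution is smaller in magnitude than the unregularized one; the same shrinking effect applies to every weight of a neural network when an L2 penalty is added to its loss.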

In this chapter, we will go through the following:

  • Intuition of overfitting and underfitting
  • Reducing overfitting using regularization
  • Improving the underfitting scenario