Summary

This chapter presented several approaches to preventing overfitting: penalized models using the L1 and L2 penalties, ensembles of simpler models, and dropout, in which variables and/or cases are randomly dropped to inject noise into training. We examined the role of penalties both in regression problems and in neural networks. In the next chapter, we will move into deep learning and deep neural networks, and see how to push the accuracy and performance of our predictive models even further.
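As a brief recap of the penalty idea, the sketch below (using scikit-learn, which is an assumption; the data and penalty strengths are illustrative, not from the chapter) contrasts ordinary least squares with L1- and L2-penalized regression on data where only three of twenty predictors matter. The L1 penalty drives irrelevant coefficients to exactly zero, while the L2 penalty only shrinks them toward zero:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
# only the first three features carry signal
y = 2 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.1, size=100)

ols = LinearRegression().fit(X, y)
l1 = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: zeroes out weak coefficients
l2 = Ridge(alpha=10.0).fit(X, y)  # L2 penalty: shrinks all coefficients

print("nonzero OLS coefficients:  ", int(np.sum(np.abs(ols.coef_) > 1e-6)))
print("nonzero Lasso coefficients:", int(np.sum(np.abs(l1.coef_) > 1e-6)))
```

With these settings the Lasso fit retains far fewer nonzero coefficients than the unpenalized fit, which is the variable-selection behavior that makes the L1 penalty useful against overfitting.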
