Summary

In this chapter, we covered quite a lot of ground, exploring the most experimental and scientific part of building linear regression and classification models.

Starting with the topic of generalization, we explained what can go wrong in a model and why it is always important to check your model's true performance using train/test splits, bootstrapping, and cross-validation (though we recommend cross-validation more for validation and model selection than for final evaluation itself).
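To make this concrete, here is a minimal sketch of that workflow, assuming a recent scikit-learn and using a synthetic dataset in place of your own X and y:

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic data: 10 features, of which only 5 are truly informative
X, y = make_regression(n_samples=200, n_features=10, n_informative=5,
                       noise=5.0, random_state=0)

# Hold out a test set for the final, unbiased performance check
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LinearRegression()

# Cross-validation on the training portion, for validation/model selection
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="r2")
print("CV R^2: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))

# Final evaluation on the untouched test split
model.fit(X_train, y_train)
print("Test R^2: %.3f" % model.score(X_test, y_test))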

Model complexity, as a source of variance in the estimates, gave us the occasion to introduce variable selection: first by greedy feature selection, whether univariate or multivariate, and then by regularization techniques such as Ridge, Lasso, and Elastic Net.
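As a quick reminder of how the three regularizers behave, the following sketch reuses the X_train and y_train arrays from the previous snippet; the alpha and l1_ratio values are purely illustrative:

from sklearn.linear_model import Ridge, Lasso, ElasticNet

for name, reg in [("Ridge", Ridge(alpha=1.0)),
                  ("Lasso", Lasso(alpha=0.1)),
                  ("ElasticNet", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    reg.fit(X_train, y_train)
    # Lasso and Elastic Net drive some coefficients exactly to zero,
    # effectively performing variable selection; Ridge only shrinks them
    n_nonzero = (reg.coef_ != 0).sum()
    print("%s: %d non-zero coefficients" % (name, n_nonzero))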

Finally, we demonstrated a powerful application of Lasso called stability selection, which, in light of our experience, we recommend you try on many feature selection problems.
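Since scikit-learn's former RandomizedLasso estimator has been removed in recent versions, here is a hand-rolled sketch of the idea, reusing the X and y arrays from the first snippet; the number of rounds, subsample fraction, alpha, and stability threshold are all illustrative choices:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
n_rounds, frac = 100, 0.75
selected = np.zeros(X.shape[1])

for _ in range(n_rounds):
    # Fit Lasso on a random subsample and record which features survive
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    lasso = Lasso(alpha=0.1).fit(X[idx], y[idx])
    selected += (lasso.coef_ != 0)

# Features selected in most rounds are considered stable
stability = selected / n_rounds
print("Stable features:", np.where(stability > 0.8)[0])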

In the next chapter, we will deal with the problem of incrementally growing datasets, proposing solutions that may work well even when your datasets are too large to fit comfortably into your computer's memory in a reasonable time.
