Generalization error and overfitting

So, how do we know that the model we have discussed is good? One obvious and ultimate criterion is its performance in practice.

One common problem that plagues more complex models, such as decision trees and neural nets, is overfitting: the model minimizes the desired metric on the provided data, but does a very poor job on a slightly different dataset in a practical deployment. Even the standard technique of splitting the dataset into a training set, used for deriving the model, and a test set, used for validating that the model works well on hold-out data, may not capture all of the changes that occur in deployment. Linear models such as ANOVA, logistic, and linear regression, by contrast, are usually relatively stable and less subject to overfitting. Still, you may find that any particular technique either works or doesn't work for your specific domain.
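As a minimal sketch of the train/test idea, the following plain Scala snippet splits a synthetic dataset, fits a least-squares line on the training portion only, and then compares the error on both portions; the dataset and model here are hypothetical placeholders, not code from this book:

    import scala.util.Random

    object TrainTestSplit {
      def main(args: Array[String]): Unit = {
        // Synthetic dataset: (feature, label) pairs with Gaussian noise
        val rng = new Random(42)
        val data = Seq.tabulate(100) { i =>
          val x = i.toDouble
          (x, 2.0 * x + 1.0 + rng.nextGaussian())
        }

        // Shuffle, then hold out 30% of the data as a test set
        val shuffled = rng.shuffle(data)
        val (train, test) = shuffled.splitAt((data.size * 0.7).toInt)

        // Fit a simple least-squares line on the training data only
        val n = train.size.toDouble
        val (sx, sy) = (train.map(_._1).sum, train.map(_._2).sum)
        val sxx = train.map { case (x, _) => x * x }.sum
        val sxy = train.map { case (x, y) => x * y }.sum
        val slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        val intercept = (sy - slope * sx) / n

        // Mean squared error of the fitted line on a given dataset
        def mse(ds: Seq[(Double, Double)]): Double =
          ds.map { case (x, y) =>
            math.pow(y - (slope * x + intercept), 2)
          }.sum / ds.size

        // A large gap between the two numbers is a symptom of overfitting
        println(f"train MSE = ${mse(train)}%.3f, test MSE = ${mse(test)}%.3f")
      }
    }

For a stable linear model like this one, the two errors stay close; a model that memorizes the training data would show a much lower train MSE than test MSE.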

Another case where generalization may fail is time drift: the data may change significantly over time, so that a model trained on old data no longer generalizes to new data in deployment. In practice, it is always a good idea to have several models in production and to constantly monitor their relative performance.
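The following hypothetical sketch, which I am adding purely for illustration, shows one way such monitoring could look: several deployed models are scored on each day's batch of labeled production data, so a degrading champion becomes visible as the data drifts. The models, batches, and drift pattern are all made up:

    object ModelMonitor {
      // A "model" here is just a scoring function from feature to prediction
      type Model = Double => Double

      // Mean squared error of a model on one batch of labeled data
      def mse(model: Model, batch: Seq[(Double, Double)]): Double =
        batch.map { case (x, y) => math.pow(y - model(x), 2) }.sum / batch.size

      def main(args: Array[String]): Unit = {
        val models = Map[String, Model](
          "champion"   -> (x => 2.0 * x + 1.0),
          "challenger" -> (x => 2.1 * x + 0.5)
        )

        // Each element stands for one day of labeled production data;
        // the later batches simulate gradual drift in the true relationship
        val dailyBatches: Seq[Seq[(Double, Double)]] = Seq.tabulate(5) { day =>
          Seq.tabulate(50) { i =>
            val x = i.toDouble
            (x, (2.0 + 0.1 * day) * x + 1.0) // slope drifts over time
          }
        }

        // Report every model's error on every batch; a widening gap
        // between models, or a rising trend, signals drift
        for ((batch, day) <- dailyBatches.zipWithIndex) {
          val report = models
            .map { case (name, m) => f"$name%s=${mse(m, batch)}%.2f" }
            .mkString(", ")
          println(s"day $day: $report")
        }
      }
    }

Running this shows the champion's error growing day by day while the challenger, whose slope happens to sit closer to the drifted data, eventually overtakes it.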

I will consider standard ways to avoid overfitting, such as hold-out datasets and cross-validation, in Chapter 7, Working with Graph Algorithms, and model monitoring in Chapter 9, NLP in Scala.
