Lasso, ridge, and elasticnet in caret

We have already discussed ordinary least squares (OLS) and its related techniques, lasso and ridge, in the context of linear regression. In this recipe, we will see how easily these techniques can be implemented in caret and how to tune the corresponding hyperparameters.

OLS is designed to find the estimates that minimize the squared distances between the observed values and the predictions of a linear model. There are three reasons why this approach might not be ideal:

  • If the number of predictors is greater than the number of samples (p > n), OLS cannot be used, because the coefficients are no longer uniquely determined. This is not usually a problem, since in most practical cases n > p.
  • If we have lots of variables of dubious importance, OLS will still estimate a coefficient for each of them. After the model is estimated, we need to do some variable selection and discard the irrelevant ones (usually based on t-statistics or p-values). With many variables this is very tedious to do manually, so we need an automatic approach.
  • Even if everything works as expected and we manage to remove the irrelevant features, the resulting OLS model might still not be very good. It could often be improved by accepting some bias (forcing some coefficients to be smaller than their OLS estimates) in exchange for a reduction in variance; the penalized objectives shown after this list make this trade-off explicit. This is the most important and most frequent reason why lasso, ridge, and elasticnet are used.
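
To make this trade-off concrete, it helps to write down the objective functions. OLS minimizes the residual sum of squares, while lasso, ridge, and elasticnet add a penalty on the size of the coefficients. In the parameterization used by glmnet (the engine caret relies on for these models), and up to a constant scaling of the loss term, the penalized objective is

\min_{\beta_0,\,\beta} \; \sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\right)^2 + \lambda\left(\alpha\sum_{j=1}^{p}\lvert\beta_j\rvert + \frac{1-\alpha}{2}\sum_{j=1}^{p}\beta_j^2\right)

Here, lambda controls the overall amount of shrinkage, and alpha blends the two penalties: alpha = 1 gives the lasso, alpha = 0 gives ridge, and intermediate values give the elasticnet. Because the lasso's absolute-value penalty can shrink coefficients exactly to zero, it performs variable selection automatically, addressing the second point in the list above.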
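What follows is a minimal sketch of how these models could be trained and tuned with caret, assuming the glmnet backend; the mtcars dataset and the grid of alpha and lambda values are illustrative choices only:

library(caret)
library(glmnet)

set.seed(100)

data(mtcars)  # small built-in dataset, used here purely for illustration

# 5-fold cross-validation to choose the hyperparameters
ctrl <- trainControl(method = "cv", number = 5)

# alpha = 0 is ridge, alpha = 1 is lasso; values in between are elasticnet
grid <- expand.grid(alpha  = seq(0, 1, by = 0.25),
                    lambda = 10^seq(-3, 1, length.out = 20))

model <- train(mpg ~ ., data = mtcars,
               method    = "glmnet",
               trControl = ctrl,
               tuneGrid  = grid)

model$bestTune                                     # best (alpha, lambda) pair
coef(model$finalModel, s = model$bestTune$lambda)  # coefficients at that lambda

Setting method = "glmnet" makes caret tune both alpha and lambda jointly via cross-validation; passing tuneLength instead of tuneGrid would let caret choose the candidate values itself.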