The lasso is another regularization alternative, and it overcomes a disadvantage of ridge regression by reducing the number of predictors in the final model. This time, the size of the regression coefficients is penalized with an L1 penalty:

$$\sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2 + \lambda\sum_{j=1}^{p}|\beta_j| = \mathrm{RSS} + \lambda\sum_{j=1}^{p}|\beta_j|$$
When λ is sufficiently large, the L1 penalty forces some of the coefficient estimates to be exactly zero, yielding more parsimonious models.
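This sparsity-inducing behavior can be illustrated with a minimal sketch using scikit-learn's `Lasso` on synthetic data (an assumption for illustration; the tuning parameter λ is called `alpha` in scikit-learn). Only three of the ten predictors truly influence the response, and a sufficiently large penalty zeroes out most of the rest:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 10 predictors, only the first 3 truly matter
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_coef = np.array([3.0, -2.0, 1.5] + [0.0] * 7)
y = X @ true_coef + rng.normal(scale=0.5, size=200)

# alpha plays the role of λ: a moderately large value drives the
# irrelevant coefficients to exactly zero, giving a sparser model
lasso = Lasso(alpha=0.1)
lasso.fit(X, y)

n_zero = int(np.sum(lasso.coef_ == 0.0))
print("coefficients:", np.round(lasso.coef_, 2))
print("exactly-zero coefficients:", n_zero)
```

Increasing `alpha` removes more predictors (at `alpha` large enough, all coefficients become zero), while ridge regression with the same data would shrink the irrelevant coefficients toward zero without ever eliminating them.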