Performance in Ensemble Learning

So far, we have seen that no two models produce identical results: different combinations of data and algorithms lead to different outcomes, and a combination that performs well on one problem may perform poorly on another. What if we built a model that takes several such combinations into account and produces a more generalized, and often better, result? Such a model is called an ensemble model.
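The idea can be sketched with a toy example: three hypothetical models, each 80% accurate but wrong on different samples, combined by majority vote. The per-model predictions below are illustrative, not from any real dataset.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model prediction lists by taking a majority
    vote across models for each sample."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

truth = [1, 0, 1, 1, 0]
preds = [
    [1, 0, 1, 0, 0],  # model A: wrong on sample 4
    [1, 0, 0, 1, 0],  # model B: wrong on sample 3
    [0, 0, 1, 1, 0],  # model C: wrong on sample 1
]

ensemble = majority_vote(preds)
print(ensemble)  # → [1, 0, 1, 1, 0]
```

Because each model errs on a different sample, the vote recovers every true label even though no individual model does, which is the intuition behind the ensemble methods covered in this chapter.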

In this chapter, we will learn about the following concepts related to ensemble modeling:

  • Bagging
  • Random forest
  • Boosting
  • Gradient boosting
  • Optimization of parameters 
