Summary

In this chapter, we examined various methods for constructing ensembles of machine learning algorithms. The main purposes of creating ensembles are these:

  • Reducing the error of the elementary algorithms
  • Expanding the set of possible hypotheses
  • Increasing the probability of reaching the global optimum during optimization

We saw that there are three main approaches to building ensembles: training elementary algorithms on different datasets and averaging their predictions (bagging); sequentially improving on the results of the previous, weaker algorithms (boosting); and learning a meta-algorithm from the outputs of the elementary algorithms (stacking). Note that the ensemble-building methods we've covered, except stacking, require that the elementary algorithms belong to the same class, and this is one of the main requirements for ensembles. It is also believed that boosting gives more accurate results than bagging but, at the same time, is more prone to overfitting. The main disadvantage of stacking is that it only begins to significantly improve on the results of the elementary algorithms when the number of training samples is relatively large.
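To make the bagging idea concrete, here is a minimal sketch (not taken from the chapter, and assuming scikit-learn is available) that trains each tree on a bootstrap resample of the training set and then averages the ensemble's votes, comparing the result against a single tree:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A synthetic binary classification problem for illustration only.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Bagging: each elementary algorithm sees a different bootstrap sample
# (drawn with replacement) of the same training set.
rng = np.random.default_rng(42)
n_estimators = 25
preds = []
for _ in range(n_estimators):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))  # bootstrap indices
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_tr[idx], y_tr[idx])
    preds.append(tree.predict(X_te))

# Aggregate by majority vote: average the 0/1 predictions and threshold.
ensemble_pred = (np.mean(preds, axis=0) > 0.5).astype(int)

single = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
single_pred = single.predict(X_te)

print("single tree accuracy:", np.mean(single_pred == y_te))
print("bagged ensemble accuracy:", np.mean(ensemble_pred == y_te))
```

Because each tree overfits a different resample, averaging their votes reduces the variance component of the error, which is exactly the effect bagging is designed to exploit.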

In the next chapter, we will discuss the fundamentals of artificial neural networks (ANNs). We'll look at the history of their development, go through the basic mathematical concepts used in ANNs, implement a multilayer perceptron (MLP) network and a simple convolutional neural network (CNN), and discuss what deep learning is and why it is so popular.
