Classification and regression trees and random forest

We will now introduce classification and regression trees (CART) and random forest. CART uses different statistical criteria to decide on tree splits. Random forest uses ensemble learning (a combination of CART trees) to improve classification using a voting principle.

CART

There are a number of differences between CART, when used for classification, and the family of algorithms we have just discussed. Here, we only briefly discuss the partitioning criterion and pruning.

In CART, the attribute to partition on is selected with the Gini index as the decision criterion. In classification trees, the Gini index is simply computed as 1 minus the sum of the squared probabilities of each class in a node. In formula notation:

Gini = 1 − Σᵢ pᵢ²

where pᵢ is the proportion of observations of class i in the node.

This is computationally more efficient than the information gain and the gain ratio. Note that CART does not necessarily create one partition per modality of the attribute; it can also merge modalities together for a partition (for example, modality A versus modalities B and C).
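To make this criterion concrete, here is a minimal sketch in R of how the Gini index of a node can be computed; the gini() function and the toy class vector are ours, purely for illustration, and are not part of any package.

    # Gini index of a node: 1 minus the sum of the squared class proportions
    gini <- function(classes) {
      p <- table(classes) / length(classes)   # proportion of each class in the node
      1 - sum(p^2)
    }

    # A node containing 6 observations of class "A" and 4 of class "B":
    gini(c(rep("A", 6), rep("B", 4)))          # 1 - (0.6^2 + 0.4^2) = 0.48

A pure node (containing a single class) has a Gini index of 0; the index grows as the classes become more evenly mixed.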

CART can also predict a numeric outcome, which is not the case for the algorithms we have seen previously. In the case of regression trees, CART performs regression and builds the tree in a way that minimizes the squared residuals. As this chapter is about classification, we will focus on classification trees here.

Regarding pruning, CART uses another dataset and generates pruned trees by iteratively simplifying the previous tree (replacing nodes with leaves). It then selects the tree that minimizes a measure called cost complexity. Cost complexity is computed as the error rate of the considered tree plus the number of leaves multiplied by a complexity parameter. The complexity parameter can be set by the user (or determined by CART) and ranges from 0 to 1; the higher its value, the smaller the resulting tree.
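As a sketch of how cost complexity pruning looks in practice, the rpart package (a common R implementation of CART-style trees) exposes the complexity parameter as cp; the use of the built-in iris data and the specific cp values below are our own illustrative choices, not taken from this chapter.

    library(rpart)

    # Grow a deliberately large classification tree by setting a very small cp
    fit <- rpart(Species ~ ., data = iris, method = "class",
                 control = rpart.control(cp = 0.001))

    # Display the estimated error associated with each value of the complexity parameter
    printcp(fit)

    # Prune with a larger cp: the higher the value, the smaller the resulting tree
    pruned <- prune(fit, cp = 0.05)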

Random forest

In random forest, an algorithm close to CART is used to produce trees. Differences include the use of bagging and the selection of random predictors at each partition of the tree. Each tree votes on the classification and the majority vote wins—how democratic!

Bagging

The algorithm generates T trees (the default in the function we will use is T = 500). For each of the T trees, a different random sample of observations (sampling with replacement) is drawn from the original observation pool. The size of each sample is always the same as the number of observations in the original data. The aim of bagging is to reduce the impact of measurement error (noise) in the data and therefore avoid overfitting.
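The sampling step itself is easy to reproduce in base R; the following sketch draws one bootstrap sample (the use of the iris data and the variable names are ours, for illustration only).

    # One bootstrap sample: draw n row indices with replacement from n observations
    n <- nrow(iris)
    boot_idx <- sample(n, size = n, replace = TRUE)
    boot_sample <- iris[boot_idx, ]

    # Roughly 63% of the original rows appear at least once; the rest are "out-of-bag"
    length(unique(boot_idx)) / n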

Random selection of attributes

For each partition, a random subset of the attributes is selected, and only these predictors are considered for the split. As with bagging, this procedure reduces overfitting to the training set and can therefore lead to better performance in predicting a test set, as shown in the sketch below.
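Putting bagging and the random selection of attributes together, a forest can be grown with the randomForest package; assuming this is the function alluded to above, ntree defaults to 500 and mtry controls how many attributes are randomly considered at each partition (the seed and the use of the iris data are illustrative choices of ours).

    library(randomForest)

    set.seed(42)
    # Grow 500 trees (the default); at each split, only mtry randomly chosen attributes
    # are considered (for classification, the default is the square root of the number
    # of predictors)
    forest <- randomForest(Species ~ ., data = iris, ntree = 500)

    # Each tree votes on the class of a new observation; the majority vote wins
    predict(forest, newdata = iris[c(1, 51, 101), ])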
