Turning a linear regression model into a curve – polynomial regression

In the previous sections, we assumed a linear relationship between explanatory and response variables. One way to account for the violation of linearity assumption is to use a polynomial regression model by adding polynomial terms:

$$y = w_0 + w_1 x + w_2 x^2 + \dots + w_d x^d$$

Here, $d$ denotes the degree of the polynomial. Although we can use polynomial regression to model a nonlinear relationship, it is still considered a multiple linear regression model because the model remains linear in the regression coefficients $w$.
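
To make this concrete, here is a minimal illustrative sketch (X_toy is just a made-up toy array) of how scikit-learn's PolynomialFeatures expands a single feature column into a bias term, the original value, and its square, so that the fitted model stays linear in its coefficients:

>>> import numpy as np
>>> from sklearn.preprocessing import PolynomialFeatures
>>> X_toy = np.array([[2.0], [3.0]])
>>> # each row becomes [1, x, x^2], that is [1., 2., 4.] and [1., 3., 9.]
>>> print(PolynomialFeatures(degree=2).fit_transform(X_toy))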

We will now discuss how to use the PolynomialFeatures transformer class from scikit-learn to add a quadratic term ($x^2$) to a simple regression problem with one explanatory variable, and compare the polynomial to the linear fit. The steps are as follows:

  1. Add a second degree polynomial term:
    >>> import numpy as np
    >>> import matplotlib.pyplot as plt
    >>> from sklearn.linear_model import LinearRegression
    >>> from sklearn.preprocessing import PolynomialFeatures
    >>> from sklearn.metrics import mean_squared_error, r2_score
    >>> X = np.array([258.0, 270.0, 294.0,
    ...               320.0, 342.0, 368.0,
    ...               396.0, 446.0, 480.0,
    ...               586.0])[:, np.newaxis]
    >>> y = np.array([236.4, 234.4, 252.8,
    ...               298.6, 314.2, 342.2,
    ...               360.8, 368.0, 391.2,
    ...               390.8])
    >>> lr = LinearRegression()
    >>> pr = LinearRegression()
    >>> quadratic = PolynomialFeatures(degree=2)
    >>> X_quad = quadratic.fit_transform(X)
  2. Fit a simple linear regression model for comparison:
    >>> lr.fit(X, y)
    >>> X_fit = np.arange(250,600,10)[:, np.newaxis]
    >>> y_lin_fit = lr.predict(X_fit)
  3. Fit a multiple regression model on the transformed features for polynomial regression:
    >>> pr.fit(X_quad, y)
    >>> y_quad_fit = pr.predict(quadratic.fit_transform(X_fit))
  4. Plot the results:
    >>> plt.scatter(X, y, label='training points')
    >>> plt.plot(X_fit, y_lin_fit, 
    ...          label='linear fit', linestyle='--')
    >>> plt.plot(X_fit, y_quad_fit,
    ...          label='quadratic fit')
    >>> plt.legend(loc='upper left')
    >>> plt.show()

In the resulting plot, we can see that the polynomial fit captures the relationship between the response and explanatory variable much better than the linear fit:

[Figure: training points with the linear (dashed) and quadratic fits]

As a quantitative comparison of the two fits, we can also compute the MSE and $R^2$ evaluation metrics:

>>> y_lin_pred = lr.predict(X)
>>> y_quad_pred = pr.predict(X_quad)
>>> print('Training MSE linear: %.3f, quadratic: %.3f' % (
...         mean_squared_error(y, y_lin_pred),
...         mean_squared_error(y, y_quad_pred)))
Training MSE linear: 569.780, quadratic: 61.330
>>> print('Training R^2 linear: %.3f, quadratic: %.3f' % (
...         r2_score(y, y_lin_pred),
...         r2_score(y, y_quad_pred)))
Training R^2 linear: 0.832, quadratic: 0.982

As we can see after executing the preceding code, the MSE decreased from 570 (linear fit) to 61 (quadratic fit), and the coefficient of determination reflects a closer fit of the quadratic model ($R^2 = 0.982$) as opposed to the linear fit ($R^2 = 0.832$) in this particular toy problem.
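
For reference, the coefficient of determination is simply $R^2 = 1 - SSE/SST$; as a quick illustrative check (reusing y and y_quad_pred from the listing above), we can reproduce the quadratic model's score by hand:

>>> sse = np.sum((y - y_quad_pred) ** 2)   # sum of squared errors
>>> sst = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
>>> print('R^2 by hand: %.3f' % (1 - sse / sst))   # matches r2_score: 0.982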

Modeling nonlinear relationships in the Housing Dataset

Now that we have discussed how to construct polynomial features to fit nonlinear relationships in a toy problem, let's take a look at a more concrete example and apply those concepts to the data in the Housing Dataset. By executing the following code, we will model the relationship between house prices (MEDV) and LSTAT (percent lower status of the population) using second-degree (quadratic) and third-degree (cubic) polynomials and compare them to a linear fit.

The code is as follows:

>>> X = df[['LSTAT']].values
>>> y = df['MEDV'].values
>>> regr = LinearRegression()

# create polynomial features
>>> quadratic = PolynomialFeatures(degree=2)
>>> cubic = PolynomialFeatures(degree=3)
>>> X_quad = quadratic.fit_transform(X)
>>> X_cubic = cubic.fit_transform(X)

# linear fit
>>> X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis]
>>> regr = regr.fit(X, y)
>>> y_lin_fit = regr.predict(X_fit)
>>> linear_r2 = r2_score(y, regr.predict(X))

# quadratic fit
>>> regr = regr.fit(X_quad, y)
>>> y_quad_fit = regr.predict(quadratic.fit_transform(X_fit))
>>> quadratic_r2 = r2_score(y, regr.predict(X_quad))

# cubic fit
>>> regr = regr.fit(X_cubic, y)
>>> y_cubic_fit = regr.predict(cubic.fit_transform(X_fit))
>>> cubic_r2 = r2_score(y, regr.predict(X_cubic))

# plot results
>>> plt.scatter(X, y, 
...             label='training points', 
...             color='lightgray')
>>> plt.plot(X_fit, y_lin_fit, 
...          label='linear (d=1), $R^2=%.2f$' 
...            % linear_r2, 
...          color='blue', 
...          lw=2, 
...          linestyle=':')
>>> plt.plot(X_fit, y_quad_fit, 
...          label='quadratic (d=2), $R^2=%.2f$' 
...            % quadratic_r2,
...          color='red', 
...          lw=2,
...          linestyle='-')
>>> plt.plot(X_fit, y_cubic_fit, 
...          label='cubic (d=3), $R^2=%.2f$' 
...            % cubic_r2,
...          color='green', 
...          lw=2, 
...          linestyle='--')
>>> plt.xlabel('% lower status of the population [LSTAT]')
>>> plt.ylabel('Price in $1000\'s [MEDV]')
>>> plt.legend(loc='upper right')
>>> plt.show()

As we can see in the resulting plot, the cubic fit captures the relationship between the house prices and LSTAT better than the linear and quadratic fits. However, we should be aware that adding more and more polynomial features increases the complexity of a model and therefore increases the chance of overfitting. Thus, in practice, it is always recommended that you evaluate the performance of the model on a separate test dataset to estimate the generalization performance:

[Figure: training points with the linear (d=1), quadratic (d=2), and cubic (d=3) fits of MEDV against LSTAT, labeled with their $R^2$ values]
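
As a rough, illustrative sketch of such an evaluation — the split ratio, the random_state, and the loop over degrees are arbitrary choices here, and a recent scikit-learn version with sklearn.model_selection is assumed — we could proceed as follows:

>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = \
...     train_test_split(X, y, test_size=0.3, random_state=0)
>>> for d in (1, 2, 3):
...     poly = PolynomialFeatures(degree=d)
...     regr = LinearRegression().fit(poly.fit_transform(X_train), y_train)
...     y_test_pred = regr.predict(poly.transform(X_test))
...     print('degree=%d, test R^2: %.3f' % (d, r2_score(y_test, y_test_pred)))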

In addition, polynomial features are not always the best choice for modeling nonlinear relationships. For example, just by looking at the MEDV-LSTAT scatterplot, we could propose that a log transformation of the LSTAT feature variable and the square root of MEDV may project the data onto a linear feature space suitable for a linear regression fit. Let's test this hypothesis by executing the following code:

# transform features
>>> X_log = np.log(X)
>>> y_sqrt = np.sqrt(y)

# fit features
>>> X_fit = np.arange(X_log.min()-1, 
...                   X_log.max()+1, 1)[:, np.newaxis]
>>> regr = regr.fit(X_log, y_sqrt)
>>> y_lin_fit = regr.predict(X_fit)
>>> linear_r2 = r2_score(y_sqrt, regr.predict(X_log))

# plot results
>>> plt.scatter(X_log, y_sqrt,
...             label='training points',
...             color='lightgray')
>>> plt.plot(X_fit, y_lin_fit, 
...          label='linear (d=1), $R^2=%.2f$' % linear_r2, 
...          color='blue', 
...          lw=2)
>>> plt.xlabel('log(% lower status of the population [LSTAT])')
>>> plt.ylabel('$\sqrt{Price \; in \; \$1000\'s [MEDV]}$')
>>> plt.legend(loc='lower left')
>>> plt.show()

After transforming the explanatory variable onto the log space and taking the square root of the target variable, we were able to capture the relationship between the two variables with a linear regression line that seems to fit the data better (judging by its $R^2$ value in the resulting plot) than any of the previous polynomial feature transformations:

[Figure: linear fit of the square-root-transformed MEDV against log(LSTAT)]
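
If we wanted to compare this model to the earlier polynomial fits on the original price scale, one possible approach (an illustrative sketch, reusing regr, X_log, and y from above) is to square the sqrt-scale predictions to map them back:

>>> y_back = regr.predict(X_log) ** 2   # undo the square-root transform
>>> print('R^2 on the original MEDV scale: %.3f' % r2_score(y, y_back))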

Dealing with nonlinear relationships using random forests

In this section, we are going to take a look at random forest regression, which is conceptually different from the previous regression models in this chapter. A random forest, which is an ensemble of multiple decision trees, can be understood as the sum of piecewise linear functions in contrast to the global linear and polynomial regression models that we discussed previously. In other words, via the decision tree algorithm, we are subdividing the input space into smaller regions that become more manageable.

Decision tree regression

An advantage of the decision tree algorithm is that it does not require any transformation of the features if we are dealing with nonlinear data. We remember from Chapter 3, A Tour of Machine Learning Classifiers Using Scikit-learn, that we grow a decision tree by iteratively splitting its nodes until the leaves are pure or a stopping criterion is satisfied. When we used decision trees for classification, we defined entropy as a measure of impurity to determine which feature split maximizes the Information Gain (IG), which can be defined as follows for a binary split:

$$IG(D_p, x_i) = I(D_p) - \frac{N_{left}}{N_p} I(D_{left}) - \frac{N_{right}}{N_p} I(D_{right})$$

Here, $x_i$ is the feature to perform the split, $N_p$ is the number of samples in the parent node, $I$ is the impurity function, $D_p$ is the subset of training samples in the parent node, and $D_{left}$ and $D_{right}$ are the subsets of training samples in the left and right child nodes after the split. Remember that our goal is to find the feature split that maximizes the information gain; in other words, we want to find the feature split that reduces the impurities in the child nodes the most. In Chapter 3, A Tour of Machine Learning Classifiers Using Scikit-learn, we used entropy as a measure of impurity, which is a useful criterion for classification. To use a decision tree for regression, we will replace entropy as the impurity measure of a node $t$ by the MSE:

$$I(t) = MSE(t) = \frac{1}{N_t} \sum_{i \in D_t} \left(y^{(i)} - \hat{y}_t\right)^2$$

Here, $N_t$ is the number of training samples at node $t$, $D_t$ is the training subset at node $t$, $y^{(i)}$ is the true target value, and $\hat{y}_t$ is the predicted target value (the sample mean):

$$\hat{y}_t = \frac{1}{N_t} \sum_{i \in D_t} y^{(i)}$$
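
To make this impurity-based splitting criterion more tangible, the following toy sketch (the numbers and the node_mse helper are made up purely for illustration) computes the MSE impurity of a parent node and the weighted impurity decrease, that is, the information gain, achieved by a binary split:

>>> def node_mse(y_node):
...     # within-node MSE around the node's mean prediction
...     return np.mean((y_node - np.mean(y_node)) ** 2)
>>> y_parent = np.array([1.0, 2.0, 8.0, 9.0])
>>> y_left, y_right = y_parent[:2], y_parent[2:]
>>> ig = node_mse(y_parent) \
...      - (len(y_left) / len(y_parent)) * node_mse(y_left) \
...      - (len(y_right) / len(y_parent)) * node_mse(y_right)
>>> print(ig)   # 12.5 - 0.5 * 0.25 - 0.5 * 0.25 = 12.25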

In the context of decision tree regression, the MSE is often also referred to as within-node variance, which is why the splitting criterion is also better known as variance reduction. To see what the line fit of a decision tree looks like, let's use the DecisionTreeRegressor implemented in scikit-learn to model the nonlinear relationship between the MEDV and LSTAT variables:

>>> from sklearn.tree import DecisionTreeRegressor
>>> X = df[['LSTAT']].values
>>> y = df['MEDV'].values
>>> tree = DecisionTreeRegressor(max_depth=3)
>>> tree.fit(X, y)
>>> sort_idx = X.flatten().argsort()
>>> lin_regplot(X[sort_idx], y[sort_idx], tree)
>>> plt.xlabel('% lower status of the population [LSTAT]')
>>> plt.ylabel('Price in $1000\'s [MEDV]')
>>> plt.show()

As we can see from the resulting plot, the decision tree captures the general trend in the data. However, a limitation of this model is that it does not capture the continuity and differentiability of the desired prediction. In addition, we need to be careful about choosing an appropriate value for the depth of the tree to not overfit or underfit the data; here, a depth of 3 seems to be a good choice:

[Figure: decision tree (max_depth=3) regression fit of MEDV against LSTAT]
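
One reasonable way to choose the depth is cross-validation; the following rough sketch (the candidate depths and the 5-fold setting are arbitrary choices, and sklearn.model_selection is assumed to be available) compares a few values:

>>> from sklearn.model_selection import cross_val_score
>>> for depth in (2, 3, 4, 6):
...     tree = DecisionTreeRegressor(max_depth=depth)
...     scores = cross_val_score(tree, X, y, scoring='r2', cv=5)
...     print('max_depth=%d, mean CV R^2: %.3f' % (depth, scores.mean()))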

In the next section, we will take a look at a more robust way for fitting regression trees: random forests.

Random forest regression

As we discussed in Chapter 3, A Tour of Machine Learning Classifiers Using Scikit-learn, the random forest algorithm is an ensemble technique that combines multiple decision trees. A random forest usually has a better generalization performance than an individual decision tree due to randomness, which helps to decrease the model variance. Other advantages of random forests are that they are less sensitive to outliers in the dataset and don't require much parameter tuning. The only parameter in random forests that we typically need to experiment with is the number of trees in the ensemble. The basic random forest algorithm for regression is almost identical to the random forest algorithm for classification that we discussed in Chapter 3, A Tour of Machine Learning Classifiers Using Scikit-learn. The only difference is that we use the MSE criterion to grow the individual decision trees, and the predicted target variable is calculated as the average prediction over all decision trees.

Now, let's use all the features in the Housing Dataset to fit a random forest regression model on 60 percent of the samples and evaluate its performance on the remaining 40 percent. The code is as follows:

>>> X = df.iloc[:, :-1].values
>>> y = df['MEDV'].values
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = \
...       train_test_split(X, y, 
...                        test_size=0.4, 
...                        random_state=1)

>>> from sklearn.ensemble import RandomForestRegressor
>>> forest = RandomForestRegressor(
...                                n_estimators=1000,
...                                criterion='mse', 
...                                random_state=1, 
...                                n_jobs=-1)
>>> forest.fit(X_train, y_train)
>>> y_train_pred = forest.predict(X_train)
>>> y_test_pred = forest.predict(X_test)
>>> print('MSE train: %.3f, test: %.3f' % (
...        mean_squared_error(y_train, y_train_pred),
...        mean_squared_error(y_test, y_test_pred)))
>>> print('R^2 train: %.3f, test: %.3f' % (
...        r2_score(y_train, y_train_pred),
...        r2_score(y_test, y_test_pred)))
MSE train: 1.642, test: 11.635
R^2 train: 0.960, test: 0.871

Unfortunately, we see that the random forest tends to overfit the training data. However, it's still able to explain the relationship between the target and explanatory variables relatively well ($R^2 = 0.871$ on the test dataset).
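
As a side note, random forests can also estimate their generalization performance from the bootstrap samples left out when growing each tree; an illustrative sketch using scikit-learn's oob_score option (the hyperparameters mirror the model above, but this snippet is only meant as an example) could look as follows:

>>> forest_oob = RandomForestRegressor(n_estimators=1000,
...                                    oob_score=True,
...                                    random_state=1,
...                                    n_jobs=-1)
>>> forest_oob.fit(X_train, y_train)
>>> # R^2 computed on the out-of-bag predictions
>>> print('OOB R^2: %.3f' % forest_oob.oob_score_)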

Lastly, let's also take a look at the residuals of the prediction:

>>> plt.scatter(y_train_pred,  
...             y_train_pred - y_train, 
...             c='black', 
...             marker='o', 
...             s=35,
...             alpha=0.5,
...             label='Training data')
>>> plt.scatter(y_test_pred,  
...             y_test_pred - y_test, 
...             c='lightgreen', 
...             marker='s', 
...             s=35,
...             alpha=0.7,
...             label='Test data')
>>> plt.xlabel('Predicted values')
>>> plt.ylabel('Residuals')
>>> plt.legend(loc='upper left')
>>> plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
>>> plt.xlim([-10, 50])
>>> plt.show()

As the $R^2$ coefficients already summarized, we can see that the model fits the training data better than the test data, as indicated by the outliers in the y-axis direction. Also, the distribution of the residuals does not seem to be completely random around the zero center point, indicating that the model is not able to capture all of the explanatory information. However, the residual plot indicates a large improvement over the residual plot of the linear model that we plotted earlier in this chapter:

[Figure: residuals of the random forest predictions for the training and test data]

Note

In Chapter 3, A Tour of Machine Learning Classifiers Using Scikit-learn, we also discussed the kernel trick, which can be used in combination with a support vector machine (SVM) for classification and is useful if we are dealing with nonlinear problems. Although a discussion is beyond the scope of this book, SVMs can also be used in nonlinear regression tasks. The interested reader can find more information about support vector machines for regression in an excellent report by S. R. Gunn: Support Vector Machines for Classification and Regression (ISIS technical report, 14, 1998). An SVM regressor is also implemented in scikit-learn, and more information about its usage can be found at http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html#sklearn.svm.SVR.
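
Purely as an illustration — the RBF kernel and the C value below are arbitrary, and the features are used unscaled, which SVMs are usually sensitive to — fitting scikit-learn's SVR on the same training and test split could look like this:

>>> from sklearn.svm import SVR
>>> svr = SVR(kernel='rbf', C=10.0)
>>> svr.fit(X_train, y_train)
>>> y_test_pred = svr.predict(X_test)
>>> print('SVR test R^2: %.3f' % r2_score(y_test, y_test_pred))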
