Testing the model

To test the generalization performance of the model, we calculate the mean squared error on the test data:

In [24]: y_pred = linreg.predict(X_test)
In [25]: metrics.mean_squared_error(y_test, y_pred)
Out[25]: 14.995852876582541
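For reference, the training error from the fitting step can be recomputed in the same way. The following is a minimal sketch rather than a cell from the book; it assumes that linreg, X_train, and y_train from the preceding section are still in scope:

# Recompute the mean squared error on the training set for comparison
y_train_pred = linreg.predict(X_train)
metrics.mean_squared_error(y_train, y_train_pred)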

We note that the mean squared error is a little lower on the test set than on the training set. This is good news, as we care mostly about the test error. However, from these numbers alone it is hard to judge how good the model really is. Perhaps it's better to plot the data:

In [26]: plt.figure(figsize=(10, 6))
... plt.plot(y_test, linewidth=3, label='ground truth')
... plt.plot(y_pred, linewidth=3, label='predicted')
... plt.legend(loc='best')
... plt.xlabel('test data points')
... plt.ylabel('target value')
Out[26]: <matplotlib.text.Text at 0x7ff46783c7b8>

This produces the following diagram:

This makes more sense! Here, we see the ground truth housing prices for all test samples in red and our predicted housing prices in blue. It's pretty close, if you ask me. It is interesting to note, though, that the model tends to be off the most for really high or really low housing prices, such as the peak values at data points 12, 18, and 42. We can formalize the amount of variance in the data that we were able to explain by calculating R²:

In [27]: plt.figure(figsize=(10, 6))
... plt.plot(y_test, y_pred, 'o')
... plt.plot([-10, 60], [-10, 60], 'k--')
... plt.axis([-10, 60, -10, 60])
... plt.xlabel('ground truth')
... plt.ylabel('predicted')

This will plot the ground truth prices, y_test, on the x axis and our predictions, y_pred, on the y axis. We also plot a diagonal reference line (using a black dashed line, 'k--'), whose purpose will become clear in a moment. In addition, we want to display the R² score and the mean squared error in a text box:

...      scorestr = r'R$^2$ = %.3f' % linreg.score(X_test, y_test)
...      errstr = 'MSE = %.3f' % metrics.mean_squared_error(y_test, y_pred)
...      plt.text(-5, 50, scorestr, fontsize=12)
...      plt.text(-5, 45, errstr, fontsize=12)
Out[27]: <matplotlib.text.Text at 0x7ff4642d0400>

This will produce the following diagram and is a professional way of plotting a model fit:

If our model were perfect, then all data points would lie on the dashed diagonal, since y_pred would always be equal to y_test. Deviations from the diagonal indicate that the model made some errors, or that there is some variance in the data that the model was not able to explain. Indeed, an R² score of 0.76 indicates that we were able to explain 76% of the scatter in the data, with a mean squared error of 14.996. These are performance measures we can use to compare the linear regression model to some more complicated ones.
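The R² value can also be computed directly from the metrics module instead of going through the estimator's score method. A minimal sketch, assuming y_test and y_pred from the cells above:

# Equivalent to linreg.score(X_test, y_test)
metrics.r2_score(y_test, y_pred)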
