Evaluating the perceptron classifier

In the following steps, you will be evaluating the trained perceptron on the test data:

  1. To find out how well our perceptron performs, we can calculate the accuracy score on all data samples:
In [10]: from sklearn.metrics import accuracy_score
... accuracy_score(p.predict(X), y)
Out[10]: 1.0

Perfect score!
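The evaluation above can be reproduced end to end with a short, self-contained sketch. Here we assume the perceptron was trained with scikit-learn on linearly separable blob data; the variable names `p`, `X`, and `y` mirror those used in the chapter, while the dataset parameters are illustrative:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score

# Two well-separated clusters stand in for the chapter's toy data
X, y = make_blobs(n_samples=100, centers=2, random_state=42)

# Train the perceptron on all samples, then score it on the same samples
p = Perceptron(random_state=42)
p.fit(X, y)
acc = accuracy_score(y, p.predict(X))
print(acc)  # on a linearly separable problem, this is typically 1.0
```

Note that scoring on the training data only tells us how well the model memorized the samples it saw, which is why the perfect score here should be taken with a grain of salt.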

  2. Let's have a look at the decision landscape by bringing back our plot_decision_boundary function from the earlier chapters:

In [11]: def plot_decision_boundary(classifier, X_test, y_test):
    ...      # create a mesh to plot in
    ...      h = 0.02  # step size in mesh
    ...      x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
    ...      y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
    ...      xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
    ...                           np.arange(y_min, y_max, h))
    ...
    ...      X_hypo = np.c_[xx.ravel().astype(np.float32),
    ...                     yy.ravel().astype(np.float32)]
    ...      zz = classifier.predict(X_hypo)
    ...      zz = zz.reshape(xx.shape)
    ...
    ...      plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
    ...      plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)
  3. We can plot the decision landscape by passing the perceptron object (p), the data (X), and the corresponding target labels (y):

In [12]: plot_decision_boundary(p, X, y)

This will produce the following graph:

The preceding plot shows the linear decision boundary that separates the two classes.

Of course, this problem was rather simple, even for a simple linear classifier. Moreover, you might have noticed that we didn't split the data into training and test sets. If the data weren't linearly separable, the story might be a little different.
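For a more honest estimate of how the perceptron generalizes, we would hold out part of the data for testing. The following is a minimal sketch of that workflow using scikit-learn's train_test_split; the blob dataset and the 80/20 split ratio are illustrative assumptions, not taken from the chapter:

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative toy data, standing in for the chapter's dataset
X, y = make_blobs(n_samples=100, centers=2, random_state=42)

# Hold out 20% of the samples as a test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit on the training set only, then score on the unseen test set
p = Perceptron(random_state=42)
p.fit(X_train, y_train)
test_acc = accuracy_score(y_test, p.predict(X_test))
```

Evaluating on the held-out X_test is what reveals whether a decision boundary that looks perfect on the training data actually carries over to new samples.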
