Training and testing the MLP classifier

This is the easy part. Training the MLP classifier is the same as with all other classifiers:

In [11]: mlp.train(X, cv2.ml.ROW_SAMPLE, y)
Out[11]: True

The same goes for predicting target labels:

In [12]: _, y_hat = mlp.predict(X)

The easiest way to measure accuracy is by using scikit-learn's helper function:

In [13]: from sklearn.metrics import accuracy_score
... accuracy_score(y_hat.round(), y)
Out[13]: 0.88
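
Since y is one-hot encoded in this example, an equivalent check is to collapse the network outputs back to class labels with NumPy's argmax and compare them to the raw labels. This is a minimal sketch, assuming y_raw still holds the original integer labels (the same array passed to the plot call further below):

import numpy as np
from sklearn.metrics import accuracy_score

# pick the output neuron with the highest activation as the predicted class label
y_hat_labels = np.argmax(y_hat, axis=1)

# compare against the raw (non-encoded) target labels
accuracy_score(np.asarray(y_raw).ravel(), y_hat_labels)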

It looks like we were able to increase our performance from 81% with a single perceptron to 88% with an MLP consisting of 10 hidden-layer neurons and 2 output neurons. In order to see what changed, we can look at the decision boundary one more time:

In [14]: def plot_decision_boundary(classifier, X_test, y_test):
...          # create a mesh to plot in
...          h = 0.02  # step size in mesh
...          x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
...          y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
...          xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
...                               np.arange(y_min, y_max, h))
...
...          X_hypo = np.c_[xx.ravel().astype(np.float32),
...                         yy.ravel().astype(np.float32)]
...          _, zz = classifier.predict(X_hypo)

However, there is a problem here: zz is now a one-hot encoded matrix. To transform the one-hot encoding into the number that corresponds to the class label (zero or one), we can use NumPy's argmax function:

...          zz = np.argmax(zz, axis=1)

Then the rest stays the same:

...          zz = zz.reshape(xx.shape)
...          plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
...          plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)

Then we can call the function like this:

In [15]: plot_decision_boundary(mlp, X, y_raw)

The resulting plot shows the decision boundary of the MLP with one hidden layer.

And voilà! The decision boundary is no longer a straight line. That being said, the performance increase was rather modest, and you might have expected something more drastic. But nobody said we have to stop here!

There are at least two different things we can try from here on out (a rough sketch of both options follows this list):

  • We can add more neurons to the hidden layer. You can do this by replacing n_hidden on line 6 with a larger value and running the code again. Generally speaking, the more neurons you put in the network, the more powerful the MLP will be.
  • We can add more hidden layers. It turns out that this is where neural networks really get their power from.
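
As a rough sketch of what both changes could look like with OpenCV's ANN_MLP API: the hidden-layer sizes below are illustrative choices, and the activation parameters, training method, and termination criteria are assumptions modeled on the earlier setup cells rather than a verbatim copy of them. X and y are the training samples and one-hot targets from before:

import cv2
import numpy as np

mlp = cv2.ml.ANN_MLP_create()

# option 1: more neurons in the single hidden layer (2 inputs, 64 hidden, 2 outputs)
mlp.setLayerSizes(np.array([2, 64, 2]))

# option 2: an additional hidden layer (2 inputs, two hidden layers of 10, 2 outputs)
# mlp.setLayerSizes(np.array([2, 10, 10, 2]))

mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM, 2.5, 1.0)
mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
mlp.setTermCriteria((cv2.TERM_CRITERIA_COUNT + cv2.TERM_CRITERIA_EPS,
                     300, 0.00001))

mlp.train(X, cv2.ml.ROW_SAMPLE, y)

Retraining and re-running the accuracy check after either change shows whether the extra capacity actually helps on this dataset.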

Hence, this is where I should tell you about deep learning.
