Building the support vector machine

In OpenCV, SVMs are built, trained, and scored in exactly the same way as every other learning algorithm we have encountered so far, using the following four steps:

  1. Call the create method to construct a new SVM:
In [6]: import cv2
... svm = cv2.ml.SVM_create()

As shown in the following command, there are different modes in which we can operate an SVM. For now, all we care about is the case we discussed in the previous example: an SVM that tries to partition the data with a straight line. This can be specified with the setKernel method:

In [7]: svm.setKernel(cv2.ml.SVM_LINEAR)
  2. Call the classifier's train method to find the optimal decision boundary:
In [8]: svm.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
Out[8]: True
  3. Call the classifier's predict method to predict the target labels of all data samples in the test set:
In [9]: _, y_pred = svm.predict(X_test)
  4. Use scikit-learn's metrics module to score the classifier:
In [10]: from sklearn import metrics
... metrics.accuracy_score(y_test, y_pred)
Out[10]: 0.80000000000000004

Congratulations, we correctly classified 80 percent of the test samples!
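Under the hood, accuracy_score simply computes the fraction of predictions that agree with the true labels. A minimal sketch with made-up label vectors (the values here are purely illustrative, not the book's data):

```python
import numpy as np
from sklearn import metrics

# Hypothetical true and predicted labels, for illustration only
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

# accuracy_score equals the mean of elementwise matches: 4 of 5 agree
acc = metrics.accuracy_score(y_true, y_pred)
print(acc)                        # 0.8
print(np.mean(y_true == y_pred))  # 0.8, the same value
```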

Of course, so far we have no idea what happened under the hood. For all we know, we might just as well have gotten these commands from a web search and typed them into the terminal without really knowing what we were doing. But that is not who we want to be. Getting a system to work is one thing; understanding it is another. Let's get to that!
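For reference, the same four steps can be sketched end to end with scikit-learn's SVC using a linear kernel, which mirrors cv2.ml.SVM_LINEAR. The synthetic two-blob dataset and every parameter value below are assumptions for the sketch, not the book's data:

```python
import numpy as np
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data: two Gaussian blobs (illustrative only)
rng = np.random.RandomState(42)
X = np.vstack([rng.randn(50, 2) + [2, 2],
               rng.randn(50, 2) - [2, 2]]).astype(np.float32)
y = np.hstack([np.zeros(50, dtype=np.int32),
               np.ones(50, dtype=np.int32)])

# Step 0: split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# Steps 1-2: create a linear-kernel SVM and train it
clf = SVC(kernel='linear')
clf.fit(X_train, y_train)

# Step 3: predict labels for the test set
y_pred = clf.predict(X_test)

# Step 4: score the classifier
acc = metrics.accuracy_score(y_test, y_pred)
print(acc)
```

Because the two blobs are well separated, the linear decision boundary should classify nearly all test points correctly.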
