Bootstrapping the model

An interesting way to improve the performance of our model is to use bootstrapping. This idea was actually applied in one of the first papers on using SVMs for pedestrian detection. So let's pay a little tribute to the pioneers and try to understand what they did.

Their idea was quite simple. After training the SVM on the training set, they scored the model and found that it produced some false positives. Remember that a false positive means the model predicted a positive label (+) for a sample that was really a negative (-). In our context, this means the SVM falsely believed an image to contain a pedestrian. If this happens for a particular image in the dataset, that example is clearly troublesome. Hence, we should add it to the training set and retrain the SVM with the additional troublemaker so that the algorithm learns to classify it correctly. This procedure can be repeated until the SVM gives satisfactory performance.
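
In NumPy terms, the false positives are just a Boolean mask over the labels. Here is a minimal sketch with made-up labels, using the same +1/-1 convention as the code later in this section:

import numpy as np

# Hypothetical ground-truth and predicted labels for five samples
y_true = np.array([-1, -1, +1, -1, +1])
y_pred = np.array([+1, -1, +1, -1, -1])

# A false positive is a true negative (-1) that was predicted positive (+1)
false_pos = np.logical_and(y_true == -1, y_pred == +1)
print(false_pos)  # [ True False False False False]

Only the first sample is a false positive here; the last sample is a false negative (a missed pedestrian), which the bootstrapping procedure does not touch.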

We will talk about bootstrapping more formally in Chapter 11, Selecting the Right Model with Hyperparameter Tuning.

Let's do that. We will repeat the training procedure a maximum of three times. After each iteration, we identify the false positives in the test set and add them to the training set for the next iteration. We can break this up into several steps:

  1. Train and score the model as follows:
In [20]: score_train = []
...      score_test = []
...      for j in range(3):
...          svm = train_svm(X_train, y_train)
...          score_train.append(score_svm(svm, X_train, y_train))
...          score_test.append(score_svm(svm, X_test, y_test))

  2. Find the false positives in the test set. If there aren't any, we're done:
...          _, y_pred = svm.predict(X_test)
...          false_pos = np.logical_and((y_test.ravel() == -1),
...                                     (y_pred.ravel() == 1))
...          if not np.any(false_pos):
...              print('no more false positives: done')
...              break
  3. Append the false positives to the training set, and then repeat the procedure:
...          X_train = np.concatenate((X_train,
...                                    X_test[false_pos, :]),
...                                   axis=0)
...          y_train = np.concatenate((y_train, y_test[false_pos]),
...                                   axis=0)
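
For reference, here are the three steps assembled into a single, self-contained snippet. The train_svm and score_svm helpers were defined earlier in the chapter; the sketch below assumes they wrap OpenCV's cv2.ml SVM and scikit-learn's accuracy score, which may differ slightly from the exact definitions used before:

import cv2
import numpy as np
from sklearn import metrics

def train_svm(X_train, y_train):
    # Train an SVM on row-wise feature vectors (one sample per row)
    svm = cv2.ml.SVM_create()
    svm.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
    return svm

def score_svm(svm, X, y):
    # Accuracy: the fraction of samples whose predicted label matches y
    _, y_pred = svm.predict(X)
    return metrics.accuracy_score(y, y_pred)

score_train = []
score_test = []
for j in range(3):
    # Step 1: train and score the model
    svm = train_svm(X_train, y_train)
    score_train.append(score_svm(svm, X_train, y_train))
    score_test.append(score_svm(svm, X_test, y_test))

    # Step 2: find the false positives in the test set; stop if there are none
    _, y_pred = svm.predict(X_test)
    false_pos = np.logical_and(y_test.ravel() == -1, y_pred.ravel() == 1)
    if not np.any(false_pos):
        print('no more false positives: done')
        break

    # Step 3: append the false positives to the training set and repeat
    X_train = np.concatenate((X_train, X_test[false_pos, :]), axis=0)
    y_train = np.concatenate((y_train, y_test[false_pos]), axis=0)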

This will allow us to improve the model over time:

In [21]: score_train
Out[21]: [1.0, 1.0]
In [22]: score_test
Out[22]: [0.64615384615384619, 1.0]

Here, we achieved 64.6 percent test accuracy in the first round but were able to bring that up to a perfect 100 percent in the second round (note that the samples the model got wrong in the first round are now part of its training set, which is exactly why it can classify them correctly in the second).

You can find the original paper on ResearchGate at https://www.researchgate.net/publication/3703226_Pedestrian_detection_using_wavelet_templates. The paper, Pedestrian detection using wavelet templates, was presented at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) in 1997 by M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio from MIT, doi: 10.1109/CVPR.1997.609319.