Feature extraction

Generally speaking, a feature is an area of interest in an image: a measurable property that tells us something meaningful about what the image represents. Usually, the grayscale value of an individual pixel (the raw data) does not tell us much about the image as a whole. Instead, we need to derive a more informative property.

For example, knowing that there are patches in the image that look like eyes, a nose, and a mouth allows us to reason about how likely it is that the image represents a face. In this case, the amount of information required to answer the question we care about (are we looking at a face?) is drastically reduced: we only need to check whether the image contains two eyes, a nose, and a mouth.

Lower-level features, such as the presence of edges, corners, blobs, or ridges, are often more generally informative, and some features may be better suited than others depending on the application. Once we have decided how to describe our feature of choice, we need a way to check whether an image contains such features, and where.

Feature detection

The process of finding areas of interest in an image is called feature detection. OpenCV provides a whole range of feature detection algorithms, such as these:

  • Harris corner detection: Knowing that corners are regions with high-intensity changes in all directions, Harris and Stephens came up with a fast way of finding such regions. This algorithm is implemented as cv2.cornerHarris in OpenCV.
  • Shi-Tomasi corner detection: Shi and Tomasi proposed their own criterion for what makes good features to track, and it usually does better than Harris corner detection by finding the N strongest corners (a short usage sketch for both corner detectors follows this list). This algorithm is implemented as cv2.goodFeaturesToTrack in OpenCV.
  • Scale-Invariant Feature Transform (SIFT): Corner detection is not sufficient when the scale of the image changes. To this end, Lowe developed a method to describe keypoints in an image independently of their orientation and size (hence the name scale-invariant). The algorithm is implemented as cv2.SIFT in OpenCV 2, but was moved to the extra modules in OpenCV 3 because the algorithm is patented.
  • Speeded-Up Robust Features (SURF): SIFT has proven to be really good, but it is not fast enough for many applications. This is where SURF comes in, replacing the expensive Laplacian of Gaussian in SIFT with a box filter. The algorithm is implemented as cv2.SURF in OpenCV 2 but, like SIFT, was moved to the extra modules in OpenCV 3 because the algorithm is patented.
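
The following is a minimal sketch of how the two corner detectors above are called; the file name box.png and all parameter values are placeholders chosen only for illustration:

import cv2
import numpy as np

img = cv2.imread('box.png', cv2.IMREAD_GRAYSCALE)

# Harris expects a float32 image and returns a corner-response map,
# which we threshold to keep only the strong corners
harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corner_mask = harris > 0.01 * harris.max()

# Shi-Tomasi directly returns the N strongest corners as (x, y) coordinates
corners = cv2.goodFeaturesToTrack(img, maxCorners=25, qualityLevel=0.01,
                                  minDistance=10)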

OpenCV has support for even more feature descriptors, such as Features from Accelerated Segment Test (FAST), Binary Robust Independent Elementary Features (BRIEF), and Oriented FAST and Rotated BRIEF (ORB), the latter being an open-source alternative to SIFT and SURF (a brief usage sketch follows).
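
As a rough sketch of how ORB might be used (assuming the OpenCV 2.4 Python bindings used throughout this chapter; OpenCV 3 creates the detector with cv2.ORB_create instead, and the test image name is again a placeholder):

import cv2

orb = cv2.ORB()  # OpenCV 3: orb = cv2.ORB_create()
img = cv2.imread('box.png', cv2.IMREAD_GRAYSCALE)  # any test image will do
key_points, descriptors = orb.detectAndCompute(img, None)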

Detecting features in an image with SURF

In the remainder of this chapter, we will make use of the SURF detector.

The SURF algorithm can be roughly divided into two distinct steps: detecting interest points and formulating their descriptors. SURF relies on a Hessian-based detector for interest point detection, which requires setting a min_hessian threshold. This threshold determines how large the output of the Hessian filter must be for a point to be used as an interest point. A larger value results in fewer but (theoretically) more salient interest points, whereas a smaller value results in more numerous but less salient points. Feel free to experiment with different values. In this chapter, we will choose a value of 400, as seen earlier in FeatureMatching.__init__, where we created a SURF descriptor with the following code snippet:

self.min_hessian = 400  # larger values yield fewer, but more salient, keypoints
self.SURF = cv2.SURF(self.min_hessian)  # OpenCV 2.4.x API

Both the keypoints and their descriptors can then be obtained in a single step, for example, on an input image img_query without using a mask (None):

key_query, desc_query = self.SURF.detectAndCompute(img_query, None)

In OpenCV 2.4.8 or later, we can now easily draw the keypoints with the following function:

imgOut = cv2.drawKeypoints(img_query, key_query, None, (255, 0, 0), 4)
cv2.imshow('keypoints', imgOut)
cv2.waitKey()

Note

Make sure that you check len(key_query) first, as SURF might return a large number of features. If you care only about drawing the keypoints, try setting min_hessian to a larger value until the number of returned keypoints is manageable, as in the sketch below.
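
One way to do this programmatically is sketched here; the helper name and the max_keypoints limit are hypothetical and only illustrate the idea of raising the threshold until the keypoint count becomes manageable:

import cv2

def detect_manageable_keypoints(img, min_hessian=400, max_keypoints=500):
    # keep doubling the Hessian threshold until SURF returns few enough keypoints
    while True:
        surf = cv2.SURF(min_hessian)  # OpenCV 2.4.x API, as used in this chapter
        keypoints = surf.detect(img, None)
        if len(keypoints) <= max_keypoints:
            return keypoints
        min_hessian *= 2  # larger threshold -> fewer, more salient keypoints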

If our OpenCV distribution is older than that, we might have to write our own drawing function; a possible version is sketched below. Note that SURF is protected by patent laws, so if you wish to use SURF in a commercial application, you will be required to obtain a license.
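
If you do need to roll your own drawing function, a minimal sketch might look like the following (the function name and styling are our own; it simply draws one circle per keypoint, scaled by the keypoint size):

import cv2

def draw_keypoints(img, keypoints, color=(255, 0, 0)):
    # assume a grayscale input; convert to BGR so the circles can be colored
    out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        radius = max(1, int(kp.size / 2))
        cv2.circle(out, (x, y), radius, color, 1)
    return out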
