Learning the Harris corner-detection algorithm

One of the most well-known corner- and edge-detection algorithms is the Harris corner-detection algorithm, which is implemented in the cornerHarris function in OpenCV. Here is how this function is used:

Mat image = imread("Test.png"); 
cvtColor(image, image, COLOR_BGR2GRAY); 
 
Mat result; 
int blockSize = 2; 
int ksize = 3; 
double k = 0.04; 
cornerHarris(image, 
             result, 
             blockSize, 
             ksize, 
             k);

blockSize determines the width and height of the square neighborhood over which the Harris corner-detection algorithm calculates its 2 x 2 gradient-covariance matrix. ksize is the kernel size of the Sobel operator used internally by the Harris algorithm, and k is the free Harris detector parameter. The preceding example demonstrates one of the most commonly used sets of Harris algorithm parameters; for more detailed information about the Harris corner-detection algorithm and its internal mathematics, you can refer to the OpenCV documentation. It's important to note that the result object from the preceding example code is not displayable unless it is normalized using the following example code:

normalize(result, result, 0.0, 1.0, NORM_MINMAX, -1); 

Here is the result of the Harris corner-detection algorithm from the preceding example, when normalized and displayed using the OpenCV imshow function:

The OpenCV library includes another famous corner-detection algorithm, called Good Features to Track (GFTT). You can use the goodFeaturesToTrack function in OpenCV to use the GFTT algorithm to detect corners, as seen in the following example:

Mat image = imread("Test.png"); 
Mat imgGray; 
cvtColor(image, imgGray, COLOR_BGR2GRAY); 
 
vector<Point2f> corners; 
int maxCorners = 500; 
double qualityLevel = 0.01; 
double minDistance = 10; 
Mat mask; 
int blockSize = 3; 
int gradientSize = 3; 
bool useHarrisDetector = false; 
double k = 0.04; 
goodFeaturesToTrack(imgGray, 
                    corners, 
                    maxCorners, 
                    qualityLevel, 
                    minDistance, 
                    mask, 
                    blockSize, 
                    gradientSize, 
                    useHarrisDetector, 
                    k); 

As you can see, this function requires a single-channel image, so, before doing anything else, we have converted our BGR image to grayscale. The maxCorners value limits the number of detected corners based on how strong they are as candidates. Setting maxCorners to zero or a negative value means all detected corners are returned, which is not a good idea if you are looking for the best corners in an image, so make sure you set a reasonable value based on the environment in which you'll be using it. qualityLevel is the internal threshold for accepting detected corners: a corner is kept only if its quality measure is at least qualityLevel times the measure of the strongest corner in the image. minDistance is the minimum allowed distance (in pixels) between returned corners; this is another parameter that depends entirely on the environment the algorithm will be used in. You have already seen the remaining parameters in the previous algorithms from this chapter and the preceding one. It's important to note that this function also incorporates the Harris corner-detection algorithm; by setting useHarrisDetector to true, the resultant features will be calculated using the Harris corner-detection algorithm.

You might have already noticed that the goodFeaturesToTrack function returns a set of Point objects (Point2f to be precise) instead of a Mat object. The returned corners vector simply contains the best possible corners detected in the image using the GFTT algorithm, so we can use the drawMarker function to visualize the results properly, as seen in the following example:

Scalar color(0, 0, 255); 
MarkerTypes markerType = MARKER_TILTED_CROSS; 
int markerSize = 8; 
int thickness = 2; 
for(size_t i=0; i<corners.size(); i++) 
{ 
    drawMarker(image, 
               corners[i], 
               color, 
               markerType, 
               markerSize, 
               thickness); 
}

Here is the result of the preceding example and detecting corners using the goodFeaturesToTrack function:

You can also use the GFTTDetector class to detect corners, in a similar way to the goodFeaturesToTrack function. The difference here is that the returned type is a vector of KeyPoint objects. Many OpenCV functions and classes use the KeyPoint class to return various properties of detected keypoints, instead of just a Point object that corresponds to the location of the keypoint. Let's see what this means with the following:

Ptr<GFTTDetector> detector =  
    GFTTDetector::create(maxCorners, 
                         qualityLevel, 
                         minDistance, 
                         blockSize, 
                         gradientSize, 
                         useHarrisDetector, 
                         k); 
 
vector<KeyPoint> keypoints; 
detector->detect(image, keypoints); 

The parameters passed to the GFTTDetector::create function are no different from the parameters we used with the goodFeaturesToTrack function. You can also omit all of the given parameters and simply write the following to use the default and optimal values for all parameters:

Ptr<GFTTDetector> detector = GFTTDetector::create();

But let's get back to the KeyPoint class and the result of the detect function from the previous example. Recall that we used a loop to go through all of the detected points and draw them on the image. There is no need for this if we use the GFTTDetector class, since we can use an existing OpenCV function called drawKeypoints to properly visualize all of the detected keypoints. Here's how this function is used:

Mat outImg; 
drawKeypoints(image, 
              keypoints, 
              outImg); 

The drawKeypoints function goes through all KeyPoint objects in the keypoints vector, draws them in random colors on image, and saves the result in the outImg object, which we can then display by calling the imshow function. The following image is the result of the drawKeypoints function when it is called using the preceding example code:

The drawKeypoints function can be provided with an additional (optional) color parameter in case we want to use a specific color instead of random colors. In addition, we can also provide a flag parameter that can be used to further enhance the visualized result of the detected keypoints. For instance, if the flag is set to DRAW_RICH_KEYPOINTS, the drawKeypoints function will also use the size and orientation values in each detected keypoint to visualize more properties of keypoints.

Each KeyPoint object may contain the following properties, depending on the algorithm used for calculating it:
- pt: A Point2f object containing the coordinates of the keypoint.
- size: The diameter of the meaningful keypoint neighborhood.
- angle: The orientation of the keypoint in degrees, or -1 if not applicable.
- response: The strength of the keypoint determined by the algorithm.
- octave: The octave or pyramid layer from which the keypoint was extracted. Using octaves allows us to deal with keypoints detected from the same image but in different scales. Algorithms that set this value usually require an input octave parameter, which is used to define the number of octaves (or scales) of an image that is used to extract keypoints.
- class_id: This integer parameter can be used to group keypoints, for instance, when keypoints belong to a single object, they can have the same optional class_id value.

In addition to the Harris and GFTT algorithms, you can also use the FAST corner-detection algorithm through the FastFeatureDetector class, and the AGAST corner-detection algorithm (Adaptive and Generic Corner Detection Based on the Accelerated Segment Test) through the AgastFeatureDetector class, in much the same way as we used the GFTTDetector class. It's important to note that all of these classes belong to the features2d module of the OpenCV library and are subclasses of the Feature2D class; therefore, all of them provide a static create function that creates an instance of the corresponding class, and a detect function that extracts keypoints from an image.

Here is an example code demonstrating the usage of FastFeatureDetector using all of its default parameters:

int threshold = 10; 
bool nonmaxSuppr = true; 
int type = FastFeatureDetector::TYPE_9_16; 
Ptr<FastFeatureDetector> fast = 
        FastFeatureDetector::create(threshold, 
                                    nonmaxSuppr, 
                                    type); 
 
vector<KeyPoint> keypoints; 
fast->detect(image, keypoints);

Try increasing the threshold value if too many corners are detected. Also, make sure to check out the OpenCV documentation for more information about the type parameter used in the FastFeatureDetector class. As mentioned previously, you can simply omit all of the parameters in the preceding example code to use the default values for all parameters.

Using the AgastFeatureDetector class is extremely similar to using FastFeatureDetector. Here is an example:

int threshold = 10; 
bool nonmaxSuppr = true; 
int type = AgastFeatureDetector::OAST_9_16; 
Ptr<AgastFeatureDetector> agast = 
        AgastFeatureDetector::create(threshold, 
                                     nonmaxSuppr, 
                                     type); 
 
vector<KeyPoint> keypoints; 
agast->detect(image, keypoints); 

Before moving on to edge-detection algorithms, it's worth noting that OpenCV also contains the AGAST and FAST functions, which can be called directly without creating a detector instance. However, the class implementations of these algorithms have the huge advantage of allowing you to switch between algorithms using polymorphism. Here's a simple example that demonstrates how we can use polymorphism to benefit from the class implementations of corner-detection algorithms:

Ptr<Feature2D> detector; 
switch (algorithm) 
{ 
 
case 1: 
    detector = GFTTDetector::create(); 
    break; 
 
case 2: 
    detector = FastFeatureDetector::create(); 
    break; 
 
case 3: 
    detector = AgastFeatureDetector::create(); 
    break; 
 
default: 
    cout << "Wrong algorithm!" << endl; 
    return 0; 
 
} 
 
vector<KeyPoint> keypoints; 
detector->detect(image, keypoints); 

In the preceding example, algorithm is an integer value that can be set at run-time to change the type of corner-detection algorithm assigned to the detector object. detector has the Feature2D type, which is, in other words, the base class of all of these corner-detection algorithms.
