Hand region segmentation

The automatic detection of an arm, and later the hand region, could be designed to be arbitrarily complicated, maybe by combining information about the shape and color of an arm or hand. However, using skin color as the determining feature to find hands in visual scenes might fail terribly in poor lighting conditions or when the user is wearing gloves. Instead, we choose to recognize the user's hand by its shape in the depth map. Allowing hands of all sorts to be present in any region of the image would unnecessarily complicate the mission of the present chapter, so we make two simplifying assumptions:

  • We will instruct the user of our app to place their hand in front of the center of the screen, orienting their palm roughly parallel to the plane of the Kinect sensor, so that it is easier to identify the corresponding depth layer of the hand.
  • We will also instruct the user to sit roughly one to two meters away from the Kinect, and to slightly extend their arm in front of their body so that the hand will end up in a slightly different depth layer than the arm. However, the algorithm will still work even if the full arm is visible.

In this way, it will be relatively straightforward to segment the image based on the depth layer alone. Otherwise, we would have to come up with a hand detection algorithm first, which would unnecessarily complicate our mission. If you feel adventurous, feel free to do this on your own.

Finding the most prominent depth of the image center region

Once the hand is placed roughly in the center of the screen, we can start finding all image pixels that lie on the same depth plane as the hand.

To do this, we simply need to determine the most prominent depth value of the center region of the image. The simplest approach would be to look only at the depth value of the center pixel:

height, width = depth.shape
center_pixel_depth = depth[height // 2, width // 2]

Then, create a mask in which all pixels at a depth of center_pixel_depth are white and all others are black:

import numpy as np

depth_mask = np.where(depth == center_pixel_depth, 255, 0).astype(np.uint8)

However, this approach will not be very robust, because chances are that it will be compromised by the following:

  • Your hand will not be placed perfectly parallel to the Kinect sensor
  • Your hand will not be perfectly flat
  • The Kinect sensor values will be noisy

Therefore, different regions of your hand will have slightly different depth values.

The _segment_arm method takes a slightly better approach; that is, looking at a small neighborhood in the center of the image and determining the median depth value, which we take to be the most prominent one. First, we find the center region (for example, 21 x 21 pixels) of the image frame:

def _segment_arm(self, frame):
    """ segments the arm region based on depth """
    center_half = 10  # half-width of the 21 x 21 center region
    lower_height = self.height // 2 - center_half
    upper_height = self.height // 2 + center_half + 1
    lower_width = self.width // 2 - center_half
    upper_width = self.width // 2 + center_half + 1
    center = frame[lower_height:upper_height, lower_width:upper_width]

We can then determine the median depth value, med_val, of this center region (np.median implicitly flattens the region into a one-dimensional vector first):

med_val = np.median(center)

We can now compare med_val with the depth value of all pixels in the image and create a mask in which all pixels whose depth values are within a particular range [med_val-self.abs_depth_dev, med_val+self.abs_depth_dev] are white, and all other pixels are black. However, for reasons that will become clear in a moment, let's paint the pixels gray instead of white:

frame = np.where(abs(frame - med_val) <= self.abs_depth_dev,
                 128, 0).astype(np.uint8)

The result will look like this:

[Figure: Finding the most prominent depth of the image center region]

Applying morphological closing to smoothen the segmentation mask

A common problem with segmentation is that a hard threshold typically results in small imperfections (that is, holes, as in the preceding image) in the segmented region. These holes can be alleviated by using morphological opening and closing. Opening removes small objects from the foreground (assuming that the objects are bright on a dark background), whereas closing removes small holes (dark regions).

This means that we can get rid of the small black regions in our mask by applying morphological closing (dilation followed by erosion) with a small 3 x 3 pixel kernel:

kernel = np.ones((3, 3), np.uint8)
frame = cv2.morphologyEx(frame, cv2.MORPH_CLOSE, kernel)

The result looks a lot smoother, as follows:

[Figure: Applying morphological closing to smoothen the segmentation mask]
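Incidentally, if the mask suffered from the opposite problem, small white speckles instead of small holes, morphological opening would be the appropriate remedy. Here is a minimal toy sketch (not part of our pipeline) that demonstrates the effect:

import cv2
import numpy as np

# a toy mask: mostly black with a few isolated bright pixels (speckles)
frame = np.zeros((100, 100), np.uint8)
frame[10, 10] = frame[50, 70] = 255

kernel = np.ones((3, 3), np.uint8)
# opening = erosion followed by dilation; isolated specks are eroded away
opened = cv2.morphologyEx(frame, cv2.MORPH_OPEN, kernel)
assert opened.max() == 0  # the speckles are gone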

Notice, however, that the mask still contains regions that do not belong to the hand or arm, such as what appears to be one of my knees on the left and some furniture on the right. These objects just happen to be on the same depth layer as my arm and hand. If possible, we could now combine the depth information with another descriptor, maybe a texture-based or skeleton-based hand classifier, that would weed out all non-skin regions.

Finding connected components in a segmentation mask

An easier approach is to realize that, most of the time, hands are not connected to knees or furniture. We already know that the center region belongs to the hand, so we can simply apply cv2.floodFill to find all the connected image regions.

Before we do this, we want to be absolutely certain that the seed point for the flood fill belongs to the right mask region. This can be achieved by assigning a grayscale value of 128 to the seed point. However, we also want to make sure that the center pixel does not, by any coincidence, lie within a cavity that the morphological operation failed to close. So, let's set a small 7 x 7 pixel region to a grayscale value of 128 instead:

small_kernel = 3
frame[self.height // 2 - small_kernel:self.height // 2 + small_kernel + 1,
      self.width // 2 - small_kernel:self.width // 2 + small_kernel + 1] = 128

As flood filling (as well as morphological operations) is potentially dangerous, later OpenCV versions require the specification of a mask that avoids flooding the entire image. This mask has to be 2 pixels wider and taller than the original image and has to be used in combination with the cv2.FLOODFILL_MASK_ONLY flag. It can be very helpful in constraining the flood fill to a small region of the image or a specific contour so that we do not accidentally connect two neighboring regions that should never have been connected in the first place. It's better to be safe than sorry, right?
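For illustration, this is roughly how the mask-only variant could be used (a toy sketch, not part of our pipeline): with cv2.FLOODFILL_MASK_ONLY, the input image is left untouched, and only the mask records the filled region:

import cv2
import numpy as np

img = np.zeros((100, 100), np.uint8)
cv2.rectangle(img, (20, 20), (80, 80), 128, -1)  # a gray square to fill

mask = np.zeros((102, 102), np.uint8)  # 2 pixels wider and taller
flags = 4 | (255 << 8) | cv2.FLOODFILL_MASK_ONLY
cv2.floodFill(img, mask, (50, 50), 255, flags=flags)

filled = mask[1:-1, 1:-1]  # crop the 1-pixel border to match the image
# img is unchanged; filled now marks the square's pixels with 255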

Ah, screw it! Today, we feel courageous! Let's make the mask entirely black:

mask = np.zeros((self.height+2, self.width+2), np.uint8)

Then, we can apply the flood fill to the center pixel (the seed point) and paint the connected region white:

flood = frame.copy()
cv2.floodFill(flood, mask, (self.width // 2, self.height // 2), 255,
              flags=4 | (255 << 8))

At this point, it should be clear why we decided to start with a gray mask earlier. We now have a mask that contains white regions (arm and hand), gray regions (neither arm nor hand but other things in the same depth plane), and black regions (all others). With this setup, it is easy to apply a simple binary threshold to highlight only the relevant regions of the pre-segmented depth plane:

ret, flooded = cv2.threshold(flood, 129, 255, cv2.THRESH_BINARY)

This is what the resulting mask looks like:

[Figure: Finding connected components in a segmentation mask]

The resulting segmentation mask can now be returned to the recognize method, where it will be used as an input to _find_hull_defects, as well as a canvas for drawing the final output image (img_draw).
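For reference, the snippets above might be assembled into the complete method as follows (a sketch; it assumes that self.height, self.width, and self.abs_depth_dev are set in the class constructor, as the text implies):

import cv2
import numpy as np

def _segment_arm(self, frame):
    """ segments the arm region based on depth """
    # median depth of the 21 x 21 center region (the most prominent depth)
    center_half = 10
    center = frame[self.height // 2 - center_half:
                   self.height // 2 + center_half + 1,
                   self.width // 2 - center_half:
                   self.width // 2 + center_half + 1]
    med_val = np.median(center)

    # paint all pixels within the depth tolerance gray, all others black
    frame = np.where(abs(frame - med_val) <= self.abs_depth_dev,
                     128, 0).astype(np.uint8)

    # morphological closing to remove small holes in the mask
    kernel = np.ones((3, 3), np.uint8)
    frame = cv2.morphologyEx(frame, cv2.MORPH_CLOSE, kernel)

    # make sure the 7 x 7 seed region for the flood fill is gray
    small_kernel = 3
    frame[self.height // 2 - small_kernel:
          self.height // 2 + small_kernel + 1,
          self.width // 2 - small_kernel:
          self.width // 2 + small_kernel + 1] = 128

    # flood fill the region connected to the center pixel, painting it white
    mask = np.zeros((self.height + 2, self.width + 2), np.uint8)
    flood = frame.copy()
    cv2.floodFill(flood, mask, (self.width // 2, self.height // 2), 255,
                  flags=4 | (255 << 8))

    # keep only the white (flood-filled) region
    ret, flooded = cv2.threshold(flood, 129, 255, cv2.THRESH_BINARY)
    return flooded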
