Chapter 6. NiTE and Hand Tracking

In this chapter, we will cover:

  • Recognizing predefined hand gestures
  • Tracking hands
  • Finding the related user ID for each hand ID
  • Event-based reading of hands' data
  • A working sample for controlling the mouse by hand

Introduction

In this chapter, we are going to cover every topic related to hand tracking in NiTE, from recognizing hand gestures to tracking hand movements and determining which hand belongs to which user.

Unfortunately, the new version of NiTE drops some gestures that existed in the old API, such as the hand swipe.

The nite::HandTracker object

Just as nite::UserTracker is responsible for giving us information about users, nite::HandTracker is responsible for giving us information about hand tracking and hand gestures in the scene. The two main functionalities of nite::HandTracker are recognizing hand gestures and tracking hands. Let's take a look at the important methods of this class; a minimal working sketch follows this list:

  • nite::HandTracker::startGestureDetection(): This method starts the process of searching for a specific hand gesture in the scene. It accepts a single argument of the nite::GestureType enum type. This enum has three predefined members, which are the only gestures supported by nite::HandTracker; this functionality cannot be extended.
    • nite::GESTURE_WAVE: This value represents the wave gesture.
    • nite::GESTURE_CLICK: This value represents the click gesture. This is the new name of the old push gesture.
    • nite::GESTURE_HAND_RAISE: This value represents the hand raise gesture.
  • nite::HandTracker::startHandTracking(): This method will start tracking a point that is recognized as a hand. It takes two arguments: a nite::Point3f value that you provide to indicate the position of the hand, and a pointer to a nite::HandId variable that the method fills with the ID of the newly tracked hand. The return value is of the type nite::Status and indicates whether the method succeeded.
  • nite::HandTracker::stopGestureDetection(): By using this method, you can suspend the search for a specific gesture.
  • nite::HandTracker::stopHandTracking(): This method can be used to stop the tracking of a specific hand.
  • nite::HandTracker::convertHandCoordinatesToDepth(): Just as with nite::UserTracker::convertJointCoordinatesToDepth(), this method lets you convert a hand position returned by NiTE into pixel coordinates in the depth stream's frame.
  • nite::HandTracker::readFrame(): This method waits for a new frame and updates the passed nite::HandTrackerFrameRef variable, or returns the latest unread frame of data from nite::HandTracker. This is explained in the following section.
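
To see how these methods fit together, here is a minimal sketch that initializes NiTE, creates a nite::HandTracker, starts searching for the wave and click gestures, and reads frames in a loop. It assumes NiTE 2 and a compatible device are installed and connected; the limit of 300 frames is an arbitrary choice for the example:

    #include <NiTE.h>
    #include <cstdio>

    int main()
    {
        // Initialize NiTE; this also initializes OpenNI internally.
        if (nite::NiTE::initialize() != nite::STATUS_OK)
            return 1;

        // Create a hand tracker using the default device.
        nite::HandTracker handTracker;
        if (handTracker.create() != nite::STATUS_OK)
            return 2;

        // Ask NiTE to search the scene for the wave and click gestures.
        handTracker.startGestureDetection(nite::GESTURE_WAVE);
        handTracker.startGestureDetection(nite::GESTURE_CLICK);

        nite::HandTrackerFrameRef frame;
        for (int i = 0; i < 300; ++i) // 300 frames is an arbitrary limit
        {
            // Wait for the next frame of hand tracking data.
            if (handTracker.readFrame(&frame) != nite::STATUS_OK)
                continue;

            printf("Gestures: %d, hands: %d\n",
                   frame.getGestures().getSize(),
                   frame.getHands().getSize());
        }

        // Suspend the gesture searches and release NiTE.
        handTracker.stopGestureDetection(nite::GESTURE_WAVE);
        handTracker.stopGestureDetection(nite::GESTURE_CLICK);
        nite::NiTE::shutdown();
        return 0;
    }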

The nite::HandTrackerFrameRef object

nite::HandTrackerFrameRef represents a frame of data from nite::HandTracker, which contains the currently recognized gestures and hands in the scene. Here are some important methods of this class:

  • nite::HandTrackerFrameRef::getGestures(): This returns an array of the recognized gestures in the current scene.
  • nite::HandTrackerFrameRef::getHands(): This returns an array of the hands that tracking was requested for, including those currently being tracked in the scene.
  • nite::HandTrackerFrameRef::getDepthFrame(): This returns the underlying depth openni::VideoFrameRef that was used by nite::HandTracker to create this data. This is useful if you need access to the depth frame without creating an openni::VideoStream object and writing extra code to read data from it. The sketch after this list shows all three methods in use.
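
The following fragment is a sketch of reading one frame and pulling each of these pieces out of it; handTracker is assumed to be a nite::HandTracker created as in the previous sketch:

    nite::HandTrackerFrameRef frame;
    if (handTracker.readFrame(&frame) == nite::STATUS_OK)
    {
        // Gestures and hands recognized in this frame.
        const nite::Array<nite::GestureData>& gestures = frame.getGestures();
        const nite::Array<nite::HandData>& hands = frame.getHands();
        printf("%d gesture(s) and %d hand(s) in this frame.\n",
               gestures.getSize(), hands.getSize());

        // The underlying depth frame, with no need for our own
        // openni::VideoStream object.
        openni::VideoFrameRef depthFrame = frame.getDepthFrame();
        printf("Depth frame is %dx%d pixels.\n",
               depthFrame.getWidth(), depthFrame.getHeight());
    }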

The nite::HandData object

nite::HandData represents a real-life hand. This class contains information about the position and status of a hand that is being tracked. Let's take a look at its important methods:

  • nite::HandData::getId(): The return value is of the type nite::HandId; this is actually a 16-bit integer containing a unique ID for the hand.
  • nite::HandData::getPosition(): The return value of this method is of the type nite::Point3f and contains the current position of the hand.
  • nite::HandData::isLost(): The return value is a bool value, indicating that this hand has been invisible long enough that NiTE decided to remove it from the next frame of data.
  • nite::HandData::isNew(): This method returns a bool value that indicates if this is the first frame of data that we have for this hand.
  • nite::HandData::isTouchingFov(): This method also returns a bool value, one that indicates if the hand is currently touching the edge of the device's field of view.
  • nite::HandData::isTracking(): This is another method with a bool return value. It indicates if the hand is visible and under tracking.

We need to use the nite::HandTrackerFrameRef::getHands() method to retrieve a list of active hands.
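
The sketch below iterates over that list and reacts to each hand's status; handTracker and frame are assumed to come from the earlier sketches, and convertHandCoordinatesToDepth() maps each hand's real-world position to depth pixels:

    const nite::Array<nite::HandData>& hands = frame.getHands();
    for (int i = 0; i < hands.getSize(); ++i)
    {
        const nite::HandData& hand = hands[i];

        if (hand.isNew())
            printf("Hand %d appeared.\n", hand.getId());

        if (hand.isLost())
        {
            // NiTE will drop this hand from the next frame of data.
            printf("Hand %d lost.\n", hand.getId());
            continue;
        }

        if (hand.isTracking())
        {
            // Convert the hand's real-world position to depth pixels.
            const nite::Point3f& pos = hand.getPosition();
            float x = 0, y = 0;
            handTracker.convertHandCoordinatesToDepth(
                pos.x, pos.y, pos.z, &x, &y);
            printf("Hand %d at pixel (%.0f, %.0f)%s\n",
                   hand.getId(), x, y,
                   hand.isTouchingFov() ? " - touching FoV edge" : "");
        }
    }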

The nite::GestureData object

nite::GestureData is very similar to nite::HandData, except that it is simpler and contains less data. nite::GestureData is the representation of a real-world gesture (a short sketch follows this list):

  • nite::GestureData::getCurrentPosition(): This returns a nite::Point3f value that shows the position at which this gesture occurred. It can be used to start the tracking of a hand.
  • nite::GestureData::getType(): This returns a nite::GestureType enum value that shows the type of the recognized gesture.
  • nite::GestureData::isComplete(): This method gives you a bool value that indicates if this gesture completed successfully or is still in progress. From our observation, this is always true for any gesture in the list of recognized gestures.
  • nite::GestureData::isInProgress(): This is another method with a bool return value. It indicates if the gesture has not yet completed and is still in progress. However, we never saw it return true in our experiments.
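
Putting this together, a typical pattern is to watch the recognized gestures and, when one completes, hand its position over to startHandTracking(). Here is a sketch under the same assumptions as the earlier ones:

    const nite::Array<nite::GestureData>& gestures = frame.getGestures();
    for (int i = 0; i < gestures.getSize(); ++i)
    {
        const nite::GestureData& gesture = gestures[i];

        // In our experience isComplete() is always true for listed
        // gestures, but checking it costs nothing.
        if (!gesture.isComplete())
            continue;

        if (gesture.getType() == nite::GESTURE_CLICK)
            printf("Click gesture recognized.\n");

        // Use the gesture's position as the starting point of a new
        // hand track; newId is filled by the method itself.
        nite::HandId newId;
        if (handTracker.startHandTracking(
                gesture.getCurrentPosition(), &newId) == nite::STATUS_OK)
            printf("Now tracking hand %d.\n", newId);
    }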

Compared to skeleton tracking

The next chapter is about tracking a user's body and its skeleton joints. This means that in the next chapter, we will be able to track a user's hand joints and achieve a similar result. Now a question may pop up in your mind: what are the advantages of using HandTracker when we can use UserTracker too? Here is the answer:

  • Unlike UserTracker, which can only track a user who is in a standing position, HandTracker is able to track hands independently of the user's body.
  • HandTracker doesn't need to see the user's full body, whereas UserTracker can only detect users whose bodies are completely in the field of view. This also means that UserTracker can't recognize a user at a distance of less than 2 metres, whereas HandTracker can recognize a hand at such a distance.
  • The result of HandTracker is more accurate compared to UserTracker, as it doesn't need to track the full body.