Summary

In this chapter, we learned how to track the skeletal data provided by the Kinect sensor and how to interpret it in order to design relevant user actions.

With the example developed in this chapter, we went to the core of designing and developing Natural User Interfaces.

Thanks to the KinectSensor.SkeletonStream.Enable() method and the event handler attached to KinectSensor.AllFramesReady, we started to manipulate the skeleton stream and color stream data provided by the Kinect sensor and to overlay them.
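As a quick recap, the following is a minimal sketch of that setup using the standard Kinect for Windows SDK 1.x API. The wrapping class and method names (KinectStreams, Start, and OnAllFramesReady) are illustrative choices only; the full example developed in the chapter is more elaborate:

```csharp
using System.Linq;
using Microsoft.Kinect;

public class KinectStreams
{
    private KinectSensor sensor;

    public void Start()
    {
        // Pick the first connected Kinect sensor.
        sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        // Enable the skeleton and color streams and listen for synchronized frames.
        sensor.SkeletonStream.Enable();
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
        sensor.AllFramesReady += OnAllFramesReady;
        sensor.Start();
    }

    private void OnAllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
        using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
        {
            // Either frame can be null when the sensor skips a frame.
            if (skeletonFrame == null || colorFrame == null) return;

            // Copy the skeleton and color data out of the frames...
            Skeleton[] skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
            skeletonFrame.CopySkeletonDataTo(skeletons);

            byte[] pixels = new byte[colorFrame.PixelDataLength];
            colorFrame.CopyPixelDataTo(pixels);

            // ...then map the tracked joints onto the color image and draw the overlay.
        }
    }
}
```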

We addressed the SkeletonStream.TrackingMode property for tracking users in Default (standing) and Seated modes. Leveraging the Seated mode together with the ability to track user actions is very useful for applications oriented to people with disabilities.
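Switching the tracking mode is a one-line change. The following is a minimal sketch, assuming a sensor variable that already references the active KinectSensor instance:

```csharp
// Track only the ten upper-body joints of a seated user.
// SkeletonTrackingMode.Default restores full-body (standing) tracking.
sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
sensor.SkeletonStream.Enable();
```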

We went through the algorithmic approach for tracking users' actions and recognizing their gestures, and we developed our own custom gesture manager. Gestures were defined as a collection of movement sections in order to increase the reliability of the gesture engine. The gestures dealt with in this chapter are simple, but the framework we developed can handle more articulated gestures based on discrete movements. Alternative approaches, such as neural networks or template matching, should be considered when the gestures to track are more complex and cannot easily be decomposed into discrete, well-defined movements. This chapter listed a set of references we can use to understand and explore these alternative approaches.
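The gesture manager developed in the chapter is richer than what fits in a summary, but its core idea, a gesture defined as an ordered list of discrete movement checks, can be sketched roughly as follows. The interface and class names here (IGestureSection, SequentialGesture, IsSatisfiedBy, Update) are hypothetical and chosen only for illustration:

```csharp
using System.Collections.Generic;
using Microsoft.Kinect;

// Hypothetical building block: one discrete section of a gesture,
// for example "the right hand is to the right of the right elbow".
public interface IGestureSection
{
    bool IsSatisfiedBy(Skeleton skeleton);
}

// Hypothetical gesture tracker: the gesture is recognized only when all of
// its sections are satisfied in order, which makes detection more reliable
// than checking a single pose.
public class SequentialGesture
{
    private readonly IList<IGestureSection> sections;
    private int currentSection;

    public SequentialGesture(IList<IGestureSection> sections)
    {
        this.sections = sections;
    }

    // Feed each tracked skeleton frame into the gesture; returns true
    // when the last section has been completed.
    public bool Update(Skeleton skeleton)
    {
        if (sections[currentSection].IsSatisfiedBy(skeleton))
        {
            currentSection++;
            if (currentSection == sections.Count)
            {
                currentSection = 0;   // reset and report the recognized gesture
                return true;
            }
        }
        return false;
    }
}
```

A more complete version would also reset the sequence when a section is not completed within a given time window, so that stale partial matches do not trigger false positives.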

In the code built in this chapter, together with the full version attached to the book, we demonstrated how to handle the skeleton and color stream data and interact with the objects in the Kinect sensor's field of view. This is a starting point for delivering an augmented reality experience. We encourage you to enhance the example developed in this chapter; for instance, you may want to embed content search capabilities in the application and submit queries related to the objects you interact with.

In the next chapter, we will explore voice tracking data to enhance the example developed in this chapter, and we will build a true multimodal interface (voice plus gestures to interact with the application).

Before jumping into the next chapter, we encourage you to develop all the applications proposed in this chapter. You may want to use the application developed here as the starting point for an application that helps you virtually redesign the layout of your room or garage.
