Chapter 5. Designing a Touchable User Interface

In this chapter, we will introduce how to use Kinect APIs to simulate multitouch inputs, which are very common in modern interactive applications, especially on mobile devices. Instead of relying on the traditional keyboard and mouse, a multitouch-based application lets users manipulate the interface directly: elements are dragged, held, or swiped to trigger actions. We will introduce the basic concepts of such interactions and demonstrate how to emulate them with Kinect.

Multitouch systems

The word multitouch refers to the ability to distinguish between two or more fingers touching a touch-sensing surface, such as a touch screen or a touch pad. Typical multitouch devices include tablets and mobile phones, and even powerwalls with images projected from behind them.

A single touch is usually made with a finger or a stylus. The touch sensor detects its X/Y coordinates and generates a touch point for user-level applications to consume. If the device can detect and resolve multiple simultaneous touches, applications can recognize and handle far more complex inputs.
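The notion of a touch point can be sketched as a small data structure. This is an illustration only, not an actual touch API; the `TouchPoint` type, its fields, and the sample coordinates are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    """One resolved touch: a stable ID plus surface coordinates."""
    touch_id: int   # stable ID so a moving finger keeps its identity across frames
    x: float        # horizontal position on the surface
    y: float        # vertical position on the surface

# A multitouch "frame" is simply the set of points resolved at one instant;
# applications match touch_id values across frames to follow each finger.
frame = [TouchPoint(0, 120.0, 80.0), TouchPoint(1, 300.0, 95.0)]
```

Real touch stacks work along these lines: each contact is assigned an identifier when it first appears, so that applications can tell which finger moved even when several are down at once.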

Gestures also play an important role in multitouch systems. A gesture is a standardized motion that unambiguously represents a specific intent. For example, the "tap" gesture (hitting the surface lightly and releasing) typically selects and launches a program on mobile phones, and the "zoom" gesture (moving two fingers towards or away from each other), sometimes called "pinch", scales the content being viewed.
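The zoom/pinch gesture described above can be quantified by comparing the distance between the two fingers before and after they move; the ratio of the two distances gives the scale factor to apply to the content. A minimal sketch, with hypothetical finger coordinates in pixels:

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """Return the zoom factor implied by two fingers moving from
    their start positions to their end positions (x, y tuples)."""
    d_start = math.dist(p1_start, p2_start)  # finger separation before the move
    d_end = math.dist(p1_end, p2_end)        # finger separation after the move
    return d_end / d_start                   # > 1 zooms in, < 1 zooms out

# Fingers move apart: separation grows from 100 to 200 pixels.
scale = pinch_scale((100, 100), (200, 100), (50, 100), (250, 100))  # 2.0
```

A value greater than one means the fingers moved apart (zoom in); a value below one means they moved together (zoom out). Real gesture recognizers add thresholds and smoothing on top of this basic ratio.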
