Understanding touch events

Although similar in nature to an ordinary mouse click, a touch event lets us interact with the computer primarily in a point-and-respond manner. Touches, however, are far more flexible than clicks and thus open the stage for a whole new type of game.

Fundamentally, a touch is different from a click in that more than one touch is possible on the same surface at the same time. A touch also generally differs from a click in that it allows for a larger target area as well as varying pressure. I say generally because not all devices detect the touch area with much precision (or any precision at all), and not all register touch pressure. Similarly, some mice and other equivalent input devices do offer pressure sensitivity, although most browsers make no use of that feature, nor do they expose such data through a click event object.

Note

For compatibility purposes, most mobile browsers respond to touches even when the JavaScript code only listens for mouse input. In other words, a click handler can be triggered by the user touching the screen. In this case, a regular click event object is passed to the registered callback function and not a touch event object. Furthermore, the experience might differ between a drag event (the dragMove event) and a touch move event. Finally, multiple touches may or may not trigger simultaneous click event listeners.
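
To illustrate, the following sketch registers both a click handler and a touchstart handler on the same element (the #action-button selector and the fireAction function are assumptions made for this example). Calling preventDefault() in the touch handler asks most mobile browsers not to fire the synthesized click afterwards, so the action only runs once per interaction.

var button = document.querySelector("#action-button"); // hypothetical element

function fireAction() {
  // Whatever the game should do when the button is activated
  console.log("Action fired");
}

// Mouse and desktop path
button.addEventListener("click", fireAction);

// Touch path: handle the touch directly and cancel the synthesized
// click that most mobile browsers would otherwise fire afterwards
button.addEventListener("touchstart", function(event) {
  event.preventDefault();
  fireAction();
});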

There are three events related to touch, namely touchstart, touchmove, and touchend. The touchstart and touchend events can be related to the mousedown and mouseup events respectively, while a touchmove event is similar to a drag move event.

touchstart

This event is triggered whenever the touch surface detects a new touch, whether or not one or more other touches have already started and have not yet ended.

document.body.addEventListener("touchstart", doOnTouchStart);

function doOnTouchStart(event) {
  event.preventDefault();

  // ...
}

The object passed into the registered callback function is an instance of the TouchEvent class, which contains the following attributes:

touches

An instance of the TouchList class, which looks like an ordinary array and contains a list of all the touches that are currently in contact with the touch device and have not yet been removed, even if other active touches have moved about the screen or input device. Each element in this list is an instance of type Touch.

changedTouches

An instance of the TouchList class containing a list of touch objects representing all new touch points that have been introduced since the last touch event. For example, if two touch objects have already been detected (in other words, two fingers have been pressed against the touch device) and a third touch is detected, only this third touch is present in this touch list. Again, every touch-related element contained by this touch list is of type Touch.

targetTouches

An instance of the TouchList class containing a list of touch objects representing all active touch points that started on a given DOM node. For example, if multiple touches have been detected throughout the screen but a particular element registered for a touchstart event, only the touches that began on that node will be present in this touch list. Again, every element contained by this touch list is of type Touch.
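
As a rough sketch of how these three lists differ, the handler below logs the length of each one whenever a new touch begins (the #game-area selector is an assumption made for this example):

var gameArea = document.querySelector("#game-area"); // hypothetical element

gameArea.addEventListener("touchstart", function(event) {
  // Every finger currently on the input device, anywhere on the page
  console.log("touches: " + event.touches.length);

  // Only the finger(s) that just went down and caused this event
  console.log("changedTouches: " + event.changedTouches.length);

  // Only the fingers whose touch started on gameArea itself
  console.log("targetTouches: " + event.targetTouches.length);
});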

touchend

Similar to a mouseup event, a touchend event is fired when any of the active touch points leaves the input touch device.

document.body.addEventListener("touchend", doOnTouchEnd);

function doOnTouchEnd(event) {
  event.preventDefault();

  // ...
}

Just like a touchstart event, the object passed into the registered callback function is an instance of the TouchEvent class, which contains the same three TouchList attributes. The touches and targetTouches attributes mean exactly the same thing as they do in touchstart. However, the changedTouches touch list has a slightly different meaning in this event.

Although the TouchList object inside a touchend event is of the exact same type as the one in touchstart, the list of touch objects contained here represents the touches that have just left the touch input device.
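
For instance, a minimal touchend handler can read changedTouches to find out exactly which touch points were just lifted:

document.body.addEventListener("touchend", function(event) {
  // changedTouches holds only the touch points that just left the surface;
  // event.touches no longer contains them at this point
  for (var i = 0, len = event.changedTouches.length; i < len; i++) {
    console.log("Touch " + event.changedTouches[i].identifier + " released");
  }
});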

touchmove

The touchmove event, analogous to a drag event, is fired whenever at least one of the active touch points changes position without triggering a touchend event. As we'll soon see, each touch object is uniquely tracked, so it is possible to determine which of the active touch points have actually moved.

document.body.addEventListener("touchmove", doOnTouchMove);

function doOnTouchMove(event) {
  event.preventDefault();

  // ...
}

Again, just like a touchend event, the object passed into the registered callback function is an instance of the TouchEvent class, which contains the same three TouchList attributes. The touches and targetTouches attributes mean exactly the same thing as they do in touchstart. The touch objects in the changedTouches list of a touchmove event represent the previously registered touches that have moved about the input device.

One important thing about the touchmove event is that it is tied to the browser's own drag behavior. By default, dragging a finger across the page scrolls the page in the direction of the drag. In some applications involving dragging across the screen with a finger, this behavior may not be desired. For this reason, the event.preventDefault() method is called, which tells the browser that no scrolling is desired. If, however, the intention is to scroll the screen with a touchmove event, provided that the element being touched supports such behavior, this can be accomplished by omitting the call to preventDefault().
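
Note that more recent browsers may treat touch listeners registered on the document or body as passive by default, in which case preventDefault() is silently ignored. If you run into that, the listener can be registered explicitly as non-passive, roughly like this:

// Explicitly mark the listener as non-passive so that
// event.preventDefault() is honored and the page does not scroll
document.body.addEventListener("touchmove", function(event) {
  event.preventDefault();
}, { passive: false });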

The touch object

Now, you may have noticed that each TouchList object holds instances of a very specific class, namely Touch. This is important because the input device needs to keep track of individual touches; otherwise, the list of changedTouches would not be accurate, limiting what we can accomplish with the API.

The way each touch can be uniquely identified is by having the input device assign a unique ID to each touch it captures. This ID remains the same for the same touch object until that touch is released (in other words, until that particular touch leaves the input device).

Let's take a look at the other properties of the Touch class and see what other important information is contained therein.

identifier

A unique integer identifier for a particular touch contained in the current touches TouchList. This number remains the same until that touch leaves the input device, which allows us to track each touch individually: even while many other touches are starting, moving, and ending, this one particular touch can still be singled out and followed appropriately.

Note that the value of this attribute may sometimes match the array index of the touch object within a TouchList, and sometimes the identifier property might even match the order in which each touch was detected by the input device. As an attentive programmer, you must never assume that these values will always be the same.

For example, suppose that the first touch detected by the device receives an identifier of zero (and since this is the first touch in the TouchList, it is also indexed into the list at position zero). Now a second touch is detected, making it the second object in the TouchList array with an index of one. Suppose this touch also receives an identifier of one, so that all three values match (touch order, array position, and identifier value). Now, after moving these two touches around the input device, suppose the first touch is released and a brand new touch is detected. There are again two touch objects in the TouchList, but the relationship between their positions and identifiers is completely different from before. While the second touch still has the same identifier (in this example, one), it is now (possibly) the first element in the TouchList.

Although at times the order in which a touch is detected, the touch's position in the TouchList array, and the touch's unique identifier may all match (assuming that the input device even assigns sequential identifier values), you should never rely on any of these assumptions to track individual touches. A touch should always be tracked by its unique identifier attribute whenever more than one touch is being tracked. If only a single touch is tracked, that touch will always be the first element in the TouchList object.


In summary, the order in which touches are detected and assigned to the TouchList object is unpredictable and should never be assumed. The proper way to track individual touch objects is through the identifier property assigned to each object. Once a touch event is released, the value of its former identifier property can be reassigned to a consequent touch, so be sure to keep that in mind as well.
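
A minimal sketch of this kind of bookkeeping might use a plain object keyed by identifier, so that each finger's latest position can be looked up regardless of where it sits in any TouchList (the activeTouches map and trackTouches function below are assumptions made for this example, not part of the API):

// Map of identifier -> last known position for each active touch
var activeTouches = {};

function trackTouches(event) {
  var changed = event.changedTouches;

  for (var i = 0, len = changed.length; i < len; i++) {
    var touch = changed[i];

    if (event.type === "touchend") {
      // The identifier may be reassigned to a later touch, so forget it now
      delete activeTouches[touch.identifier];
    } else {
      // touchstart creates the entry, touchmove updates it
      activeTouches[touch.identifier] = {
        x: touch.pageX,
        y: touch.pageY
      };
    }
  }
}

document.body.addEventListener("touchstart", trackTouches);
document.body.addEventListener("touchmove", trackTouches);
document.body.addEventListener("touchend", trackTouches);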

screenX

The screenX coordinate refers to the point that was touched, measured relative to the origin of the system display; the origin of the browser's viewport is not taken into account in this calculation at all. Point (0, 0) is the upper left corner of the monitor, and screenX reports how many pixels to the right of that corner the touch occurred.

screenY

The screenY coordinate refers to how far down from the top of the system's screen (the monitor) the touch occurred, independent of where the browser sits on that screen. If the screen is, say, 800 pixels in height and the browser is 100 pixels tall and positioned exactly 100 pixels below the top of the screen, then a touch at the halfway point between the top and bottom of the browser's viewport would result in a screenY coordinate of 150.

Think about it: the browser's viewport is 100 pixels in height, so its midpoint is exactly 50 pixels below its origin. If the browser is exactly 100 pixels below the screen's origin, that midpoint is 150 pixels below the screen's vertical origin.

In other words, the screenX and screenY attributes don't take the browser's coordinate system into account whatsoever. And since these values are measured from the screen's origin, it follows that a point returned by screenX and screenY will never be less than zero, because there is no way to touch a point outside the screen's surface area and still have the screen detect that point.

clientX

Similar to screenX, the clientX coordinate refers to the offset of a touch location from the browser viewport's origin, independent of any scrolling within the page. In other words, since the origin of the browser's viewport is its upper left corner, a touch 100 pixels to the right of that point corresponds to a clientX value of 100. Now, if the user scrolls the page, say, 500 pixels to the right, a touch 100 pixels to the right of the browser's left border would still result in a clientX value of 100, even though the touch occurred at point 600 within the page.

clientY

The clientY coordinate refers to how far down from the browser viewport's origin the touch occurred, independent of where within the page the touch happened. If the page is scrolled an arbitrary number of pixels to the right and down, and a touch is detected one pixel to the right of the viewport's upper left corner and exactly one pixel down, the clientY value would still be 1.

The clientX and clientY attributes don't take the web page's coordinate system into account whatsoever. Because this point is calculated relative to the browser's frame, it follows that a value returned by clientX or clientY will never be less than zero, since there is no way to touch a point outside the browser's viewport surface area and still have the browser detect that point.

pageX

Finally, the coordinate represented by pageX refers to the point within the actual page where the touch was detected. In other words, if a browser is only, say, 500 pixels wide but the application is 3000 pixels wide (meaning that we can scroll the application's content to the right by 2500 pixels), a touch detected exactly 2000 pixels from the browser's viewport's origin would result in a pageX value of 2000.

In the world of gaming, a better name for pageX would probably be worldCoordinateX since the touch takes into account where within the world the touch event took place. Of course, this only works when the web page physically scrolls, not when a representation of a scroll has taken place. For example, say we render a world onto a 2D canvas and the world is actually much larger than the width and height of the canvas element. If we scroll the virtual map by an arbitrary amount of pixels but the canvas element itself never actually moved, then the pageX value will be meaningless with respect to the game's map's offset.
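
A common workaround, sketched below under the assumption of a hypothetical camera object that holds the map's current scroll offset, is to convert the touch's client coordinates into canvas-local coordinates and then add the camera offset ourselves:

var canvas = document.querySelector("canvas");

// Hypothetical camera tracking how far the virtual map has been scrolled
var camera = { x: 0, y: 0 };

canvas.addEventListener("touchstart", function(event) {
  var touch = event.changedTouches[0];
  var rect = canvas.getBoundingClientRect();

  // Position of the touch relative to the canvas element itself
  var canvasX = touch.clientX - rect.left;
  var canvasY = touch.clientY - rect.top;

  // Position of the touch within the game world
  var worldX = canvasX + camera.x;
  var worldY = canvasY + camera.y;

  console.log("World point: " + worldX + ", " + worldY);
});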

pageY

And to conclude, the pageY coordinate refers to how far below the browser viewport's origin the touch was detected, plus any scrolled offset. As with the other touch point locations, it is impossible to obtain a negative value for the pageX and pageY attributes, since there is no way to touch a point in the page that has not been scrolled into view, much less a point behind the page's origin, which we can never scroll to.

The following illustration shows the difference between the screen, client, and page locations. The screen location refers to the location within the screen (not the browser window), with the origin at the upper left corner of the display. The client location is similar to the screen location, but places the origin at the upper left corner of the browser viewport; even if the browser is resized and moved halfway across the screen, the first pixel inside that corner is still point (0, 0). The page location is similar to the client location but takes into account any scrolling within the browser viewport. If the page is scrolled down 100 pixels vertically and none horizontally, a touch in the very top left corner of the viewport has a client location of (0, 0) but a page location of (0, 100).

[Illustration: screen, client, and page coordinate origins]

radiusX

When a touch is detected by the input device, an ellipse is drawn around the touch area. The horizontal radius of that ellipse can be accessed through the radiusX attribute (and the vertical radius through radiusY), hinting at how much area is covered by the touch. Keep in mind that the accuracy of the ellipse that describes the touched area is determined by the device used, so mileage may vary greatly here.

radiusY

In order to get the radius along the vertical axis of the ellipse formed by the detected touch, we can use the radiusY attribute. With that information, we can add an extra level of depth to the types of applications we can create using touch as input.

As an example application, the following code snippet detects as many touches as the input device can handle simultaneously, keeping track of the radius of each touch, then displaying each touch at its approximate size.

First, we need to set up the document viewport to be the same width and height as the device as well as set initial zoom levels. We also want to disable pinching gestures, because in this particular sample application, we want that gesture to act as any other touch movement and not have any special meaning.

<meta name="viewport"
  content="width=device-width, initial-scale=1.0,
    user-scalable=no" />

The meta viewport tag allows us to define specific width and height values for the viewport, or to use the special device-width and device-height values. If only a width or height is specified, the other is inferred by the user agent. The tag also allows us to specify a default zoom level as well as disable zooming through gestures or other means.

Next, we need to make sure the root DOM node in the application stretches the entire width and height of the display so that we can capture all touch events within it.

<style>
body, html {
  width: 100%;
  height: 100%;
  margin: 0;
  padding: 0;
  position: relative;
  top: 0;
  left: 0;
}

div {
  position: absolute;
  background: #c00;
  border-radius: 100px;
}
</style>

We set the body tag to be as wide and as tall as the viewport and remove any margin and padding from it so that touches near the edge of the screen are not missed by the element's event handling. We also style the div elements to look round, have a red background color, and be absolutely positioned so that we can place one anywhere a touch is detected. We could have used a canvas element instead of rendering multiple div tags to represent each touch, but that is an insignificant detail for this demo.

Finally, we get down to the JavaScript logic of the application. To summarize the structure of this demonstration, we simply use a global array where each touch is stored. Whenever any touch event is detected on the document, we flush that global array, create a div element for each active touch, and push each new node into the global array. While this is happening, we use a request animation frame loop to continuously render all the DOM nodes contained in the global touches array.

// Global array that keeps track of all active touches.
// Each element of this array is a DOM element representing the location
// and area of each touch.
var touches = new Array();

// Draw each DOM element in the touches array
function drawTouches() {
  for (var i = 0, len = touches.length; i < len; i++) {
    document.body.appendChild(touches[i]);
  }
}

// Deletes every DOM element drawn on screen
function clearMarks() {
  var marks = document.querySelectorAll("div");

  for (var i = 0, len = marks.length; i < len; i++) {
    document.body.removeChild(marks[i]);
  }
}

// Create a DOM element for each active touch detected by the
// input device. Each node is positioned where the touch was
// detected, and is given a width and height close to the touch
// size reported by the device
function addTouch(event) {
  // Get a reference to the touches TouchList
  var _touches = event.touches;

  // Flush the current touches array
  touches = new Array();

  for (var i = 0, len = _touches.length; i < len; i++) {
    // webkitRadiusX/webkitRadiusY are the WebKit-prefixed forms of the
    // standard radiusX/radiusY attributes, scaled up here so the marks are visible
    var width = _touches[i].webkitRadiusX * 20;
    var height = _touches[i].webkitRadiusY * 20;

    var touch = document.createElement("div");
    touch.style.width = width + "px";
    touch.style.height = height + "px";
    touch.style.left = (_touches[i].pageX - width / 2) + "px";
    touch.style.top = (_touches[i].pageY - height / 2) + "px";

    touches.push(touch);
  }
}

// Cancel the default behavior for a drag gesture,
// so that the application doesn't scroll.
document.body.addEventListener("touchmove", function(event) {
  event.preventDefault();
});

// Register our function for all the touch events we want to track.
document.body.addEventListener("touchstart", addTouch);
document.body.addEventListener("touchend", addTouch);
document.body.addEventListener("touchmove", addTouch);

// The render loop
(function render() {
  clearMarks();
  drawTouches();

  requestAnimationFrame(render);
})();

An example of multi-touch taking into account the radius of each touch is illustrated as follows. By touching the side of a closed fist to a mobile device, we can see how each part of the hand that touches the screen is detected, along with its relative size and area of contact.

[Illustration: multiple simultaneous touches rendered at their approximate size and contact area]

rotationAngle

Depending on the way a touch is detected, the ellipse that represents the touch might be rotated. The rotationAngle attribute associated with each touch object is the clockwise angle in degrees that rotates the ellipse to most closely match the touch.
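
Extending the earlier demo, and assuming the browser exposes the prefixed webkitRotationAngle property alongside the webkitRadiusX and webkitRadiusY properties already used (rotationAngle in the standard API), the ellipse's rotation could be applied to each div with a CSS transform inside the addTouch loop:

// Rotate the div to roughly match the orientation of the touch ellipse;
// fall back to zero degrees if the device reports no rotation
var angle = _touches[i].webkitRotationAngle || 0;
touch.style.transform = "rotate(" + angle + "deg)";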

force

Some touch devices are capable of detecting the amount of pressure the user applies to the input surface. When this is the case, the force attribute represents that pressure as a value between 0.0 and 1.0, where 1.0 represents the maximum pressure the device can handle. When a device doesn't support force sensitivity, this attribute will always return 1.0.

Since the value of the force attribute is always between zero and one, we can conveniently use it to render elements with a varying degree of opacity (with zero being a completely transparent, and therefore invisible, element and one being completely opaque).

var width = _touches[i].webkitRadiusX * 20;
var height = _touches[i].webkitRadiusY * 20;
var force = _touches[i].webkitForce;

var touch = document.createElement("div");
touch.style.width = width + "px";
touch.style.height = height + "px";
touch.style.left = (_touches[i].pageX - width / 2) + "px";
touch.style.top = (_touches[i].pageY - height / 2) + "px";
touch.style.opacity = force;

touches.push(touch);

target

When a touch is detected, the DOM element where the touch was first detected is referenced through the target attribute. Since a touch object is tracked until the touch ends, the target attribute will reference the original DOM element where the touch started for the duration of the touch's life cycle.
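
As a small sketch of why this is useful, the handler below (the two paddle elements and their selectors are assumptions made for this example) uses each touch's target to decide which paddle a finger originally went down on, even if that finger later moves over the other one:

var leftPaddle = document.querySelector("#left-paddle");   // hypothetical element
var rightPaddle = document.querySelector("#right-paddle"); // hypothetical element

document.body.addEventListener("touchmove", function(event) {
  for (var i = 0, len = event.changedTouches.length; i < len; i++) {
    var touch = event.changedTouches[i];

    // target still points at the element where the touch originally started
    if (touch.target === leftPaddle) {
      // Move the left paddle to touch.pageY...
    } else if (touch.target === rightPaddle) {
      // Move the right paddle to touch.pageY...
    }
  }
});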
