Chapter 7. Integrating Multitouch and Gestures

When the iPhone was first released in 2007, the world of consumer electronics was accustomed to the resistive touchscreen, which typically required a stylus and was limited to a single contact point. Because of these limitations, the multitouch capabilities of the original iPhone were a major selling point, and its operating system (now in its tenth iteration) was built around the idea of finger-based touch and gestures.

As the smartphone industry has evolved, all phones have moved toward this model of interaction: capacitive touchscreens and gesture-based navigation are now standard. With the exception of 3D Touch, introduced with the 2015 iPhone models, multitouch hasn't changed much since its inception.

However, as a developer, it is still your job to understand these aspects of app development. In this chapter, we're going to cover the following topics:

  • The human interface guidelines for gestures in iOS
  • Adding gestures to your app from the storyboard
  • Adding gestures through code
  • Setting up 3D Touch shortcuts

For the beginning of this chapter, we're going to take a little break from our app, Snippets, and focus on how and when to use gestures. Since our app already gets plenty of gesture control built in from its UITableView, we'll work in a new project to experiment before coming back and adding 3D Touch shortcut support to Snippets.

Human interface guidelines – gestures

When we use software, we expect it to act a certain way based on convention. When we see something that looks like a button, we expect to be able to tap it, and for some event to happen when it is tapped. Part of this comes from the fact that some methods of interaction are universally intuitive, and have been established for a long time. However, most application development environments come with a set of Human Interface Guidelines (HIG), which outline the intended look, feel, and use of the software being created.

Apple, famous for its strict policies on design, has a very thorough set of HIG available to developers, making it easy to understand how Apple expects your software to function. While the full documentation covers many aspects of app interaction, we're going to focus on the standard gestures and what users expect from them.

Standard gestures

When using a touch screen device, there are only a handful of basic, intuitive gestures that a user can perform. These basic gestures are based on physical metaphor, and so most people have an expectation of how an app should react to their input. The gestures, as outlined in Apple's HIG, are as follows:

  • Tap: The simple single tap is used to select items or press buttons. The tap is the most widely used gesture in the entire operating system and can almost always be thought of as a "do something" gesture. When tapping on an element, the user will almost always expect something to happen. If an element would normally perform an action, but the action can't happen when the user taps it, there should at least be a visual indication that the tap was received.
  • Double Tap: The double tap gesture is used to focus and un-focus on elements. Usually, the double tap will zoom to fill the screen with the double tapped area, as in web browsers, or mapping applications. When applicable, a second double tap will zoom back out to the default view.
  • Drag: When the user places their finger down on the screen and moves it around, it is referred to as a drag. Dragging is primarily used to move the view vertically and horizontally. For example, in our app Snippets, the UITableView automatically uses the drag gesture to let you scroll up and down. In web browsers and map views, you can drag both vertically and horizontally to move the view in all directions. This gesture can also be referred to as a pan.
  • Flick: Similar to the drag gesture, a flick is a drag that is executed quickly. Unlike a drag, a flick has momentum associated with it when the user finishes the gesture. That means that the movement of the view can continue after the user stops touching the screen, allowing them to flick quickly through lists.
  • Swipe: A swipe gesture has several use cases. In a table view, it can bring up the Delete button on a cell. In apps with navigation controllers, swiping from the side of the screen can navigate back through the navigation stack. On an iPad, swiping up with four fingers lets you switch apps. Swiping is one of the most versatile gestures in iOS, and thus doesn't have much of a standard use. However, swiping is usually used to move objects on screen to reveal new information.
  • Pinch: The pinch gesture is almost always used to zoom in and out. The most logical uses are again in web browsers, and map views, but it can be used in any situation where you might want to change the scaling of objects on screen.
  • Shake: The shake gesture is unique, since it doesn't use the screen at all and relies only on accelerometer data. The shake gesture is used throughout iOS to initiate an undo or redo action (a short sketch follows this list).
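To make the shake gesture concrete, here is a minimal sketch of how a view controller can respond to it. The class name and the undo action are placeholders of my own; the key point is that iOS delivers a shake as a motion event to the first responder rather than through a gesture recognizer (in older Swift versions the enum is spelled UIEventSubtype):

import UIKit

// A hypothetical view controller that reacts to the shake gesture.
class ShakeViewController: UIViewController {

    // A view controller must be the first responder to receive motion events.
    override var canBecomeFirstResponder: Bool {
        return true
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        becomeFirstResponder()
    }

    // Shakes arrive as motion events, not touches, so there is no
    // UIGestureRecognizer for them.
    override func motionEnded(_ motion: UIEvent.EventSubtype, with event: UIEvent?) {
        if motion == .motionShake {
            // Trigger an undo here, the standard response to a shake.
            undoManager?.undo()
        }
    }
}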

While all of these gestures are possible on a touch screen, when using UIKit (classes with the UI prefix, like UITableView) there's a good chance that you will get this functionality for free. Since Apple wants to make sure that their gestures are consistent, most of these UI classes have gestures built right in, like how UITableView in our app already supports tapping, dragging, and flicking automatically.

Usage guidelines

When implementing support for these basic gestures, Apple has several recommendations, or usage guidelines, for how these gestures should behave to maintain consistency.

The most important rule is that you should never associate different actions with these standard gestures. No matter how intuitive you might think it would be to repurpose one of these gestures for a different activity, users of iOS software expect a standard method of interaction, and even the slightest changes can confuse them.

Second, try not to create alternate gestures that perform the same tasks as one of the standard gestures. This will also confuse users who might not understand why a certain task is now being completed in a different way.

Third, complex gestures should only be used to expedite tasks, and should not be the only way to perform a given action. While you as a developer might not understand why someone would go out of their way to perform a task when they could just use a custom gesture, it is still important to provide alternative ways to accomplish tasks.

Finally, it is usually not a great idea to create new custom gestures at all. Obviously there are exceptions to this rule, especially if you're making a game. However, if you are making a custom gesture to perform a task, you should really consider why it is necessary and if there are other ways to implement the feature.

With a good understanding of what the standard set of gestures are, in addition to the usage guidelines set forth by Apple, we are now in a good place to start learning how to implement these gestures in a development environment.

How gestures work

So far, we've discussed the theory of gestures: what they consist of, what they are expected to do, and how to use them. However, we should also take a little bit of time to understand how they work in practice. Even though you'll see that a lot of the basic gestures have an abstracted implementation provided by Apple, it's worth understanding how they work below the surface.

To understand the technical side of gestures, we first need to look at how the view hierarchy interprets touches. At the top of the inheritance chain is the simple UIView class. Essentially, UIView is a rectangle that can draw itself to the screen. However, it can also receive touch events. A UIView instance has an isUserInteractionEnabled property, which determines whether the view can receive touch information.

If interaction is enabled, the view is alerted every time a touch begins, moves, and ends inside of it. You can override the methods that handle these events in any UIView subclass: touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?), touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?), and touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?).
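As a minimal sketch (the class name is hypothetical), a UIView subclass that simply logs its raw touch events might look like this:

import UIKit

// A hypothetical UIView subclass that logs its raw touch events.
class TouchLoggingView: UIView {

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        // location(in:) reports the touch position in this view's coordinates.
        if let point = touches.first?.location(in: self) {
            print("Touch began at \(point)")
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesMoved(touches, with: event)
        if let point = touches.first?.location(in: self) {
            print("Touch moved to \(point)")
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        print("Touch ended with \(touches.count) touch(es)")
    }
}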

In the very early days of iOS programming, app developers had to override those methods manually and track the movement of touches to identify gestures. As you can imagine, every developer had different ideas about how to implement those gestures, and functionality varied from app to app, breaking the consistency of interactions. It was also difficult to reuse gestures because you had to program the gesture recognition right into the UIView subclass.

To solve these issues, Apple created the UIGestureRecognizer class. This class provides those same touchesBegan() (and so on) methods, but decouples them from a specific view. This means you can write your gesture code once and then attach the recognizer to different views. To make it even easier, Apple also provides subclasses for most of the basic gestures: UITapGestureRecognizer, UISwipeGestureRecognizer, and UIPinchGestureRecognizer are some examples.
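As a preview of what we'll cover in the next sections, here is a minimal sketch of attaching one of these recognizers in code. The view controller, outlet, and action names are hypothetical; the pattern of creating a recognizer with a target and action and calling addGestureRecognizer(_:) is the part that matters:

import UIKit

// Hypothetical example: PhotoViewController, photoView, and handleDoubleTap
// are placeholder names, not part of Snippets.
class PhotoViewController: UIViewController {

    @IBOutlet weak var photoView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Image views ignore touches by default, so enable interaction first.
        photoView.isUserInteractionEnabled = true

        // Create the recognizer once; it could be attached to any view.
        let doubleTap = UITapGestureRecognizer(target: self,
                                               action: #selector(handleDoubleTap(_:)))
        doubleTap.numberOfTapsRequired = 2
        photoView.addGestureRecognizer(doubleTap)
    }

    @objc func handleDoubleTap(_ recognizer: UITapGestureRecognizer) {
        // The recognizer reports where the gesture happened in the view.
        print("Double tap at \(recognizer.location(in: photoView))")
    }
}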

For the next two sections, we'll be looking at the ways we can put these UIGestureRecognizer classes to work in an app. Using the provided gesture classes means that adding gestures is not only quick and easy, but also consistent with the way Apple (and users!) expect gestures to behave.
