When the iPhone was first released in 2007, the world of consumer electronics was accustomed to the resistive touchscreen, which required the use of a stylus and was limited to a single contact point. Because of these limitations, the multitouch capabilities of the original iPhone were a major selling point, and its operating system (now in its tenth iteration) was built around the idea of finger-based touch and gestures.
As the smartphone industry has evolved, all phones have moved toward this model of interaction: capacitive touchscreens and gesture-based navigation are now standard. With the exception of 3D Touch, introduced in the 2015 iPhone models, multitouch interaction has changed very little since its inception.
However, as a developer, it is still your job to understand these aspects of app development. In this chapter, we're going to cover the following topics:
For the beginning of this chapter, we're going to take a little break from our app, Snippets, and focus on how and when to use gestures. Since our app has plenty of built-in gesture control from using the UITableView, we'll be working in a new project to experiment, before coming back and adding 3D Touch shortcut support to Snippets.
When we use software, we expect it to act a certain way based on convention. When we see something that looks like a button, we expect to be able to tap it, and for some event to happen when it is tapped. Part of this comes from the fact that some methods of interaction are universally intuitive, and have been established for a long time. However, most application development environments come with a set of Human Interface Guidelines (HIG), which outline the intended look, feel, and use of the software being created.
Apple, famous for its strict design policies, has a very thorough set of HIG available for developers, making it easy to understand how it expects your software to function. While the full documentation covers many aspects of app interaction, we're going to focus on the standard gestures and what users expect from them.
When using a touch screen device, there are only a handful of basic, intuitive gestures that a user can perform. These basic gestures are based on physical metaphors, so most people have an expectation of how an app should react to their input. Among the gestures outlined in Apple's HIG are the following:

- Drag: In Snippets, the UITableView automatically uses the drag gesture to let you scroll up and down. In web browsers and map views, you can drag both vertically and horizontally to move the view in all directions. This gesture can also be referred to as a pan.
- Swipe: Swiping on a table row can reveal the Delete button on a cell. In apps with navigation controllers, swiping from the side of the screen can navigate back through the navigation stack. On an iPad, swiping up with four fingers lets you switch apps. Swiping is one of the most versatile gestures in iOS, and thus doesn't have a single standard use; however, it is usually used to move objects on screen to reveal new information.

While all of these gestures are possible on a touch screen, when using UIKit (classes with the UI prefix, like UITableView), there's a good chance that you will get this functionality for free. Since Apple wants to keep its gestures consistent, most of these UI classes have gestures built right in; for example, the UITableView in our app already supports tapping, dragging, and flicking automatically.
When implementing support for these basic gestures, Apple has several recommendations, or usage guidelines, for how these gestures should behave to maintain consistency.
The most important rule is that you should never associate different actions with these standard gestures. No matter how intuitive you might think it would be to swap one of these gestures for a different activity, users of iOS software expect a standard method of interaction, and even slight changes can confuse them.
Second, try not to create alternate gestures that perform the same tasks as one of the standard gestures. This will also confuse users who might not understand why a certain task is now being completed in a different way.
Third, complex gestures should only be used to expedite tasks, and should not be the only way to perform a given action. While you as a developer might not understand why someone would go out of their way to perform a task when they could just use a custom gesture, it is still important to provide alternative ways to accomplish tasks.
Finally, it is usually not a great idea to create new custom gestures at all. Obviously there are exceptions to this rule, especially if you're making a game. However, if you are making a custom gesture to perform a task, you should really consider why it is necessary and if there are other ways to implement the feature.
With a good understanding of what the standard set of gestures are, in addition to the usage guidelines set forth by Apple, we are now in a good place to start learning how to implement these gestures in a development environment.
So far, we've discussed the theory of gestures: what they consist of, what they are expected to do, and how to use them. However, we should also take a little bit of time to understand how they work in practice. Even though you'll see that a lot of the basic gestures have an abstracted implementation provided by Apple, it's worth understanding how they work below the surface.
To understand the technical side of gestures, we first need to take a look at how the view hierarchy interprets touches. At the top of the inheritance chain is the simple UIView class. Essentially, a UIView is a rectangle that can draw itself to the screen. However, it can also receive touch events. UIView has an isUserInteractionEnabled property, which tells the view whether it can receive touch information.
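In Swift 3, this property is exposed as isUserInteractionEnabled. As a minimal sketch (the view name here is hypothetical, not from the chapter's project), turning touch delivery off takes a single line:

```swift
import UIKit

// Hypothetical example: a purely decorative overlay that should never
// intercept taps meant for the views underneath it.
let overlayView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

// With interaction disabled, touches pass through to the views below.
overlayView.isUserInteractionEnabled = false
```

This is also how you would temporarily "freeze" a view while an animation or network request is in flight.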
If interaction is enabled, the UIView is alerted every time a touch begins, moves, and ends inside of it. You can override the methods that handle these events in any UIView subclass: touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?), touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?), and touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?).
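As a sketch of what overriding these methods looks like, the hypothetical UIView subclass below simply logs each phase of a touch; the class name and print statements are illustrative, not code from the chapter's project:

```swift
import UIKit

// A minimal sketch of handling raw touch events in a UIView subclass.
class TouchLoggingView: UIView {

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Called when one or more fingers first land on the view.
        if let touch = touches.first {
            print("Touch began at \(touch.location(in: self))")
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Called repeatedly as the fingers move across the screen.
        if let touch = touches.first {
            print("Touch moved to \(touch.location(in: self))")
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Called when the fingers lift off the screen.
        print("Touch ended")
    }
}
```

Recognizing even a simple gesture this way means tracking locations and timing yourself across all three callbacks, which is exactly the burden described next.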
In the very early days of iOS programming, app developers had to override those methods manually and track the movement of touches to identify gestures. As you can imagine, every developer had different ideas about how to implement those gestures, and functionality varied from app to app, breaking the consistency of interactions. It was also difficult to reuse gestures, because you had to program the gesture recognition right into the UIView subclass.
To solve these issues, Apple created the UIGestureRecognizer class. This class provides those same touchesBegan() (and so on) methods, but decouples them from a specific view. This means you can write your gesture code once, and then attach the recognizer to different views. To make it even easier, Apple also provides subclasses for most of the basic gestures; UITapGestureRecognizer, UISwipeGestureRecognizer, and UIPinchGestureRecognizer are some examples.
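To illustrate the decoupling, here is a minimal sketch of attaching a UITapGestureRecognizer to a view controller's view; the class and handleTap(_:) method names are assumptions for the example, not code from Snippets:

```swift
import UIKit

class ExampleViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Create the recognizer once, pointing it at a target/action pair.
        let tapRecognizer = UITapGestureRecognizer(target: self,
                                                   action: #selector(handleTap(_:)))
        tapRecognizer.numberOfTapsRequired = 1

        // The same recognizer setup could be attached to any view.
        view.addGestureRecognizer(tapRecognizer)
    }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        print("View was tapped at \(recognizer.location(in: view))")
    }
}
```

Note that the view itself needs no subclassing at all; the recognizer does the touch tracking and simply reports the result.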
For the next two sections, we'll be looking at the ways that we can use these UIGestureRecognizer classes in an app. Using these provided gesture classes means that adding gestures is not only quick and easy, but also consistent with the way Apple (and users!) expect the gestures to behave.