Hour 17. Using Advanced Touches and Gestures


What You’ll Learn in This Hour:

The multitouch gesture-recognition architecture

How to detect taps, swipes, pinches, and rotations

How to use the built-in shake gesture

Simple ways to add 3D Touch to your multiscene applications


Multitouch and 3D Touch enable applications to use a combination of finger gestures and pressures for operations that would otherwise be hidden behind layers of menus, buttons, and text. From the very first time you use a pinch to zoom in and out on a photo, map, or web page, you realize that’s exactly the right interface for zooming. Nothing is more human than manipulating the environment with your fingers.

iOS provides advanced gesture-recognition capabilities that you can easily implement within your applications. This hour shows you how.

Multitouch Gesture Recognition

While working through this book’s examples, you’ve gotten used to responding to events, such as Touch Up Inside, for onscreen buttons. Multitouch gesture recognition is a bit different. Consider a “simple” swipe. The swipe has direction, it has velocity, and it has a certain number of touch points (fingers) that are engaged. It is impractical for Apple to implement events for every combination of these variables; at the same time, it is extremely taxing on the system to just detect a “generic” swipe event and force you, the developer, to check the number of fingers, direction, and so on each time the event is triggered.

To make life simple, Apple has created gesture-recognizer classes for almost all the common gestures that you may want to implement in your applications, as follows:

Tapping (UITapGestureRecognizer): Tapping one or more fingers on the screen

“Long” pressing (UILongPressGestureRecognizer): Pressing one or more fingers to the screen for a specific period of time

Pinching (UIPinchGestureRecognizer): Pinching to close or expand something

Rotating (UIRotationGestureRecognizer): Sliding two fingers in a circular motion

Swiping (UISwipeGestureRecognizer): Swiping with one or more fingers in a specific direction

Panning (UIPanGestureRecognizer): Touching and dragging

Screen-edge panning (UIScreenEdgePanGestureRecognizer): Touching and dragging, but starting from the edge of the screen

Shaking: Physically shaking the iOS device

In early versions of iOS, developers had to read and recognize low-level touch events to determine whether, for example, a pinch was happening: Are there two points represented on the screen? Are they moving toward each other?

Today you define what type of recognizer you’re looking for, add the recognizer to a view (UIView), and you automatically receive any multitouch events that are triggered. You even receive values such as velocity and scale for gestures such as pinch. Let’s see what this looks like translated into code.


Tip

Shaking is not a multitouch gesture and requires a slightly different approach. Note that it doesn’t have its own recognizer class.


Adding Gesture Recognizers

You can add gesture recognizers to your projects in one of two ways: either through code or visually using the Interface Builder editor. Although using the editor makes life much easier for us, it is still important to understand what is going on behind the scenes. Consider the code fragment in Listing 17.1.

LISTING 17.1 Example of the Tap Gesture Recognizer


1: var tapRecognizer: UITapGestureRecognizer
2: tapRecognizer=UITapGestureRecognizer(target: self, action:"foundTap:")
3: tapRecognizer.numberOfTapsRequired=1
4: tapRecognizer.numberOfTouchesRequired=1
5: tapView.addGestureRecognizer(tapRecognizer)


This example implements a tap gesture recognizer that will look for a single tap from a single finger within a view called tapView. If the gesture is seen, the method foundTap is called.

Line 1 kicks things off by declaring an instance of the UITapGestureRecognizer object, tapRecognizer. In line 2, tapRecognizer is initialized with initWithTarget:action. (Remember that the initWith part is left out in Swift versions of initialization methods.) Working backward, the action is the method that will be called when the tap occurs. Using the action foundTap:, we tell the recognizer that we want to use a method called foundTap to handle our taps. The target we specify, self, is the object where foundTap lives. In this case, it will be whatever object is implementing this code (probably a view controller).

Lines 3 and 4 set two variable properties of the tap gesture recognizer:

numberOfTapsRequired: The number of times the object needs to be tapped before the gesture is recognized

numberOfTouchesRequired: The number of fingers that need to be down on the screen before the gesture is recognized

Finally, line 5 uses the UIView method addGestureRecognizer to add the tapRecognizer to a view called tapView. As soon as this code is executed, the recognizer is active and ready for use, so a good place to implement the recognizer is in a view controller’s viewDidLoad method.

Responding to the event is simple: Just implement the foundTap method. An appropriate method stub for the implementation looks like this:

func foundTap(sender: AnyObject) {
    outputLabel.text="Tapped"
}

What happens when the gesture is found is entirely up to you. You might simply respond to the fact that the gesture took place, or use the parameter provided to the method to get additional details, such as where on the screen the tap happened.

All in all, not too bad, don’t you think? What’s even better? In most cases, you can do almost all of this setup entirely within Interface Builder, as shown in Figure 17.1. The tutorial in this hour shows how to do exactly that.

FIGURE 17.1 Gesture recognizers can be added through Interface Builder.

3D Touch Peek and Pop

3D Touch (a.k.a. Force Touch, before Apple Marketing got ahold of it) provides a new way to interact with devices—starting with the iPhone 6s and 6s+. 3D Touch measures the pressure of your finger by watching for minute bending in the glass on your iPhone. Thankfully, rather than just introduce the technology and leave it up to developers to figure out how to implement it, Apple has added three convenient methods of integrating it into your applications: peek and pop gestures and Quick Actions. The latter you’ll learn more about in Hour 22, “Building Background-Ready Applications.”

Peek and pop refer to actions that applications perform with two levels of pressure. When you want more information about an onscreen object, you push it lightly to “peek” at it without navigating away from where you are. If, while peeking, you decide you want to view the information full screen and interact with it, you push harder, and it “pops” into the foreground. The peek and pop transitions are referred to as the Preview and Commit segues in Xcode.

I realize that many people may not yet have devices that support this feature, but the first time you try it you’ll be delighted at how natural it feels.


By the Way: 3D Touch: Is That Really a Gesture?

In case you’re wondering, Apple really does consider 3D Touch to be a gesture. It may seem odd fitting it in with gestures like swiping and pinching, but if Apple considers it a gesture, and it just-so-happens-to-conveniently-fit-into-this-chapter’s-structure-so-that’s-where-I-wanted-to-put-it-anyway, then it’s a gesture!


Adding 3D Touch Peek and Pop

To implement the 3D Touch peek and pop gestures within code, you’ll need to conform to the UIViewControllerPreviewingDelegate protocol within the view controller that wants to respond to peek and pop gestures. This requires the implementation of two methods: previewingContext:viewControllerForLocation and previewingContext:commitViewController. These methods return a view controller for the peek, and then present the view controller for a pop, respectively.

For the application to know what you want to peek at, you will also need to register a view as being “3D Touch ready” with the method registerForPreviewingWithDelegate:sourceView.

Assume for a moment that you have a button, myButton, that you want to use to preview (peek) at content. You’d first register the button using code like this (possibly within your view controller’s viewDidLoad method):

registerForPreviewingWithDelegate(self, sourceView: myButton)

Next, you implement the two methods for handling 3D Touch gestures on myButton. Listing 17.2 shows a possible implementation that assumes a view controller class named previewController has been created, and that a scene using it has been added to your storyboard with its storyboard ID set to previewController. (This is configured within the Identity Inspector.)

LISTING 17.2 Add the Methods to Handle Peek and Pop


 1: func previewingContext(previewingContext: UIViewControllerPreviewing,
 2:     viewControllerForLocation location: CGPoint) -> UIViewController? {
 3:     let viewController =
 4:         storyboard?
 5:             .instantiateViewControllerWithIdentifier("previewController")
 6:             as? previewController
 7:     return viewController
 8: }
 9:
10: func previewingContext(previewingContext: UIViewControllerPreviewing,
11:     commitViewController viewControllerToCommit: UIViewController) {
12:     showViewController(viewControllerToCommit, sender: self)
13: }


Looks (of code) can be deceiving. This listing is 13 lines long, but the entire implementation of both methods is just 3 (long) lines!

In method previewingContext:viewControllerForLocation, lines 3–6 instantiate previewController from the current storyboard using the storyboard ID previewController.

The view controller is returned in line 7. To cancel the peek action (if something went wrong, or there isn’t anything to preview), you could return nil instead. In addition, you can make use of the method’s location parameter to determine the x and y coordinates where the user touched.

The code for previewingContext:commitViewController is a single line (12) that uses the showViewController method to transition the display to the new view controller. How does it know what view controller to use? It receives the peek view controller in the variable viewControllerToCommit; we just need to display the controller and we’re done.

Now that I’ve shown you how simple doing a peek and pop in code can be, let me blow your mind with Figure 17.2. The Attributes Inspector can be used to configure segues so that they automatically implement peek and pop—no coding needed.

FIGURE 17.2 Peek and pop can be implemented directly in Interface Builder.

In a few minutes, you’ll see how 3D Touch peek and pop gestures can be retrofitted into some of our existing applications with just a click here and there.


Caution: 3D Out of Touch

Apple, Apple, Apple... You give us a new feature, and no way to test it. Despite the ability to simulate Force Touch in the Apple Watch Simulator, and the availability of Force Touch trackpads for Macs, there is currently no way to test 3D Touch within the iOS Simulator. Hopefully, this will change soon.


Using Gesture Recognizers

As people become more comfortable with touch devices, the use of gestures becomes almost natural—and expected. Applications that perform similar functions are often differentiated by their user experience, and a fully touch-enabled interface can be the deciding factor between a customer downloading your app and passing it by.

Perhaps the most surprising element of adding gestures to applications is just how easy it is. I know I say that often throughout the book, but gesture recognizers are one of those rare features that “just works.” Follow along and find out what I mean.

Implementation Overview

In this hour’s application, which we’ll name Gestures, you implement five gesture recognizers (tap, swipe, pinch, rotate, and shake), along with the feedback those gestures prompt. Each gesture updates a text label with information about the gesture that has been detected. Pinch, rotate, and shake take things a step further by scaling, rotating, or resetting an image view in response to the gestures.

To provide room for gesture input, the application displays a screen with four embedded views (UIView), each assigned a different gesture recognizer directly within the storyboard scene. When you perform an action within one of the views, it calls a corresponding action method in our view controller to update a label with feedback about the gesture, and depending on the gesture type, updates an onscreen image view (UIImageView), too.

Figure 17.3 shows the final application.

FIGURE 17.3 The application detects and acts upon a variety of gestures.


Caution: Auto Layout: Our Frenemy

We have to be a bit clever in this application because image views that we add in Interface Builder are subject to Apple’s constraint system. Ideally, we want to be able to take advantage of the Auto Layout system to position our image view in a nice default position, regardless of our screen size (exactly what you learned in the preceding hour). Once the application launches, however, we don’t want any of the constraints enforced because we want to be able to resize and rotate the image view using our gestures.

You can take care of this in any number of ways, including programmatically finding and removing constraints with the UIView method removeConstraints. The approach we take, however, is to add an image view in Interface Builder so that we can position it visually and then replace it with our own constraint-free image view right after the application launches. It’s a relatively simple way to take advantage of Auto Layout for the initial interface object layout and then gain the flexibility of working with a constraint-free object as the application executes.
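For reference, a minimal sketch of the constraint-removal alternative might look like the following (not used in this tutorial; the filtering logic is our own assumption about where the storyboard stores the relevant constraints):

// Constraints relating two views live on their common ancestor, so filter
// the scene view's constraints for any that mention imageView
let related = view.constraints.filter {
    $0.firstItem === imageView || $0.secondItem === imageView
}
view.removeConstraints(related)
// Width/height constraints are stored on the image view itself
imageView.removeConstraints(imageView.constraints)
// Opt the view back in to old-style frame-based layout
imageView.translatesAutoresizingMaskIntoConstraints = true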


Setting Up the Project

Start Xcode and create a new single-view iOS application called Gestures. This project requires quite a few outlets and actions, so be sure to follow the setup closely. You’ll also be making connections directly between objects in Interface Builder. So, even if you’re used to the approach we’ve taken in other projects, you might want to slow down for this one.

Adding the Image Resource

Part of this application’s interface is an image that can rotate or scale up and down. We use this to provide visual feedback to users based on their gestures. Included with this hour’s project is an Images folder containing a file named flower.png. Open the Assets.xcassets asset catalog in your project and drag the Images folder into the column on the left of the catalog.

Planning the Variables and Connections

For each touch gesture that we want to sense, we need a view where it can take place. Often, this would be your main view. For the purpose of demonstration, however, we will add four UIViews to our main view that will each have a different associated gesture recognizer. Surprisingly, none of these require outlets, because we’ll connect the recognizers to them directly in Interface Builder.

We do, however, need two outlets, outputLabel and imageView, instances of the classes UILabel and UIImageView, respectively. The label is used to provide text feedback to the user, while the image view shows visual feedback to the pinch and rotate gestures.

When the application senses a gesture within one of the four views, it needs to invoke an action method that can interact with the label and image. We will connect the gesture recognizers to methods called foundTap, foundSwipe, foundPinch, and foundRotation.


Note

Notice that we don’t mention the shake gesture here? Even though we will eventually add shake recognition to this project, it will be added by implementing a very specific method in our view controller, not through an arbitrary action method that we define upfront.


Adding a Variable Property for the Image View Size

When our gesture recognizers resize or rotate the image view in our user interface (UI), we want to be able to reset it to its default position and size. To make this happen, we need to “know” in our code what the default position for the image was. View positioning and sizing are described using a data structure (not an object) called a CGRect that contains four values: x and y coordinates (origin.x and origin.y), and width and height (size.width and size.height). We will add a variable property to the project that, when the application first launches, stores the size and location of the image view (the CGRect of the view) we added in Interface Builder. We’ll name this originalRect.
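For example, here’s a hypothetical CGRect describing a 100×50 view whose upper-left corner sits at (10, 20):

let frame = CGRectMake(10.0, 20.0, 100.0, 50.0)
// frame.origin.x == 10.0, frame.origin.y == 20.0
// frame.size.width == 100.0, frame.size.height == 50.0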

Open your ViewController.swift file and add the following line after the class statement:

var originalRect: CGRect!

The originalRect variable property is declared and ready to be used in our implementation, but first we need an interface.

Designing the Interface

Open the Main.storyboard file, change to an appropriate simulated device (or use the Auto Layout/Size Class techniques you learned in the preceding hour), and make room in your workspace. It’s time to create our UI.

To build the interface, start by dragging four UIView instances to the main view. Size the first to a small rectangle in the upper-left portion of the screen; it will capture taps. Make the second a long rectangle beside the first (for detecting swipes). Size the other two views as large rectangles below the first two (for pinches and rotations). Use the Attributes Inspector (Option-Command-4) to set the background color of each view to be something unique.


Tip

The views you are adding are convenient objects that we can attach gestures to. In your own applications, you can attach gesture recognizers to your main application view or the view of any onscreen object.



Tip

Gesture recognizers work based on the starting point of the gesture, not where it ends. In other words, if a user uses a rotation gesture that starts in a view but ends outside the view, it will work fine. The gesture won’t “stop” just because it crosses a view’s boundary.

For you, the developer, this is a big help for making multitouch applications that work well on a small screen.


Next, drag labels into each of the four views. The first label should read Tap Me!. The second should read Swipe Me!. The third label should read Pinch Me!. The fourth label should read Rotate Me!.

Drag a fifth UILabel instance to the main view, and center it at the top of the screen. Use the Attributes Inspector to set it to align center. This will be the label we use to provide feedback to the user. Change the label’s default text to Do something!.

Finally, add a UIImageView to the layout, and then position it in an appropriately attractive location at the bottom center of the scene; use Auto Layout constraints if you so desire (see Figure 17.4). Remember that we will not actually be using this image view to display gesture feedback; we want it solely for positioning. So, there is no need to set a default image for the image view.

FIGURE 17.4 Size and position the UIImageView similar to what is shown here.

With the view finished, in most projects we start connecting our interface to our code through outlets and actions—but not this hour. Before we can create our connections, we need to add the gesture recognizers to the storyboard.


Tip

We’re about to do a bunch of dragging and dropping of objects onto the UIViews that you just created. If you often use the document outline to refer to the objects in your view, you may want to use the Label field of the Document group in the Identity Inspector (Option-Command-3) to give them more meaningful names than the default View label they appear with. You can also edit the names directly in the document outline by clicking to select them, then pressing return.

Labels are arbitrary and do not affect the program’s operation at all.


Adding Gesture Recognizers to Views

As you learned earlier, one way to add a gesture recognizer is through code. You initialize the recognizer you want to use, configure its parameters, and then add it to a view and provide a method it will invoke if a gesture is detected. Alternatively, you can drag and drop from the Interface Builder Object Library and barely write any code. We’re going to do this now.

Make sure that Main.storyboard is open and that the document outline is visible.

The Tap Recognizer

Our first step is to add an instance of the UITapGestureRecognizer object to our project. Search the Object Library for the tap gesture recognizer and drag and drop it onto the UIView instance in your project that is labeled Tap Me!, as shown in Figure 17.5. The recognizer will appear as an object at the bottom of the document outline, regardless of where you drop it.

FIGURE 17.5 Drag the recognizer onto the view that will use it.


Caution: Everything Is a View

Be careful not to drag the recognizer onto the label within the view. Remember that every onscreen object is a subclass of UIView, so you could potentially add a gesture recognizer to the label rather than to the intended view. You might find it easier to target the views in the document outline rather than in the visual layout.


Through the simple act of dragging the tap gesture recognizer into the view, you’ve created a gesture-recognizer object and added it to that view’s gesture recognizers. (A view can have as many as you want.)

Next, you need to configure the recognizer so that it knows what type of gesture to look for. Tap gesture recognizers have two attributes to configure:

Taps: The number of times the object needs to be tapped before the gesture is recognized

Touches: The number of fingers that need to be down on the screen before the gesture is recognized

In this example, we’re defining a tap as one finger tapping the screen once, so we define a single tap with a single touch. Select the tap gesture recognizer, and then open the Attributes Inspector (Option-Command-4), as shown in Figure 17.6.

FIGURE 17.6 Use the Attributes Inspector to configure your gesture recognizers.

Set both the Taps and Touches fields to 1 (or go nuts and experiment; this is a perfect time to play with the recognizer’s settings). Just like that, the first gesture recognizer is added to the project and configured. We still need to connect it to an action a bit later, but first we need to add the other recognizers.


Tip

If you look at the connections on the UITapGestureRecognizer object or the view that you dropped it onto, you’ll see that the view references an outlet collection called Gesture Recognizers. An outlet collection is an array of outlets that makes it easy to refer to multiple similar objects simultaneously. If you add more than one gesture recognizer to a view, each recognizer is referenced by the same outlet collection.
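The same collection is reachable in code through a view’s gestureRecognizers variable property. As a quick sketch (assuming a hypothetical tapView outlet for one of the views):

// gestureRecognizers is an optional array ([UIGestureRecognizer]?)
if let recognizers = tapView.gestureRecognizers {
    for recognizer in recognizers {
        print("Attached recognizer: \(recognizer)")
    }
}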


The Swipe Recognizer

You implement the swipe gesture recognizer in almost the same manner as the tap recognizer. Instead of being able to choose the number of taps, however, you can determine in which direction the swipes can be made—up, down, left, or right—as well as the number of fingers (touches) that must be down for the swipe to be recognized.

Again, use the Object Library to find the swipe gesture recognizer (UISwipeGestureRecognizer) and drag a copy of it into your view, dropping it on top of the view that contains the Swipe Me! label. Next, select the recognizer and open the Attributes Inspector to configure it, as shown in Figure 17.7. For this tutorial, I configured the swipe gesture recognizer to look for swipes to the right that are made with a single finger.

FIGURE 17.7 Configure the swipe direction and the number of touches required.


Note

If you want to recognize and react to different swipe directions, you must implement multiple swipe gesture recognizers. It is possible, in code, to ask a single swipe gesture recognizer to respond to multiple swipe directions, but it cannot differentiate between the directions.
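If you were adding them in code rather than in Interface Builder, a sketch of two directional recognizers might look like this (assuming a hypothetical swipeView outlet and two handler methods of our own naming):

let leftSwipe = UISwipeGestureRecognizer(target: self, action: "foundLeftSwipe:")
leftSwipe.direction = .Left
swipeView.addGestureRecognizer(leftSwipe)

let rightSwipe = UISwipeGestureRecognizer(target: self, action: "foundRightSwipe:")
rightSwipe.direction = .Right
swipeView.addGestureRecognizer(rightSwipe)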


The Pinch Recognizer

A pinch gesture is triggered when two fingers move closer together or farther apart within a view, and it is often used to make something smaller or larger, respectively. Adding a pinch gesture recognizer requires even less configuration than taps or swipes because the gesture itself is already well defined. The implementation of the action that interprets a pinch, however, will be a bit more difficult because we are also interested in “how much” a user pinched (called the scale of the pinch) and how fast (the velocity), rather than just wanting to know that it happened. More on that in a few minutes.

Using the Object Library, find the pinch gesture recognizer (UIPinchGestureRecognizer) and drag it onto the view that contains the Pinch Me! label. No other configuration is necessary.


Tip

If you look at the Attributes Inspector for a pinch, you’ll see that you can set a scale attribute that corresponds to a scale variable property on the object. The scale, by default, starts at 1. Imagine you move your fingers apart to invoke a pinch gesture recognizer. If you move your fingers twice as far apart as they were, the scale becomes 2 (1 × 2). If you repeat the gesture, moving them twice as far apart again, it becomes 4 (2 × 2). In other words, the scale changes using its previous reading as a starting point.

Usually you want to leave the default scale value at 1, but be aware that you can reset the default in the Attributes Inspector if need be.


The Rotation Recognizer

A rotation gesture is triggered when two fingers move opposite one another as if rotating around a circle. Imagine turning a doorknob with two fingers on the top and bottom and you’ll get the idea of what iOS considers a valid rotation gesture. As with a pinch, the rotation gesture recognizer requires no configuration; all the work occurs in interpreting the results—the rotation (in radians) and the speed (velocity) of the rotation.

Find the rotation gesture recognizer (UIRotationGestureRecognizer) and drag it onto the view that contains the Rotate Me! label. You’ve just added the final object to the storyboard.


Tip

Just like the pinch gesture recognizer’s scale, the rotation gesture recognizer has a rotation variable property that you can set in the Attributes Inspector. This value, representing the amount of rotation in radians, starts at 0 and changes with each successive rotation gesture. If you want, you can override the initial starting rotation of 0 radians with any value you choose. Subsequent rotation gestures start from the value you provide.



Gesture Overload

Be mindful of the built-in iOS gestures when you start using gestures in your own applications. Apple has been increasingly adding gestures throughout iOS, including bottom and side swipes. If your gesture conflicts with those provided by the system, the user experience will likely be poor.


Creating and Connecting the Outlets and Actions

To respond to gestures and access our feedback objects from the main view controller, we need to establish the outlets and actions we defined earlier.

Let’s review what we need, starting with the outlets:

The image view (UIImageView): imageView

The label for providing feedback (UILabel): outputLabel

And the actions:

Respond to a tap gesture: foundTap

Respond to a swipe gesture: foundSwipe

Respond to a pinch gesture: foundPinch

Respond to a rotation gesture: foundRotation

Prepare your workspace for making the connections. Open the Main.storyboard file and switch to the assistant editor mode with ViewController.swift visible. Because you will be dragging from the gesture recognizers in your scene, make sure that the document outline is showing (Editor, Show Document Outline) or that you can tell the difference between them in the object dock below your view.

Adding the Outlets

Control-drag from the Do something! label to just below the variable property originalRect that you added earlier. When prompted, create a new outlet called outputLabel, as shown in Figure 17.8. Repeat the process for the image view, naming it imageView.

FIGURE 17.8 Connect the label and image view.

Adding the Actions

Connecting the gesture recognizers to the action methods that we’ve identified works as you probably imagine, but with one difference. Usually when you connect an object to an action, you’re connecting a particular event on that object—such as Touch Up Inside, for buttons. In the case of a gesture recognizer, you are actually making a connection from the recognizer’s “selector” to a method. Recall in the earlier code example that the selector is just the name of the method that should be invoked if a gesture is recognized.


Tip

Some gesture recognizers (tap, swipe, and long press) can also trigger segues to other storyboard scenes by using the Storyboard Segues section in the Connections Inspector. You learned about multiscene storyboards in Hour 11, “Implementing Multiple Scenes and Popovers.”


To connect the gesture recognizer to an action method, just Control-drag from the gesture recognizer entry in the document outline to the ViewController.swift file. Do this now with the tap gesture recognizer, targeting just below the variable properties you defined earlier. When prompted, configure the connection as an action with the name foundTap, as shown in Figure 17.9.

FIGURE 17.9 Connect the gesture recognizer to a new action.

Repeat this process for each of the other gesture recognizers—connecting the swipe recognizer to foundSwipe, the pinch recognizer to foundPinch, and the rotation recognizer to foundRotation. To verify your connections, select one of the recognizers (here, the tap recognizer) and view the Connections Inspector (Option-Command-6). You should see the action defined in Sent Actions and the view that uses the recognizer referenced in the Referencing Outlet Collections section, as shown in Figure 17.10.

FIGURE 17.10 Confirm your connections in the Connections Inspector.


Tip

Hover your mouse over a given connection in the Connections Inspector to see that item highlighted in your scene (shown in Figure 17.10). This is a quick way of verifying that your gestures are connected to the right views.


We’re done with our interface and done adding gesture recognizers to our project; now let’s make them do something.

Implementing the Application Logic

To begin the implementation, we address our image view problem: We need to replace the image view that gets added through Interface Builder with one we create programmatically. We also grab the position and size of the image view from its frame variable property (a CGRect) and store it in the originalRect variable property. Where will this happen? In the view controller method viewDidLoad, which is called as soon as the interface loads.

Replacing the Image View

Make sure that the standard editor mode is selected, and then open the ViewController.swift file and update the viewDidLoad method, as shown in Listing 17.3.

LISTING 17.3 Implementing the viewDidLoad Method


 1: override func viewDidLoad() {
 2:     super.viewDidLoad()
 3:
 4:     originalRect=imageView.frame;
 5:     var tempImageView: UIImageView
 6:     tempImageView=UIImageView(image:UIImage(named: "flower.png"))
 7:     tempImageView.frame=originalRect
 8:     view.addSubview(tempImageView)
 9:     self.imageView=tempImageView
10: }


Line 4 grabs the frame from the image view that we added in Interface Builder. This is a data structure of the type CGRect and consists of four floating-point values: origin.x, origin.y, size.width, and size.height. The original values are stored in originalRect.

Lines 5–6 declare and initialize a new UIImageView (tempImageView) using the flower.png image that we added to our project earlier.

In line 7, we set the frame of the new image view to the frame of the original image view, conveniently stored in originalRect. That finishes up the configuration of the constraint-free image view; it is added to the view controller’s main view (the scene) with the addSubview method in line 8.

As a final step in swapping the image views, line 9 reassigns the imageView variable property to the new tempImageView. We can now access the new image view through the variable property that originally pointed to the image view added in Interface Builder.

Now, let’s move on to the gesture recognizers, beginning with the tap recognizer. What you’ll quickly discover is that after you’ve added one recognizer, the pattern is very, very similar for the others. The only difference is the shake gesture, which is why we’re saving that for last.

Responding to the Tap Gesture Recognizer

Responding to the tap gesture recognizer is just a matter of implementing the foundTap method. Update the method stub in the view controller (ViewController.swift) with the implementation shown in Listing 17.4.

LISTING 17.4 Implementing the foundTap Method


@IBAction func foundTap(sender: AnyObject) {
    outputLabel.text="Tapped"
}


This method doesn’t need to process input or do anything other than provide some indication that it has run. Setting the outputLabel’s text variable property to "Tapped" should suffice nicely.

Ta da! Your first gesture recognizer is done. We’ll repeat this process for the other four, and we’ll be finished before you know it.


Tip

If you want to get the coordinate where a tap gesture (or a swipe) takes place, you add code like this to the gesture handler (replacing <the view> with a reference to the recognizer’s view):

var location: CGPoint = (sender as!
    UITapGestureRecognizer).locationInView(<the view>)

This creates a simple structure named location, with members x and y, accessible as location.x and location.y.


Responding to the Swipe Recognizer

We respond to the swipe recognizer in the same way we did with the tap recognizer, by updating the output label to show that the gesture was recognized. Implement the foundSwipe method as shown in Listing 17.5.

LISTING 17.5 Implementing the foundSwipe Method


@IBAction func foundSwipe(sender: AnyObject) {
    outputLabel.text="Swiped"
}


So far, so good. Next up, the pinch gesture. This requires a bit more work because we’re going to use the pinch to interact with our image view.

Responding to the Pinch Recognizer

Taps and swipes are simple gestures; they either happen or they don’t. Pinches and rotations are slightly more complex, returning additional values to give you greater control over the user interface. A pinch, for example, includes a velocity variable property (how quickly the pinch happened) and scale (a fraction that is proportional to change in distance between your fingers). If you move your fingers 50% closer together, the scale is .5, for example. If you move them twice as far apart, it is 2.

You’ve made it to the most complex piece of code in this hour’s lesson. The foundPinch method accomplishes several things. It resets the UIImageView’s rotation (just in case it gets out of whack when we set up the rotation gesture), creates a feedback string with the scale and velocity values returned by the recognizer, and actually scales the image view so that the user receives immediate visual feedback.

Implement the foundPinch method as shown in Listing 17.6.

LISTING 17.6 Implementing the foundPinch Method


 1: @IBAction func foundPinch(sender: AnyObject) {
 2:     var recognizer: UIPinchGestureRecognizer
 3:     var feedback: String
 4:     var scale: CGFloat
 5:
 6:     recognizer=sender as! UIPinchGestureRecognizer
 7:     scale=recognizer.scale
 8:     imageView.transform = CGAffineTransformMakeRotation(0.0)
 9:
10:     feedback=String(format: "Pinched, Scale: %1.2f, Velocity: %1.2f",
11:         Float(recognizer.scale),Float(recognizer.velocity))
12:     outputLabel.text=feedback
13:     imageView.frame = CGRectMake(self.originalRect.origin.x,
14:         originalRect.origin.y,
15:         originalRect.size.width*scale,
16:         originalRect.size.height*scale);
17: }


Let’s walk through this method to make sure that you understand what’s going on. Lines 2–4 declare a reference to a pinch gesture recognizer (recognizer), a string object (feedback), and a CGFloat value (scale). These are used to interact with our pinch gesture recognizer, store feedback for the user, and hold the scaling value returned by the pinch gesture recognizer, respectively.

Line 6 takes the incoming sender object of the type AnyObject and casts it as a UIPinchGestureRecognizer, which can then be accessed through the recognizer variable. The reason we do this is simple. When you created the foundPinch action by dragging the gesture recognizer into your ViewController.swift file, Xcode wrote the method with a parameter named sender of the generic “handles any object” type AnyObject. Xcode does this even though the sender will always be, in this case, an object of type UIPinchGestureRecognizer. Line 6 just gives us a convenient way of accessing the object as the type it really is.

Line 7 sets scale to the recognizer’s scale variable property.

Line 8 resets the imageView object to a rotation of 0.0 (no rotation at all) by setting its transform variable property to the transformation returned by the Core Graphics CGAffineTransformMakeRotation function. This function, when passed a value in radians, returns the necessary transformation to rotate a view.

Lines 10–11 initialize the feedback string to show that a pinch has taken place and output the values of the recognizer’s scale and velocity variable properties—after converting them from CGFloat data structures to floating-point values. Line 12 sets the outputLabel in the UI to the feedback string.

For the scaling of the image view itself, lines 13–16 do the work. All that needs to happen is for the imageView object’s frame to be redefined to the new size. To do this, we can use CGRectMake to return a new frame rectangle based on a scaled version of the CGRect stored in the original image view position: originalRect. The top-left coordinates (origin.x, origin.y) stay the same, but we multiply size.width and size.height by the scale factor to increase or decrease the size of the frame according to the user’s pinch.

Building and running the application will now let you enlarge (even beyond the boundaries of the screen) or shrink the image using a pinch gesture within the Pinch Me! view, as shown in Figure 17.11.

FIGURE 17.11 Enlarge or shrink the image in a pinch (ha ha).


Note

If you don’t want to downcast the sender variable to use it as a gesture recognizer, you can also edit Xcode’s method declarations to include the exact type being passed. Just change the method declaration from

@IBAction func foundPinch(sender: AnyObject) {

to

@IBAction func foundPinch(sender: UIPinchGestureRecognizer) {

If you do so, you’ll be able to access sender directly as an instance of UIPinchGestureRecognizer.


Responding to the Rotation Recognizer

The last multitouch gesture recognizer that we’ll add is the rotation gesture recognizer. Like the pinch gesture, rotation returns some useful information that we can apply visually to our onscreen objects, notably velocity and rotation. The rotation returned is the number of radians that the user has rotated his or her fingers, clockwise or counterclockwise.


Tip

Most of us are comfortable talking about rotation in “degrees,” but the Cocoa classes usually use radians. Don’t worry. It’s not a difficult translation to make. If you want, you can calculate degrees from radians using the following formula:

Degrees = Radians × 180 / Pi

There’s not really any reason we need this now, but in your own applications, you might want to provide a degree reading to your users.
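If you do, a tiny helper like this hypothetical one handles the conversion:

func degreesFromRadians(radians: CGFloat) -> CGFloat {
    // Degrees = Radians × 180 / Pi
    return radians * 180.0 / CGFloat(M_PI)
}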


I’d love to tell you how difficult it is to rotate a view and about all the complex math involved, but I pretty much gave away the trick to rotation in the foundPinch method earlier. A single line of code will set the UIImageView’s transform variable property to a rotation transformation and visually rotate the view. Of course, we also need to provide a feedback string to the user, but that’s not nearly as exciting, is it?

Add the foundRotation method in Listing 17.7 to your ViewController.swift file.

LISTING 17.7 Adding the foundRotation Method


 1: @IBAction func foundRotation(sender: AnyObject) {
 2:     var recognizer: UIRotationGestureRecognizer
 3:     var feedback: String
 4:     var rotation: CGFloat
 5:
 6:     recognizer=sender as! UIRotationGestureRecognizer
 7:     rotation=recognizer.rotation
 8:
 9:     feedback=String(format: "Rotated, Radians: %1.2f, Velocity: %1.2f",
10:         Float(recognizer.rotation),Float(recognizer.velocity))
11:     outputLabel.text=feedback
12:     imageView.transform = CGAffineTransformMakeRotation(rotation)
13: }


Again, we begin by declaring a reference to a gesture recognizer (recognizer), a string (feedback), and a CGFloat value (rotation), in lines 2–4.

Line 6 takes the incoming sender object of the type AnyObject and casts it as a UIRotationGestureRecognizer, which can then be accessed through the recognizer variable.

Line 7 sets the rotation value to the recognizer’s rotation variable property. This is the rotation in radians detected in the user’s gesture.

Lines 9–10 create the feedback string showing the radians rotated and the velocity of the rotation, and line 11 sets the output label to the string.

Line 12 handles the rotation itself, creating a rotation transformation and applying it to the imageView object’s transform variable property.


Note

The foundPinch method can also be implemented by updating the transform variable property for imageView and using the Core Graphics function CGAffineTransformMakeScale. In essence, you could replace lines 13–16 of foundPinch with a single line:

imageView.transform = CGAffineTransformMakeScale(scale, scale)

Why did we update the frame of the imageView instead? Two reasons. First, because it gives you experience with two approaches to manipulating a view. Second, because setting a transformation for the image view doesn’t really change the view’s underlying frame; it changes the appearance instead. If you really want the view’s size and location to change (not just the appearance of its size and location), applying a transformation isn’t the way to go.


Run and test your application now. You should be able to freely spin the image view using a rotation gesture in the rotate view, as shown in Figure 17.12.

FIGURE 17.12 Spin the image view using the rotation gesture.

Although it might seem like we’ve finished, we still need to cover one more gesture: a shake.

Implementing the Shake Recognizer

Dealing with a shake is a bit different from the other gestures covered this hour. We must intercept a UIEvent of the type UIEventType.Motion. To do this, our view controller or view must be the first responder in the responder chain and must implement the motionEnded:withEvent method.

Let’s tackle these requirements one at a time.

Becoming a First Responder

For our view controller to be a first responder, we have to allow it through a method called canBecomeFirstResponder that does nothing but return true, and then ask for first responder status when the view controller loads its view. Start by adding the new method canBecomeFirstResponder, shown in Listing 17.8, to your ViewController.swift implementation file.

LISTING 17.8 Enabling the Ability to Be a First Responder


override func canBecomeFirstResponder() -> Bool {
    return true
}


Next, we need our view controller to become the first responder by sending it the becomeFirstResponder message as soon as it has loaded its view. Update the ViewController.swift viewDidLoad method to do this, as shown in Listing 17.9.

LISTING 17.9 Asking to Become a First Responder


override func viewDidLoad() {
    super.viewDidLoad()
    becomeFirstResponder()
    originalRect=imageView.frame;
    var tempImageView: UIImageView
    tempImageView=UIImageView(image:UIImage(named: "flower.png"))
    tempImageView.frame=originalRect
    view.addSubview(tempImageView)
    self.imageView=tempImageView
}


Our view controller is now prepared to become the first responder and receive the shake event. All we need to do now is implement motionEnded:withEvent to trap and react to the shake gesture itself.

Responding to a Shake Gesture

To react to a shake, implement the motionEnded:withEvent method, as shown in Listing 17.10.

LISTING 17.10 Responding to a Shake Gesture


1: override func motionEnded(motion: UIEventSubtype,
2:     withEvent event: UIEvent?) {
3:     if motion==UIEventSubtype.MotionShake {
4:         outputLabel.text="Shaking things up!"
5:         imageView.transform=CGAffineTransformIdentity
6:         imageView.frame=originalRect
7:     }
8: }


First things first: In line 3, we check to make sure that the motion value we received (a value of type UIEventSubtype) is, indeed, a shake event. To do this, we just compare it to the constant UIEventSubtype.MotionShake. If they match, the user just finished shaking the device.

Lines 4–6 react to the shake by setting the output label, rotating the image view back to its default orientation, and setting the image view’s frame back to the original size and location stored in our originalRect variable property. In other words, shaking the device will reset the image to its default state. Pretty nifty, huh?

Building the Application

You can now run the application and use all the multitouch gestures that we implemented this hour. Try scaling the image through a pinch gesture. Shake your device to reset it to the original size. Scale and rotate the image, tap, swipe—everything should work exactly as you’d expect and with a surprisingly minimal amount of coding.

Now that you’ve mastered multitouch, let’s take things to the third dimension with 3D Touch.

Implementing 3D Touch Gestures

You know the drill: For each tutorial, I include an overview, and then describe the different pieces needed to create a project, design the interface, and implement the logic. Not this time, buddy! To implement 3D Touch, you’ll spend less time implementing than the amount of time it has taken you to read this paragraph.

Implementation Overview

As of Xcode 7.1, peek and pop gestures can be added to your view-transitioning segues by selecting the existing segue (referred to as the Action segue), and then using the Attributes Inspector to check the box beside Peek and Pop, as shown in Figure 17.13. This tells the application that we want to define a Preview segue (peek) and a Commit segue (pop).

FIGURE 17.13 Check a box, and you’ve got 3D Touch peek and pop gestures implemented.

By default, the Commit segue (the pop) is the same as the Action segue. Similarly, the Preview segue (peek) is the same as the Commit segue. If this seems confusing, think about it this way: You define segues to take you to a new scene. A peek preview shows you what to expect when you activate a link or press a button (without actually taking you there). Popping the preview, logically, should take you to the same place you’d go to if you didn’t preview it in the first place. In other words, the Commit segue and the Action segue should be the same, and, chances are, the preview should be the same as well.

If you want to be clever and break the mold for your peek and pop views, you can set the Preview and Commit segues to the storyboard ID of a view controller of your choice by setting the corresponding pop-up menus to Custom and then completing the additional fields.

After you’ve configured a segue for peek and pop, you’ll see a visual indicator of this change as a dashed circle that appears around the segue icon in the storyboard, as shown in Figure 17.14.

FIGURE 17.14 A dashed circle around the segue icon indicates that peek and pop support has been activated.

Simple, don’t you think? The most expedient way for me to prove how simple it can be is to modify two of our existing applications to support these 3D Touch features.

Modifying ModalEditor

The first project that we’ll modify is the Hour 11 ModalEditor (non-popover version). When the user pushes on the Edit button, they’ll see a preview of the edit screen. Push a bit harder, and the editor pops onto the screen. Not very useful, but nifty nonetheless.

Go ahead and make a copy of the Hour 11 ModalEditor (non-popover version) project and open it. Then follow these steps:

1. Select the Storyboard file and find the segue that connects the initial view and the editor view. Select it and open the Attributes Inspector.

2. Change the transition for the segue to Default. The transition type I used (Partial Curl) is not compatible with peek and pop because it uses the full screen.

3. Click the check box beside Peek and Pop. Your display should now look similar to Figure 17.15. You’re done.

FIGURE 17.15 Activate peek and pop for the segue.

You can now run the application on an iPhone that supports 3D Touch (currently the iPhone 6s and 6s+). Try pushing firmly on the Edit button. A peek preview of the edit screen appears, as demonstrated in Figure 17.16.

FIGURE 17.16 Peeking at the editor screen.

Push a little harder and the editor pops onto the screen. No fuss, no muss, and no code. Let’s repeat this process with something a bit more interesting: a table view.

Modifying FlowerDetail

The second project that we’ll 3D Touch-enable is Hour 14’s FlowerDetail. This time, when the user pushes on one of the flowers in the table, a preview will appear showing the Wikipedia page describing the flower. Push harder, and the Wikipedia page goes full screen, and the user can interact with the web view.

Begin by making a copy of the Hour 14 FlowerDetail project and then open it. Follow along once the workspace appears:

1. Select the Storyboard file and find the segue that connects the Master table view (Flower Type) and the Navigation controller (this is set as the Show Detail segue). Select it and open the Attributes Inspector.

2. Click the check box beside Peek and Pop. Your display should look similar to Figure 17.17.

FIGURE 17.17 Activate peek and pop for the Show Detail segue.

Unfortunately, pushing on a table cell isn’t the same as tapping to select it, so Apple’s code can’t identify what cell you’re pushing on. If you attempt to run the application now, the peek and pop gestures will take you to a blank screen.

To fix the problem, open MasterViewController.swift and look at the second line in the method prepareForSegue. Change it from this:

if let indexPath = self.tableView.indexPathForSelectedRow {

To this:

if let indexPath = self.tableView.indexPathForCell(sender as! UITableViewCell) {

This change grabs the appropriate cell by using the sender object (whatever the user is pressing on) rather than relying on iOS reporting the cell as being selected.

After making the change, run the application on a 3D Touch-capable device. Pushing firmly on one of the table cells should result in behavior similar to what is shown in Figure 17.18.

FIGURE 17.18 Peeking at the details of a flower.

This second example is a bit more useful than the first, but again demonstrates how simple it is to support 3D Touch in your applications. I’d be very surprised if Apple didn’t modify the Master-Detail application template to handle 3D Touch gestures “out of the box.”

Further Exploration

In addition to the multitouch gestures discussed this hour, you should be able to immediately add three other recognizers to your apps: UILongPressGestureRecognizer, UIPanGestureRecognizer, and UIScreenEdgePanGestureRecognizer. The UIGestureRecognizer class is the parent to all the gesture recognizers that you’ve learned about in this lesson and offers additional base functionality for customizing gesture recognition.

You also might want to learn more about the lower-level handling of touches on iOS. See the “Event Handling” section of the Data Management iOS documentation and the UITouch class for more information. The UITouch class can even let you measure the amount of force of a given “touch” on a 3D Touch-enabled device.
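As a rough sketch of what that might look like (assuming iOS 9 and a 3D Touch-capable device), you could override touchesMoved:withEvent in a view or view controller and read each touch’s force:

override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    // maximumPossibleForce is 0 on hardware without 3D Touch
    guard let touch = touches.first where touch.maximumPossibleForce > 0
        else { return }
    // force runs from 0 up to maximumPossibleForce on supported devices
    let normalized = touch.force / touch.maximumPossibleForce
    print(String(format: "Touch force: %1.2f of maximum", Float(normalized)))
}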

We humans do a lot with our fingers, such as draw, write, play music, and more. Each of these possible gestures has been exploited to great effect in third-party applications. Explore the App Store to get a sense of what’s been done with the iOS multitouch gestures.

Be sure to look at the SimpleGestureRecognizers tutorial project, found within the Xcode documentation. This project provides many additional examples of implementing gestures on the iOS platform and demonstrates how gestures can be added through code. Although the Interface Builder approach to adding gesture recognizers can cover many common scenarios, it’s still a good idea to know how to code them by hand.

Summary

In this hour, we’ve given the gesture recognizer architecture a good workout. Using the gesture recognizers provided through iOS, you can easily recognize and respond to taps, swipes, pinches, rotations, and more—without any complex math or programming logic.

You also learned how to make your applications respond to shaking: Just make them first responders and implement the motionEnded:withEvent method. Your ability to present your users with interactive interfaces just increased dramatically.

Q&A

Q. Why don’t the rotation/pinch gestures include configuration options for the number of touches?

A. The gesture recognizers are meant to recognize common gestures. Although it is possible that you could manually implement a rotation or pinch gesture with multiple fingers, it wouldn’t be consistent with how users expect their applications to work and isn’t included as an option with these recognizers.

Workshop

Quiz

1. The rotation value of the UIRotationGestureRecognizer is returned in what?

a. Integers

b. Radians

c. Degrees

d. Seconds

2. Which gesture recognizer is often used for enlarging or shrinking content?

a. UITabGestureRecognizer

b. UIRotationGestureRecognizer

c. UIPinchGestureRecognizer

d. UIScaleGestureRecognizer

3. Which of the following attributes can you set for a tap gesture recognizer?

a. Number of touches

b. Finger spacing

c. Finger pressure

d. Touch length

4. How many recognizers will you need to recognize left, right, and down swipes in a view?

a. 1

b. 3

c. 6

d. 2

5. Overriding the motionEnded:withEvent method is necessary for recognizing what type of gesture?

a. Panning

b. Swiping

c. Tapping

d. Shaking

6. 3D Touch implements two gestures. What are their names?

a. Peek and poke

b. Peek and pop

c. Preview and pop

d. Preview and show

7. You can hold an object’s frame in which data structure?

a. AnyObject

b. ObjectRect

c. CGFrame

d. CGRect

8. To help differentiate between objects in the document outline, you can set which of the following?

a. Labels

b. Notes

c. Classes

d. Segues

9. To determine how far a user has moved her fingers during a pinch gesture, which variable property do you look at?

a. space

b. scale

c. distance

d. location

10. You can scale or rotate the view without any complex math by using which variable property of a view?

a. transform

b. scale

c. rotate

d. scaleandrotate

Answers

1. B. Rotation is returned in radians, a common unit of measure.

2. C. Use a UIPinchGestureRecognizer to implement scaling gestures within an application.

3. A. You can easily set the number of touches that will be required to trigger a tap gesture.

4. B. You need a gesture recognizer for each of the swipe directions that you want to implement; three directions, three recognizers.

5. D. The shake gesture requires an implementation of the motionEnded:withEvent method.

6. B. The 3D Touch gestures are known as peek and pop.

7. D. A CGRect data structure can be used to hold an object’s frame.

8. A. Labels are a convenient way to provide custom names of the items listed within the Document Outline.

9. B. The scale variable property will help you determine how far a user has moved her fingers relative to her original position.

10. A. The transform variable property can be used to apply a nondestructive transformation (such as rotation or scaling) to a view.

Activities

1. Expand the Gestures application to include panning and pressing gestures. These are configured almost identically to the gestures you used in this hour’s tutorial.

2. Alter this project to use the image view that you added in Interface Builder rather than the one created programmatically. Practice using the constraints system to see the effect that constraints have on the image view as it changes size and rotates.

3. Improve on the user experience by adding the pinch and rotation gesture recognizers to the UIImageView object itself, enabling users to interact directly with the image rather than another view.
