
18. Taps, Touches, and Gestures


The screens of the iPhone, iPod touch, and iPad—with their crisp, bright, touch-sensitive display—represent masterpieces of engineering. The multitouch screen common to all iOS devices provides one of the key factors in the platform’s tremendous usability. Because the screen can detect multiple touches at the same time and track them independently, applications are able to detect a wide range of gestures, giving the user power that goes beyond the interface.

Suppose you are in the Mail application staring at a long list of junk e-mail that you want to delete. You can tap each one individually to read it, tap the trash icon to delete it, and then wait for the next message to download, deleting each one in turn. This method is best if you want to read each message before you delete it. If you have an iPhone 6s or iPhone 6s Plus, you can even take advantage of its 3D Touch feature to preview an e-mail without actually opening it. Alternatively, from the list of messages, you can tap the Edit button in the upper-right corner, tap each e-mail row to mark it, and then hit the Trash button to delete all marked messages. This method is best if you don’t need to read each message before deleting it. Another alternative is to swipe across a message in the list from right to left. That gesture produces a More button and a Trash button for that message. Tap the Trash button, and the message is deleted.

This example is just one of the many gestures that are made possible by the multitouch display. You can pinch your fingers together to zoom out while viewing a picture or reverse-pinch to zoom in. On the home screen, you can long-press an icon to turn on “jiggly mode,” allowing you to delete applications from your iOS device; on the iPhone 6s and iPhone 6s Plus, you can summon a list of shortcuts for an application that supports 3D Touch. In this chapter, we’re going to look at the underlying architecture that lets you detect gestures. You’ll learn how to detect the most common gestures, as well as how to create and detect a completely new gesture.

Multitouch Terminology

Before we dive into the architecture, let’s cover some basic vocabulary. First, a gesture is any sequence of events that happens from the time you touch the screen with one or more fingers until you lift your fingers off the screen. No matter how long it takes, as long as one or more fingers remain against the screen, you are still within a gesture (unless a system event, such as an incoming phone call, interrupts it). In some sense, a gesture is a verb, and a running app can watch the user input stream to see if one is happening. A gesture is passed through the system inside a series of events. Events are generated when you interact with the device’s multitouch screen. They contain information about the touch or touches that occurred.

The term touch refers to a finger being placed on the screen, dragging across the screen, or being lifted from the screen. The number of touches involved in a gesture is equal to the number of fingers on the screen at the same time. You can actually put all five fingers on the screen, and as long as they aren’t too close to each other, iOS recognizes and tracks them all. Experimentation has shown that the iPad can handle up to 11 simultaneous touches! This may seem excessive, but could be useful if you’re working on a multiplayer game, in which several players are interacting with the screen at the same time. The newest iOS devices can report how hard the user is pressing on the screen, making it possible for you to implement gestures that depend on that information.

A tap happens when you touch the screen with a finger and then immediately lift your finger off the screen without moving it around. The iOS device keeps track of the number of taps and can tell you if the user double-tapped, triple-tapped, or even 20-tapped. It handles all the timing and other work necessary to differentiate between two single-taps and a double-tap, for example.

A gesture recognizer object knows how to watch the stream of events generated by a user and recognize when the user is touching and dragging in a way that matches a predefined gesture. The UIGestureRecognizer class and its various subclasses can help take a lot of work off your hands when you want to watch for common gestures. This class encapsulates the work of looking for a gesture and can be easily applied to any view in your application.

In the first part of this chapter, we’ll see the events that are reported when the user touches the screen with one or more fingers, and how to track the movement of fingers on the screen. We can use these events to handle gestures in a custom view or in our application delegate. Next, we’ll look at some of the gesture recognizers that come with the iOS SDK, and finally, you’ll see how to build your own gesture recognizer.

The Responder Chain

Since gestures are passed through the system inside events, and events are passed through the responder chain, you need to have an understanding of how the responder chain works in order to handle gestures properly. If you’ve worked with Cocoa for macOS (or previously OS X), you’re probably familiar with the concept of a responder chain, as the same basic mechanism is used in both Cocoa and Cocoa Touch. If this is new material, don’t worry; we’ll explain how it works.

Responding to Events

Several times in this book, we’ve mentioned the first responder, which is usually the object with which the user is currently interacting. The first responder is the start of the responder chain, but it’s not alone. There are always other responders in the chain as well. In a running application, the responder chain is a changing set of objects that are able to respond to user events. Any class that has UIResponder as one of its superclasses is a responder. UIView is a subclass of UIResponder, and UIControl is a subclass of UIView, so all views and all controls are responders. UIViewController is also a subclass of UIResponder, meaning that it is a responder, as are all of its subclasses, such as UINavigationController and UITabBarController. Responders, then, are so named because they respond to system-generated events, such as screen touches.

If a responder doesn’t handle a particular event, such as a gesture, it usually passes that event up the responder chain. If the next object in the chain responds to that particular event, it will usually consume the event, which stops the event’s progression through the responder chain. In some cases, if a responder only partially handles an event, that responder will take an action and forward the event to the next responder in the chain. That’s not usually what happens, though. Normally, when an object responds to an event, that’s the end of the line for the event. If the event goes through the entire responder chain and no object handles the event, the event is then discarded.

Let’s take a more specific look at the responder chain. An event first gets delivered to the UIApplication object, which in turn passes it to the application’s UIWindow. The UIWindow handles the event by selecting an initial responder. The initial responder is chosen as follows:

  • In the case of a touch event, the UIWindow object determines the view that the user touched, and then offers the event to any gesture recognizers that are registered for that view or any view higher up in the view hierarchy. If any gesture recognizer handles the event, it goes no further. If not, the initial responder is the touched view and the event will be delivered to it.

  • For an event generated by the user shaking the device (which we’ll say more about in Chapter 20) or from a remote control device, the event is delivered to the first responder.

If the initial responder doesn’t handle the event, it passes the event to its parent view, if there is one, or to the view controller if the view is the view controller’s view. If the view controller doesn’t handle the event, it continues up the responder chain through the view hierarchy of its parent view controller, if it has one.

If the event makes it all the way up through the view hierarchy without being handled by a view or a controller, the event is passed to the application’s window. If the window doesn’t handle the event, the UIApplication object will pass it to the application delegate, if the delegate is a subclass of UIResponder (which it normally is if you create your project from one of Apple’s application templates). Finally, if the app delegate isn’t a subclass of UIResponder or doesn’t handle the event, then the event goes gently into the good night.
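To make the chain easier to visualize, you can walk it in code. The following is a small debugging sketch of ours (not something the system requires), assuming Swift 3, where UIResponder’s next property returns the next responder in the chain, or nil at the end:

extension UIResponder {
    func dumpResponderChain() {
        var responder: UIResponder? = self
        while let current = responder {
            print(type(of: current))   // e.g., UIButton, UIView, ViewController, UIWindow, UIApplication
            responder = current.next
        }
    }
}

Calling dumpResponderChain() on a view in a running application typically prints the view, its superviews, the owning view controller, the window, the application, and finally the application delegate.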

This process is important for a number of reasons. First, it controls the way gestures can be handled. Let’s say a user is looking at a table and swipes a finger across a row of that table. What object handles that gesture? If the swipe is within a view or control that’s a subview of the table view cell, that view or control will get a chance to respond. If it doesn’t respond, the table view cell gets a chance. In an application like Mail, in which a swipe can be used to delete a message, the table view cell probably needs to look at that event to see if it contains a swipe gesture. Most table view cells don’t respond to gestures, however. If they don’t respond, the event proceeds up to the table view, and then up the rest of the responder chain until something responds to that event or it reaches the end of the line.

Forwarding an Event: Keeping the Responder Chain Alive

Let’s consider that table view cell in the Mail application. We don’t know the internal details of the Apple Mail application; however, let’s assume that the table view cell handles the delete swipe and only the delete swipe. That table view cell must implement the methods related to receiving touch events (discussed shortly) so that it can check to see if that event could be interpreted as part of a swipe gesture. If the event matches a swipe that the table view cell is looking for, then the cell takes an action, and that’s that; the event goes no further.

If the event doesn’t match the table view cell’s swipe gesture, the table view cell takes the responsibility for forwarding that event to the next object in the responder chain. If it doesn’t do its forwarding job, the table and other objects up the chain will never get a chance to respond, and the application may not function as the user expects. That table view cell could prevent other views from recognizing a gesture.

Whenever you respond to a touch event, you need to keep in mind that your code doesn’t work in a vacuum. If an object intercepts an event that it doesn’t handle, it needs to pass it along manually. One way to do this is to call the same method on the next responder. We see an example of this in Listing 18-1.

Listing 18-1. Passing Along a Gesture to Be Handled Elsewhere
func respondToFictionalEvent(_ event: UIEvent) {
    if shouldHandleEvent(event) {
        handleEvent(event)
    } else {
        // In Swift 3, the next responder is exposed as the optional next
        // property, so we forward the event with optional chaining.
        next?.respondToFictionalEvent(event)
    }
}

Notice that we call the same method on the next responder. That’s how to implement a good responder-chain process. Fortunately, most of the time, methods that respond to an event also consume the event. However, it’s important to know that if that’s not the case, you need to make sure the event is passed along to the next link in the responder chain.

The Multitouch Architecture

Now that we’ve talked a little about the responder chain, let’s look at the process of handling touches. As we’ve indicated, touches are passed along the responder chain, embedded in events. This means that the code to handle any kind of interaction with the multitouch screen needs to be contained in an object in the responder chain. Generally, that means we can choose to either embed that code in a subclass of UIView or embed the code in a UIViewController. So, does this code belong in the view or in the view controller?

If the view needs to do something to itself based on the user’s touches, the code probably belongs in the class that defines that view. For example, many control classes, such as UISwitch and UISlider, respond to touch-related events. A UISwitch might want to turn itself on or off based on a touch. The folks who created the UISwitch class embedded gesture-handling code in the class so the UISwitch can respond to a touch. Often, however, when the gesture being processed affects more than the object being touched, the gesture code really belongs in the relevant view controller class. For example, if the user makes a gesture touching one row that indicates that all rows should be deleted, the gesture should be handled by code in the view controller. The way you respond to touches and gestures in both situations is exactly the same, regardless of the class to which the code belongs.

The Four Touch Notification Methods

Four methods are used to notify a responder about touches. When the user first touches the screen, the system looks for a responder that has a method called touchesBegan(_:with:). To find out when the user first begins a gesture or taps the screen, implement this method in your view or your view controller. An example of what that method might look like can be seen in Listing 18-2.

Listing 18-2. Discover When the Gesture or Tap Began
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let numTaps = touch.tapCount
        let numTouches = event?.allTouches?.count
        // Do something with numTaps and numTouches here
    }
}

Each time a finger touches the screen for the first time, a new UITouch object is allocated to represent that finger and added to the set that is delivered with each UIEvent, which you can retrieve from its allTouches property. All future events that report activity for that same finger will contain the same UITouch instance in the allTouches set (and it will also appear in the touches set if there is new activity to report for the corresponding finger) until that finger is removed from the screen. Thus, to track the activity of any given finger, you need to monitor its UITouch object.
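As an illustration of this, here is a sketch, assuming Swift 3, of a hypothetical custom view that tracks each finger separately. Because each UITouch instance persists for as long as its finger stays on the screen, it can be used as a dictionary key that maps each finger to its starting point:

class FingerTrackingView: UIView {
    // Each finger's UITouch maps to the point where that finger first landed.
    private var startPoints = [UITouch: CGPoint]()

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            startPoints[touch] = touch.location(in: self)
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            if let start = startPoints[touch] {
                let now = touch.location(in: self)
                print("finger moved \(now.x - start.x), \(now.y - start.y) from its start")
            }
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            startPoints[touch] = nil    // this finger is gone, so forget it
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        startPoints.removeAll()         // the system interrupted the gesture; start fresh
    }
}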

You can determine the number of fingers currently pressed against the screen by getting a count of the objects in the allTouches set. If the event reports a touch that is part of a series of taps by any given finger, you can get the tap count from the tapCount property of the UITouch object for that finger. If there’s only one finger touching the screen, or if you don’t care which finger you ask about, you can quickly get a UITouch object to query by using the first property of the Set structure. In the preceding example, a numTaps value of 2 tells you that the screen was tapped twice in quick succession by at least one finger. Similarly, a numTouches value of 2 tells you the user has two fingers touching the screen.

Not all of the objects in touches or the allTouches set may be relevant to the view or view controller in which you’ve implemented this method. A table view cell, for example, probably doesn’t care about touches that are in other rows or that are in the navigation bar. You can get the set of touches that fall within a particular view from the event using let myTouches = event?.touches(for: self.view).

Every UITouch represents a different finger, and each finger is located at a different position on the screen. You can find out the position of a specific finger using the UITouch object. It will even translate the point into the view’s local coordinate system using let point = touch.location(in: self.view)  // point is of type CGPoint.

You can get notified while the user is moving fingers across the screen by implementing touchesMoved(_:with:). This method gets called multiple times during a long drag, and each time, you will get another set of touches and another event. In addition to being able to find out each finger’s current position from the UITouch objects, you can also discover the previous location of that touch, which is the finger’s position the last time either touchesMoved(_:with:) or touchesBegan(_:with:) was called.
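For example, here is a sketch, assuming Swift 3, of how a view controller could compute the incremental movement of a finger by comparing the two locations:

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let touch = touches.first {
            let current = touch.location(in: view)
            let previous = touch.previousLocation(in: view)
            // How far the finger moved since the last touch event
            print("moved by (\(current.x - previous.x), \(current.y - previous.y))")
        }
    }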

When any of the user’s fingers is removed from the screen, another method, touchesEnded(_:with:), is invoked. When this method is called, you know that the user is finished with some interaction using the affected finger.

There’s one final touch-related method that responders might implement: touchesCancelled(_:with:). It is called if the user is in the middle of a sequence of operations when something happens to interrupt it, like the phone ringing. This is where you can do any cleanup you might need so you can start fresh with a new gesture. When this method is called, touchesEnded(_:with:) will not be called for the current set of touches.

Creating the TouchExplorer Application

We’re going to build a little application that will give you a better feel for when the four touch-related responder methods are called. In Xcode, create a new project using the Single View Application template. Enter TouchExplorer as the Product Name and select Universal from the Devices pop-up. TouchExplorer prints messages to the screen that indicate the touch and tap count every time a touch-related method is called. On devices that support 3D Touch, it will also show the force applied by the finger that caused the most recent touch event, as shown in Figure 18-1.

Figure 18-1. The TouchExplorer application
Note

Although the applications in this chapter will run on the simulator, you won’t be able to see all the available multitouch or 3D Touch functionality unless you run them on a real iOS device. 3D Touch requires an iPhone 6s or iPhone 6s Plus.

We need four labels for this application: one to indicate which method was last called, another to report the current tap count, a third to report the number of touches, and a fourth for the 3D Touch force value. Single-click ViewController.swift and add four outlets to the view controller class:

class ViewController: UIViewController {
    @IBOutlet var messageLabel: UILabel!
    @IBOutlet var tapsLabel: UILabel!
    @IBOutlet var touchesLabel: UILabel!
    @IBOutlet var forceLabel: UILabel!

Now select Main.storyboard to create the user interface . You’ll see the usual empty view contained in all new projects of this kind. Drag a label onto the view, using the blue guidelines to place the label toward the upper-left corner of the view. Hold down the Option key and drag three more labels out from the original, spacing them one below the other. This leaves you with four labels (see Figure 18-1). Feel free to play with the fonts and colors if you’re feeling a bit creative. When you’re done, select the bottom label and use the Attributes Inspector to set its Lines property to 0, because we’re going to use it to show more than one line of text.

Now we need to set the auto layout constraints for the labels. In the Document Outline, Control-drag from the first label to the main view and release the mouse. Hold down the Shift key and select Vertical Spacing to Top Layout Guide and Leading Space to Container Margin, and then press Return. Do the same for the other three labels. The next step is to connect the labels to their outlets. Control-drag from the View Controller icon to each of the four labels, connecting the top one to the messageLabel outlet, the second one to the tapsLabel outlet, the third one to the touchesLabel outlet, and the bottom one to the forceLabel outlet. Finally, double-click each label and press the Delete key to get rid of its text.

Next, single-click either the background of the main view or the View icon in the Document Outline, and then bring up the Attributes Inspector (see Figure 18-2). In the Inspector, go to the View section and make sure that both User Interaction Enabled and Multiple Touch are checked. If Multiple Touch is not checked, your controller class’s touch methods will always receive one and only one touch, no matter how many fingers are actually touching the phone’s screen.

Figure 18-2. In the View attributes , both User Interaction Enabled and Multiple Touch are checked

When you’re finished, switch back to ViewController.swift and make the changes shown in Listing 18-3.

Listing 18-3. The ViewController.swift file to Support TouchExplorer
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    private func updateLabelsFromTouches(_ touch: UITouch?, allTouches: Set<UITouch>?) {
        let numTaps = touch?.tapCount ?? 0
        let tapsMessage = "\(numTaps) taps detected"
        tapsLabel.text = tapsMessage

        let numTouches = allTouches?.count ?? 0
        let touchMsg = "\(numTouches) touches detected"
        touchesLabel.text = touchMsg

        if traitCollection.forceTouchCapability == .available {
            forceLabel.text = "Force: \(touch?.force ?? 0) Max force: \(touch?.maximumPossibleForce ?? 0)"
        } else {
            forceLabel.text = "3D Touch not available"
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        messageLabel.text = "Touches Began"
        updateLabelsFromTouches(touches.first, allTouches: event?.allTouches)
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        messageLabel.text = "Touches Cancelled"
        updateLabelsFromTouches(touches.first, allTouches: event?.allTouches)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        messageLabel.text = "Touches Ended"
        updateLabelsFromTouches(touches.first, allTouches: event?.allTouches)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        messageLabel.text = "Drag Detected"
        updateLabelsFromTouches(touches.first, allTouches: event?.allTouches)
    }

In this controller class, we implement all four of the touch-related methods we discussed earlier. Each one sets messageLabel so the user can see when each method has been called. Next, all four of them call updateLabelsFromTouches() to update the other three labels. The updateLabelsFromTouches() method gets the tap count from the current touch, figures out the number of fingers touching the screen by looking at the count property of the set of touches that it receives (which is taken from the UIEvent object), and updates the labels with that information. It also obtains and displays force information. Let’s take a close look at that part of the code:

        if traitCollection.forceTouchCapability == .available {
            forceLabel.text = "Force: \(touch?.force ?? 0) Max force: \(touch?.maximumPossibleForce ?? 0)"
        } else {
            forceLabel.text = "3D Touch not available"
        }

3D Touch is not available on all devices, so the first line of code uses the forceTouchCapability property of the UITraitCollection class to check whether it is. Every view controller has a trait collection and here we use the trait collection of the application’s only view controller to make the check. If 3D Touch is supported, we use the force property of UITouch to find out how hard the user is currently pressing on the screen and the maximumPossibleForce property to get the largest possible force value. If 3D Touch is not available, we simply say so.
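If you want a device-independent measure of pressure, one simple approach (a sketch of ours, assuming Swift 3, not something this project needs) is to normalize the force against its maximum:

    func normalizedForce(of touch: UITouch) -> CGFloat {
        guard traitCollection.forceTouchCapability == .available,
                touch.maximumPossibleForce > 0 else {
            return 0
        }
        // 0.0 means no measurable force; 1.0 is the hardest press the hardware can report
        return touch.force / touch.maximumPossibleForce
    }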

Build and run the application. If you’re running in the simulator, try repeatedly clicking the screen to drive up the tap count. You should also try clicking and holding down the mouse button while dragging around the view to simulate a touch and drag. If you have a device that supports 3D Touch, try pressing with varying amounts of force to see the measurements that are reported.

You can emulate a two-finger pinch in the iOS simulator by holding down the Option key while you click with the mouse and drag. You can also simulate two-finger swipes by first holding down the Option key to simulate a pinch, moving the mouse so the two dots representing virtual fingers are next to each other, and then holding down the Shift key (while still holding down the Option key). Pressing the Shift key will lock the position of the two fingers relative to each other, enabling you to do swipes and other two-finger gestures. You won’t be able to do gestures that require three or more fingers, but you can do most two-finger gestures on the simulator using combinations of the Option and Shift keys.

If you’re able to run this program on a device, see how many touches you can get to register at the same time. Try dragging with one finger, followed by two fingers, and then three. Try double- and triple-tapping the screen, and see if you can get the tap count to go up by tapping with two fingers.

Play around with the TouchExplorer application until you feel comfortable with what’s happening and with the way that the four touch methods work. When you’re ready, continue on to see how to detect one of the most common gestures: the swipe.

Creating the Swipes Application

The application we’re about to build does nothing more than detect swipes, both horizontal and vertical. If you swipe your finger across the screen from left to right, right to left, top to bottom, or bottom to top, the app will display a message across the top of the screen for a few seconds, informing you that a swipe was detected (see Figure 18-3).

Figure 18-3. The Swipes application detects both vertical and horizontal swipes

Using Touch Events to Detect Swipes

Detecting swipes is relatively easy. We’ll define a minimum gesture length in points, which is how far the user needs to swipe before the gesture counts as a swipe. We’ll also define a variance, which is how far from a straight line our user can veer and still have the gesture count as a horizontal or vertical swipe. A diagonal line generally won’t count as a swipe, but one that’s just a little off from horizontal or vertical will.

When the user touches the screen, we’ll save the location of the first touch in a variable. We’ll then check as the user’s finger moves across the screen to see if it reaches a point where it has gone far enough and straight enough to count as a swipe. There’s actually a built-in gesture recognizer that does exactly this, but we’re going to use what we’ve learned about touch events to make one of our own. Let’s build it. Create a new project in Xcode using the Single View Application template, set Devices to Universal, and name the project Swipes. Single-click ViewController.swift and add the following code to the class:

class ViewController: UIViewController {
    @IBOutlet var label: UILabel!
    private var gestureStartPoint: CGPoint!

This code declares an outlet for our label and a variable to hold the first spot the user touches.

Select Main.storyboard to open it for editing. Using the Attributes Inspector, make sure that the view controller’s view has both User Interaction Enabled and Multiple Touch checked, and then drag a label from the library and drop it in the upper portion of the View window. Set the text alignment to center and feel free to play with the other text attributes to make the label easier to read. In the Document Outline, Control-drag from the label to its parent view, release the mouse, hold down Shift and select Vertical Spacing to Top Layout Guide and Center Horizontally in Container, and then press Return. Control-drag from the View Controller icon to the label and connect it to the label outlet. Finally, double-click the label and delete its text. Now switch over to ViewController.swift and update it to match Listing 18-4.

Listing 18-4. Updates to the ViewController.swift File for the Swipes App
class ViewController: UIViewController {
    @IBOutlet var label: UILabel!
    private var gestureStartPoint: CGPoint!
    private static let minimumGestureLength = Float(25.0)
    private static let maximumVariance = Float(5)

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let touch = touches.first {
            gestureStartPoint = touch.location(in: self.view)
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let touch = touches.first, let gestureStartPoint = self.gestureStartPoint {
            let currentPosition = touch.location(in: self.view)

            let deltaX = fabsf(Float(gestureStartPoint.x - currentPosition.x))
            let deltaY = fabsf(Float(gestureStartPoint.y - currentPosition.y))

            if deltaX >= ViewController.minimumGestureLength
                            && deltaY <= ViewController.maximumVariance {
                label.text = "Horizontal swipe detected"
                DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
                    self.label.text = ""
                }
            } else if deltaY >= ViewController.minimumGestureLength
                            && deltaX <= ViewController.maximumVariance {
                label.text = "Vertical swipe detected"
                DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
                    self.label.text = ""
                }
            }
        }
    }
}

Let’s start with the touchesBegan(_:with:) method. All we do there is grab a touch from the touches set and store its touch point. We’re primarily interested in single-finger swipes right now, so we don’t worry about how many touches there are; we just grab the first one in the set:

        if let touch = touches.first {
            gestureStartPoint = touch.location(in: self.view)
        }

We’re using the UITouch objects in the touches argument instead of the ones in the UIEvent because we’re interested in tracking changes as they happen, not in the overall state of all of the active touches. In the next method, touchesMoved(_:with:), we do the real work. First, we get the current position of the user’s finger:

        if let touch = touches.first, let gestureStartPoint = self.gestureStartPoint {
            let currentPosition = touch.location(in: self.view)

Here, we’re using a form of the if let statement that lets us check more than one condition—we’re ensuring both that there is a current touch and that we have previously stored a gesture start point. In practice, both of these conditions should always be met, but the fact that the touches.first property, which we use both here and in the touchesBegan(_:with:) method, returns an optional value means that we should make these checks to be sure that we don’t crash our application by trying to unwrap a nil optional value in the event that something unexpected happens.

Next, we calculate how far the user’s finger has moved both horizontally and vertically from its starting position. fabsf() is a function from the standard C math library that returns the absolute value of a float. This allows us to subtract one from the other without needing to worry about which is the higher value:

            let deltaX = fabsf(Float(gestureStartPoint.x - currentPosition.x))
            let deltaY = fabsf(Float(gestureStartPoint.y - currentPosition.y))
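Incidentally, dropping down to the C library isn’t strictly required: Swift’s own abs() function works directly on CGFloat values. A sketch of the same two lines in pure Swift (the comparison constants would then need to be CGFloat rather than Float) looks like this:

            let deltaX = abs(gestureStartPoint.x - currentPosition.x)
            let deltaY = abs(gestureStartPoint.y - currentPosition.y)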

Once we have the two deltas, we check to see if the user has moved far enough in one direction without having moved too far in the other to constitute a swipe. If that’s true, we set the label’s text to indicate whether a horizontal or vertical swipe was detected. We also use the GCD DispatchQueue.main.asyncAfter() method to erase the text after it has been on the screen for 2 seconds. That way, the user can practice multiple swipes without needing to worry whether the label is referring to an earlier attempt or the most recent one:

            if deltaX >= ViewController.minimumGestureLength
                            && deltaY <= ViewController.maximumVariance {
                label.text = "Horizontal swipe detected"
                DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
                    self.label.text = ""
                }
            } else if deltaY >= ViewController.minimumGestureLength
                            && deltaX <= ViewController.maximumVariance {
                label.text = "Vertical swipe detected"
                DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
                    self.label.text = ""
                }
            }

Build and run the application. If you find yourself clicking and dragging with no visible results, be patient. Click and drag straight down or straight across until you get the hang of swiping.

Automatic Gesture Recognition

The procedure we just used for detecting a swipe wasn’t too bad. All the complexity is in the touchesMoved(_:withEvent:) method , and even that wasn’t all that complicated. But there’s an even easier way to do this. iOS includes a class called UIGestureRecognizer, which eliminates the need for watching all the events to see how fingers are moving. You don’t use UIGestureRecognizer directly, but instead create an instance of one of its subclasses, each of which is designed to look for a particular type of gesture, such as a swipe, pinch, double-tap, triple-tap, and so on. Let’s see how to modify the Swipes app to use a gesture recognizer instead of our hand-rolled procedure. As always, you might want to make a copy of your Swipes project folder and start from there. In the example source code archive, you’ll find the completed version of this application in the Swipes 2 folder.

Start by selecting ViewController.swift and deleting both the touchesBegan(_:with:) and touchesMoved(_:with:) methods because you won’t need them, and add a couple of new methods in their place:

    func reportHorizontalSwipe(_ recognizer: UIGestureRecognizer) {
        label.text = "Horizontal swipe detected"
        DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
            self.label.text = ""
        }
    }

    func reportVerticalSwipe(_ recognizer: UIGestureRecognizer) {
        label.text = "Vertical swipe detected"
        DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
            self.label.text = ""
        }
    }

These methods implement the actual functionality (if you can call it that) that’s provided by the swipe gestures, just as touchesMoved(_:with:) did previously, except that there is no longer any code to detect the actual swipes. Now add the new code shown here to the viewDidLoad method:

        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.


        let vertical = UISwipeGestureRecognizer(target: self,
                action: #selector(ViewController.reportVerticalSwipe(_:)))
        vertical.direction = [.up, .down]
        view.addGestureRecognizer(vertical)

        let horizontal = UISwipeGestureRecognizer(target: self,
                action: #selector(ViewController.reportHorizontalSwipe(_:)))
        horizontal.direction = [.left, .right]
        view.addGestureRecognizer(horizontal)

All we’re doing here is creating two gesture recognizers—one that will detect vertical movement and another to detect horizontal movement. When one of them recognizes its configured gesture, it will call either the reportVerticalSwipe() or the reportHorizontalSwipe() method, which sets the label’s text appropriately. To tidy things up even further, you can also delete the declaration of the gestureStartPoint property and the two constant values from ViewController.swift. Now build and run the application to try out the new gesture recognizers.

In terms of total lines of code, there’s not much difference between these two approaches for a simple case like this. But the code that uses gesture recognizers is undeniably simpler to understand and easier to write. You don’t need to give even a moment’s thought to the issue of calculating a finger’s movement over time because that’s done for you by the UISwipeGestureRecognizer. And better yet, Apple’s gesture recognition system is extendable, which means that if your application requires really complex gestures that aren’t covered by any of Apple’s recognizers, you can make your own, and keep the complex code (along the lines of what we saw earlier) tucked away in the recognizer class instead of polluting your view controller code. We’ll build an example of just such a thing later in this chapter. Meanwhile, run the application and you’ll see that it behaves just like the previous version.

Implementing Multiple Swipes

In the Swipes application, we worried about only single-finger swipes, so we just grabbed the first object in the touches set to figure out where the user’s finger was during the swipe. This approach is fine if you’re interested in only single-finger swipes, the most common type of swipe used. But what if you want to handle two- or three-finger swipes? In the earliest versions of this book, we dedicated about 50 lines of code, and a fair amount of explanation, to achieving this by tracking multiple UITouch instances across multiple touch events. Now that we have gesture recognizers, this is a solved problem. A UISwipeGestureRecognizer can be configured to recognize any number of simultaneous touches. By default, each instance expects a single finger, but you can configure it to look for any number of fingers pressing the screen at once. Each instance responds only to the exact number of touches you specify, so what we’ll do is create a whole bunch of gesture recognizers in a loop.

Make another copy of your Swipes project folder to experiment with this—you’ll find the completed version in the Swipes 3 folder of the example source code archive. Edit ViewController.swift and modify the viewDidLoad method, replacing it with the one shown here:

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.


        for touchCount in 1...5 {
            let vertical = UISwipeGestureRecognizer(target: self,
                action: #selector(ViewController.reportVerticalSwipe(_:)))
            vertical.direction = [.up, .down]
            vertical.numberOfTouchesRequired = touchCount
            view.addGestureRecognizer(vertical)


            let horizontal = UISwipeGestureRecognizer(target: self,
                action: #selector(ViewController.reportHorizontalSwipe(_:)))
            horizontal.direction = [.left, .right]
            horizontal.numberOfTouchesRequired = touchCount
            view.addGestureRecognizer(horizontal)
        }
    }

What we’re doing here is adding 10 different gesture recognizers to the view—the first one recognizes a vertical swipe with one finger, the second a vertical swipe with two fingers, and so on. All of them call the reportVerticalSwipe() method when they recognize their gesture. The second set of recognizers handles horizontal swipes and calls the reportHorizontalSwipe() method instead. Note that in a real application, you might want different numbers of fingers swiping across the screen to trigger different behaviors. You can easily do that using gesture recognizers, simply by having each of them call a different action method.
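For instance, a sketch of a dedicated three-finger swipe recognizer, assuming Swift 3 and a hypothetical reportThreeFingerSwipe(_:) action method in the same class, would look like this:

        let threeFingerSwipe = UISwipeGestureRecognizer(target: self,
                action: #selector(ViewController.reportThreeFingerSwipe(_:)))
        threeFingerSwipe.direction = [.left, .right]
        threeFingerSwipe.numberOfTouchesRequired = 3
        view.addGestureRecognizer(threeFingerSwipe)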

Now all we need to do is change the logging by adding a method that gives us a handy description of the number of touches, and then using that in the reporting methods, as shown here. Add this method toward the bottom of the ViewController class, just above the two swipe-reporting methods:

    func descriptionForTouchCount(_ touchCount:Int) -> String {
        switch touchCount {
        case 1:
            return "Single"
        case 2:
            return "Double"
        case 3:
            return "Triple"
        case 4:
            return "Quadruple"
        case 5:
            return "Quintuple"
        default:
            return ""
        }
    }

Next, modify the two swipe-reporting methods as shown:

    func reportHorizontalSwipe(_ recognizer: UIGestureRecognizer) {
        let count = descriptionForTouchCount(recognizer.numberOfTouches)
        label.text = "\(count)-finger horizontal swipe detected"
        DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
            self.label.text = ""
        }
    }

    func reportVerticalSwipe(_ recognizer: UIGestureRecognizer) {
        let count = descriptionForTouchCount(recognizer.numberOfTouches)
        label.text = "\(count)-finger vertical swipe detected"
        DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
            self.label.text = ""
        }
    }

Build and run the app. You should be able to trigger double- and triple-swipes in both directions, yet still be able to trigger single-swipes. If you have small fingers, you might even be able to trigger a quadruple- or quintuple-swipe.

Tip

In the simulator, if you hold down the Option key, a pair of dots, representing a pair of fingers, will appear. Get them close together, and then hold down the Shift key. This will keep the dots in the same position relative to each other, allowing you to move the pair of fingers around the screen. Now click and drag down the screen to simulate a double-swipe.

With a multiple-finger swipe, one thing to be careful of is that your fingers aren’t too close to each other. If two fingers are very close to each other, they may register as only a single touch. Because of this, you shouldn’t rely on quadruple- or quintuple-swipes for any important gestures because many people will have fingers that are too big to do those swipes effectively. Also, on the iPad some four- and five-finger gestures are turned on by default at the system level for switching between apps and going to the home screen. These can be turned off in the Settings app, but you’re probably better off just not using such gestures in your own apps.

Detecting Multiple Taps

In the TouchExplorer application, we printed the tap count to the screen, so you’ve already seen how easy it is to detect multiple taps. It’s not quite as straightforward as it seems, however, because often you will want to take different actions based on the number of taps. If the user triple-taps, you get notified three separate times. You get a single-tap, a double-tap, and finally a triple-tap. If you want to do something on a double-tap but something completely different on a triple-tap, having three separate notifications could cause a problem, since you will first receive notification of a double-tap, and then a triple-tap. Unless you write your own clever code to take this into account, you’ll wind up doing both actions. Fortunately, Apple anticipated this situation, and provided a mechanism to let multiple gesture recognizers work nicely together, even when they’re faced with ambiguous inputs that could seemingly trigger any of them. The basic idea is that you place a restriction on a gesture recognizer, telling it to not trigger its associated method unless some other gesture recognizer fails to trigger its own method.

That seems a bit abstract, so let’s make it real. Tap gestures are recognized by the UITapGestureRecognizer class. A tap recognizer can be configured to do its thing when a particular number of taps occur. Imagine that we have a view for which we want to define distinct actions that occur when the user taps once or double-taps. You might start off with something like the following:

            let singleTap = UITapGestureRecognizer(target: self,
            action: #selector(ViewController.singleTap))  
            singleTap.numberOfTapsRequired = 1
            singleTap.numberOfTouchesRequired = 1
            view.addGestureRecognizer(singleTap)


            let doubleTap = UITapGestureRecognizer(target: self,
             action: #selector(ViewController.doubleTap))
            doubleTap.numberOfTapsRequired = 2
            doubleTap.numberOfTouchesRequired = 1
            view.addGestureRecognizer(doubleTap)

The problem with this piece of code is that the two recognizers are unaware of each other, and they have no way of knowing that the user’s actions may be better suited to another recognizer. If the user double-taps the view in the preceding code, the doubleTap() method will be called, but the singleTap() method will also be called—twice!—once for each tap.

The way around this is to create a failure requirement. We tell singleTap that it should trigger its action only if doubleTap doesn’t recognize and respond to the user input by adding this single line:

            singleTap.require(toFail: doubleTap)

This means that, when the user taps once, singleTap doesn’t do its work immediately. Instead, singleTap waits until it knows that doubleTap has decided to stop paying attention to the current gesture (that is, the user didn’t tap twice). We’re going to build on this further with our next project.

In Xcode, create a new project with the Single View Application template. Call this new project Taps and use the Devices pop-up to choose Universal. This application will have four labels: one each that informs us when it has detected a single-tap, double-tap, triple-tap, and quadruple-tap (see Figure 18-4).

Figure 18-4. The Taps application detects up to four sequential taps

We need outlets for the four labels and we also need separate methods for each tap scenario to simulate what we would have in a real application. We’ll also include a method for erasing the text fields. Open ViewController.swift and add the label outlets to the class:

class ViewController: UIViewController {
    @IBOutlet var singleLabel:UILabel!
    @IBOutlet var doubleLabel:UILabel!
    @IBOutlet var tripleLabel:UILabel!
    @IBOutlet var quadrupleLabel:UILabel!

Save the file and select Main.storyboard to edit the GUI. Once you’re there, add four labels to the view from the library and arrange them one above the other. In the Attributes Inspector, set the text alignment for each label to Center. In the Document Outline, Control-drag from the top label to its parent view and release the mouse. Hold down Shift and select Vertical Spacing to Top Layout Guide and Center Horizontally in Container, and then press Return. Do the same for the other three labels to set their auto layout constraints. When you’re finished, Control-drag from the View Controller icon to each label and connect each one to singleLabel, doubleLabel, tripleLabel, and quadrupleLabel, respectively. Finally, make sure you double-click each label and press the delete key to get rid of any text. Now select ViewController.swift and make the code changes shown in Listing 18-5.

Listing 18-5. The Taps App Changes to the ViewController.swift File
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.


        let singleTap = UITapGestureRecognizer(target: self,
                action: #selector(ViewController.singleTap))
        singleTap.numberOfTapsRequired = 1
        singleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(singleTap)


        let doubleTap = UITapGestureRecognizer(target: self,
                action: #selector(ViewController.doubleTap))
        doubleTap.numberOfTapsRequired = 2
        doubleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(doubleTap)
        singleTap.require(toFail: doubleTap)
        
        let tripleTap = UITapGestureRecognizer(target: self,
                action: #selector(ViewController.tripleTap))
        tripleTap.numberOfTapsRequired = 3
        tripleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(tripleTap)
        doubleTap.require(toFail: tripleTap)


        let quadrupleTap = UITapGestureRecognizer(target: self,
                action: #selector(ViewController.quadrupleTap))
        quadrupleTap.numberOfTapsRequired = 4
        quadrupleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(quadrupleTap)
        tripleTap.require(toFail: quadrupleTap)


    }

    func singleTap() {
        showText("Single Tap Detected", inLabel: singleLabel)
    }


    func doubleTap() {
        showText("Double Tap Detected", inLabel: doubleLabel)
    }


    func tripleTap() {
        showText("Triple Tap Detected", inLabel: tripleLabel)
    }


    func quadrupleTap() {
        showText("Quadruple Tap Detected", inLabel: quadrupleLabel)
    }


    private func showText(_ text: String, inLabel label: UILabel) {
        label.text = text
        DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2) {
            label.text = ""
        }
    }

The four tap methods do nothing more in this application than set one of the four labels and use DispatchQueue.main.asyncAfter() to erase that same label after 2 seconds. The interesting part of this is what occurs in the viewDidLoad method. We start off simply enough, by setting up a tap gesture recognizer and attaching it to our view:

        let singleTap = UITapGestureRecognizer(target: self,
                action: #selector(ViewController.singleTap))
        singleTap.numberOfTapsRequired = 1
        singleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(singleTap)

Note that we set both the number of taps required to trigger the action (touches in the same position, one after another) and the number of touches (fingers touching the screen at the same time) to 1. After that, we set up another tap gesture recognizer to handle a double-tap:

        let doubleTap = UITapGestureRecognizer(target: self,
                action: #selector(ViewController.doubleTap))
        doubleTap.numberOfTapsRequired = 2
        doubleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(doubleTap)
        singleTap.require(toFail: doubleTap)

This is pretty similar to the previous code, right up until that last line, in which we give singleTap some additional context. We are effectively telling singleTap that it should trigger its action only in case some other gesture recognizer—in this case, doubleTap—decides that the current user input isn’t what it’s looking for.

Let’s think about what this means. With those two tap gesture recognizers in place, a single tap in the view will immediately make singleTap think, “Hey, this looks like it’s for me.” At the same time, doubleTap will think, “Hey, this looks like it might be for me, but I’ll need to wait for one more tap.” Because singleTap is set to wait for doubleTap’s “failure,” it doesn’t trigger its action method right away; instead, it waits to see what happens with doubleTap.

After that first tap, if another tap occurs immediately, doubleTap says, “Hey, that’s mine all right,” and it fires its action. At that point, singleTap will realize what happened and give up on that gesture. On the other hand, if a particular amount of time goes by (the amount of time that the system considers to be the maximum length of time between taps in a double-tap), doubleTap will give up, and singleTap will see the failure and finally trigger its event. The rest of the method goes on to define gesture recognizers for three and four taps, and at each point it configures one gesture to be dependent on the failure of the next:

        let tripleTap = UITapGestureRecognizer(target: self,
                action: #selector(ViewController.tripleTap))
        tripleTap.numberOfTapsRequired = 3
        tripleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(tripleTap)
        doubleTap.require(toFail: tripleTap)


        let quadrupleTap = UITapGestureRecognizer(target: self,
                action: #selector(ViewController.quadrupleTap))
        quadrupleTap.numberOfTapsRequired = 4
        quadrupleTap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(quadrupleTap)
        tripleTap.require(toFail: quadrupleTap)

Note that we don’t need to explicitly configure every gesture to be dependent on the failure of each of the higher tap-numbered gestures. That multiple dependency comes about naturally as a result of the chain of failure established in our code: singleTap requires the failure of doubleTap, doubleTap requires the failure of tripleTap, and tripleTap requires the failure of quadrupleTap. By extension, singleTap requires that all of the others fail.
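Since the setup for each tap count follows the same pattern, you could also build the whole chain in a loop. Here is a sketch of that alternative, assuming Swift 3 and the same four action methods:

        let selectors: [Selector] = [
            #selector(ViewController.singleTap),
            #selector(ViewController.doubleTap),
            #selector(ViewController.tripleTap),
            #selector(ViewController.quadrupleTap)
        ]
        var previous: UITapGestureRecognizer? = nil
        for (index, selector) in selectors.enumerated() {
            let recognizer = UITapGestureRecognizer(target: self, action: selector)
            recognizer.numberOfTapsRequired = index + 1
            recognizer.numberOfTouchesRequired = 1
            view.addGestureRecognizer(recognizer)
            // Each recognizer waits for the next-higher tap count to fail.
            previous?.require(toFail: recognizer)
            previous = recognizer
        }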

Build and run the app. Whether you single-, double-, triple-, or quadruple-tap, you should see only one label displayed at the end of the sequence. After about 2 seconds, the label will clear itself and you can try again.

Detecting Pinch and Rotation Gestures

Another common gesture is the two-finger pinch. It’s used in a number of applications (e.g., Mobile Safari, Mail, and Photos) to let you zoom in (if you pinch apart) or zoom out (if you pinch together). Detecting pinches is really easy, thanks to UIPinchGestureRecognizer. This one is referred to as a continuous gesture recognizer because it calls its action method over and over again during the pinch. While the gesture is underway, the recognizer goes through a number of states. When the gesture is recognized, the recognizer is in state UIGestureRecognizerState.began and its scale property is set to an initial value of 1.0; for the rest of the gesture, the state is UIGestureRecognizerState.changed and the scale value goes up and down, relative to how far the user’s fingers move from the start. We’re going to use the scale value to resize an image. Finally, the state changes to UIGestureRecognizerState.ended.
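To make those states concrete, here is a sketch, assuming Swift 3, of an action method that simply reports each state as the recognizer calls it repeatedly over the life of a pinch:

    func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        switch gesture.state {
        case .began:
            print("pinch began; scale starts at \(gesture.scale)")
        case .changed:
            print("scale is now \(gesture.scale)")
        case .ended:
            print("pinch ended")
        default:
            break    // .possible, .cancelled, and .failed are ignored here
        }
    }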

Another common gesture is the two-finger rotation. This is also a continuous gesture recognizer and is named UIRotationGestureRecognizer. It has a rotation property that is 0.0 by default when the gesture begins, and then changes from 0.0 to 2.0*PI as the user rotates her fingers. In the next example, we’ll use both pinch and rotation gestures. Create a new project in Xcode, again using the Single View Application template, and call this one PinchMe. First, drag and drop the beautiful yosemite-meadows.png image from the 18 - Image folder in the example source code archive (or some other favorite photo of yours) into your project’s Assets.xcassets. Now make the changes in Listing 18-6 to the ViewController.swift file.

Listing 18-6. Updated ViewController.swift File for the PinchMe App Modifications
class ViewController: UIViewController, UIGestureRecognizerDelegate {
    private var imageView:UIImageView!
    private var scale = CGFloat(1)
    private var previousScale = CGFloat(1)
    private var rotation = CGFloat(0)
    private var previousRotation = CGFloat(0)


    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.


        let image = UIImage(named: "yosemite-meadows")
        imageView = UIImageView(image: image)
        imageView.isUserInteractionEnabled = true
        imageView.center = view.center
        view.addSubview(imageView)


        let pinchGesture = UIPinchGestureRecognizer(target: self,
                action: #selector(ViewController.doPinch(_:)))
        pinchGesture.delegate = self
        imageView.addGestureRecognizer(pinchGesture)


        let rotationGesture = UIRotationGestureRecognizer(target: self,
                action: #selector(ViewController.doRotate(_:)))
        rotationGesture.delegate = self
        imageView.addGestureRecognizer(rotationGesture)
    }


    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                shouldRecognizeSimultaneouslyWith
                    otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }


    func transformImageView() {
        var t = CGAffineTransform(scaleX: scale * previousScale, y: scale * previousScale)
        t = t.rotated(by: rotation + previousRotation)
        imageView.transform = t
    }


    func doPinch(_ gesture:UIPinchGestureRecognizer) {
        scale = gesture.scale
        transformImageView()
        if gesture.state == .ended {
            previousScale = scale * previousScale
            scale = 1
        }
    }


    func doRotate(_ gesture:UIRotationGestureRecognizer) {
        rotation = gesture.rotation
        transformImageView()
        if gesture.state == .ended {
            previousRotation = rotation + previousRotation
            rotation = 0
        }
    }
}

First, we define four instance variables for the current and previous scale and rotation. The previous values are the values from a previously triggered and ended gesture recognizer; we need to keep track of these values as well because the UIPinchGestureRecognizer for scaling and UIRotationGestureRecognizer for rotation will always start at the default positions of 1.0 scale and 0.0 rotation. Next, in viewDidLoad(), we begin by creating a UIImageView to pinch and rotate, load our Yosemite image into it, and center it in the main view. We must remember to enable user interaction on the image view because UIImageView is one of the few UIKit classes that have user interaction disabled by default.

        let image = UIImage(named: "yosemite-meadows")
        imageView = UIImageView(image: image)
        imageView.isUserInteractionEnabled = true
        imageView.center = view.center
        view.addSubview(imageView)

Next, we set up a pinch gesture recognizer and a rotation gesture recognizer. We tell them to notify us when their gestures are recognized via the doPinch() and doRotate() methods, respectively. We tell both to use self as their delegate:

        let pinchGesture = UIPinchGestureRecognizer(target: self,
                action: #selector(ViewController.doPinch(_:)))
        pinchGesture.delegate = self
        imageView.addGestureRecognizer(pinchGesture)


        let rotationGesture = UIRotationGestureRecognizer(target: self,
               action: #selector(ViewController.doRotate(_:)))
        rotationGesture.delegate = self
        imageView.addGestureRecognizer(rotationGesture)

In the gestureRecognizer(_:shouldRecognizeSimultaneouslyWith:) method (which is the only method from the UIGestureRecognizerDelegate protocol that we need to implement) we always return true to allow our pinch and rotation gestures to work together; otherwise, the gesture recognizer that starts first would always block the other:

    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                shouldRecognizeSimultaneouslyWith
                    otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }

Next, we implement a helper method for transforming the image view according to the current scaling and rotation from the gesture recognizers. Notice that we multiply the scale by the previous scale. We also add to the rotation with the previous rotation. This allows us to adjust for pinch and rotation that has been done previously when a new gesture starts from the default 1.0 scale and 0.0 rotation.

    func transformImageView() {
        var t = CGAffineTransform(scaleX: scale * previousScale, y: scale * previousScale)
        t = t.rotated(by: rotation + previousRotation)
        imageView.transform = t
    }

Finally, we implement the action methods that take the input from the gesture recognizers and update the transformation of the image view. In both doPinch() and doRotate(), we first extract the new scale or rotation values. Next, we update the transformation for the image view. And finally, if the gesture recognizer reports that its gesture has ended by having a state equal to UIGestureRecognizerState.ended, we fold the gesture’s final scale or rotation into the stored previous values, and then reset the current values to the defaults of 1.0 scale and 0.0 rotation:

    func doPinch(_ gesture:UIPinchGestureRecognizer) {
        scale = gesture.scale
        transformImageView()
        if gesture.state == .ended {
            previousScale = scale * previousScale
            scale = 1
        }
    }


    func doRotate(_ gesture:UIRotationGestureRecognizer) {
        rotation = gesture.rotation
        transformImageView()
        if gesture.state == .ended {
            previousRotation = rotation + previousRotation
            rotation = 0
        }
    }

And that’s all there is to pinch and rotation detection. Build and run the app to give it a try. As you do some pinching and rotation, you’ll see the image change in response (see Figure 18-5). If you’re on the simulator, remember that you can simulate a pinch by holding down the Option key and clicking and dragging in the simulator window using your mouse.

Figure 18-5. The PinchMe application detects pinch and rotation gestures

Summary

You should now understand the mechanism iOS uses to tell your application about touches, taps, and gestures. You also learned how to detect the most commonly used iOS gestures. We also saw a couple of basic examples of the use of the new 3D Touch feature. There’s quite a bit more to this than we were able to cover here—for the full details, refer to Apple’s document on the subject, which you can find at https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/Adopting3DTouchOniPhone/ .

The iOS user interface relies on gestures for much of its ease of use, so you’ll want to have these techniques at the ready for most of your iOS development. In the next chapter we’ll tell you how to figure out where in the world you are by using Core Location.
