What We’ve Learned

[Figure: images/gestures/user-image-scene-clipped.png]

In this chapter, we took hold, quite literally, of the touch gestures that are the hallmark of iOS user interfaces. By giving the user the ability to manipulate an image, dragging it around with one finger and pinch-zooming it with two, we immediately create a sense of close contact with the image. Gesture recognizers make it easy to detect the most common touch gestures and call back into our code when they occur. And because both the recognizers and the onscreen views describe a gesture in terms of how much movement or scaling it represents, it works well to connect the two with affine transforms, which cleanly represent translation, rotation, scaling, and combinations thereof.
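To recap how these pieces fit together, here is a minimal sketch rather than the chapter's exact code: a hypothetical GestureViewController (the class name, the imageView property, and the "sample" asset name are illustrative assumptions) that attaches pan and pinch recognizers to an image view and responds by composing affine transforms.

```swift
import UIKit

class GestureViewController: UIViewController {
    // Assumed for this sketch; "sample" is a placeholder asset name.
    let imageView = UIImageView(image: UIImage(named: "sample"))

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.frame = CGRect(x: 50, y: 100, width: 200, height: 200)
        imageView.isUserInteractionEnabled = true  // UIImageView defaults to false
        view.addSubview(imageView)

        imageView.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        imageView.addGestureRecognizer(
            UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
    }

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        guard let pannedView = recognizer.view else { return }
        // The translation is reported in the panned view's own coordinate
        // system, so it composes correctly with any scaling already applied.
        let t = recognizer.translation(in: pannedView)
        pannedView.transform = pannedView.transform.translatedBy(x: t.x, y: t.y)
        // Reset so the next callback reports only the new delta.
        recognizer.setTranslation(.zero, in: pannedView)
    }

    @objc func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
        guard let pinchedView = recognizer.view else { return }
        // Scale the current transform by the pinch factor, then reset the
        // recognizer's scale to 1 so the changes accumulate incrementally.
        pinchedView.transform = pinchedView.transform.scaledBy(
            x: recognizer.scale, y: recognizer.scale)
        recognizer.scale = 1.0
    }
}
```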

Armed with this knowledge, we can bring new touch-handling features to scenes throughout our app. It will also serve us if we ever need to create our own custom views: a view is essentially a combination of appearance and interactivity, so a custom view needs only custom drawing and custom event handling. We've just seen how to do the second of those things; the sketch below puts the two together.
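As a concrete illustration of that appearance-plus-interactivity split, here is a hedged sketch, with an invented class name rather than anything from this chapter: a UIView subclass that overrides draw(_:) for its appearance and the touch methods for its interactivity.

```swift
import UIKit

// Illustrative only: a view that draws a dot and lets the user drag it around.
class DotView: UIView {
    private var dotCenter = CGPoint(x: 50, y: 50)

    // Appearance: custom drawing of a filled circle at the current position.
    override func draw(_ rect: CGRect) {
        UIColor.blue.setFill()
        UIBezierPath(arcCenter: dotCenter, radius: 20,
                     startAngle: 0, endAngle: .pi * 2, clockwise: true).fill()
    }

    // Interactivity: custom event handling that tracks the user's finger.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        moveDot(to: touches)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        moveDot(to: touches)
    }

    private func moveDot(to touches: Set<UITouch>) {
        guard let touch = touches.first else { return }
        dotCenter = touch.location(in: self)
        setNeedsDisplay()  // ask UIKit to call draw(_:) again
    }
}
```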
