Transforming the Image View

Now that we've seen what affine transforms offer us, the properties and methods exposed by the gesture recognizers start to make more sense. The pinch gesture recognizer provides a scale that we could use to make a scale transform, and the pan recognizer offers translationInView, which will be perfect for making a translation.

To make use of these transforms, we have a few options. UIView has a transform property, so we can set that directly. The underlying CALayer that provides the view's appearance also has a transform property, although that one is of type CATransform3D and works in three dimensions. A more advanced option would be to write our own subclass of UIView or CALayer that draws its own contents; the Core Graphics library used for drawing lets us set an affine transform on individual drawing operations, so we could apply different transforms to different parts of our drawing. To keep it simple for now, we'll just set the UIImageView's transform property, which it inherits from UIView.

The Pan Transform

images/gestures/pan-gesture-icon.png

Let’s start with a pan transform to move the image around. In the storyboard, go to the user image view detail scene, the one with the 280×280 image and the Done button. In the Object library, find the Pan Gesture Recognizer icon, which looks like a blue circle leaving a streak below it. Drag and drop the pan recognizer on to the image view.

Now we need to give the recognizer a method it can call. Select the pan gesture recognizer in the scene's object list, and switch to the Assistant Editor, making sure UserImageDetailViewController.swift is in the right pane. Control-drag from the gesture recognizer (either in the scene's object list or from the bar atop the scene) to any free space inside the class, perhaps down by the closing curly brace. At the end of the drop, a pop-up asks what kind of connection to make and what to name the action method. Be sure to change the connection type from Outlet to Action, and name the method handlePanGesture. Also, before clicking Connect, change Type from the default AnyObject to UIPanGestureRecognizer.

This connection will call handlePanGesture when a pan gesture starts, updates, or ends on the image view. At least it would, if image views processed touch events by default. Just as with the image in the previous view controller, we have to explicitly enable user interaction with this image view to make it respond to touch events. Switch back to the storyboard’s standard editor, select the image view, bring up its Attributes Inspector, and select the User Interaction Enabled check box.

Switch back to the standard editor and bring up UserImageDetailViewController.swift so we can write this method that we just connected. This is where we’re going to ask the gesture recognizer how far it’s moved, and use that to update the image view’s affine transform.

For this to work, we need to understand what the gesture recognizer tells us. If we look up translationInView in the documentation for UIPanGestureRecognizer, we find it returns “a point identifying the new location of a view in the coordinate system of its designated superview.” There’s also an important note in the discussion of the method:

The x and y values report the total translation over time. They are not delta values from the last time that the translation was reported. Apply the translation value to the state of the view when the gesture is first recognized—do not concatenate the value each time the handler is called.

What this is telling us is that as handlePanGesture is called repeatedly during the drag, the translation reported to us is always measured from where the gesture began, not from the previous callback. That means we should plan on saving the image view's transform the first time we get called. Define that as a property at the top of the class:

  var preGestureTransform: CGAffineTransform?

Now we can assign that property the first time the gesture recognizer calls us back. When we're called back, we can ask the gesture recognizer for its state, which can be started, changed, ended, canceled, or one of a few other administrative and error states. When the value is UIGestureRecognizerState.Began, we'll save off the initial transform of the image view. Begin handlePanGesture like this:

  @IBAction func handlePanGesture(sender: UIPanGestureRecognizer) {
    if sender.state == .Began {
      preGestureTransform = userImageView.transform
    }

When a pan gesture begins, this if block saves the image view’s transform to our preGestureTransform property, since all subsequent event coordinates will be relative to this initial transform. Now we’re ready to handle moving the view around. So, finish up handlePanGesture with a second if, as follows:

1:   if sender.state == .Began ||
2:     sender.state == .Changed {
3:     let translation = sender.translationInView(userImageView)
4:     let translatedTransform = CGAffineTransformTranslate(
5:       preGestureTransform!, translation.x, translation.y)
6:     userImageView.transform = translatedTransform
7:   }
8: }

We get the translationInView on line 3. This is a CGPoint whose x and y members represent how far we have moved along each axis from where the pan began. With that information, we can use the CGAffineTransformTranslate function to create a new transform that represents that distance from the original preGestureTransform (lines 4–5). Then, on line 6, we just set that as the new transform property of the image view.
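We can check this translation against the affine transform formulas from earlier in the chapter. Here's a quick sketch in plain Swift (no UIKit required; the translation and point values are made up purely for illustration):

```swift
// Affine transform formulas: x' = a*x + c*y + tx, y' = b*x + d*y + ty.
// A pure translation has a = d = 1 and b = c = 0; (tx, ty) is the drag distance.
let (a, b, c, d) = (1.0, 0.0, 0.0, 1.0)
let (tx, ty) = (50.0, -30.0)      // hypothetical translationInView result
let (x, y) = (140.0, 140.0)       // hypothetical point in the image view
let movedX = a * x + c * y + tx   // 140 + 50 = 190.0
let movedY = b * x + d * y + ty   // 140 - 30 = 110.0
```

Every point in the view moves by the same (tx, ty), which is exactly what a drag should do.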

Does this work? Try it. Drill down to a user image detail and try dragging the picture around. You should have total freedom to put it wherever you like, even under the Done button or partially offscreen, as seen in the following figure. Pretty cool, but we should clean up after ourselves before we go further.

images/gestures/user-image-scene-translated.png

The Identity Transform

So it’s great that we can drag the image wherever we like…but that does mean we can drag it completely off the screen. Problem!

Let’s give ourselves a “panic button”: if the user double-taps the image, it’ll go back to its default position.

In the storyboard, add a new tap gesture recognizer to the image view. Select the tap gesture recognizer icon from the scene’s object list or the title bar atop the scene, bring up the Attributes Inspector, and set the number of taps to 2. This means it will take a double-tap for the recognizer to fire.

Next, switch to Assistant Editor, and Control-drag from the tap gesture recognizer into UserImageDetailViewController.swift to create a new action method. When the pop-up appears at the end of the drag, call the method handleDoubleTapGesture, and switch the type from AnyObject to UITapGestureRecognizer.

So how do we write this method? We want to go back to the image view's original transform, before any of our changes. By default, UIViews have an identity transform, which means no scaling, rotation, or translation. This is a CGAffineTransform where a and d are 1.0, and b, c, tx, and ty are all 0.0. Run those values through the earlier formulas and we find that x′ equals x and y′ equals y. This “do nothing” transform is provided to us as the constant CGAffineTransformIdentity.

  @IBAction func handleDoubleTapGesture(sender: UITapGestureRecognizer) {
    userImageView.transform = CGAffineTransformIdentity
  }

Restoring the identity transform on a UIView is a one-line assignment. Run the app, drag the image around, and double-tap to send it back to where it started. Easy peasy!
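If you want to convince yourself that the identity values really do leave every point untouched, you can run them through the transform formulas by hand. A quick sketch in plain Swift (no UIKit; the sample point is arbitrary):

```swift
// Identity transform: a = d = 1.0, and b, c, tx, ty are all 0.0.
let (a, b, c, d, tx, ty) = (1.0, 0.0, 0.0, 1.0, 0.0, 0.0)
let (x, y) = (42.0, 17.0)          // any point; these values are arbitrary
let xPrime = a * x + c * y + tx    // equals x
let yPrime = b * x + d * y + ty    // equals y
```

Since every point maps to itself, assigning this transform puts the view right back where the storyboard laid it out.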

The Scale Transform

The other common gesture we should add to our image viewer is a pinch-to-zoom feature. Again, this naturally links the scale property of the gesture recognizer—in this case a UIPinchGestureRecognizer—to the ability of affine transforms to perform scaling operations.

images/gestures/pinch-gesture-icon.png

Back in the storyboard, go to the Object library and locate the pinch gesture recognizer icon. As before, drag it on to the image view to add it to the scene. Switch to the Assistant Editor with UserImageDetailViewController.swift in the right pane, select the icon in the scene or the title bar, and Control-drag to create a new action method. Name the action handlePinchGesture and change the parameter type to UIPinchGestureRecognizer.

What does the pinch gesture’s scale give us? According to the docs, it’s “the scale factor relative to the points of the two touches in screen coordinates.” And, as was the case with the pan recognizer, this value is relative to the beginning of the gesture, not to the last time we were called. So, once again, we need to make use of the preGestureTransform to hold on to our initial value.

1:  @IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
2:    if sender.state == .Began {
3:      preGestureTransform = userImageView.transform
4:    }
5:    if sender.state == .Began ||
6:      sender.state == .Changed {
7:      let scaledTransform = CGAffineTransformScale(
8:        preGestureTransform!, sender.scale, sender.scale)
9:      userImageView.transform = scaledTransform
10:   }
11: }

As with the pan recognizer, we use the start state to save off the image view's initial transform, on lines 2–4. Then on lines 5–6, we deal with the scale value of a started or changed event. On lines 7–8, we use CGAffineTransformScale to create a new CGAffineTransform by taking the original preGestureTransform and applying the scale value to both the x and y factors of the scaling transform. And then on line 9, we set this as the new value of the image view's transform.

images/gestures/user-image-scene-scaled.png

Run the app and give it a whirl. To simulate a pinch gesture in the Simulator, hold down the Option key on the keyboard, which will show the pinch points as two circles that move with the mouse or trackpad. By adding the Shift key, we can move the pinch points without registering as a pinch. In the following figure, we’ve panned to the right and pinch-zoomed in to pick out two Neon Genesis Evangelion cosplayers coming off the escalator behind Janie (yes, her Twitter avatar is from an anime convention, how did you guess?).

To better understand the math behind the transform, try changing the x- and y-scaling values sent to CGAffineTransformScale. For example, if we set the last argument, sy, to the constant value 1.0, the pinch becomes a horizontal stretching operation, because the y value is unchanged by the transform (it's being multiplied by 1). Another fun trick is to multiply the scaling value by -1.0, which flips the image across the horizontal axis, making it an upside-down mirror image.
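These variations are easy to trace through the scale formulas x′ = sx·x and y′ = sy·y. Here's a quick sketch in plain Swift (no UIKit needed; the scale and point values are made up for illustration):

```swift
// Scale transform: x' = sx * x, y' = sy * y (b, c, tx, and ty are all zero).
let pinchScale = 2.0                  // hypothetical value from sender.scale
let (x, y) = (100.0, 80.0)            // hypothetical point in the image view

// Normal pinch: both axes scale together.
let zoomed = (x: pinchScale * x, y: pinchScale * y)

// sy fixed at 1.0: a horizontal stretch; y comes through unchanged.
let stretched = (x: pinchScale * x, y: 1.0 * y)

// sy multiplied by -1.0: y flips sign, mirroring the image upside down.
let flipped = (x: pinchScale * x, y: -pinchScale * y)
```

Tracing a single point this way makes it clear why sy = 1.0 stretches only horizontally and why a negative sy mirrors the image.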
