Chapter 8: Better Drawing

Your users expect a beautiful, engaging, and intuitive interface. It’s up to you to deliver. No matter how powerful your features, if your interface seems “clunky,” you’re going to have a hard time making the sale. This is about more than just pretty colors and flashy animations. A truly beautiful and elegant user interface is a key part of a user-centric application. Keeping your focus on delighting your user is the key to building exceptional applications.

One of the tools you need in order to create an exceptional user interface is custom drawing. In this chapter, you learn the mechanics of drawing in iOS, with focus on flexibility and performance. This chapter doesn’t cover iOS UI design. For information on how to design iOS interfaces, start with Apple’s iOS Human Interface Guidelines and iOS Application Programming Guide, available in the iOS developer documentation.

In this chapter, you find out about the several drawing systems in iOS, with a focus on UIKit and Core Graphics. By the end of this chapter, you will have a strong grasp of the UIKit drawing cycle, drawing coordinate systems, graphics contexts, paths, and transforms. You'll know how to optimize your drawing speed through correct view configuration, caching, pixel alignment, and use of layers. You'll also be able to avoid bloating your application bundle with unnecessary prerendered graphics.

With the right tools, you can achieve your goal of a beautiful, engaging, and intuitive interface, while maintaining high performance, low memory usage, and small application size.

iOS’s Many Drawing Systems

iOS has several major drawing systems: UIKit, Core Graphics (Quartz), Core Animation, Core Image, and OpenGL ES. Each is useful for a different kind of problem.

UIKit—This is the highest-level interface, and the only interface in Objective-C. It provides easy access to layout, compositing, drawing, fonts, images, animation, and more. You can recognize UIKit elements by the prefix UI, such as UIView and UIBezierPath. UIKit also extends NSString to simplify drawing text with methods like drawInRect:withFont:.

Core Graphics (also called Quartz 2D)—The primary drawing system underlying UIKit, Core Graphics is what you use most frequently to draw custom views. Core Graphics is highly integrated with UIView and other parts of UIKit. Core Graphics data structures and functions can be identified by the prefix CG.

Core Animation—This provides powerful two- and three-dimensional animation services. It is also highly integrated into UIView. Chapter 9 covers Core Animation in detail.

Core Image—Originally a Mac technology, Core Image became available in iOS 5. It provides very fast image filtering such as cropping, sharpening, warping, and just about any other transformation you can imagine.

OpenGL ES—Most useful for writing high-performance games—particularly 3D games—OpenGL ES is a subset of the OpenGL graphics API. For other applications on iOS, Core Animation is generally a better choice. OpenGL ES is portable across most platforms. A discussion of OpenGL ES is beyond the scope of this book, but many good books are available on the subject.

UIKit and the View Drawing Cycle

When you change the frame or visibility of a view, draw a line, or change the color of an object, the change is not immediately displayed on the screen. This sometimes confuses developers who incorrectly write code like this:

  progressView.hidden = NO; // This line does nothing

  [self doSomethingTimeConsuming];

  progressView.hidden = YES;

It’s important to understand that the first line (progressView.hidden = NO) does absolutely nothing useful. This code does not cause the progress view to be displayed while the time-consuming operation is in progress. No matter how long this method runs, you will never see the view displayed. Figure 8-1 shows what actually happens in the drawing loop.

All drawing occurs on the main thread, so as long as your code is running on the main thread, nothing can be drawn. That is one of the reasons you should never execute a long-running operation on the main thread. Not only does it prevent drawing updates but it also prevents event handling (such as responding to touches). As long as your code is running on the main thread, your application is effectively “hung” to the user. This isn’t noticeable as long as you make sure that your main thread routines return quickly.

You may now be thinking, “Well, I’ll just run my drawing commands on a background thread.” You generally can’t do that because drawing to the current UIKit context isn’t thread-safe. Any attempt to modify a view on a background thread leads to undefined behavior, including drawing corruption and crashes. (See the section “Caching and Background Drawing,” later in the chapter, for more information on how you can draw in the background.)

This behavior is not a problem to be overcome. The consolidation of drawing events is one part of iOS’s capability to render complex drawings on limited hardware. As you see throughout this chapter, much of UIKit is dedicated to avoiding unnecessary drawing, and this consolidation is one of the first steps.

So how do you start and stop an activity indicator for a long-running operation? You use dispatch or operation queues to put your expensive work in the background, while making all of your UIKit calls on the main thread, as shown in the following code.


Figure 8-1 How the Cocoa drawing cycle consolidates changes

ViewController.m (TimeConsuming)

- (IBAction)doSomething:(id)sender {
  [sender setEnabled:NO];
  [self.activity startAnimating];

  dispatch_queue_t bgQueue = dispatch_get_global_queue(
                       DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

  dispatch_async(bgQueue, ^{
    [self somethingTimeConsuming];

    dispatch_async(dispatch_get_main_queue(), ^{
      [self.activity stopAnimating];
      [sender setEnabled:YES];
    });
  });
}

When the IBAction is called, you start animating the activity indicator. You then put a call to somethingTimeConsuming on the default background dispatch queue. When that finishes, you put a call to stopAnimating on the main dispatch queue. Dispatch and operation queues are covered in Chapter 13.

To summarize:

iOS consolidates all drawing requests during the run loop, and draws them all at once.

You must not block the main thread to do complex processing.

You must not draw into the main view graphics context except on the main thread. You need to check each UIKit method to ensure it does not have a main thread requirement. Some UIKit methods can be used on background threads as long as you’re not drawing into the main view context. See “CGLayer,” later in this chapter, for examples.

View Drawing Versus View Layout

UIView separates the layout (“rearranging”) of subviews from drawing (or “display”). This is important for maximizing performance because layout is generally cheaper than drawing. Layout is cheap because UIView caches drawing operations onto GPU-optimized bitmaps. These bitmaps can be moved around, shown, hidden, rotated, and otherwise transformed and composited very inexpensively using the GPU.

When you call setNeedsDisplay on a view, it is marked “dirty” and will be redrawn during the next drawing cycle. Don’t call it unless the content of the view has really changed. Most UIKit views automatically manage redrawing when their data is changed, so you generally don’t need to call setNeedsDisplay except on custom views.
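
For a custom view, the usual pattern is to call setNeedsDisplay from the setter of whatever property drives the drawing. The following is a minimal sketch; the temperature property and the view class are hypothetical:

@interface TemperatureView : UIView
@property (nonatomic) CGFloat temperature;
@end

@implementation TemperatureView
- (void)setTemperature:(CGFloat)temperature {
  if (_temperature != temperature) {
    _temperature = temperature;
    [self setNeedsDisplay]; // Mark dirty; drawRect: runs on the next drawing cycle
  }
}

- (void)drawRect:(CGRect)rect {
  // Custom drawing based on _temperature goes here.
}
@end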

When a view’s subviews need to be rearranged because of an orientation change or scrolling, UIKit calls setNeedsLayout. This, in turn, calls layoutSubviews on the affected views. By overriding layoutSubviews, you can make your application much smoother during rotation and scrolling events. You can rearrange your subviews’ frames without necessarily having to redraw them, and you can hide or show views based on orientation. You can also call setNeedsLayout if your data changes in ways that only need layout updates rather than drawing.
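
As a sketch of the idea, overriding layoutSubviews lets you reposition or hide existing subviews cheaply; the iconView and detailView properties here are hypothetical:

- (void)layoutSubviews {
  [super layoutSubviews];
  CGSize size = self.bounds.size;
  BOOL isLandscape = size.width > size.height;
  // Only rearrange; the subviews' cached bitmaps are moved, not redrawn.
  self.iconView.hidden = isLandscape;
  self.detailView.frame = CGRectMake(10, 10,
                                     size.width - 20,
                                     size.height - 20);
}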

Custom View Drawing

Views can provide their content by composing subviews, by adding layers, or by implementing drawRect:. Typically, if you implement drawRect:, you don't mix it with layers or subviews, although doing so is legal and sometimes useful. Most custom drawing is done with UIKit or Core Graphics, although OpenGL ES has become easier to integrate when needed.

2D drawing generally breaks down into several operations:

Lines

Paths (filled or outlined shapes)

Text

Images

Gradients

2D drawing does not include manipulation of individual pixels because that is destination-dependent. You can achieve this with a bitmap context, but not directly with UIKit or Core Graphics functions.
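
If you do need per-pixel access, the usual approach is to draw into a bitmap context whose backing buffer you allocate yourself and then read or modify the bytes directly. Here's a minimal sketch, assuming an 8-bit-per-channel RGBA layout:

size_t width = 64, height = 64;
size_t bytesPerRow = width * 4;
uint8_t *data = calloc(height, bytesPerRow);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(data, width, height,
                                         8, bytesPerRow, colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw with ordinary Core Graphics calls...
CGContextSetRGBFillColor(ctx, 1, 0, 0, 1);
CGContextFillRect(ctx, CGRectMake(0, 0, width, height));

// ...then touch individual pixels through the buffer you own.
data[0] = 0; // zero the red component of the first pixel

CGContextRelease(ctx);
free(data);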

Both UIKit and Core Graphics use a “painter” drawing model. This means that each command is drawn in sequence, overlaying previous drawings during that event loop. Order is very important in this model, and you must draw back-to-front. Each time drawRect: is called, it’s your responsibility to draw the entire area requested. The drawing “canvas” is not preserved between calls to drawRect:.

Drawing with UIKit

In the “old days” before iPad, most custom drawing had to be done with Core Graphics because there was no way to draw arbitrary shapes with UIKit. In iPhone OS 3.2, Apple added UIBezierPath and made it much easier to draw entirely in Objective-C. UIKit still lacks support for gradients, shading, and some advanced features like controlling anti-aliasing and precise color management. Even so, UIKit is now a very convenient way to handle the most common custom drawing needs.

The simplest way to draw rectangles is with UIRectFrame or UIRectFill, as shown in the following code:

- (void)drawRect:(CGRect)rect {
  [[UIColor redColor] setFill];
  UIRectFill(CGRectMake(10, 10, 100, 100));
}

Notice how you first set the pen color using -[UIColor setFill]. Drawing is done into a graphics context provided by the system before calling drawRect:. That context includes a lot of information, including stroke color, fill color, text color, font, transform, and more. At any given time, there is just one stroke pen and one fill pen, and their colors are used to draw everything. The “Managing Graphics Contexts” section, later in this chapter, covers how to save and restore contexts, but for now just note that drawing commands are order-dependent, and that includes commands that change the pens.
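
Because there is only one fill pen at a time, the order of your commands determines the result. A quick sketch:

[[UIColor redColor] setFill];
UIRectFill(CGRectMake(10, 10, 50, 50));  // drawn red
[[UIColor blueColor] setFill];
UIRectFill(CGRectMake(70, 10, 50, 50));  // drawn blue; the fill pen changed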

The graphics context provided to drawRect: is specifically a view graphics context. There are other types of graphics contexts, including PDF and bitmap contexts. All of them use the same drawing techniques, but a view graphics context is optimized for drawing onto the screen. This distinction will be important when I discuss CGLayer.

Paths

UIKit includes much more powerful drawing commands than its rectangle functions. It can draw arbitrary curves and lines using UIBezierPath. A Bézier curve is a mathematical way of expressing a line or curve using a small number of control points. Most of the time, you don’t need to worry about the math because UIBezierPath has simple methods to handle the most common paths: lines, arcs, rectangles (optionally rounded), and ovals. With these, you can quickly draw most shapes needed for UI elements. The following code is an example of a simple shape scaled to fill the view, as shown in Figure 8-2. You draw this several ways in the upcoming examples.

FlowerView.m (Paths)

- (void)drawRect:(CGRect)rect {
  CGSize size = self.bounds.size;
  CGFloat margin = 10;
  CGFloat radius = rint(MIN(size.height - margin,
                            size.width - margin) / 4);
  CGFloat xOffset, yOffset;
  CGFloat offset = rint((size.height - size.width) / 2);
  if (offset > 0) {
    xOffset = rint(margin / 2);
    yOffset = offset;
  }
  else {
    xOffset = -offset;
    yOffset = rint(margin / 2);
  }

  [[UIColor redColor] setFill];
  UIBezierPath *path = [UIBezierPath bezierPath];
  [path addArcWithCenter:CGPointMake(radius * 2 + xOffset,
                                     radius + yOffset)
                  radius:radius
              startAngle:-M_PI
                endAngle:0
               clockwise:YES];
  [path addArcWithCenter:CGPointMake(radius * 3 + xOffset,
                                     radius * 2 + yOffset)
                  radius:radius
              startAngle:-M_PI_2
                endAngle:M_PI_2
               clockwise:YES];
  [path addArcWithCenter:CGPointMake(radius * 2 + xOffset,
                                     radius * 3 + yOffset)
                  radius:radius
              startAngle:0
                endAngle:M_PI
               clockwise:YES];
  [path addArcWithCenter:CGPointMake(radius + xOffset,
                                     radius * 2 + yOffset)
                  radius:radius
              startAngle:M_PI_2
                endAngle:-M_PI_2
               clockwise:YES];
  [path closePath];
  [path fill];
}


Figure 8-2 Output of FlowerView

FlowerView creates a path made up of a series of arcs and fills it with red. Creating a path doesn’t cause anything to be drawn. A UIBezierPath is just a sequence of curves, like an NSString is a sequence of characters. Only when you call fill is the curve drawn into the current context.

Note the use of the M_PI (π) and M_PI_2 (π/2) constants. Arcs are described in radians, so π and fractions of π are important. math.h defines many such constants that you should use rather than recomputing them. Arcs measure their angles clockwise, with 0 radians pointing to the right, π/2 radians pointing down, π (or -π) radians pointing left, and -π/2 radians pointing up. You can use 3π/2 for up if you prefer, but I find -M_PI_2 easier to visualize than 3*M_PI_2. If radians give you a headache, you can write a simple conversion function:

CGFloat RadiansFromDegrees(CGFloat d) {
  return d * M_PI / 180;
}

Generally, I recommend just getting used to radians rather than doing so much math, but if you need unusual angles, it can be easier to work in degrees.

When calculating radius and offset, you use rint (round to closest integer) to ensure that you’re point-aligned (and therefore pixel-aligned). That helps improve drawing performance and avoids blurry edges. Most of the time, that’s what you want, but in cases where an arc meets a line, it can lead to off-by-one drawing errors. Usually, the best approach is to move the line so that all the values are integers, as discussed in the following section.

Understanding Coordinates

There are subtle interactions among coordinates, points, and pixels that can lead to poor drawing performance and blurry lines and text. Consider the following code:

CGContextSetLineWidth(context, 3.);

// Draw 3pt horizontal line from {10,100} to {200,100}
CGContextMoveToPoint(context, 10., 100.);
CGContextAddLineToPoint(context, 200., 100.);
CGContextStrokePath(context);

// Draw 3pt horizontal line from {10,105.5} to {200,105.5}
CGContextMoveToPoint(context, 10., 105.5);
CGContextAddLineToPoint(context, 200., 105.5);
CGContextStrokePath(context);

Figure 8-3 shows the output of this program on a non-Retina display, scaled to make the differences more obvious.


Figure 8-3 Comparison of line from {10,100} and line from {10,105.5}

The line from {10, 100} to {200, 100} is much blurrier than the line from {10, 105.5} to {200, 105.5}. The reason lies in how iOS interprets coordinates.

When you construct a CGPath, you work in so-called geometric coordinates. These are the same kind of coordinates that mathematicians use, representing the zero-dimensional point at the intersection of two grid lines. It's impossible to draw a geometric point or a geometric line because they're infinitely small and thin. When iOS draws, it has to translate these geometric objects into pixel coordinates: two-dimensional boxes, each of which is set to a single color. A pixel is the smallest unit of display area that the device can control.

Figure 8-4 shows the geometric line from {10, 100} to {200, 100}.


Figure 8-4 Geometric line from {10, 100} to {200, 100}

When you call CGContextStrokePath, iOS centers the line along the path. Ideally, the line would be three pixels wide, from y = 98.5 to y = 101.5, as shown in Figure 8-5.


Figure 8-5 Ideal three-pixel line

This line is impossible to draw, however. Each pixel must be a single color, and the pixels at the top and bottom of the line include two colors. Half is the stroke color, and half is the background color. iOS solves this problem by averaging the two. This is the same technique used in anti-aliasing. This is shown in Figure 8-6.


Figure 8-6 Anti-aliased three-pixel line

On the screen, this line will look slightly blurry. The solution to this problem is to move horizontal and vertical lines to the half-point so that when iOS centers the line, the edges fall along pixel boundaries, or to make your line an even width.

You can also encounter this problem with nonintegral line widths, or if your coordinates aren’t integers or half-integers. Any situation that forces iOS to draw fractional pixels will cause blurriness.
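
For example, to get a crisp 1-point horizontal line on a non-Retina display, snap the y coordinate to a half-point before stroking. A minimal sketch, where someY is a hypothetical value you computed:

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 1.0);
CGFloat y = floor(someY) + 0.5;  // center the 1pt stroke on a pixel row
CGContextMoveToPoint(ctx, 10, y);
CGContextAddLineToPoint(ctx, 200, y);
CGContextStrokePath(ctx);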

Fill is not the same as stroke. A stroke line is centered on the path, but fill colors all the pixels up to the path. If you fill the rectangle from {10,100} to {200,103}, each pixel is filled correctly, as shown in Figure 8-7.


Figure 8-7 Filling the rectangle from {10,100} to {200,103}

The discussion so far has equated points with pixels. On a Retina display, these are not equivalent. The iPhone 4 has four pixels per point (two in each dimension) and a scale factor of two. That subtly changes things, but generally for the better. Because all the coordinates used in Core Graphics and UIKit are expressed in points, all integral line widths are effectively an even number of pixels. For example, if you request a 1-point stroke width, that is the same as a 2-pixel stroke width. To draw that line, iOS needs to fill one pixel on each side of the path. That's an integral number of pixels, so there's no anti-aliasing. You can still encounter blurriness if you use coordinates that are neither integers nor half-integers.

Offsetting by a half-point is unnecessary on a Retina display, but it doesn’t hurt. As long as you intend to support iPhone 3GS or iPad 2, you need to apply a half-point offset for drawing horizontal and vertical lines.

All of this applies only to horizontal and vertical lines. Sloping or curved lines should be anti-aliased so that they’re not jagged, so there’s generally no reason to offset them.

Resizing and contentMode

Returning to FlowerView found in the earlier section, “Paths,” if you rotate the device as shown in Figure 8-8, you’ll see that the view is distorted, even though you have code that adjusts for the size of the view.


Figure 8-8 Rotated FlowerView

iOS optimizes drawing by taking a snapshot of the view and adjusting it for the new frame. The drawRect: method isn’t called. The property contentMode determines how the view is adjusted. The default, UIViewContentModeScaleToFill, scales the image to fill the new view size, changing the aspect ratio if needed. That’s why the shape is distorted.

There are a lot of ways to automatically adjust the view. You can move it around without resizing it, or you can scale it in various ways that preserve or modify the aspect ratio. The key is to make sure that any mode you use exactly matches the results of your drawRect: in the new orientation. Otherwise, your view will “jump” the next time you redraw. This usually works as long as your drawRect: doesn’t consider its bounds during drawing. In FlowerView, you use the bounds to determine the size of your shape, so it’s hard to get automatic adjustments to work correctly.

Use the automatic modes if you can because they can improve performance. When you can’t, ask the system to call drawRect: when the frame changes by using UIViewContentModeRedraw, as shown in the following code.

- (void)awakeFromNib {
  self.contentMode = UIViewContentModeRedraw;
}

Transforms

iOS platforms have access to a very nice GPU that can do matrix operations very quickly. If you can convert your drawing calculations into matrix operations, you can leverage the GPU and get excellent performance. Transforms are just such a matrix operation.

iOS has two kinds of transforms: affine and 3D. UIView handles only affine transforms, so that’s all we discuss right now. An affine transform is a way of expressing rotation, scaling, shear, and translation (shifting) as a matrix. These transforms are shown in Figure 8-9.

A single transform combines any number of these operations into a 3 × 3 matrix. iOS has functions to support rotation, scaling, and translation. If you want shear, you’ll have to write the matrix yourself. (You can also use CGAffineTransformMakeShear from Jeff LaMarche; see “Further Reading” at the end of the chapter.)
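
Writing the shear matrix yourself is only a one-liner. The helper below is a minimal sketch of mine, not a Core Graphics function:

// x' = x + shx * y,  y' = shy * x + y
static inline CGAffineTransform
MyAffineTransformMakeShear(CGFloat shx, CGFloat shy) {
  return CGAffineTransformMake(1.0f, shy, shx, 1.0f, 0.0f, 0.0f);
}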

Transforms can dramatically simplify and speed up your code. Often it’s much easier and faster to draw in a simple coordinate space around the origin and then to scale, rotate, and translate your drawing to where you want it. For instance, FlowerView includes a lot of code like this:

CGPointMake(radius * 2 + xOffset, radius + yOffset)

That’s a lot of typing, a lot of math, and a lot of things to keep straight in your head. What if, instead, you just draw it in a 4 × 4 box as shown in Figure 8-10?


Figure 8-9 Affine transforms


Figure 8-10 Drawing FlowerView in a 4 × 4 box

Now all the interesting points fall on nice, easy coordinates like {0,1} and {1,0}. The following code shows how to draw using this transform. Compare the arc centers and the final transform with the FlowerView code earlier in this chapter.

FlowerTransformView.m (Transforms)

static inline CGAffineTransform
CGAffineTransformMakeScaleTranslate(CGFloat sx, CGFloat sy,
                                    CGFloat dx, CGFloat dy) {
  return CGAffineTransformMake(sx, 0.f, 0.f, sy, dx, dy);
}

- (void)drawRect:(CGRect)rect {
  CGSize size = self.bounds.size;
  CGFloat margin = 10;
  [[UIColor redColor] set];
  UIBezierPath *path = [UIBezierPath bezierPath];
  [path addArcWithCenter:CGPointMake(0, -1)
                  radius:1
              startAngle:-M_PI
                endAngle:0
               clockwise:YES];
  [path addArcWithCenter:CGPointMake(1, 0)
                  radius:1
              startAngle:-M_PI_2
                endAngle:M_PI_2
               clockwise:YES];
  [path addArcWithCenter:CGPointMake(0, 1)
                  radius:1
              startAngle:0
                endAngle:M_PI
               clockwise:YES];
  [path addArcWithCenter:CGPointMake(-1, 0)
                  radius:1
              startAngle:M_PI_2
                endAngle:-M_PI_2
               clockwise:YES];
  [path closePath];

  CGFloat scale = floor((MIN(size.height, size.width)
                         - margin) / 4);

  CGAffineTransform transform;
  transform = CGAffineTransformMakeScaleTranslate(scale,
                                                  scale,
                                             size.width/2,
                                            size.height/2);
  [path applyTransform:transform];
  [path fill];
}

When you’re done constructing your path, you compute a transform to move it into your view’s coordinate space. You scale it by the size you want divided by the size it currently is (4), and you translate it to the center of the view. The utility function CGAffineTransformMakeScaleTranslate isn’t just for speed (although it is faster). It’s easier to get the transform correct this way. If you try to build up the transform one step at a time, each step affects later steps. Scaling and then translating is not the same as translating and then scaling. If you build the matrix all at once, you don’t have to worry about that.
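
To see how the order changes the result, compare the two build orders; CGPointApplyAffineTransform shows where the origin ends up in each case:

// Scale applied first, then translate: the origin lands at {100, 0}.
CGAffineTransform scaleThenTranslate =
    CGAffineTransformScale(CGAffineTransformMakeTranslation(100, 0), 2, 2);

// Translate applied first, then scale: the translation is scaled too, {200, 0}.
CGAffineTransform translateThenScale =
    CGAffineTransformTranslate(CGAffineTransformMakeScale(2, 2), 100, 0);

CGPoint p1 = CGPointApplyAffineTransform(CGPointZero, scaleThenTranslate);
CGPoint p2 = CGPointApplyAffineTransform(CGPointZero, translateThenScale);
// p1 == {100, 0}, p2 == {200, 0}

Building the matrix in one call, as CGAffineTransformMakeScaleTranslate does above, sidesteps this ordering question entirely.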

This technique can be used to draw complicated shapes at unusual angles. For instance, to draw an arrow pointing to the upper right, it’s generally easier to draw it pointing to the right and then rotate it.

You have a choice between transforming the path using applyTransform: and transforming the whole view by setting the transform property. Which is best depends on the situation, but I usually prefer to transform the path rather than the view when practical. Modifying the view’s transform makes the results of frame and bounds more difficult to interpret, so I avoid it when I can. As you see in the following section, you can also transform the current context, which sometimes is the best approach.

Drawing with Core Graphics

Core Graphics, sometimes called Quartz 2D or just Quartz, is the main drawing system in iOS. It provides destination-independent drawing, so you can use the same commands to draw to the screen, layer, bitmap, PDF, or printer. Anything starting with CG is part of Core Graphics. Figure 8-11 and the following code provide an example of a simple scrolling graph.


Figure 8-11 Simple scrolling graph

GraphView.h (Graph)

@interface GraphView : UIView
@property (nonatomic, readonly, strong) NSMutableArray *values;
@end

GraphView.m (Graph)

@implementation GraphView {
  dispatch_source_t _timer;
}

const CGFloat kXScale = 5.0;
const CGFloat kYScale = 100.0;

static inline CGAffineTransform
CGAffineTransformMakeScaleTranslate(CGFloat sx, CGFloat sy,
    CGFloat dx, CGFloat dy) {
  return CGAffineTransformMake(sx, 0.f, 0.f, sy, dx, dy);
}

- (void)awakeFromNib {
  [self setContentMode:UIViewContentModeRight];
  _values = [NSMutableArray array];
  __weak id weakSelf = self;
  double delayInSeconds = 0.25;
  _timer =
      dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
          dispatch_get_main_queue());
  dispatch_source_set_timer(
      _timer, dispatch_walltime(NULL, 0),
      (unsigned)(delayInSeconds * NSEC_PER_SEC), 0);
  dispatch_source_set_event_handler(_timer, ^{
    [weakSelf updateValues];
  });
  dispatch_resume(_timer);
}

- (void)updateValues {
  double nextValue = sin(CFAbsoluteTimeGetCurrent())
      + ((double)rand()/(double)RAND_MAX);
  [self.values addObject:
      [NSNumber numberWithDouble:nextValue]];
  CGSize size = self.bounds.size;
  CGFloat maxDimension = MAX(size.height, size.width);
  NSUInteger maxValues =
      (NSUInteger)floorl(maxDimension / kXScale);
  if ([self.values count] > maxValues) {
    [self.values removeObjectsInRange:
        NSMakeRange(0, [self.values count] - maxValues)];
  }
  [self setNeedsDisplay];
}

- (void)dealloc {
  dispatch_source_cancel(_timer);
}

- (void)drawRect:(CGRect)rect {
  if ([self.values count] == 0) {
    return;
  }
  CGContextRef ctx = UIGraphicsGetCurrentContext();
  CGContextSetStrokeColorWithColor(ctx,
                                   [[UIColor redColor] CGColor]);
  CGContextSetLineJoin(ctx, kCGLineJoinRound);
  CGContextSetLineWidth(ctx, 5);
  CGMutablePathRef path = CGPathCreateMutable();
  CGFloat yOffset = self.bounds.size.height / 2;
  CGAffineTransform transform =
      CGAffineTransformMakeScaleTranslate(kXScale, kYScale,
                                          0, yOffset);
  CGFloat y = [[self.values objectAtIndex:0] floatValue];
  CGPathMoveToPoint(path, &transform, 0, y);
  for (NSUInteger x = 1; x < [self.values count]; ++x) {
    y = [[self.values objectAtIndex:x] floatValue];
    CGPathAddLineToPoint(path, &transform, x, y);
  }
  CGContextAddPath(ctx, path);
  CGPathRelease(path);
  CGContextStrokePath(ctx);
}

@end

Every quarter second, this code adds a new number to the end of the data and removes an old number from the beginning. Then it marks the view as dirty with setNeedsDisplay. The drawing code sets various advanced line drawing options not available with UIBezierPath, and creates a CGPath with all the lines. It then transforms the path to fit into the view, adds the path to the context, and strokes it.

Core Graphics uses the Core Foundation memory management rules. Core Foundation objects require manual retain and release, even under ARC. Note the use of CGPathRelease. For full details, see Chapter 27.

You may be tempted to cache the CGPath here so that you don’t have to compute it every time. That’s a good instinct, but in this case, it wouldn’t help. iOS already avoids calling drawRect: except when the view is dirty, which happens only when the data changes. When the data changes, you need to calculate a new path. Caching the old path in this case would just complicate the code and waste memory.

Mixing UIKit and Core Graphics

Within drawRect:, UIKit and Core Graphics can generally intermix without issue, but outside of drawRect:, you may find that things drawn with Core Graphics appear upside down. UIKit uses an upper-left origin (ULO) coordinate system, whereas Core Graphics uses a lower-left origin (LLO) system by default. As long as you use the context returned by UIGraphicsGetCurrentContext inside of drawRect:, everything is fine because this context is already flipped. But if you create your own context using functions like CGBitmapContextCreate, it’ll be LLO. You can either do your math backward or you can flip the context:

CGContextTranslateCTM(ctx, 0.0f, height);
CGContextScaleCTM(ctx, 1.0f, -1.0f);

This translates the origin by the height of the context and then flips the y-axis using a negative scale. When going from UIKit to Core Graphics, the transform is reversed:

CGContextScaleCTM(ctx, 1.0f, -1.0f);
CGContextTranslateCTM(ctx, 0.0f, -height);

First flip it, and then translate it.

Managing Graphics Contexts

Before calling drawRect:, the drawing system creates a graphics context (CGContext). A context includes a lot of information such as a pen color, text color, current font, transform, and more. Sometimes you may want to modify the context and then put it back the way you found it. For instance, you may have a function that draws a specific shape with a specific color. There is only one stroke pen and one fill pen, so when you change a color, the change affects your caller as well. To avoid side effects, you can save and restore the graphics state using CGContextSaveGState and CGContextRestoreGState.

Do not confuse this with the similar-sounding UIGraphicsPushContext and UIGraphicsPopContext. They do not do the same thing. CGContextSaveGState remembers the current state of a context. UIGraphicsPushContext changes the current context. Here’s an example of CGContextSaveGState.

[[UIColor redColor] setFill];
CGContextSaveGState(UIGraphicsGetCurrentContext());
[[UIColor blackColor] setFill];
CGContextRestoreGState(UIGraphicsGetCurrentContext());
UIRectFill(CGRectMake(10, 10, 100, 100)); // Red

This code sets the fill color to red and saves off the graphics state. It then changes the fill color to black and restores the state. When you draw, the fill color is red again.

The following code illustrates a common error.

[[UIColor redColor] setFill];
// Next line is nonsense
UIGraphicsPushContext(UIGraphicsGetCurrentContext());
[[UIColor blackColor] setFill];
UIGraphicsPopContext();
UIRectFill(CGRectMake(10, 10, 100, 100)); // Black

In this case, you set the fill color to red and then switch context to the current context, which does nothing useful. You then change the fill color to black, and pop the context back to the original (which effectively does nothing). You will now draw a black rectangle, which is almost certainly not what was meant.

The purpose of UIGraphicsPushContext is not to save the current state of the context (pen color, line width, and so on), but to switch contexts entirely. Say you are in the middle of drawing something into the current view context, and now want to draw something completely different into a bitmap context. If you want to use UIKit to do any of your drawing, you’d want to save off the current UIKit context, including all the drawing that had been done, and switch to a completely new drawing context. That’s what UIGraphicsPushContext does. When you finish creating your bitmap, you pop the stack and get your old context back. That’s what UIGraphicsPopContext does. This only matters in cases where you want to draw into the new bitmap context with UIKit. As long as you use Core Graphics functions, you don’t need to push or pop contexts because Core Graphics functions take the context as a parameter.

This is a pretty useful and common operation. It’s so common that Apple has made a shortcut for it called UIGraphicsBeginImageContext. It takes care of pushing the old context, allocating memory for a new context, creating the new context, flipping the coordinate system, and making it the current context. Most of the time, that’s just what you want.

Here’s an example of how to create an image and return it using UIGraphicsBeginImageContext. The result is shown in Figure 8-12.

MYView.m (Drawing)

- (UIImage *)reverseImageForText:(NSString *)text {
  const size_t kImageWidth = 200;
  const size_t kImageHeight = 200;
  CGImageRef textImage = NULL;
  UIFont *font = [UIFont boldSystemFontOfSize:17.0];

  UIGraphicsBeginImageContext(CGSizeMake(kImageWidth,
                                         kImageHeight));

  [[UIColor redColor] set];
  [text drawInRect:CGRectMake(0, 0,
                              kImageWidth, kImageHeight)
          withFont:font];
  textImage =
       UIGraphicsGetImageFromCurrentImageContext().CGImage;

  UIGraphicsEndImageContext();

  return [UIImage imageWithCGImage:textImage
                             scale:1.0
                 orientation:UIImageOrientationUpMirrored];
}


Figure 8-12 Text drawn with reverseImageForText:

Optimizing UIView Drawing

UIView and its subclasses are highly optimized, and when possible, use them rather than custom drawing. For instance, UIImageView is faster and uses less memory than anything you’re likely to put together in an afternoon with Core Graphics. The following sections cover a few things to keep in mind when using UIView to keep it drawing as well as it can.

Avoid Drawing

The fastest drawing is the drawing you never do. iOS goes to great lengths to avoid calling drawRect:. It caches an image of your view and moves, rotates, and scales it without any intervention from you. Using an appropriate contentMode lets the system adjust your view during rotation or resizing without calling drawRect:. The most common reason drawRect: runs is that you called setNeedsDisplay, so avoid calling setNeedsDisplay unnecessarily. Remember, though, that setNeedsDisplay just schedules the view to be redrawn. Calling setNeedsDisplay many times in a single event loop is, practically speaking, no more expensive than calling it once, so there's no need to coalesce the calls yourself. iOS is already doing that for you.

Those familiar with Mac development may be familiar with partial view drawing using setNeedsDisplayInRect:. iOS does not perform partial view drawing, and setNeedsDisplayInRect: is the same as setNeedsDisplay. The entire view will be redrawn. If you want to partially redraw a view, you should use CALayer (discussed in Chapter 9) or use subviews.

Caching and Background Drawing

If you need to do a lot of calculations during your drawing, cache the results when you can. At the lowest level, you can cache the raw data you need rather than asking for it from your delegate every time. Beyond that, you can cache static elements like CGFont or CGGradient objects so that you generate them only once. Fonts and gradients are useful to cache this way because they're often reused. Finally, you can cache the entire result of a complex drawing operation. Often the best place to cache such a result is in a CGLayer, which is discussed later in the section “CGLayer.” Alternatively, you can cache the result in a bitmap, generally using UIGraphicsBeginImageContext as discussed in “Managing Graphics Contexts,” earlier in this chapter.
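
As a sketch of the gradient case, you can build the CGGradient once, the first time it's needed, and reuse it on every subsequent drawRect:. The helper name and the two-stop white-to-gray gradient are just examples:

- (CGGradientRef)backgroundGradient {
  static CGGradientRef sGradient = NULL;
  if (sGradient == NULL) {
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGFloat components[8] = { 1.0, 1.0, 1.0, 1.0,      // white
                              0.8, 0.8, 0.8, 1.0 };    // light gray
    CGFloat locations[2] = { 0.0, 1.0 };
    sGradient = CGGradientCreateWithColorComponents(space, components,
                                                    locations, 2);
    CGColorSpaceRelease(space);
  }
  return sGradient; // intentionally cached for the life of the app
}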

Much of this caching or precalculation can be done in the background. You may have heard that you must always draw on the main thread, but this isn't completely true. There are several UIKit functions that must be called only on the main thread, such as UIGraphicsBeginImageContext, but you are free to create a bitmap context on any thread using CGBitmapContextCreate and draw into it. Since iOS 4, you can use UIKit drawing methods like drawAtPoint: on background threads as long as you draw into your own CGContext and not the main view graphics context (the one returned by UIGraphicsGetCurrentContext). You should only access a given CGContext from a single thread, however.
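
Putting these rules together, here's a hedged sketch of rendering into your own bitmap context on a background queue and handing the finished UIImage back to the main thread; renderIntoContext: and imageView are hypothetical:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
  size_t width = 256, height = 256;
  CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
  CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                           kCGImageAlphaPremultipliedLast);
  CGColorSpaceRelease(space);

  [self renderIntoContext:ctx]; // expensive Core Graphics drawing, off the main thread

  CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
  CGContextRelease(ctx);
  UIImage *image = [UIImage imageWithCGImage:cgImage];
  CGImageRelease(cgImage);

  dispatch_async(dispatch_get_main_queue(), ^{
    self.imageView.image = image; // touch UIKit only on the main thread
  });
});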

Custom Drawing Versus Prerendering

There are two major approaches to managing complex drawing. You can draw everything programmatically with CGPath and CGGradient, or you can prerender everything in a graphics program like Adobe Photoshop and display it as an image. If you have an art department and plan to have extremely complex visual elements, Photoshop is often the only way to go.

There are a lot of disadvantages to prerendering, however. First, it introduces resolution dependence. You may need to manage 1x and 2x versions of your images, and possibly different images for iPad and iPhone. This complicates workflow and bloats your product. It can make minor changes difficult and lock you into precise element sizes and colors if every change requires a round trip to the artist. Many artists are still unfamiliar with how to draw stretchable images and how best to provide images to be composited for iOS.

Apple originally encouraged developers to prerender because early iPhones couldn’t compute gradients fast enough. Since the iPhone 3GS, this has been less of an issue, and each generation makes custom drawing more attractive.

Today, I recommend custom drawing when you can do it in a reasonable amount of code. This is usually the case for small elements like buttons. When you do use prerendered artwork, I suggest that you keep the art files fairly “flat” and composite in code. For instance, you may use an image for a button’s background, but handle the rounding and shadows in code. That way, as you want to make minor tweaks, you don’t have to rerender the background.

A middle ground is automatic Core Graphics code generation with tools like PaintCode and Opacity. These are not panaceas. Typically, the generated code is not ideal, and you may have to modify it, which complicates the workflow if you later want to regenerate the code. That said, I recommend investigating these tools if you are doing a lot of UI design. See “Further Reading” at the end of this chapter for links to sites with information on these tools.

Pixel Alignment and Blurry Text

One of the most common causes of subtle drawing problems is pixel misalignment. If you ask Core Graphics to draw at a point that is not aligned with a pixel, it performs anti-aliasing as discussed in “Understanding Coordinates” earlier in this chapter. This means it draws part of the information on one pixel and part on another, giving the illusion that the line is between the two. This illusion makes things smoother but also makes them fuzzy. Anti-aliasing also takes processing time, so it slows down drawing. When possible, you want to make sure that your drawing is pixel-aligned to avoid this.

Prior to the Retina display, pixel-aligned meant integer coordinates. As of iOS 4, coordinates are in points, not pixels. There are two pixels to the point on the current Retina display, so half-points (1.5, 2.5) are also pixel-aligned. In the future, there might be four or more pixels to the point, and it could be different from device to device. Even so, unless you need pixel accuracy, it is easiest to just make sure you use integer coordinates for your frames.

Generally, it’s the frame origin that matters for pixel alignment. This causes an unfortunate problem for the center property. If you set the center to an integral coordinate, your origin may be misaligned. This is particularly noticeable with text, especially with UILabel. Figure 8-13 demonstrates this problem. It is subtle and somewhat difficult to see in print, so you can also demonstrate it with the program BlurryText available with the online files for this chapter.


Figure 8-13 Text that is pixel-aligned (top) and unaligned (bottom)

There are two solutions. First, odd font sizes (13 rather than 12, for instance) will typically align correctly. If you make a habit of using odd font sizes, you can often avoid the problem. To be certain you avoid it, you need to make sure that the frame is integral, either by using setFrame: instead of setCenter: or by using a UIView category method like setAlignedCenter:, shown here:

- (void)setAlignedCenter:(CGPoint)center {
  self.center = center;
  self.frame = CGRectIntegral(self.frame);
}

Because setAlignedCenter: effectively sets the frame twice, it’s not the fastest solution, but it is very easy and fast enough for most problems. CGRectIntegral() returns the smallest integral rectangle that encloses the given rectangle.
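
Usage is a drop-in replacement for setting center directly; myLabel is hypothetical:

[myLabel setAlignedCenter:CGPointMake(160, 240.5)];
// The frame origin is snapped to integral coordinates afterward.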

As pre-Retina displays phase out, blurry text will be less of an issue as long as you set center to integer coordinates. For now, though, it is still a concern.

Alpha, Opaque, Hidden

Views have three properties that appear related but that are actually orthogonal: alpha, opaque, and hidden.

The alpha property determines how much information a view contributes to the pixels within its frame. So an alpha of 1 means that all of the view’s information is used to color the pixel. An alpha of 0 means that none of the view’s information is used to color the pixel. Remember, nothing is really transparent on an iPhone screen. If you set the entire screen to transparent pixels, the user isn’t going to see the circuit board or the ground. In the end, it’s just a matter of what color to draw the pixel. So, as you raise and lower the alpha, you’re changing how much this view contributes to the pixel versus views “below” it.

Marking a view opaque or not doesn’t actually make its content more or less transparent. Opaque is a promise that the drawing system can use for optimization. When you mark a view as opaque, you’re promising the drawing system that you will draw every pixel in your rectangle with fully opaque colors. That allows the drawing system to ignore views below yours and that can improve performance, particularly when applying transforms. You should mark your views opaque whenever possible, especially views that scroll like UITableViewCell. However, if any partially transparent pixels are in your view, or if you don’t draw every pixel in your rectangle, setting opaque can have unpredictable results. Setting a nontransparent backgroundColor ensures that all pixels are drawn.

Closely related to opaque is clearsContextBeforeDrawing. This is YES by default, and sets the context to transparent black before calling drawRect:. This avoids any garbage data in the view. It’s a pretty fast operation, but if you’re going to draw every pixel anyway, you can get a small benefit by setting it to NO.

Finally, hidden indicates that the view should not be drawn at all and is generally equivalent to an alpha of 0. The hidden property cannot be animated, so it’s common to hide views by animating alpha to 0.

Hidden and transparent views don’t receive touch events. The meaning of transparent is not well defined in the documentation, but through experimentation, I’ve found that it’s an alpha less than 0.1. Do not rely on this particular value, but the point is that “nearly transparent” is generally treated as transparent. You cannot create a “transparent overlay” to catch touch events by setting the alpha very low.

You can make a view transparent and still receive touch events by setting its alpha to 1, opaque to NO, and backgroundColor to nil or [UIColor clearColor]. A view with a transparent background is still considered visible for the purposes of hit detection.
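
A minimal sketch, where overlayView is a hypothetical view you want to keep invisible but touchable:

overlayView.alpha = 1.0;
overlayView.opaque = NO;
overlayView.backgroundColor = [UIColor clearColor];
// Invisible to the user, but still hit-testable because alpha is 1.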

CGLayer

CGLayer is a very effective way to cache things you draw often. This should not be confused with CALayer, which is a more powerful and complicated layer object from Core Animation. CGLayer is a Core Graphics layer that is optimized, often hardware-optimized, for drawing into CGContext.

There are several kinds of CGContext. The most common is a view graphics context, designed to draw to the screen, which is returned by UIGraphicsGetCurrentContext. Contexts are also used for bitmaps and printing, however. Each of these has different attributes, including maximum resolution, color details, and available hardware acceleration.

At its simplest, a CGLayer is similar to a CGBitmapContext. You can draw into it, save it off, and use it to draw the result into a CGContext later. The difference is that you can optimize CGLayer for use with a particular kind of graphics context. If a CGLayer is destined for a view graphics context, it can cache its data directly on the GPU, which can significantly improve performance. CGBitmapContext can’t do this because it doesn’t know that you plan to draw it on the screen.

The following example demonstrates caching a CGLayer. In this case, it’s cached in a static variable the first time the view is drawn. You can then “stamp” the CGLayer repeatedly while rotating the context. You use UIGraphicsPushContext so that you can use UIKit to draw the text into the layer context, and UIGraphicsPopContext to return to the normal context. This could be done with CGContextShowTextAtPoint instead, but UIKit makes it very easy to draw an NSString. Figure 8-14 shows the output.

LayerView.m (Layer)

- (void)drawRect:(CGRect)rect {
  static CGLayerRef sTextLayer = NULL;
  CGContextRef ctx = UIGraphicsGetCurrentContext();

  if (sTextLayer == NULL) {
    CGRect textBounds = CGRectMake(0, 0, 200, 100);
    sTextLayer = CGLayerCreateWithContext(ctx,
                                          textBounds.size,
                                          NULL);
    CGContextRef textCtx = CGLayerGetContext(sTextLayer);
    CGContextSetRGBFillColor(textCtx, 1.0, 0.0, 0.0, 1);
    UIGraphicsPushContext(textCtx);
    UIFont *font = [UIFont systemFontOfSize:13.0];
    [@"Pushing The Limits" drawInRect:textBounds
                             withFont:font];
    UIGraphicsPopContext();
  }

  CGContextTranslateCTM(ctx, self.bounds.size.width / 2,
                        self.bounds.size.height / 2);

  for (NSUInteger i = 0; i < 10; ++i) {
    CGContextRotateCTM(ctx, 2 * M_PI / 10);
    CGContextDrawLayerAtPoint(ctx,
                              CGPointZero,
                              sTextLayer);
  }
}


Figure 8-14 Output of LayerView

Summary

iOS has a rich collection of drawing tools. This chapter focused on Core Graphics and its Objective-C counterpart, UIKit. By now, you should have a good understanding of how these systems interact and how to optimize your iOS drawing.

Chapter 9 discusses Core Animation, which puts your interface in motion. It also covers CALayer, a powerful companion to UIView and CGLayer, and an important tool for your drawing toolbox even if you're not animating.

iOS 5 added Core Image to iOS for tweaking pictures. iOS also has ever-growing support for OpenGL ES for drawing advanced 3D graphics and textures. OpenGL ES is a book-length subject of its own, so it isn’t tackled here, but you can get a good introduction in Apple’s “OpenGL ES Programming Guide for iOS” (see the “Further Reading” section).

Further Reading

Apple Documentation

The following documents are available in the iOS Developer Library at developer.apple.com or through the Xcode Documentation and API Reference.

Drawing and Printing Guide for iOS

iOS Human Interface Guidelines

iOS App Programming Guide

OpenGL ES Programming Guide for iOS

Quartz 2D Programming Guide

Technical Q&A QA1708: Improving Image Drawing Performance on iOS

View Programming Guide for iOS

Other Resources

LaMarche, Jeff. “iPhone Development.” Jeff has several articles that provide a lot of insight into using CGAffineTransform. iphonedevelopment.blogspot.com/search/label/CGAffineTransform

PaintCode. Fairly simple vector editor that exports Core Graphics code. Particularly well-suited to common UI elements. www.paintcodeapp.com

Opacity. More powerful vector editor that exports Core Graphics code, and can be used to generate more general vector drawings. likethought.com/opacity
