Chapter 12. Drawing with Quartz and OpenGL

Every application we've built so far has been constructed from views and controls provided as part of the UIKit framework. You can do an awful lot with these stock components, and a great many application interfaces can be built using nothing else. Some applications, however, can't be fully realized without looking further. At times, an application needs to do custom drawing, and fortunately for us, we have not one but two separate libraries we can call on for our drawing needs: Quartz 2D, which is part of the Core Graphics framework, and OpenGL ES, a slimmed-down subset of the cross-platform OpenGL graphics library designed specifically for embedded systems such as the iPhone (hence the "ES"). In this chapter, we'll explore both of these powerful graphics environments, build sample applications in each, and try to get a sense of which environment to use when.

Two Views of a Graphical World

Although Quartz and OpenGL overlap a lot, there are distinct differences between them. Quartz is a set of functions, datatypes, and objects designed to let you draw directly into a view or to an image in memory.

Quartz treats the view or image being drawn into as a virtual canvas and follows what's called a painter's model, which is just a fancy way of saying that drawing commands are applied in much the same way as paint is applied to a canvas. If a painter paints an entire canvas red, and then paints the bottom half of the canvas blue, the canvas will be half red and half either blue or purple: blue if the paint is opaque, purple if it's semitransparent.

Quartz's virtual canvas works the same way. If you paint the whole view red, and then paint the bottom half of the view blue, you'll have a view that's half red and half either blue or purple, depending on whether the second drawing action was fully opaque or partially transparent. Each drawing action is applied to the canvas on top of any previous drawing actions.

OpenGL ES, on the other hand, is implemented as a state machine. This is a somewhat harder concept to grasp, because it doesn't reduce to a simple metaphor like painting on a virtual canvas. Instead of letting you take actions that directly affect a view, window, or image, OpenGL ES maintains a virtual three-dimensional world. As you add objects to that world, OpenGL keeps track of the state of every object. Instead of a virtual canvas, OpenGL ES gives you a virtual window into its world: you add objects to the world, define the location of your virtual window with respect to the world, and OpenGL draws what you can see through that window based on how the window is configured and where the various objects sit in relation to one another. If this sounds abstract, don't worry; it'll make more sense as we make our way through this chapter's code.

Quartz is relatively easy to use. It provides a variety of line, shape, and image drawing functions. Though easy to use, Quartz 2D is limited to two-dimensional drawing. Although many Quartz functions do result in drawing that takes advantage of hardware acceleration, there is no guarantee that any particular action you take in Quartz will be accelerated.

OpenGL, though considerably more complex and conceptually more difficult, offers a lot more power. It has tools for both two-dimensional and three-dimensional drawing and is specifically designed to take full advantage of hardware acceleration. It's also extremely well suited to writing games and other complex, graphically intensive programs.

This Chapter's Drawing Application

Our next application is a simple drawing program (see Figure 12-1). We're going to build this application twice, once using Quartz 2D and once using OpenGL ES, so you get a real feel for the difference between the two.

Our chapter's simple drawing application in action

Figure 12.1. Our chapter's simple drawing application in action

The application features a bar across the top and one across the bottom, each with a segmented control. The control at the top lets you change the drawing color, and the one at the bottom lets you change the shape to be drawn. When you touch and drag, the selected shape will be drawn in the selected color. To minimize the application's complexity, only one shape will be drawn at a time.

The Quartz Approach to Drawing

When using Quartz to do your drawing, you'll usually add the drawing code to the view doing the drawing. For example, you might create a subclass of UIView and add Quartz function calls to that class's drawRect: method. The drawRect: method is part of the UIView class definition and gets called every time a view needs to redraw itself. If you insert your Quartz code in drawRect:, that code will get called when the view redraws itself.

Quartz 2D's Graphics Contexts

In Quartz 2D, as in the rest of Core Graphics, drawing happens in a graphics context, usually just referred to as a context. Every view has an associated context. When you want to draw in a view, you'll retrieve the current context, use that context to make various Quartz drawing calls, and let the context worry about rendering your drawing onto the view.

This line of code retrieves the current context:

CGContextRef context = UIGraphicsGetCurrentContext();

Note

Notice that we're using Core Graphics C functions, rather than Objective-C objects, to do our drawing. Both Core Graphics and OpenGL are C-based APIs, so most of the code we write in this part of the chapter will consist of C function calls.

Once you've defined your graphics context, you can draw into it by passing the context to a variety of Core Graphics drawing functions. For example, this sequence will draw a 2-pixel-wide line in the context:

CGContextSetLineWidth(context, 2.0);
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextMoveToPoint(context, 100.0f, 100.0f);
CGContextAddLineToPoint(context, 200.0f, 200.0f);
CGContextStrokePath(context);

The first call specifies that any drawing we do should create a line that's 2 pixels wide. We then specify that the stroke color should be red. In Core Graphics, two colors are associated with drawing actions: the stroke color and the fill color. The stroke color is used in drawing lines and for the outline of shapes, and the fill color is used to fill in shapes.

Contexts have a sort of invisible "pen" associated with them that does the line drawing. When you call CGContextMoveToPoint(), you move that invisible pen to a new location without actually drawing anything. By doing this, we indicate that the line we're about to draw will start at position (100, 100) (see the explanation of positioning in the next section). The next function adds a line from the current pen location to the specified location, which becomes the new pen location. When we draw in Core Graphics, we're not yet drawing anything you can actually see. We're creating a shape, a line, or some other object, but it contains no color or anything else to make it visible. It's like writing in invisible ink: until we do something to make it visible, our line can't be seen. So, the next step is to tell Quartz to draw the line using CGContextStrokePath(). This function uses the line width and the stroke color we set earlier to actually color (or "paint") the line and make it visible.

The Coordinate System

In the previous chunk of code, we passed a pair of floating-point numbers as parameters to CGContextMoveToPoint() and CGContextAddLineToPoint(). These numbers represent positions in the Core Graphics coordinate system. Locations in this coordinate system are denoted by their x and y coordinates, which we usually write as (x, y). The upper-left corner of the context is (0, 0). As you move down, y increases; as you move to the right, x increases.

In that last code snippet, we drew a diagonal line from (100, 100) to (200, 200), which would produce a line like the one shown in Figure 12-2.

The coordinate system is one of the gotchas in drawing with Quartz, because Quartz's coordinate system is flipped from what many graphics libraries use and from what is usually taught in geometry classes. In OpenGL ES, for example, (0, 0) is in the lower-left corner and as the y coordinate increases, you move toward the top of the context or view, as shown in Figure 12-3. When working with OpenGL, you have to translate the position from the view's coordinate system to OpenGL's coordinate system. That's easy enough to do, and you'll see how it's done when we get into working with OpenGL later in the chapter.
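Converting between the two systems is just a subtraction on the y axis. Here's a minimal sketch of that arithmetic in plain C; the function name and the sample view height are made up for illustration and aren't part of either API:

```c
/* Convert a y coordinate from Quartz's top-left origin to OpenGL's
   bottom-left origin; the x coordinate is unchanged. */
static float flip_y(float quartz_y, float view_height) {
    return view_height - quartz_y;
}
```

So in a 480-point-tall view, a touch at Quartz's (100, 100) sits at (100, 380) in OpenGL's coordinates.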

To specify a point in the coordinate system, some Quartz functions require two floating-point numbers as parameters. Other Quartz functions ask for the point to be embedded in a CGPoint, a struct that holds two floating-point values, x and y. To describe the size of a view or other object, Quartz uses CGSize, a struct that also holds two floating-point values, width and height. Quartz also declares a datatype called CGRect, which is used to define a rectangle in the coordinate system. A CGRect contains two elements, a CGPoint called origin that identifies the top left of the rectangle and a CGSize called size that identifies the width and height of the rectangle.
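To illustrate how these pieces fit together, here's a plain C sketch that mirrors the layout of those structs. The type and function names are hypothetical stand-ins, not the real Core Graphics declarations (the real types use CGFloat rather than float):

```c
/* Stand-ins mirroring the layout of CGPoint, CGSize, and CGRect. */
typedef struct { float x, y; } Point;
typedef struct { float width, height; } Size;
typedef struct { Point origin; Size size; } Rect;

/* A rectangle's right and bottom edges fall at origin + size,
   which is what CGRectGetMaxX() and CGRectGetMaxY() compute. */
static float rect_max_x(Rect r) { return r.origin.x + r.size.width; }
static float rect_max_y(Rect r) { return r.origin.y + r.size.height; }
```

A rect with origin (10, 20) and size 100 × 50 therefore extends to (110, 70) at its bottom-right corner.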

Drawing a line in the view's coordinate system

Figure 12.2. Drawing a line in the view's coordinate system

In many graphics libraries, including OpenGL, drawing from (10, 10) to (20, 20) would produce a line that looks like this instead of the line in Figure 12-2.

Figure 12.3. In many graphics libraries, including OpenGL, drawing from (10, 10) to (20, 20) would produce a line that looks like this instead of the line in Figure 12-2.

Specifying Colors

An important part of drawing is color, so understanding the way colors work on the iPhone is important. This is one of the areas where UIKit does provide an Objective-C class: UIColor. You can't use a UIColor object directly in Core Graphics calls, but since UIColor is just a wrapper around CGColor (which is what the Core Graphics functions require), you can retrieve a CGColor reference from a UIColor instance through its CGColor property, something we did earlier in this code snippet:

CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);

We created a UIColor instance using a convenience method called redColor, and then retrieved its CGColor property and passed that into the function.

A Bit of Color Theory for Your iPhone's Display

In modern computer graphics, a very common way to represent colors is to use four components: red, green, blue, and alpha. In Quartz 2D, these values are of type CGFloat (which, on the iPhone, is a four-byte floating-point value, the same as float) and hold a value between 0.0 and 1.0.

Note

A floating-point value that is expected to be in the range 0.0 to 1.0 is often referred to as a clamped floating-point variable, or sometimes just a clamp.
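In code, clamping is nothing more than a pair of comparisons. A quick sketch in plain C (the function name is ours, not a Quartz API):

```c
/* Clamp a floating-point value into the 0.0-1.0 range expected
   for color components. */
static float clampf01(float value) {
    if (value < 0.0f) return 0.0f;
    if (value > 1.0f) return 1.0f;
    return value;
}
```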

The first three are fairly easy to understand, as they represent the additive primary colors of the RGB color model (see Figure 12-4). If you add together light of these three colors in equal proportions, the result will appear to the eye as either white or a shade of gray, depending on the intensity of the light mixed. Combining the three additive primaries in different proportions gives you a range of different colors, referred to as a gamut.

In grade school, you probably learned that the primary colors are red, yellow, and blue. These primaries, which are known as the historical subtractive primaries or the RYB color model, have little application in modern color theory and are almost never used in computer graphics. The color gamut of the RYB color model is extremely limited, and this model doesn't lend itself easily to mathematical definition. As much as we hate to tell you that your wonderful third grade art teacher, Mrs. Smedlee, was wrong about anything, well, in the context of computer graphics, she was. For our purposes, the primary colors are red, green, and blue, not red, yellow, and blue.

A simple representation of the additive primary colors that make up the RGB color model

Figure 12.4. A simple representation of the additive primary colors that make up the RGB color model

More Than Color Meets the Eye

In addition to red, green, and blue, both Quartz 2D and OpenGL ES use another color component, called alpha, which represents how transparent a color is. Alpha is used, when drawing one color on top of another color, to determine the final color that gets drawn. With an alpha of 1.0, the drawn color is 100 percent opaque and obscures any colors beneath it. With any value less than 1.0, the colors below will show through and mix. When an alpha component is used, the color model is sometimes referred to as the RGBA color model, although technically speaking, the alpha isn't really part of the color; it just defines how the color will interact with other colors when it is drawn.
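Per color component, that mixing works out to a weighted average: the new color contributes its alpha's worth, and whatever was already drawn contributes the rest. Here's a sketch of that standard "source over" arithmetic in plain C; this is illustrative only, since Quartz and OpenGL do the compositing for you:

```c
/* Composite one color component over another using the source's
   alpha: alpha = 1.0 fully obscures dst, alpha = 0.0 leaves dst
   untouched, and values in between mix the two. */
static float blend_over(float src, float dst, float alpha) {
    return src * alpha + dst * (1.0f - alpha);
}
```

Drawing pure red (1.0) over black (0.0) at alpha 0.5 leaves the red channel at 0.5, which is why semitransparent colors appear to mix with what's underneath.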

Although the RGB model is the most commonly used in computer graphics, it is not the only color model. Several others are in use, including hue, saturation, value (HSV); hue, saturation, lightness (HSL); cyan, magenta, yellow, key (CMYK), which is used in four-color offset printing; and grayscale. To make matters even more confusing, there are different versions of some of these, including several variants of the RGB color space. Fortunately, for most operations, we don't have to worry about which color model is being used. We can just pass the CGColor from our UIColor object, and Core Graphics will handle any necessary conversions. If you use UIColor or CGColor when working with OpenGL ES, though, keep in mind that they support other color models, while OpenGL ES requires colors to be specified in RGBA.

UIColor has a large number of convenience methods that return UIColor objects initialized to a specific color. In our previous code sample, we used the redColor method to get a color initialized to red. Fortunately for us, the UIColor instances created by these convenience methods all use the RGBA color model.

If you need more control over color, instead of using one of those convenience methods based on the name of the color, you can create a color by specifying all four of the components. Here's an example:

return [UIColor colorWithRed:1.0f green:0.0f blue:0.0f alpha:1.0f];

Drawing Images in Context

Quartz 2D allows you to draw images directly into a context. This is another example of an Objective-C class (UIImage) that you can use as an alternative to working with a Core Graphics data structure (CGImage). The UIImage class contains methods to draw its image into the current context. You'll need to identify where the image should appear in the context by specifying either a CGPoint to identify the image's upper-left corner or a CGRect to frame the image—resized, if necessary, to fit the frame. You can draw a UIImage into the current context like so:

CGPoint drawPoint = CGPointMake(100.0f, 100.0f);
[image drawAtPoint:drawPoint];

Drawing Shapes: Polygons, Lines, and Curves

Quartz 2D provides a number of functions to make it easier to create complex shapes. To draw a rectangle or a polygon, you don't have to calculate angles, draw lines, or do any math at all, really. You can just call a Quartz function to do the work for you. For example, to draw an ellipse, you define the rectangle into which the ellipse needs to fit and let Core Graphics do the work:

CGRect theRect = CGRectMake(0.0f, 0.0f, 100.0f, 100.0f);
CGContextAddEllipseInRect(context, theRect);
CGContextDrawPath(context, kCGPathFillStroke);

There are similar methods for rectangles. There are also methods that let you create more complex shapes, such as arcs and Bezier paths. To learn more about arcs and Bezier paths in Quartz, check out the Quartz 2D Programming Guide in the iPhone Dev Center at http://developer.apple.com/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/ or in Xcode's online documentation.

Quartz 2D Tool Sampler: Patterns, Gradients, and Dash Patterns

Although not as expansive as OpenGL, Quartz 2D does offer quite an impressive array of tools. Though many of these tools are beyond the scope of this book, you should know they exist. For example, Quartz 2D supports the filling of polygons with gradients, not just solid colors, and supports not only solid lines but an assortment of dash patterns. Take a look at the screen shots in Figure 12-5, which are taken from Apple's QuartzDemo sample code, to see a sampling of what Quartz 2D can do for you.

Some examples of what Quartz 2D can do, from the Quartz Demo sample project provided by Apple

Figure 12.5. Some examples of what Quartz 2D can do, from the Quartz Demo sample project provided by Apple

Now that you have a basic understanding of how Quartz 2D works and what it is capable of, let's try it out.

Building the QuartzFun Application

In Xcode, create a new project using the view-based application template, and call it QuartzFun. Once it's created, expand the Classes and Resources folders, and single-click the Classes folder so we can add our classes. The template already provided us with an application delegate and a view controller. We're going to be executing our custom drawing in a view, so we need to create a subclass of UIView where we'll do the drawing by overriding the drawRect: method. Create a new Cocoa Touch Class file, selecting Objective-C class and choosing UIView from the Subclass of pop-up. Just to repeat, make it a subclass of UIView, not NSObject as we've done in the past. Call the file QuartzFunView.m, and be sure to create the header as well.

We're going to define some constants, as we've done several times, but this time, our constants are going to be needed by more than one class and don't relate to one specific class. We're going to create a header file just for the constants, so create a new file, selecting the Empty File template from the Other heading and calling it Constants.h.

We have two more files to go. If you look at Figure 12-1, you can see that we offer an option to select a random color. UIColor doesn't have a method to return a random color, so we'll have to write code to do that. We could, of course, put that code into our controller class, but because we're savvy Objective-C programmers, we're going to put the code into a category on UIColor. Create two more files using the Empty File template, calling one UIColor-Random.h and the other UIColor-Random.m. Alternatively, use the NSObject subclass template to create UIColor-Random.m, and let the template create UIColor-Random.h for you automatically; then, delete the contents of the two files.

Creating a Random Color

Let's tackle the category first. In UIColor-Random.h, place the following code:

#import <UIKit/UIKit.h>

@interface UIColor(Random)
+(UIColor *)randomColor;
@end

Now, switch over to UIColor-Random.m, and add this:

#import "UIColor-Random.h"

@implementation UIColor(Random)
+(UIColor *)randomColor {
    static BOOL seeded = NO;
    if (!seeded) {
        seeded = YES;
        srandom(time(NULL));
    }
    CGFloat red = (CGFloat)random()/(CGFloat)RAND_MAX;
    CGFloat blue = (CGFloat)random()/(CGFloat)RAND_MAX;
    CGFloat green = (CGFloat)random()/(CGFloat)RAND_MAX;
    return [UIColor colorWithRed:red green:green blue:blue alpha:1.0f];
}
@end

This is fairly straightforward. We declare a static variable that tells us if this is the first time through the method. The first time this method is called during an application's run, we will seed the random number generator. Doing this here means we don't have to rely on the application doing it somewhere else, and as a result, we can reuse this category in other iPhone projects.

Once we've made sure the random number generator is seeded, we generate three random CGFloats with a value between 0.0 and 1.0, and use those three values to create a new color. We set alpha to 1.0 so that all generated colors will be opaque.
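The heart of that computation is the division by RAND_MAX, which maps the generator's output into the clamped 0.0 to 1.0 range. The same idea in plain C, with the seed-once guard pulled out the same way (this sketch uses the portable rand()/srand() pair; the chapter's Objective-C code uses random()/srandom()):

```c
#include <stdlib.h>
#include <time.h>

/* Return a pseudorandom value clamped to [0.0, 1.0], seeding the
   generator only on the first call -- the same seed-once pattern
   used in the randomColor category method. */
static float random_component(void) {
    static int seeded = 0;
    if (!seeded) {
        seeded = 1;
        srand((unsigned)time(NULL));
    }
    return (float)rand() / (float)RAND_MAX;
}
```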

Defining Application Constants

We're going to define constants for each of the options that the user can select using the segmented controls. Single-click Constants.h, and add the following:

typedef enum {
    kLineShape = 0,
    kRectShape,
    kEllipseShape,
    kImageShape
} ShapeType;

typedef enum {
    kRedColorTab = 0,
    kBlueColorTab,
    kYellowColorTab,
    kGreenColorTab,
    kRandomColorTab
} ColorTabIndex;

#define degreesToRadian(x) (M_PI * (x) / 180.0)

To make our code more readable, we've declared two enumeration types using typedef. One represents the available shape options; the other represents the available color options. The values these constants hold correspond to segments on the two segmented controls we will create in our application.

Implementing the QuartzFunView Skeleton

Since we're going to do our drawing in a subclass of UIView, let's set up that class with everything it needs except for the actual code to do the drawing, which we'll add later. Single-click QuartzFunView.h, and make the following changes:

#import <UIKit/UIKit.h>
#import "Constants.h"

@interface QuartzFunView : UIView {
    CGPoint        firstTouch;
    CGPoint        lastTouch;
    UIColor        *currentColor;
    ShapeType      shapeType;
    UIImage        *drawImage;
    BOOL           useRandomColor;
}
@property CGPoint firstTouch;
@property CGPoint lastTouch;
@property (nonatomic, retain) UIColor *currentColor;
@property ShapeType shapeType;
@property (nonatomic, retain) UIImage *drawImage;
@property BOOL useRandomColor;
@end

The first thing we do is import the Constants.h header we just created so we can make use of our enumerations. We then declare our instance variables. The first two will track the user's finger as it drags across the screen. We'll store the location where the user first touches the screen in firstTouch. We'll store the location of the user's finger while dragging and when the drag ends in lastTouch. Our drawing code will use these two variables to determine where to draw the requested shape.

Next, we define a color to hold the user's color selection and a ShapeType to keep track of the shape the user wants drawn. After that is a UIImage property that will hold the image to be drawn on the screen when the user selects the rightmost toolbar item on the bottom toolbar (see Figure 12-6). The last property is a Boolean that will be used to keep track of whether the user is requesting a random color.

When drawing a UIImage to the screen, notice that the color control disappears.

Figure 12.6. When drawing a UIImage to the screen, notice that the color control disappears.

Switch to QuartzFunView.m, and make the following changes:

#import "QuartzFunView.h"
#import "UIColor-Random.h"
@implementation QuartzFunView
@synthesize firstTouch;
@synthesize lastTouch;
@synthesize currentColor;
@synthesize shapeType;
@synthesize drawImage;
@synthesize useRandomColor;

- (id)initWithCoder:(NSCoder*)coder
{
    if ( ( self = [super initWithCoder:coder] ) ) {
        self.currentColor = [UIColor redColor];
        self.useRandomColor = NO;
        if (drawImage == nil)
            self.drawImage = [UIImage imageNamed:@"iphone.png"];
    }
    return self;
}
- (id)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        // Initialization code
    }
    return self;
}
- (void)drawRect:(CGRect)rect {
      // Drawing code
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if (useRandomColor)
        self.currentColor = [UIColor randomColor];
    UITouch *touch = [touches anyObject];
    firstTouch = [touch locationInView:self];
    lastTouch = [touch locationInView:self];
    [self setNeedsDisplay];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    lastTouch = [touch locationInView:self];

    [self setNeedsDisplay];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    lastTouch = [touch locationInView:self];
    [self setNeedsDisplay];
}

- (void)dealloc {
    [currentColor release];
    [drawImage release];
    [super dealloc];
}

@end

Because this view is getting loaded from a nib, we first implement initWithCoder:. Keep in mind that object instances in nibs are stored as archived objects, the exact same mechanism we used in the previous chapter to archive and load our objects to disk. As a result, when an object instance is loaded from a nib, neither init nor initWithFrame: ever gets called. Instead, initWithCoder: is used, so this is where we need to add any initialization code. In our case, we set the initial color value to red, initialize useRandomColor to NO, and load the image file we're going to draw. You don't have to fully understand the rest of the code here. We'll get into the details of working with touches and the specifics of the touchesBegan:withEvent:, touchesMoved:withEvent:, and touchesEnded:withEvent: methods in Chapter 13. In a nutshell, these three methods, which UIView inherits from its superclass UIResponder, can be overridden to find out where the user is touching the iPhone's screen.

touchesBegan:withEvent: gets called when the user's finger first touches the screen. In that method, we change the color if the user has selected a random color, using the new randomColor method we added to UIColor earlier. After that, we store the current location so that we know where the user first touched the screen, and we indicate that our view needs to be redrawn by calling setNeedsDisplay on self.

The next method, touchesMoved:withEvent:, gets continuously called while the user is dragging a finger on the screen. All we do here is store off the new location in lastTouch and indicate that the screen needs to be redrawn.

The last one, touchesEnded:withEvent:, gets called when the user lifts the finger off the screen. Just as in the touchesMoved:withEvent: method, all we do is store the final location in the lastTouch variable and indicate that the view needs to be redrawn.

Don't worry if you don't fully grok what the three methods that start with touches are doing; we'll be working on these in much greater detail in the next few chapters.

We'll come back to this class once we have our application skeleton up and running. That drawRect: method, which is currently empty except for a comment, is where we will do this application's real work, and we haven't written that yet. Let's finish setting up the application before we add our drawing code.

Adding Outlets and Actions to the View Controller

If you refer to Figure 12-1, you'll see that our interface includes two segmented controls, one at the top and one at the bottom of the screen. The one on top, which lets the user select a color, is applicable to only three of the four options on the bottom, so we need an outlet to that top segmented control in order to hide it when it serves no purpose. We also need two methods: one that will be called when a new color is selected and another that will be called when a new shape is selected.

Single-click QuartzFunViewController.h, and make the following changes:

#import <UIKit/UIKit.h>

@interface QuartzFunViewController : UIViewController {
    UISegmentedControl *colorControl;
}
@property (nonatomic, retain) IBOutlet UISegmentedControl *colorControl;
- (IBAction)changeColor:(id)sender;
- (IBAction)changeShape:(id)sender;
@end

Nothing there should need explanation at this point, so switch over to QuartzFunViewController.m, and make these changes to the top of the file:

#import "QuartzFunViewController.h"
#import "QuartzFunView.h"
#import "Constants.h"

@implementation QuartzFunViewController
@synthesize colorControl;

- (IBAction)changeColor:(id)sender {
    UISegmentedControl *control = sender;
    NSInteger index = [control selectedSegmentIndex];

    QuartzFunView *quartzView = (QuartzFunView *)self.view;

    switch (index) {
        case kRedColorTab:
            quartzView.currentColor = [UIColor redColor];
            quartzView.useRandomColor = NO;
            break;
        case kBlueColorTab:
            quartzView.currentColor = [UIColor blueColor];
            quartzView.useRandomColor = NO;
            break;
        case kYellowColorTab:
            quartzView.currentColor = [UIColor yellowColor];
            quartzView.useRandomColor = NO;
            break;
        case kGreenColorTab:
            quartzView.currentColor = [UIColor greenColor];
            quartzView.useRandomColor = NO;
            break;
        case kRandomColorTab:
            quartzView.useRandomColor = YES;
            break;
        default:
            break;
    }
}
- (IBAction)changeShape:(id)sender {
    UISegmentedControl *control = sender;
    [(QuartzFunView *)self.view setShapeType:[control
        selectedSegmentIndex]];

    if ([control selectedSegmentIndex] == kImageShape)
        colorControl.hidden = YES;
    else
        colorControl.hidden = NO;
}
...

Let's also be good memory citizens by adding the following code to the existing viewDidUnload and dealloc methods:

...
- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
    self.colorControl = nil;
    [super viewDidUnload];
}
- (void)dealloc {
    [colorControl release];
    [super dealloc];
}
...

Again, these code changes are pretty straightforward. In the changeColor: method, we look at which segment was selected and create a new color based on that selection. We cast view to QuartzFunView. Next, we set its currentColor property so that it knows what color to use when drawing, except when a random color is selected, in which case, we just set the view's useRandomColor property to YES. Since all the drawing code will be in the view itself, we don't have to do anything else in this method.

In the changeShape: method, we do something similar. However, since we don't need to create an object, we can just set the view's shapeType property to the segment index from sender. Recall the ShapeType enum? The four elements of the enum correspond to the four toolbar segments at the bottom of the application view. We set the shape to be the same as the currently selected segment, and we hide and unhide the colorControl based on whether the Image segment was selected.

Updating QuartzFunViewController.xib

Before we can start drawing, we need to add the segmented controls to our nib and then hook up the actions and outlets. Double-click QuartzFunViewController.xib to open the file in Interface Builder. The first order of business is to change the class of the view, so single-click the View icon in the window labeled QuartzFunViewController.xib, open the identity inspector, and change the view's underlying class from UIView to QuartzFunView.

Next, look for a Navigation Bar in the library. Make sure you are grabbing a Navigation Bar—not a Navigation Controller. We just want the bar that goes at the top of the view. Place the Navigation Bar snugly against the top of the view window, just beneath the status bar.

Next, look for a Segmented Control in the library, and drag that right on top of the Navigation Bar. Drop it in the center of the nav bar, not on the left or right side. Once you drop it, it should stay selected, so grab one of the resize dots on either side of the segmented control and resize it so that it takes up the entire width of the navigation bar. You won't get any blue guide lines, but Interface Builder won't let you drag the control any wider than the bar in this case, so just drag until it won't expand any further.

With the segmented control still selected, use the attributes inspector to give the control five segments, titled Red, Blue, Yellow, Green, and Random, matching the color options our controller expects (see Figure 12-7).


Figure 12.7. The completed navigation bar

Control-drag from the File's Owner icon to the segmented control, and select the colorControl outlet. Make sure you are dragging to the segmented control and not the nav bar. Next, make sure the segmented control is selected, and use the connections inspector to connect its Value Changed event to File's Owner, selecting the changeColor: action.
Now look for a Toolbar in the library, and drag one of those over to the bottom of the window. The Toolbar from the library has a button on it that we don't need, so select it and press the delete button on your keyboard. Once it's placed and the button is deleted, grab another Segmented Control, and drop it onto the toolbar.

As it turns out, segmented controls are a bit harder to center in a toolbar, so we'll bring in a little help. Drag a Flexible Space Bar Button Item from the library onto the toolbar, to the left of our segmented control. Next, drag a second Flexible Space Bar Button Item onto the toolbar, to the right of our segmented control. These items will keep the segmented control centered in the toolbar as we resize it. Click the segmented control to select it, and resize it so it fills the toolbar with just a bit of space to the left and right. Interface Builder won't give you guides or stop you from making it wider than the toolbar the way it did with the navigation bar, so you'll have to be a little careful to resize it to the right size.

Next, with the segmented control still selected, use the attributes inspector to give this control four segments, titled Line, Rect, Ellipse, and Image, and then connect its Value Changed event to File's Owner, selecting the changeShape: action.
Note

You may have wondered why we put a navigation bar at the top of the view and a toolbar at the bottom of the view. According to the iPhone Human Interface Guidelines published by Apple, navigation bars were specifically designed to be placed at the top of the screen and toolbars are designed for the bottom. If you read the descriptions of the Toolbar and Navigation Bar in Interface Builder's library window, you'll see this design intention spelled out.

Make sure that everything is in order by compiling and running. You won't be able to draw shapes on the screen yet, but the segmented controls should work, and when you tap the Image segment in the bottom control, the color controls should disappear. Once you've got everything working, let's do some drawing.

Drawing the Line

Back in Xcode, edit QuartzFunView.m, and replace the empty drawRect: method with this one:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, currentColor.CGColor);


    switch (shapeType) {
        case kLineShape:
            CGContextMoveToPoint(context, firstTouch.x, firstTouch.y);
            CGContextAddLineToPoint(context, lastTouch.x, lastTouch.y);
            CGContextStrokePath(context);
            break;
        case kRectShape:
            break;
        case kEllipseShape:
            break;
        case kImageShape:
            break;
        default:
            break;
    }
}

We start things off by retrieving a reference to the current context so we know where to draw:

CGContextRef context = UIGraphicsGetCurrentContext();

Next, we set the line width to 2.0, which means that any line that we stroke will be 2 pixels wide:

CGContextSetLineWidth(context, 2.0);

After that, we set the color used for stroking lines. Since this function expects a CGColorRef, and UIColor exposes one through its CGColor property, we use that property of our currentColor instance variable to pass the correct color along:

CGContextSetStrokeColorWithColor(context, currentColor.CGColor);

We use a switch to jump to the appropriate code for each shape type. We'll start off with the code to handle kLineShape, get that working, and then we'll add code for each shape in turn as we make our way through this chapter:

switch (shapeType) {
    case kLineShape:

To draw a line, we move the invisible pen to the first place the user touched. Remember, we stored that value in the touchesBegan: method, so it will always reflect the first spot touched the last time the user did a touch or drag.

CGContextMoveToPoint(context, firstTouch.x, firstTouch.y);

Next, we draw a line from that spot to the last spot the user touched. If the user's finger is still in contact with the screen, lastTouch contains Mr. Finger's current location. If the user is no longer touching the screen, lastTouch contains the location of the user's finger when it was lifted off the screen.

CGContextAddLineToPoint(context, lastTouch.x, lastTouch.y);

Then, we just stroke the path. This function will stroke the line we just drew using the color and width we set earlier:

CGContextStrokePath(context);

After that, we just finish the switch statement, and we're done for now.

break;
    case kRectShape:
        break;
    case kEllipseShape:
        break;
    case kImageShape:
        break;
    default:
        break;
 }

At this point, you should be able to compile and run. The Rect, Ellipse, and Image options won't work yet, but you should be able to draw lines just fine (see Figure 12-8).
Figure 12.8. The line drawing part of our application is now complete. In this image, we are drawing using a random color.

Drawing the Rectangle and Ellipse

Let's implement the code to draw the rectangle and the ellipse at the same time, since Quartz 2D implements both of these objects in basically the same way. Make the following changes to your drawRect: method:

- (void)drawRect:(CGRect)rect {
    if (currentColor == nil)
        self.currentColor = [UIColor redColor];

    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, currentColor.CGColor);

    CGContextSetFillColorWithColor(context, currentColor.CGColor);
    CGRect currentRect = CGRectMake (
              (firstTouch.x > lastTouch.x) ? lastTouch.x : firstTouch.x,
               (firstTouch.y > lastTouch.y) ? lastTouch.y : firstTouch.y,
               fabsf(firstTouch.x - lastTouch.x),
               fabsf(firstTouch.y - lastTouch.y));
    switch (shapeType) {
        case kLineShape:
            CGContextMoveToPoint(context, firstTouch.x, firstTouch.y);
            CGContextAddLineToPoint(context, lastTouch.x, lastTouch.y);
            CGContextStrokePath(context);
            break;
        case kRectShape:
            CGContextAddRect(context, currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kEllipseShape:
            CGContextAddEllipseInRect(context, currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kImageShape:
            break;
        default:
            break;
    }
}

Because we want to paint both the ellipse and the rectangle in a solid color, we add a call to set the fill color using currentColor:

CGContextSetFillColorWithColor(context, currentColor.CGColor);

Next, we declare a CGRect variable. We'll use currentRect to hold the rectangle described by the user's drag. Remember, a CGRect has two members: origin and size. The CGRectMake() function lets us create a CGRect by specifying the x, y, width, and height values, so we use it to build our rectangle. The code may look a little intimidating at first glance, but it's not that complicated. The user could have dragged in any direction, so the origin depends on the drag direction: we take the smaller of the two x values and the smaller of the two y values as the origin. Then we get the size from the absolute value of the difference between the two x values and between the two y values.

CGRect currentRect = CGRectMake (
          (firstTouch.x > lastTouch.x) ? lastTouch.x : firstTouch.x,
          (firstTouch.y > lastTouch.y) ? lastTouch.y : firstTouch.y,
          fabsf(firstTouch.x - lastTouch.x),
          fabsf(firstTouch.y - lastTouch.y));
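Since this origin/size math is plain C, it can be checked outside of Quartz. Here's a minimal sketch (Point2D and Rect2D are hypothetical stand-ins for CGPoint and CGRect, used only to test the arithmetic):

```c
#include <math.h>

/* Hypothetical stand-ins for CGPoint and CGRect, just to check the math. */
typedef struct { float x, y; } Point2D;
typedef struct { float x, y, w, h; } Rect2D;

/* Normalize two drag endpoints into an origin-plus-size rectangle,
   regardless of which direction the user dragged. */
static Rect2D bounding_rect(Point2D first, Point2D last) {
    Rect2D r;
    r.x = (first.x > last.x) ? last.x : first.x;   /* smaller x */
    r.y = (first.y > last.y) ? last.y : first.y;   /* smaller y */
    r.w = fabsf(first.x - last.x);
    r.h = fabsf(first.y - last.y);
    return r;
}
```

Dragging from (200, 300) up-left to (50, 100) and dragging from (50, 100) down-right to (200, 300) both yield origin (50, 100) and size 150 × 200, which is exactly why the drawn shape doesn't care about drag direction.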

Once we have this rectangle defined, drawing either a rectangle or an ellipse is as easy as calling two functions, one to draw the rectangle or ellipse in the CGRect we defined and the other to stroke and fill it.

case kRectShape:
        CGContextAddRect(context, currentRect);
        CGContextDrawPath(context, kCGPathFillStroke);
        break;
    case kEllipseShape:
        CGContextAddEllipseInRect(context, currentRect);
        CGContextDrawPath(context, kCGPathFillStroke);
        break;

Compile and run your application and try out the Rect and Ellipse tools to see how you like them. Don't forget to change colors now and again and to try out the random color.

Drawing the Image

For our last trick, let's draw an image. There is an image in the 12 QuartzFun folder called iphone.png that you can add to your Resources folder, or you can add any .png file you want to use as long as you remember to change the filename in your code to point to the image you choose.

Add the following code to your drawRect: method:

- (void)drawRect:(CGRect)rect {

    if (currentColor == nil)
        self.currentColor = [UIColor redColor];

    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, currentColor.CGColor);

    CGContextSetFillColorWithColor(context, currentColor.CGColor);
    CGRect currentRect;
    currentRect = CGRectMake (
        (firstTouch.x > lastTouch.x) ? lastTouch.x : firstTouch.x,
        (firstTouch.y > lastTouch.y) ? lastTouch.y : firstTouch.y,
        fabsf(firstTouch.x - lastTouch.x),
        fabsf(firstTouch.y - lastTouch.y));

    switch (shapeType) {
        case kLineShape:
            CGContextMoveToPoint(context, firstTouch.x, firstTouch.y);
            CGContextAddLineToPoint(context, lastTouch.x, lastTouch.y);
            CGContextStrokePath(context);
            break;
        case kRectShape:
            CGContextAddRect(context, currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kEllipseShape:
            CGContextAddEllipseInRect(context, currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kImageShape: {
            CGFloat horizontalOffset = drawImage.size.width / 2;
            CGFloat verticalOffset = drawImage.size.height / 2;
            CGPoint drawPoint = CGPointMake(lastTouch.x - horizontalOffset,
                                        lastTouch.y - verticalOffset);
            [drawImage drawAtPoint:drawPoint];
            break;
        }
        default:
            break;
    }
}

Tip

Notice that, in the switch statement, we added curly braces around the code under case kImageShape:. C doesn't allow a declaration to appear as the statement immediately following a case label, so GCC complains if we leave the braces out. The braces turn the case body into a compound statement, inside which declarations are legal. We could also have declared horizontalOffset before the switch statement, but this approach keeps the related code together.
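Here's a tiny plain-C sketch of the same rule (area_for_shape is a made-up function, used only to show the braces):

```c
/* C (before C23) forbids a declaration as the statement immediately
   following a case label, because a label must be attached to a
   statement. Wrapping the case body in braces makes it a compound
   statement, so declarations become legal. */
static int area_for_shape(int shape, int w, int h) {
    switch (shape) {
        case 0: {               /* braces open a new scope... */
            int area = w * h;   /* ...so this declaration compiles */
            return area;
        }
        default:
            return 0;
    }
}
```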

First, we calculate half the image's width and height, since we want the image drawn centered on the point where the user last touched. Without this adjustment, the image would be drawn with its upper-left corner at the user's finger, which would also have been a valid choice. We then make a new CGPoint by subtracting these offsets from the x and y values in lastTouch.

CGFloat horizontalOffset = drawImage.size.width / 2;
CGFloat verticalOffset = drawImage.size.height / 2;
CGPoint drawPoint = CGPointMake(lastTouch.x - horizontalOffset,
                             lastTouch.y - verticalOffset);

Now, we tell the image to draw itself. This line of code will do the trick:

[drawImage drawAtPoint:drawPoint];
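The centering arithmetic itself is simple enough to verify in plain C (Pt and Sz below are hypothetical stand-ins for CGPoint and CGSize):

```c
typedef struct { float x, y; } Pt;
typedef struct { float width, height; } Sz;

/* Shift the draw origin up and left by half the image size so the
   image ends up centered on the touch point. */
static Pt centered_origin(Pt touch, Sz image) {
    Pt origin = { touch.x - image.width / 2,
                  touch.y - image.height / 2 };
    return origin;
}
```

For a 40 × 60 image and a touch at (100, 100), the origin lands at (80, 70), putting the image's midpoint exactly under the finger.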

Optimizing the QuartzFun Application

Our application does what we want, but we should consider a bit of optimization. In our application, you won't notice a slowdown, but in a more complex application, running on a slower processor, you might see some lag. The problem occurs in QuartzFunView.m, in the methods touchesMoved: and touchesEnded:. Both methods include this line of code:

[self setNeedsDisplay];

Obviously, this is how we tell our view that something has changed, and it needs to redraw itself. This code works, but it causes the entire view to get erased and redrawn, even if only a tiny little bit changed. We do want to erase the screen when we get ready to drag out a new shape, but we don't want to clear the screen several times a second as we drag out our shape.

Rather than forcing the entire view to be redrawn many times during our drag, we can use setNeedsDisplayInRect: instead. setNeedsDisplayInRect: is a UIView method that marks just one rectangular portion of the view as needing redisplay. By using it, we can be more efficient, marking only the part of the view affected by the current drawing operation as needing to be redrawn.

We need to redraw not just the rectangle between firstTouch and lastTouch but any part of the screen encompassed by the current drag. If the user touches the screen and then scribbles all over and we redrew the only section between firstTouch and lastTouch, we'd leave a lot of stuff drawn on the screen that we don't want.

The answer is to keep track of the entire area that's been affected by a particular drag in a CGRect instance variable. In touchesBegan:, we reset that instance variable to just the point where the user touched. Then in touchesMoved: and touchesEnded:, we use a Core Graphics function to get the union of the current rectangle and the stored rectangle, and we store the resulting rectangle. We also use it to specify what part of the view needs to be redrawn. This approach gives us a running total of the area impacted by the current drag.
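The accumulation logic can be sketched in plain C. DirtyRect, rect_union, and rect_inset below are hypothetical min/max-based stand-ins for CGRect, CGRectUnion, and CGRectInset, used only to show how a running union tracks the dragged-over area:

```c
/* Hypothetical stand-ins for CGRect, CGRectUnion, and CGRectInset. */
typedef struct { float x, y, w, h; } DirtyRect;

static float fmin2(float a, float b) { return a < b ? a : b; }
static float fmax2(float a, float b) { return a > b ? a : b; }

/* Smallest rectangle containing both inputs (like CGRectUnion). */
static DirtyRect rect_union(DirtyRect a, DirtyRect b) {
    float minx = fmin2(a.x, b.x), miny = fmin2(a.y, b.y);
    float maxx = fmax2(a.x + a.w, b.x + b.w);
    float maxy = fmax2(a.y + a.h, b.y + b.h);
    DirtyRect r = { minx, miny, maxx - minx, maxy - miny };
    return r;
}

/* Negative insets grow the rect (like CGRectInset with -2.0),
   leaving room for a 2-pixel stroke along the edges. */
static DirtyRect rect_inset(DirtyRect r, float dx, float dy) {
    DirtyRect out = { r.x + dx, r.y + dy, r.w - 2 * dx, r.h - 2 * dy };
    return out;
}
```

Starting from a zero-size rect at the touch-down point and unioning in each new drag rectangle gives exactly the region that must be erased and redrawn, and nothing more.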

Right now, we calculate the current rectangle in the drawRect: method for use in drawing the ellipse and rectangle shapes. We'll move that calculation into a new method so that it can be used in all three places without repeating code. Ready? Let's do it. Make the following changes to QuartzFunView.h:

#import <UIKit/UIKit.h>
#import "Constants.h"

@interface QuartzFunView : UIView {
    CGPoint        firstTouch;
    CGPoint        lastTouch;
    UIColor        *currentColor;
    ShapeType      shapeType;
    UIImage        *drawImage;
    BOOL           useRandomColor;
    CGRect         redrawRect;
}
@property CGPoint firstTouch;
@property CGPoint lastTouch;
@property (nonatomic, retain) UIColor *currentColor;
@property ShapeType shapeType;
@property (nonatomic, retain) UIImage *drawImage;
@property BOOL useRandomColor;
@property (readonly) CGRect currentRect;
@property CGRect redrawRect;
@end

We declare a CGRect called redrawRect that we will use to keep track of the area that needs to be redrawn. We also declare a read-only property called currentRect, which will return the rectangle we were previously calculating in drawRect:. Notice that it is a property with no underlying instance variable, which is fine as long as we implement the accessor ourselves rather than relying on @synthesize to generate it. We'll still use the @synthesize keyword, but we'll write the accessor by hand; @synthesize creates an accessor or mutator only if one doesn't already exist in the class.

Switch over to QuartzFunView.m, and insert the following code at the top of the file:

#import "QuartzFunView.h"

@implementation QuartzFunView
@synthesize firstTouch;
@synthesize lastTouch;
@synthesize currentColor;
@synthesize shapeType;
@synthesize drawImage;
@synthesize useRandomColor;
@synthesize redrawRect;
@synthesize currentRect;
- (CGRect)currentRect {
    return CGRectMake (
        (firstTouch.x > lastTouch.x) ? lastTouch.x : firstTouch.x,
        (firstTouch.y > lastTouch.y) ? lastTouch.y : firstTouch.y,
        fabsf(firstTouch.x - lastTouch.x),
        fabsf(firstTouch.y - lastTouch.y));
}
...

Now, in the drawRect: method, delete the lines of code where we calculated currentRect, and change all references to currentRect to self.currentRect so that the code uses that new accessor we just created.

...
- (void)drawRect:(CGRect)rect {
    if (currentColor == nil)
        self.currentColor = [UIColor redColor];

    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, currentColor.CGColor);

    CGContextSetFillColorWithColor(context, currentColor.CGColor);
       
    switch (shapeType) {
        case kLineShape:
            CGContextMoveToPoint(context, firstTouch.x, firstTouch.y);
            CGContextAddLineToPoint(context, lastTouch.x, lastTouch.y);
            CGContextStrokePath(context);
            break;
        case kRectShape:
            CGContextAddRect(context, self.currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kEllipseShape:
            CGContextAddEllipseInRect(context, self.currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kImageShape: {
            if (drawImage == nil)
                self.drawImage = [UIImage imageNamed:@"iphone.png"];

            CGFloat horizontalOffset = drawImage.size.width / 2;
            CGFloat verticalOffset = drawImage.size.height / 2;
            CGPoint drawPoint = CGPointMake(lastTouch.x - horizontalOffset,
                                            lastTouch.y - verticalOffset);
            [drawImage drawAtPoint:drawPoint];
            break;
        }
        default:
            break;
    }
}
...

We also need to make some changes in touchesEnded:withEvent: and touchesMoved:withEvent:. In both, we need to recalculate the area impacted by the current operation and use it to indicate that only that portion of our view needs to be redrawn:

...
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    lastTouch = [touch locationInView:self];

    
    if (shapeType == kImageShape) {
        CGFloat horizontalOffset = drawImage.size.width / 2;
        CGFloat verticalOffset = drawImage.size.height / 2;
        redrawRect = CGRectUnion(redrawRect,
                CGRectMake(lastTouch.x - horizontalOffset,
                lastTouch.y - verticalOffset, drawImage.size.width,
                drawImage.size.height));
    }
    else
        redrawRect = CGRectUnion(redrawRect, self.currentRect);
    redrawRect = CGRectInset(redrawRect, -2.0, -2.0);
    [self setNeedsDisplayInRect:redrawRect];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    lastTouch = [touch locationInView:self];

    if (shapeType == kImageShape) {
        CGFloat horizontalOffset = drawImage.size.width / 2;
        CGFloat verticalOffset = drawImage.size.height / 2;
        redrawRect = CGRectUnion(redrawRect,
                CGRectMake(lastTouch.x - horizontalOffset,
                lastTouch.y - verticalOffset, drawImage.size.width,
                drawImage.size.height));
    }
    redrawRect = CGRectUnion(redrawRect, self.currentRect);
    [self setNeedsDisplayInRect:redrawRect];
}
...

With only a few additional lines of code, we've reduced the amount of work needed to redraw our view by eliminating the need to erase and redraw any portion of the view that wasn't affected by the current drag. Being kind to the iPhone's precious processor cycles like this can make a big difference in the performance of your applications, especially as they get more complex.

Some OpenGL ES Basics

As we mentioned earlier in the chapter, OpenGL ES and Quartz 2D take fundamentally different approaches to drawing. A detailed introduction to OpenGL ES would be a book in and of itself, so we're not going to attempt that here. Instead, we're going to re-create our Quartz 2D application using OpenGL ES, just to give you a sense of the basics and some sample code you can use to kick start your own OpenGL applications.

Note

When you are ready to add OpenGL to your own applications, take a side trip to http://www.khronos.org/opengles/, which is the home base of the OpenGL ES standards group. Even better, visit this page, and search for the word "tutorial": http://www.khronos.org/developers/resources/opengles/

Also, be sure to check out the OpenGL tutorial in Jeff LaMarche's iPhone blog:

http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html

Let's get started with our application.

Building the GLFun Application

Create a new view-based application in Xcode, and call it GLFun. To save time, copy the files Constants.h, UIColor-Random.h, UIColor-Random.m, and iphone.png from the QuartzFun project into this new project. Open GLFunViewController.h, and make the following changes. You should recognize them, as they're identical to the changes we made to QuartzFunViewController.h earlier:

#import <UIKit/UIKit.h>

@interface GLFunViewController : UIViewController {
    UISegmentedControl *colorControl;
}
@property (nonatomic, retain) IBOutlet UISegmentedControl *colorControl;
- (IBAction)changeColor:(id)sender;
- (IBAction)changeShape:(id)sender;
@end

Switch over to GLFunViewController.m, and make the following changes at the beginning of the file. Again, these changes should look very familiar to you:

#import "GLFunViewController.h"
#import "Constants.h"
#import "GLFunView.h"
#import "UIColor-Random.h"

@implementation GLFunViewController
@synthesize colorControl;

- (IBAction)changeColor:(id)sender {
    UISegmentedControl *control = sender;
    NSInteger index = [control selectedSegmentIndex];

    GLFunView *glView = (GLFunView *)self.view;

    switch (index) {
        case kRedColorTab:
            glView.currentColor = [UIColor redColor];
            glView.useRandomColor = NO;
            break;
        case kBlueColorTab:
            glView.currentColor = [UIColor blueColor];
            glView.useRandomColor = NO;
            break;
        case kYellowColorTab:
            glView.currentColor = [UIColor yellowColor];
            glView.useRandomColor = NO;
            break;
        case kGreenColorTab:
            glView.currentColor = [UIColor greenColor];
            glView.useRandomColor = NO;
            break;
        case kRandomColorTab:
            glView.useRandomColor = YES;
            break;
        default:
            break;
    }
}
- (IBAction)changeShape:(id)sender {
    UISegmentedControl *control = sender;
    [(GLFunView *)self.view setShapeType:[control selectedSegmentIndex]];
    if ([control selectedSegmentIndex] == kImageShape)
        [colorControl setHidden:YES];
    else
        [colorControl setHidden:NO];
}
...

Let's not forget to deal with memory cleanup:

...
- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
    self.colorControl = nil;
    [super viewDidUnload];
}
- (void)dealloc {
    [colorControl release];
    [super dealloc];
}
...

The only difference between this and QuartzFunViewController.m is that we're referencing a view called GLFunView instead of one called QuartzFunView. The code that does our drawing is contained in a subclass of UIView. Since we're doing the drawing in a completely different way this time, it makes sense to use a new class to contain that drawing code.

Before we proceed, you'll need to add a few more files to your project. In the 12 GLFun folder, you'll find four files named Texture2D.h, Texture2D.m, OpenGLES2DView.h, and OpenGLES2DView.m. The first pair, written by Apple, makes drawing images in OpenGL ES much easier than it otherwise would be. The second pair is a class we've provided, based on sample code from Apple, that configures OpenGL to do two-dimensional drawing. OpenGL configuration is a complex topic that entire books have been written on, so we've done that configuration for you. Feel free to use any of these files in your own programs if you wish.

OpenGL ES doesn't have sprites or images, per se; it has one kind of image called a texture. Textures have to be drawn onto a shape or object. The way you draw an image in OpenGL ES is to draw a square (technically speaking, it's two triangles), and then map a texture onto that square so that it exactly matches the square's size. Texture2D encapsulates that relatively complex process into a single, easy-to-use class.

OpenGLES2DView is a subclass of UIView that uses OpenGL to do its drawing. We set up this view so that the coordinate systems of OpenGL ES and the coordinate system of the view are mapped on a one-to-one basis. OpenGL ES is a three-dimensional system. OpenGLES2DView maps the OpenGL 3-D world to the pixels of our 2-D view. Note that, despite the one-to-one relationship between the view and the OpenGL context, the y coordinates are still flipped, so we have to translate the y coordinate from the view coordinate system, where increases in y represent moving down, to the OpenGL coordinate system, where increases in y represent moving up.
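The flip itself is a one-liner. A plain-C sketch, assuming the one-to-one mapping described above:

```c
/* UIKit's y axis grows downward from the top-left corner; OpenGL's
   grows upward. With a one-to-one mapping between view pixels and GL
   units, converting a view y coordinate into a GL y coordinate is
   just a reflection about the view's height. */
static float view_y_to_gl_y(float view_y, float view_height) {
    return view_height - view_y;
}
```

Applying the function twice returns the original coordinate, which is why the same `self.frame.size.height - y` expression appears everywhere a touch point feeds a vertex.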

To use the OpenGLES2DView class, first subclass it, and then implement the draw method to do your actual drawing, just as we do in the following code. You can also implement any other methods you need in your view, such as the touch-related methods we used in the QuartzFun example.

Create a new file using the Cocoa Touch Class template, select Objective-C class and NSObject for Subclass of, and call it GLFunView.m, making sure to have it create the header file.

Single-click GLFunView.h, and make the following changes:

#import <Foundation/Foundation.h>
#import "Constants.h"
#import "OpenGLES2DView.h"

@class Texture2D;
@interface GLFunView : OpenGLES2DView {
    CGPoint        firstTouch;
    CGPoint        lastTouch;
    UIColor        *currentColor;
    BOOL           useRandomColor;
    ShapeType      shapeType;
    Texture2D      *sprite;
}
@property CGPoint firstTouch;
@property CGPoint lastTouch;
@property (nonatomic, retain) UIColor *currentColor;
@property BOOL useRandomColor;
@property ShapeType shapeType;
@property (nonatomic, retain) Texture2D *sprite;
@end

This class is similar to QuartzFunView.h, but instead of using UIImage to hold our image, we use a Texture2D to simplify the process of drawing images into an OpenGL ES context. We also change the superclass from UIView to OpenGLES2DView so that our view becomes an OpenGL ES–backed view set up for doing two-dimensional drawing.

Switch over to GLFunView.m, and make the following changes.

#import "GLFunView.h"
#import "UIColor-Random.h"
#import "Texture2D.h"

@implementation GLFunView
@synthesize firstTouch;
@synthesize lastTouch;
@synthesize currentColor;
@synthesize useRandomColor;
@synthesize shapeType;
@synthesize sprite;

- (id)initWithCoder:(NSCoder*)coder {
    if (self = [super initWithCoder:coder]) {
        self.currentColor = [UIColor redColor];
        self.useRandomColor = NO;
        Texture2D *texture = [[Texture2D alloc] initWithImage:
                              [UIImage imageNamed:@"iphone.png"]];
        self.sprite = texture;
        [texture release];   // the retain property now owns it
        glBindTexture(GL_TEXTURE_2D, sprite.name);
    }
    return self;
}

- (void)draw {
    glLoadIdentity();

    glClearColor(0.78f, 0.78f, 0.78f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    CGColorRef color = currentColor.CGColor;
    const CGFloat *components = CGColorGetComponents(color);
    CGFloat red = components[0];
    CGFloat green = components[1];
    CGFloat blue = components[2];

    glColor4f(red, green, blue, 1.0);

    switch (shapeType) {
    case kLineShape: {
        glDisable(GL_TEXTURE_2D);
        GLfloat vertices[4];

        // Convert coordinates
        vertices[0] = firstTouch.x;
        vertices[1] = self.frame.size.height - firstTouch.y;
        vertices[2] = lastTouch.x;
        vertices[3] = self.frame.size.height - lastTouch.y;
        glLineWidth(2.0);
        glVertexPointer(2, GL_FLOAT, 0, vertices);
        glDrawArrays (GL_LINES, 0, 2);
        break;
    }
    case kRectShape: {
        glDisable(GL_TEXTURE_2D);
        // Calculate bounding rect and store in vertices
        GLfloat vertices[8];
        GLfloat minX = (firstTouch.x > lastTouch.x) ?
        lastTouch.x : firstTouch.x;
        GLfloat minY = (self.frame.size.height - firstTouch.y >
                        self.frame.size.height - lastTouch.y) ?
        self.frame.size.height - lastTouch.y :
        self.frame.size.height - firstTouch.y;
        GLfloat maxX = (firstTouch.x > lastTouch.x) ?
        firstTouch.x : lastTouch.x;
        GLfloat maxY = (self.frame.size.height - firstTouch.y >
                        self.frame.size.height - lastTouch.y) ?
        self.frame.size.height - firstTouch.y :
        self.frame.size.height - lastTouch.y;

        vertices[0] = maxX;
        vertices[1] = maxY;
        vertices[2] = minX;
        vertices[3] = maxY;
        vertices[4] = minX;
        vertices[5] = minY;
        vertices[6] = maxX;
        vertices[7] = minY;

        glVertexPointer (2, GL_FLOAT , 0, vertices);
        glDrawArrays (GL_TRIANGLE_FAN, 0, 4);
        break;
    }
    case kEllipseShape: {
        glDisable(GL_TEXTURE_2D);
        GLfloat vertices[720];
        GLfloat xradius = (firstTouch.x > lastTouch.x) ?
        (firstTouch.x - lastTouch.x)/2 :
        (lastTouch.x - firstTouch.x)/2;
        GLfloat yradius = (self.frame.size.height - firstTouch.y >
                           self.frame.size.height - lastTouch.y) ?
        ((self.frame.size.height - firstTouch.y) -
         (self.frame.size.height - lastTouch.y))/2 :
        ((self.frame.size.height - lastTouch.y) -
         (self.frame.size.height - firstTouch.y))/2;
        for (int i = 0; i < 720; i+=2) {
            GLfloat xOffset = (firstTouch.x > lastTouch.x) ?
            lastTouch.x + xradius
            : firstTouch.x + xradius;
            GLfloat yOffset = (self.frame.size.height - firstTouch.y >
                               self.frame.size.height - lastTouch.y) ?
            self.frame.size.height - lastTouch.y + yradius :
            self.frame.size.height - firstTouch.y + yradius;
            vertices[i] = (cos(degreesToRadian(i/2))*xradius) + xOffset;
            vertices[i+1] = (sin(degreesToRadian(i/2))*yradius) +
            yOffset;
        }
        glVertexPointer(2, GL_FLOAT , 0, vertices);
        glDrawArrays (GL_TRIANGLE_FAN, 0, 360);
        break;
        }
    case kImageShape:
        glEnable(GL_TEXTURE_2D);
        [sprite drawAtPoint:CGPointMake(lastTouch.x,
                        self.frame.size.height - lastTouch.y)];
        break;
    default:
        break;
    }
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
- (void)dealloc {
    [currentColor release];
    [sprite release];
    [super dealloc];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if (useRandomColor)
        self.currentColor = [UIColor randomColor];

    UITouch* touch = [[event touchesForView:self] anyObject];
    firstTouch = [touch locationInView:self];
    lastTouch = [touch locationInView:self];
    [self draw];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {

    UITouch *touch = [touches anyObject];
    lastTouch = [touch locationInView:self];

    [self draw];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    lastTouch = [touch locationInView:self];

    [self draw];
}
@end

You can see that using OpenGL isn't, by any means, easier or more concise than using Quartz 2D. Although it's more powerful than Quartz, you're also closer to the metal, so to speak. OpenGL can be daunting at times.

Because this view is being loaded from a nib, we added an initWithCoder: method; in it, we create and assign a UIColor to currentColor, default useRandomColor to NO, and create our Texture2D object.

After the initWithCoder: method comes our draw method, which is where you can really see the difference between the two libraries' approaches. Let's take a look at the process of drawing a line. Here's how we drew the line in the Quartz version (we've removed the code that's not directly relevant to drawing):

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 2.0);
CGContextSetStrokeColorWithColor(context, currentColor.CGColor);
CGContextMoveToPoint(context, firstTouch.x, firstTouch.y);
CGContextAddLineToPoint(context, lastTouch.x, lastTouch.y);
CGContextStrokePath(context);

Here are the steps we had to take in OpenGL to draw that same line. First, we reset the virtual world so that any rotations, translations, or other transforms that might have been applied to it are gone:

glLoadIdentity();

Next, we clear the background to the same shade of gray that was used in the Quartz version of the application:

glClearColor(0.78f, 0.78f, 0.78f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

After that, we have to set the OpenGL drawing color by dissecting a UIColor and pulling the individual RGB components out of it. Fortunately, because we used the convenience class methods, we don't have to worry about which color model the UIColor uses. We can safely assume it will use the RGBA color space:

CGColorRef color = currentColor.CGColor;
const CGFloat *components = CGColorGetComponents(color);
CGFloat red = components[0];
CGFloat green = components[1];
CGFloat blue = components[2];
glColor4f(red, green, blue, 1.0);

Next, we turn off OpenGL ES's ability to map textures:

glDisable(GL_TEXTURE_2D);

Any drawing code that fires from the time we make this call until there's a call to glEnable(GL_TEXTURE_2D) will be drawn without a texture, which is what we want. If we allow a texture to be used, the color we just set won't show.

To draw a line, we need two vertices, which means we need an array with four elements. As we've discussed, a point in two-dimensional space is represented by two values, x and y. In Quartz, we used a CGPoint struct to hold these. In OpenGL, points are not embedded in structs. Instead, we pack an array with all the points that make up the shape we need to draw. So, to draw a line from point (100, 150) to point (200, 250) in OpenGL ES, we would create a vertex array that looked like this:

vertex[0] = 100;
vertex[1] = 150;
vertex[2] = 200;
vertex[3] = 250;

Our array has the format {x1, y1, x2, y2, x3, y3}. The next code in this method converts two CGPoint structs into a vertex array:

GLfloat vertices[4];
vertices[0] = firstTouch.x;
vertices[1] = self.frame.size.height - firstTouch.y;
vertices[2] = lastTouch.x;
vertices[3] = self.frame.size.height - lastTouch.y;

Once we've defined the vertex array that describes what we want to draw (in this example, a line), we specify the line width, pass the array into OpenGL ES using the function glVertexPointer(), and tell OpenGL ES to draw the arrays:

glLineWidth(2.0);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glDrawArrays(GL_LINES, 0, 2);

Whenever we finish drawing in OpenGL ES, we have to tell it to render its buffer, and tell our view's context to show the newly rendered buffer:

glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];

To clarify, the process of drawing in OpenGL consists of three steps. First, you draw in the context. Second, once all your drawing is done, you render the context into the buffer. And third, you present your render buffer, which is when the pixels actually get drawn onto the screen.

As you can see, the OpenGL example is considerably longer. The difference between Quartz 2D and OpenGL ES becomes even more dramatic when we look at the process of drawing an ellipse. OpenGL ES doesn't know how to draw an ellipse. OpenGL, the big brother and predecessor to OpenGL ES, has a number of convenience functions for generating common two- and three-dimensional shapes, but those convenience functions are some of the functionality that was stripped out of OpenGL ES to make it more streamlined and suitable for use in embedded devices like the iPhone. As a result, a lot more responsibility falls into the developer's lap.

As a reminder, here is how we drew the ellipse using Quartz 2D:

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 2.0);
CGContextSetStrokeColorWithColor(context, currentColor.CGColor);
CGContextSetFillColorWithColor(context, currentColor.CGColor);
CGContextAddEllipseInRect(context, self.currentRect);
CGContextDrawPath(context, kCGPathFillStroke);

For the OpenGL ES version, we start off with the same steps as before, resetting any movement or rotations, clearing the background to white, and setting the draw color based on currentColor:

glLoadIdentity();
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_TEXTURE_2D);
CGColorRef color = currentColor.CGColor;
const CGFloat *components = CGColorGetComponents(color);
CGFloat red = components[0];
CGFloat green = components[1];
CGFloat blue = components[2];
glColor4f(red, green, blue, 1.0);

Since OpenGL ES doesn't know how to draw an ellipse, we have to roll our own, which means dredging up painful memories of Ms. Picklebaum's geometry class. We'll define a vertex array that holds 720 GLfloats, enough for an x and a y position for 360 points, one for each degree around the circle. We could change the number of points to increase or decrease the smoothness of the circle. This approach looks good at any size that fits on the iPhone screen, but it probably requires more processing than strictly necessary if all you're drawing is smaller circles.

GLfloat vertices[720];

Next, we'll figure out the horizontal and vertical radii of the ellipse based on the two points stored in firstTouch and lastTouch:

GLfloat xradius = (firstTouch.x > lastTouch.x) ?
    (firstTouch.x - lastTouch.x) / 2 :
    (lastTouch.x - firstTouch.x) / 2;
GLfloat yradius = (self.frame.size.height - firstTouch.y >
    self.frame.size.height - lastTouch.y) ?
    ((self.frame.size.height - firstTouch.y) -
     (self.frame.size.height - lastTouch.y)) / 2 :
    ((self.frame.size.height - lastTouch.y) -
     (self.frame.size.height - firstTouch.y)) / 2;

Next, we'll loop around the circle, calculating the points that define it:

for (int i = 0; i < 720; i += 2) {
    GLfloat xOffset = (firstTouch.x > lastTouch.x) ?
        lastTouch.x + xradius : firstTouch.x + xradius;
    GLfloat yOffset = (self.frame.size.height - firstTouch.y >
        self.frame.size.height - lastTouch.y) ?
        self.frame.size.height - lastTouch.y + yradius :
        self.frame.size.height - firstTouch.y + yradius;
    vertices[i] = (cos(degreesToRadian(i / 2)) * xradius) + xOffset;
    vertices[i + 1] = (sin(degreesToRadian(i / 2)) * yradius) + yOffset;
}

Finally, we'll feed the vertex array to OpenGL ES, tell it to draw it and render it, and then tell our context to present the newly rendered image:

glVertexPointer(2, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLE_FAN, 0, 360);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];

We won't review the rectangle method, because it uses the same basic technique: we define a vertex array holding the rectangle's four vertices, and then we render and present it. There's also not much to talk about with the image drawing, since that lovely Texture2D class from Apple makes drawing a sprite just as easy as it is in Quartz 2D. There is one important thing to notice there, though:

glEnable(GL_TEXTURE_2D);

Since it is possible that the ability to draw textures was previously disabled, we have to make sure it's enabled before we attempt to use the Texture2D class.

After the draw method, we have the same touch-related methods as the previous version. The only difference is that instead of telling the view that it needs to be displayed, we just call the draw method. We don't need to tell OpenGL ES which parts of the screen will be updated; it figures that out and leverages hardware acceleration to draw in the most efficient manner.

Design the Nib, Add the Frameworks, Run the App

Now, you can double-click GLFunViewController.xib and design the interface. We're not going to walk you through it this time, but if you get stuck, you can refer to the earlier section called "Updating QuartzFunViewController.xib" for the specific steps. Be sure to change the class to GLFunView instead of QuartzFunView.

Once you're done, save and go back to Xcode.

Before you can compile and run this program, you'll need to link two frameworks into your project. Follow the instructions from Chapter 7 for adding the Audio Toolbox framework, but instead of selecting AudioToolbox.framework, select OpenGLES.framework and QuartzCore.framework.

Frameworks added? Good. Go run your project. It should look just like the Quartz version.

You've now seen enough OpenGL ES to get you started. If you're interested in using OpenGL ES in your iPhone applications, you can find the OpenGL ES specification along with links to books, documentation, and forums where OpenGL ES issues are discussed at http://www.khronos.org/opengles/.

Tip

If you want to create a full-screen OpenGL ES application, you don't have to build it manually. Xcode has a template you can use. It sets up the screen and the buffers for you and even puts some sample drawing and animation code into the class so you can see where to put your code. Want to try this out? Create a new iPhone OS application, and choose the OpenGL ES Application template.

Drawing a Blank

In this chapter, we've really just scratched the surface of the iPhone's drawing ability. You should feel pretty comfortable with Quartz 2D now, and with some occasional references to Apple's documentation, you can probably handle most any drawing requirement that comes your way. You should also have a basic understanding of what OpenGL ES is and how it integrates with the iPhone's view system.

Next up? You're going to learn how to add gestural support to your applications.
