Chapter 11. Media: images and the camera

 

This chapter covers

  • Accessing and manipulating images
  • Using the camera
  • Creating a simple collage application
  • Using AirPrint for images

 

So far, our focus has mainly been on text. Sure, we’ve displayed the occasional UIImage, such as the mountain drawing in the previous chapter, but we’ve considered only the simplest means for doing so.

The iPhone, iPod touch, and iPad offer an experience that's potentially much richer and more engaging. Cameras, a microphone, a complete library of photos, and a speaker are just some of the utilities built into these devices. Aside from the camera, the first-generation iPad includes all of these features. In this chapter and the next, we'll look at them as part of a general exploration of media. We'll provide deep coverage of images as well as how to use the camera.

More complex questions are beyond the scope of this chapter. We’re saving the topic of image editing for a later chapter, when we look at the graphic libraries.

11.1. An introduction to images

We’ve touched on using images a few times, beginning in chapter 3, where one of the earliest SDK examples included an image. You’ve created a UIImageView in Xcode, attached it to a filename, and not worried about the details.

We’re now ready to consider the details. We’ll look at some of the options available when you dive into Xcode.

When you look more closely, you’ll discover that using images is a two-step process. First, you load data into a UIImage, and then you make use of that UIImage via some other means. There are two major ways to use UIImages, as shown in figure 11.1.

Figure 11.1. Images can be shown in UIImageViews or in UIViews.

We’re going to explore the primary method of displaying images, using UIImageView, in this section, and in section 11.2 we’ll examine the more complex means available for drawing images onto the back layer of a UIView.

11.1.1. Loading a UIImage

The UIImage class offers seven different ways to create an instance of an image. The four factory methods are probably the easiest to use, and they’re the ones we’ve listed in table 11.1. You can also use some equivalent init methods if you prefer.

Table 11.1. Class methods for creating a UIImage

  • imageNamed: Creates a UIImage based on a file in the main bundle. In iOS 4 and later, you may omit the filename's extension.
  • imageWithCGImage: Creates a UIImage from a Quartz 2D object. This is the same as initWithCGImage:.
  • imageWithContentsOfFile: Creates a UIImage from a complete file path that you specify, as discussed in chapter 8. This is the same as initWithContentsOfFile:.
  • imageWithData: Creates a UIImage from NSData. This is the same as initWithData:.

The image data can be of several file types, including BMP, CUR, GIF, JPEG, PNG, and TIFF. In this book, we use mostly JPEGs (because they're small) and PNGs (because they look good and are accelerated on the hardware). You can also create a UIImage from a Quartz 2D object; Quartz 2D is the SDK's fundamental graphics package, which we'll talk about more in chapter 13. To support the retina display, the system uses a suffix in the image filename to load the best-matching image. For example, if you have two image files for one icon, a standard image and a higher-resolution image for the retina display, name the standard file icon.png and the high-resolution version [email protected]. At load time, the UIImage class automatically picks the right file. If the @2x version doesn't exist, UIImage loads the standard file and scales it up to fit the higher-resolution display.
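For example, here's a minimal sketch of loading an image each of these ways; the filename icon.png is just a placeholder, and error handling is omitted:

// Load from the main bundle; the @2x version is picked automatically on
// retina displays, and the extension may be omitted in iOS 4 and later.
UIImage *bundled = [UIImage imageNamed:@"icon"];

// Load from an explicit path, as discussed in chapter 8.
NSString *path = [[NSBundle mainBundle] pathForResource:@"icon"
                                                 ofType:@"png"];
UIImage *fromFile = [UIImage imageWithContentsOfFile:path];

// Load from raw data, for example bytes read from disk or the network.
NSData *pngData = [NSData dataWithContentsOfFile:path];
UIImage *fromData = [UIImage imageWithData:pngData];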

After you import an image into your program, you can display it. If you’re going to stay entirely within the simple methods of UIKit, you should use the UIImageView class to display the image.

11.1.2. Drawing a UIImageView

You’ve already used the UIImageView in your programs when displaying pictures. We’re now ready to talk about the details of how it works.

You can initialize a UIImageView two ways. First, you can use the initWithImage: method, which allows you to pass a UIImage, as follows:

UIImage *myImage1 = [UIImage imageNamed:@"sproul1.jpg"];
UIImageView *myImageView =
    [[UIImageView alloc] initWithImage:myImage1];
[self.view addSubview:myImageView];
[myImageView release];    // the superview now retains the image view

Alternatively, you can use a plain initWithFrame: method and modify the object’s properties by hand. Table 11.2 shows a few of the properties and methods you’re most likely to use when doing more extensive work with a UIImageView.

Table 11.2. A few properties and methods of note for UIImageView

  • animationDuration (property): The duration of one animation cycle, in seconds
  • animationImages (property): An NSArray of images to load into the UIImageView
  • animationRepeatCount (property): The number of times to run the animation cycle (0 repeats indefinitely)
  • image (property): A single image to load into a UIImageView
  • startAnimating (method): Starts the animation
  • stopAnimating (method): Stops the animation

To load a normal image, you can use the image property, but there’s usually little reason to use it rather than the initWithImage: method—unless you’re dynamically changing your image. If you want to create a set of images to animate, it’s useful to take advantage of the other UIImageView methods and properties.

You can load an array of images into a UIImageView, declare how fast and how often they should animate, and start and stop them as you see fit. A simple example of this is shown in the following listing.

Listing 11.1. Using UIImageView to animate images
- (void)viewDidLoad {
    [super viewDidLoad];

    // Load the four frames of the animation.
    UIImage *myImage1 = [UIImage imageNamed:@"sproul1.jpg"];
    UIImage *myImage2 = [UIImage imageNamed:@"sproul2.jpg"];
    UIImage *myImage3 = [UIImage imageNamed:@"sproul3.jpg"];
    UIImage *myImage4 = [UIImage imageNamed:@"sproul4.jpg"];

    // Create a full-screen image view and hand it the animation frames.
    UIImageView *myImageView = [[UIImageView alloc]
        initWithFrame:[[UIScreen mainScreen] bounds]];
    myImageView.animationImages = [NSArray arrayWithObjects:
        myImage1, myImage2, myImage3, myImage4, nil];
    myImageView.animationDuration = 4;    // one full cycle takes 4 seconds

    [myImageView startAnimating];
    [self.view addSubview:myImageView];
    [myImageView release];
}

This code first loads the images, then creates a UIImageView, and finally starts the animation. Taking advantage of UIImageView's animation capability is one of the main reasons you may want to load images by hand.

11.1.3. Modifying an image in UIKit

You’ve seen how to create images and load them into image views programmatically. The next thing to do is to start modifying them.

Unfortunately, you have only a limited ability to do so while working with UIImageView. You can make some changes, based on simple manipulations of the view. For example, if you resize your UIImageView, it automatically resizes the picture it contains. Likewise, you can decide where to draw your UIImageView by setting its frame to something other than the whole screen. You can even layer multiple images by using multiple UIImageViews.
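For instance, a quick sketch of layering two image views by hand might look like this (the filenames are the same placeholders used elsewhere in this chapter):

UIImageView *back = [[UIImageView alloc]
    initWithImage:[UIImage imageNamed:@"sproul1.jpg"]];
back.frame = CGRectMake(0, 0, 320, 240);    // resizing the view resizes the picture

UIImageView *front = [[UIImageView alloc]
    initWithImage:[UIImage imageNamed:@"sproul2.jpg"]];
front.frame = CGRectMake(40, 60, 160, 120); // drawn on top of the first view

[self.view addSubview:back];
[self.view addSubview:front];
[back release];
[front release];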

This starts to get unwieldy quickly, though, and you can’t do anything fancier, like transforming images or modifying how they stack through blending or alpha transparency options. To do that sort of work (and to stack graphics, not just views), you need to learn about Core Graphics.

UIImage offers some simple ways to tap into Core Graphics functionality without going out to the Core Graphics framework (or learning about contexts and the other complexities that underlie its use). We'll talk about those briefly here, but for the most part, Core Graphics will wait for chapter 13, which concentrates on the entire Quartz 2D graphics engine.

11.2. Drawing simple images with Core Graphics

Although it doesn’t give access to the entire Core Graphics library of transformations and other complexities, the UIImage class includes five simple methods that take advantage of the way Core Graphics works. They’re described in table 11.3.

Table 11.3. Instance methods for drawing a UIImage

  • drawAsPatternInRect: Draws the image inside the rectangle, unscaled, but tiled as necessary
  • drawAtPoint: Draws the complete unscaled image with the CGPoint as the upper-left corner
  • drawAtPoint:blendMode:alpha: A more complex form of drawAtPoint:
  • drawInRect: Draws the complete image inside the CGRect, scaled appropriately
  • drawInRect:blendMode:alpha: A more complex form of drawInRect:

The trick is that these methods can't be used as part of viewDidLoad or whatever other method you usually use to load up your objects. That's because they depend on a graphical context to work. We'll talk about contexts more in chapter 13; for now, keep in mind that a graphical context is a destination you're drawing to, such as a window, a PDF file, or a printer.

On the iPhone and iPad, UIViews automatically create a graphical context as part of their CALayer, which is a Core Animation layer associated with each UIView. You can access this layer by writing a drawRect: method for the UIView (or rather, for a new subclass that you’ve created). You usually have to capture a special context variable to do this type of work, but the UIView methods take care of this for you, to keep things simple.

Here’s how to collage together a few pictures using this method:

- (void)drawRect:(CGRect)rect {
    UIImage *myImage1 = [UIImage imageNamed:@"sproul1.jpg"];
    UIImage *myImage2 = [UIImage imageNamed:@"sproul2.jpg"];
    UIImage *myImage3 = [UIImage imageNamed:@"sproul3.jpg"];

    // Draw the first image at full size, blended normally at 50% alpha.
    [myImage1 drawAtPoint:CGPointMake(0,0)
                blendMode:kCGBlendModeNormal
                    alpha:.5];

    // Draw the other two scaled into smaller rectangles on top.
    [myImage2 drawInRect:CGRectMake(10, 10, 140, 210)];
    [myImage3 drawInRect:CGRectMake(170, 240, 140, 210)];
}

Note that the drawAtPoint:blendMode:alpha: method gives you access to more complex possibilities, such as blending your pictures (using Photoshop-like options such as color dodge and hard light) and making them partially transparent. Here you're using a normal blend but only 50 percent transparency (hence the use of that method rather than plain drawAtPoint:). Using individual draw commands is simpler than going through the effort of creating multiple UIImageView objects.

There’s still a lot that you can’t do until we dive fully into the Core Graphics framework; but for now you have some control, which should be sufficient for most common media needs. If you need more control, skip right ahead to chapter 13.

We’ve talked a lot about images, and we’ve presumed so far that you’re loading them from your project’s bundle. But what if you want to let a user select photographs? That’s the topic of the next section.

11.3. Accessing photos

You can use the SDK to access pictures from the photo library or the camera roll. You can also allow a user to take new photos. This is all done with the UIImagePickerController, another modal controller that manages a fairly complex graphical interface without much effort on your part. Figure 11.2 shows what it looks like.

Figure 11.2. The image picker is another preprogrammed controller for your use.

11.3.1. Using the image picker

By default, the UIImagePickerController lets users access the pictures in their photo library. You load the UIImagePickerController by creating the object, setting a few variables, and presenting it. On the iPhone, you present it as a modal view controller; on the iPad, you need to display it in a UIPopoverController. Make sure your class adopts the UIImagePickerControllerDelegate protocol (the picker's delegate property also expects UINavigationControllerDelegate) so you can implement its methods.

To display the picker on the iPhone, you can use the following code snippet:

UIImagePickerController *myImagePicker =
    [[UIImagePickerController alloc] init];
myImagePicker.delegate = self;
myImagePicker.allowsEditing = NO;    // allowsImageEditing is the older, deprecated name
[self presentModalViewController:myImagePicker animated:YES];
[myImagePicker release];             // the presentation retains the picker

As we mentioned, the iPad requires that you display the UIImagePickerController inside a UIPopoverController. One great thing about this is that you can specify the location on the screen in which the picker appears. The following code displays the UIImagePickerController on the iPad:

UIImagePickerController *myImagePicker =
    [[UIImagePickerController alloc] init];
myImagePicker.delegate = self;
myImagePicker.allowsEditing = NO;

// Keep a reference to the popover (for example, in an instance variable)
// so it isn't deallocated while it's still on screen.
UIPopoverController *popover = [[UIPopoverController alloc]
    initWithContentViewController:myImagePicker];
[popover presentPopoverFromRect:CGRectMake(0,0,320,480)
                         inView:self.view
       permittedArrowDirections:UIPopoverArrowDirectionAny
                       animated:YES];

After you’ve created your UIImagePickerController, you need to have its delegate respond to two methods: imagePickerController:didFinishPickingMediaWithInfo: and imagePickerControllerDidCancel:. For the first method, you dismiss the modal view controller (or hide the popover on the iPad) and respond appropriately to the user’s picture selection; for the second, you only need to dismiss the controller.

Overall, the UIImagePickerController is easy to use because you’re mainly reacting to a picture that was selected. Section 11.4 presents a complete example of its use.

11.3.2. Taking photos

As we noted earlier, the UIImagePickerController has three possible sources, represented by these constants:

  • UIImagePickerControllerSourceTypePhotoLibrary—A picture from the photo library
  • UIImagePickerControllerSourceTypeSavedPhotosAlbum—A picture from the camera roll
  • UIImagePickerControllerSourceTypeCamera—A new picture taken by the camera

You should always make sure that the source is available before you launch a UIImagePickerController, although this is most important for the camera. You can confirm that the source exists with the isSourceTypeAvailable: class method:

if ([UIImagePickerController
    isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
    // the camera exists, so it's safe to offer it as a source
}

After you’ve verified the existence of a source, you can tell the image picker to use it with the sourceType property. For example, to use the camera, do the following:

myImagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;

Note that pictures taken in a program go only to that program. If you want them to go into the photo album, your program has to save them there (as we’ll discuss momentarily).

 

Note

In our experience, the camera is a bit of a resource hog. More than anything else, this means you need to think about saving your program’s state when using the camera, because it could cause you to run out of memory.

 

We’ll present an example of using the camera in section 11.4.

11.3.3. Saving to the photo album

You may wish to save a new photograph to the photo album, or you may wish to place a graphic created by your program there. In either case, you use the UIImageWriteToSavedPhotosAlbum function. It takes four arguments: the first is the image to save, and the other three optionally identify a callback target, selector, and context for asynchronous notification when the save has completed. Usually you call the function like this:

UIImageWriteToSavedPhotosAlbum(yourImage,nil,nil,nil);

If you instead want to take advantage of the asynchronous notification, look at the UIKit function reference, which is where this function is hidden, or look at the example in chapter 13.
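If you do want the notification, the callback selector must have the form image:didFinishSavingWithError:contextInfo:. A minimal sketch, assuming self is the callback target:

// Kick off the save and ask to be told when it finishes.
UIImageWriteToSavedPhotosAlbum(yourImage, self,
    @selector(image:didFinishSavingWithError:contextInfo:), NULL);

// Called asynchronously when the save completes (or fails).
- (void)image:(UIImage *)image
    didFinishSavingWithError:(NSError *)error
    contextInfo:(void *)contextInfo {
    if (error) {
        NSLog(@"Save failed: %@", [error localizedDescription]);
    }
}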

You can use this function (and a bit of trickery) to save the CALayer of a UIView to your photo album, which, for example, lets you save the draw commands that you wrote straight to the CALayer earlier. This again depends on graphical contexts, which we’ll explain in the next chapter, but here’s how to do it:

UIGraphicsBeginImageContext(myView.bounds.size);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *collageImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(collageImage,nil,nil,nil);

In order for this to work correctly, you must add the Quartz Core framework to your project (and import <QuartzCore/QuartzCore.h> wherever you call renderInContext:).

With all the fundamentals of images now covered, we’re ready to put them together in our big example for this chapter. It’s a program that collages together multiple pictures, first selecting them with a UIImagePickerController, then allowing them to be moved about with a UIImageView, and finally drawing them to a CALayer that can be saved.

11.4. Collage: an image example

The collage program depends on three objects. The collageViewController, as usual, does most of the work. It writes out to a collageView object, which exists mainly as a CALayer to be written upon. Finally, you’ll have a tempImageView object that allows the user to position an image after it’s been selected but before it’s permanently placed.

For this example, the code will be written to deploy the collage application to the iPhone. To learn how to port it to the iPad, be sure to read appendix D; it contains step-by-step instructions for porting this app as well as your own apps to the iPad.

11.4.1. The collage view controller

The collage view controller is built with a few objects: the view controller itself; a toolbar called myTools, which will be filled over the course of the program; and the collageView UIView class, which exists as its own class file and is referred to in the program as self.view. You also need to add the Quartz Core framework to your project because you’ll use the save-picture trick that we just discussed.

The next listing shows the complete view controller, which is the most extensive file in this program.

Listing 11.2. A view controller, which manages most of the collage’s tasks

Although long, this code is simple to follow in bite-size chunks. It starts with viewDidLoad, which sets up the UIToolbar. You can't efficiently fill the UIToolbar in Xcode because you'll be changing it based on the program's state. You place buttons on the toolbar that call three methods: choosePic:, takePic: (when a camera's available), and savePic:.
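As a rough sketch of what that setup might look like (the button titles and styles here are assumptions, not the book's exact listing):

- (void)viewDidLoad {
    [super viewDidLoad];

    UIBarButtonItem *chooseItem = [[[UIBarButtonItem alloc]
        initWithTitle:@"Choose" style:UIBarButtonItemStyleBordered
        target:self action:@selector(choosePic:)] autorelease];
    UIBarButtonItem *saveItem = [[[UIBarButtonItem alloc]
        initWithTitle:@"Save" style:UIBarButtonItemStyleBordered
        target:self action:@selector(savePic:)] autorelease];

    NSMutableArray *items = [NSMutableArray
        arrayWithObjects:chooseItem, saveItem, nil];

    // Only offer a Take Picture button when a camera exists.
    if ([UIImagePickerController isSourceTypeAvailable:
            UIImagePickerControllerSourceTypeCamera]) {
        UIBarButtonItem *takeItem = [[[UIBarButtonItem alloc]
            initWithTitle:@"Take" style:UIBarButtonItemStyleBordered
            target:self action:@selector(takePic:)] autorelease];
        [items insertObject:takeItem atIndex:1];
    }
    [myTools setItems:items animated:NO];
}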

choosePic: and takePic: are similar methods. Each calls up the image picker controller, but the first one accesses the photo library and the second one lets the user take a new picture. The wonder of these modal controllers is that you don’t have to do a thing between the time when you create the picker and the point at which the user either selects a picture or cancels.

When the user selects a picture, imagePickerController:didFinishPickingImage:editingInfo: is called, returning control to your program. Here you do four things (a rough sketch follows the list):

  1. Dismiss the modal view controller.
  2. Look at the picture you’ve been handed, and resize it to fill a quarter or less of the screen.
  3. Instantiate the image as a tempImageView object, which is a subclass of UIImageView.
  4. Change the toolbar so a Done button is available, along with a slider.
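A rough sketch of that callback, using the didFinishPickingImage:editingInfo: signature named above (the scaleImage: signature and the toolbar changes are assumptions):

- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingImage:(UIImage *)image
                  editingInfo:(NSDictionary *)editingInfo {
    [self dismissModalViewControllerAnimated:YES];      // 1. dismiss the picker

    CGRect scaledFrame = [self scaleImage:image];       // 2. a quarter of the screen or less
    tempImageView *movableView = [[tempImageView alloc]
        initWithFrame:scaledFrame];                     // 3. temporary, movable image view
    movableView.image = image;
    movableView.userInteractionEnabled = YES;           // image views ignore touches by default
    [self.view addSubview:movableView];
    [movableView release];

    // 4. Swap the toolbar for a Done button plus a resizing slider (not shown).
}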

At this point, the user can do three things:

  • Use UITouches to move the image view (which is covered in the tempImageView class, because that’s where the touches go, as you saw in chapter 6).
  • Use the slider to change the size of the picture.
  • Tap Done to accept the image size and location.

Figure 11.3 shows the kind of results the program can produce.

Figure 11.3. The collager displays many photos simultaneously.

Note that if the user instead cancels the image picker, your imagePickerControllerDidCancel: method correctly shuts down the modal controller.

The UISlider is hooked up to the rescalePic: method. It redraws the frame of the UIImageView, which automatically resizes the picture inside. Meanwhile, the Done button activates the finishPic: method. This sends a special addPic:at: message to the collageView, which is where the CALayer drawing is done, and which we’ll return to momentarily. finishPic: also dismisses the UISlider and the tempImageView and resets the toolbar to its original setup.

That original toolbar has one more button that we haven't covered yet: Save. It activates the savePic: method, which saves a CALayer to the photo library. Note that this method temporarily hides the toolbar in the process. Because the toolbar is a subview of the UIView, it would be included in the picture if you didn't do this.

The last method, scaleImage:, is the utility that sets each image to fill about a quarter of the screen.
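A rough sketch of such a utility, assuming it returns the frame to use (the real listing may differ):

// Return a frame whose area is at most a quarter of the screen,
// preserving the image's aspect ratio.
- (CGRect)scaleImage:(UIImage *)image {
    CGSize screenSize = [[UIScreen mainScreen] bounds].size;
    CGFloat maxWidth = screenSize.width / 2;
    CGFloat maxHeight = screenSize.height / 2;

    CGFloat scale = MIN(maxWidth / image.size.width,
                        maxHeight / image.size.height);
    if (scale > 1) scale = 1;   // never scale small images up

    return CGRectMake(0, 0,
        image.size.width * scale, image.size.height * scale);
}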

This code has two dangling parts: the methods in the tempImageView, which allow a user to move the UIImageView, and the methods in the collageView, which later draw the image into a CALayer.

11.4.2. The collage temporary image view

The tempImageView class has only one purpose: to intercept UITouches that indicate that the user wants to move the new image to a different part of the collage. This simple code is shown in the following listing.

Listing 11.3. Moving a temporary image by touches

This is similar to the touch code that you wrote in chapter 6. Recall that locationInView: gives a CGPoint in the specified view's coordinate system, which needs to be converted into the global coordinate system of the application.

In testing, we discovered that when run on an iPhone (but not in the iPhone Simulator), the result is sometimes out of bounds; you need to double-check the coordinates before you move the temporary image view.
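A rough sketch of this kind of touch handling, including the bounds check just mentioned (the actual listing 11.3 may differ in detail):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];

    // Ask for the point in the superview's coordinate system so it can
    // be used directly as this view's new center.
    CGPoint where = [touch locationInView:self.superview];

    // On the device (though not in the Simulator) the reported point can
    // occasionally fall out of bounds, so clamp it before moving.
    CGRect limits = self.superview.bounds;
    where.x = MAX(CGRectGetMinX(limits), MIN(where.x, CGRectGetMaxX(limits)));
    where.y = MAX(CGRectGetMinY(limits), MIN(where.y, CGRectGetMaxY(limits)));

    self.center = where;
}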

11.4.3. The collage view

Last up we have the collageView, which is the background UIView that needs to respond to the addPic:at: message and draw on the CALayer with drawRect:. The code to do this is shown in the following listing.

Listing 11.4. Background view managing low-level drawing when an image is set
-(void)addPic:(UIImage *)newPic at:(CGRect)newLoc {
    // Lazily create the array the first time a picture is added.
    if (!myPics) {
        myPics = [[NSMutableArray alloc] initWithCapacity:0];
    }

    // Wrap the image and its frame in a dictionary; scalar values
    // must be boxed as NSNumbers before they can go in the dictionary.
    [myPics addObject:[NSDictionary dictionaryWithObjectsAndKeys:
        newPic, @"picture",
        [NSNumber numberWithFloat:newLoc.origin.x], @"xpoint",
        [NSNumber numberWithFloat:newLoc.origin.y], @"ypoint",
        [NSNumber numberWithFloat:newLoc.size.width], @"width",
        [NSNumber numberWithFloat:newLoc.size.height], @"height",
        nil]];

    // Ask the view to redraw itself; never call drawRect: directly.
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    if (myPics) {
        for (int i = 0; i < myPics.count; i++) {
            NSDictionary *picInfo = [myPics objectAtIndex:i];
            UIImage *thisPic = [picInfo objectForKey:@"picture"];
            float xpoint = [[picInfo objectForKey:@"xpoint"] floatValue];
            float ypoint = [[picInfo objectForKey:@"ypoint"] floatValue];
            float width = [[picInfo objectForKey:@"width"] floatValue];
            float height = [[picInfo objectForKey:@"height"] floatValue];
            [thisPic drawInRect:CGRectMake(xpoint, ypoint, width, height)];
        }
    }
}

This code is broken into two parts. The addPic:at: method saves its information into an instance variable, adding a dictionary of image and position data to the myPics NSMutableArray. Note that you have to box the scalar values as NSNumbers so that you can place them in the dictionary. The method then calls setNeedsDisplay on the view. You should never call drawRect: directly; instead, when you want it to be executed, call setNeedsDisplay, and everything else will be done for you.

drawRect: is called shortly afterward. It reads through the whole NSMutableArray, breaks it apart, and draws each image onto the CALayer using the techniques you learned earlier.

We haven’t shown the few header files and the unchanged app delegate, but this is everything important needed to write a complete collage program.

11.4.4. Further exploration of this example

This was one of our longer examples, but it could still bear some expansion to turn it into a fully featured application.

First, it’s a little unfriendly with memory. It would be better to maintain references to filenames, rather than keep the UIImages around. In addition, the NSArray that the CALayer is drawn from should be saved out to a file so it won’t get lost if memory is low. But the program as it exists should work fine.

The program could be made more usable. An option to crop the pictures would be nice, but it may require access to Core Graphics functions. An option to move pictures around after they’ve been locked in would be relatively simple: you could test for touches in the collageView and read backward through the NSArray to find which object the user was touching. Reinstantiating it as a UIImageView would then be simple.

11.5. Printing images

AirPrint arrived with iOS 4.2 and is available on both the iPhone and iPad. The AirPrint user interface on the iPhone and iPad is shown in figure 11.4. Generally, the print button is a bar button item. When the user taps the Print button, a view controller for configuring the print job is presented as a modal view controller on the iPhone and as a popover on the iPad. Once the print job is submitted, it either prints right away or waits in the print queue. Users can check on its status in the Print Center, available from the multitasking UI.

Figure 11.4. Printing UI on the iPad and iPhone

AirPrint is handled by UIKit, so no extra framework needs to be added to your project.

UIWebView and UITextView content, along with data such as UIImage objects and PDF files, is print ready and can be handled by the print controller directly.

In this section you’ll learn how to print an image from the application with the UIPrintInteractionController on the iPhone and iPad. Before we start coding, let’s examine the printing workflow.

11.5.1. Printing workflow

Using the AirPrint API, you create the UIPrintInteractionController and present it as a modal view controller on the iPhone or as a popover on the iPad. It presents the same system printing UI shown in figure 11.4.

UIPrintInteractionController is the key class in iOS for printing. You can create a printing user interface by calling the following code:

UIPrintInteractionController *controller = [UIPrintInteractionController
     sharedPrintController];

To make sure printing is available on the current device and system, you can use the class method [UIPrintInteractionController isPrintingAvailable] to check before presenting the controller.
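For example, a minimal sketch that disables a hypothetical printButton outlet when printing isn't available:

if (![UIPrintInteractionController isPrintingAvailable]) {
    // No AirPrint support on this device or OS version: disable (or hide)
    // the Print button rather than presenting the controller.
    printButton.enabled = NO;
}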

Next, you need to define or customize the print job by setting the properties of the controller. Table 11.4 lists some of the controller's important properties.

Table 11.4. A few properties in UIPrintInteractionController

  • printingItem: A single UIImage, NSData, NSURL, or ALAsset object containing or referencing image data or PDF data
  • printInfo: A UIPrintInfo object describing the print job (its name, output type, orientation, and so on)
  • printingItems: An array of objects, each containing or referencing image data or PDF data, that are directly printable
  • printFormatter: A UIPrintFormatter object that lays out the printable content
  • printPageRenderer: An instance of a custom UIPrintPageRenderer subclass that draws each page of printable content, partially or entirely

In order to define the printInfo, you need to create an instance of UIPrintInfo. UIPrintInfo is a class that allows you to customize the printing job’s information. UIPrintInfo includes properties such as the print-job name, the printer identifier, the orientation of the printed content, the duplex mode, and the kind of content (general, photo, or grayscale).
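A minimal sketch, assuming the controller variable from the snippet above and an arbitrary job name:

UIPrintInfo *printInfo = [UIPrintInfo printInfo];
printInfo.jobName = @"My print job";               // shows up in the Print Center
printInfo.outputType = UIPrintInfoOutputPhoto;     // or UIPrintInfoOutputGeneral,
                                                   // UIPrintInfoOutputGrayscale
printInfo.orientation = UIPrintInfoOrientationPortrait;
controller.printInfo = printInfo;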

As with the UIImagePickerController, make sure you implement the UIPrintInteractionController's delegate methods to handle its callback messages. For example, when the print job finishes, you might show an alert view to notify the user.

To present the print view controller on the iPhone, you create a completion handler and present the controller with the method presentAnimated:completionHandler:; on the iPad, you present it as a popover with presentFromBarButtonItem:animated:completionHandler:.
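A rough sketch of that presentation logic, assuming a completionHandler block like the one shown in section 11.5.3 and a printButton bar button item:

if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
    // iPad: the controller appears in a popover anchored to the bar button.
    [controller presentFromBarButtonItem:printButton
                                animated:YES
                       completionHandler:completionHandler];
} else {
    // iPhone and iPod touch: the controller slides up modally.
    [controller presentAnimated:YES completionHandler:completionHandler];
}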

11.5.2. Simulating printing

Luckily, the iOS SDK (4.2 and later) comes with a printer simulator app for Mac OS X, in case you don't have an AirPrint-capable printer for testing. You can find this print simulator app at <Xcode>/Platforms/iPhoneOS.platform/Developer/Applications/Printer Simulator, as shown in figure 11.5.

Figure 11.5. Printer Simulator under the iOS SDK

Launch the Printer Simulator app on your Mac. You will see a message similar to the one shown in figure 11.6.

Figure 11.6. Printer Simulator screenshot

With the Printer Simulator running, you can test the printing tasks directly from the iOS Simulator. Now we will start coding for printing.

11.5.3. Creating a demo app: printing an image

In this section, we will create a simple view-based application for the iPhone and iPad containing an image in the center, which will print when the user taps the Print button.

Fire up Xcode and create a new project using the View-Based Application template under iOS. Name it iPrint. Drag a photo you would like to print into the project's Resources folder.

Select the iPrintViewController header file and add in the changes shown in the following listing.

Listing 11.5. iPrintViewController header file
#import <UIKit/UIKit.h>

@interface iPrintViewController :
    UIViewController <UIPrintInteractionControllerDelegate> {
    IBOutlet UIBarButtonItem *printButton;    // the Print bar button
    IBOutlet UIImageView *myPhoto;            // the image view holding the photo to print
}
-(IBAction)printPhoto:(id)sender;
@end

With the image view, Print button, and printPhoto: method declared, open iPrintViewController's nib file, add the image view and the bar button item, and hook them up to the outlets. Then connect the printPhoto: method to the Print button's action.

Now add in the following code to the view controller’s implementation file to complete the print task.

Listing 11.6. iPrintViewController implementation file

In the viewDidLoad method, you first check the availability of the print view controller. If printing isn't currently available on iOS, an alert view pops up to notify the user. The print job is defined in the printPhoto: method. First, create the print view controller, and then define the delegate and the print info. In this example, the printing item is the image from the image view. Then, define the block for the completion handler. You want to monitor the error message in this example:

void (^completionHandler)(UIPrintInteractionController *, BOOL, NSError *) =
    ^(UIPrintInteractionController *pic, BOOL completed, NSError *error) {
        if (!completed && error) {
            NSLog(@"FAILED! due to error in domain %@ with error code %ld",
                  error.domain, (long)error.code);
        }
    };

On the iPad, the print view controller will show as a popover controller from the bar button; on the iPhone, the print view controller will present as a modal view controller.

When the printing job is finished, the delegate method (such as printInteractionControllerDidFinishJob:) is called, so you can notify the user with an alert view.
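Putting these pieces together, a rough sketch of the printPhoto: method might look like the following; it assumes the outlets declared in listing 11.5, and the job name is arbitrary:

- (IBAction)printPhoto:(id)sender {
    UIPrintInteractionController *controller =
        [UIPrintInteractionController sharedPrintController];
    controller.delegate = self;

    // Describe the job and hand over the image to print.
    UIPrintInfo *printInfo = [UIPrintInfo printInfo];
    printInfo.jobName = @"iPrint photo";
    printInfo.outputType = UIPrintInfoOutputPhoto;
    controller.printInfo = printInfo;
    controller.printingItem = myPhoto.image;

    // Log any failure reported when the job completes.
    void (^completionHandler)(UIPrintInteractionController *, BOOL, NSError *) =
        ^(UIPrintInteractionController *pic, BOOL completed, NSError *error) {
            if (!completed && error) {
                NSLog(@"FAILED! due to error in domain %@ with error code %ld",
                      error.domain, (long)error.code);
            }
        };

    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        [controller presentFromBarButtonItem:printButton animated:YES
                           completionHandler:completionHandler];
    } else {
        [controller presentAnimated:YES completionHandler:completionHandler];
    }
}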

11.5.4. Launching the printer app on the Simulator

Now save all the changes. Before you build and run this iPrint app, make sure the printing simulator app is running. When the app is launched in the Simulator, tap the Print button. You will see that the simulator printer is available on the print view controller, as shown in figure 11.7.

Figure 11.7. iPrint app running on the Simulator for the iPhone and iPad

You can play around with this app. For example, you can set the print info's outputType property to UIPrintInfoOutputGrayscale to print the content in grayscale. Even better, you can change the input image to one of the photos from the photo library.

That’s all! Now you’ve learned how to print out image with AirPrint and test it in iOS.

11.6. Summary

Dealing with media is a huge topic that probably could fill a book on its own. Fortunately, there are relatively easy (if limited) ways to utilize each major sort of media. In this chapter, we discussed the various ways to manage and manipulate images on the iPhone and iPad. We first discussed how to load them from disk. This includes images saved in an application’s directory as well as from the camera roll.

We also showed you how the UIImagePickerController can be slightly modified to allow the user to take a photo and use it in an application.

You’ve seen how all these pictorial fundamentals work together, so we’re now ready to move on to the next major types of media: audio and video.
