10. AV Foundation

We wrap up our discussion of the Media layer of iOS by covering AV Foundation and some related high-level abstractions. As you've learned, the Media layer provides access to various frameworks that facilitate the creation and presentation of audio- and video-based media types. In many ways, AV Foundation operates as the backend for these technologies. By implementing AV Foundation directly, developers can capture media through an AVCaptureSession, or manipulate and render video data on an AVPlayerLayer (a subclass of the CALayer class defined in Quartz Core and covered in Chapter 8, Core Animation). To simplify matters, however, iOS provides some high-level abstractions of the AV Foundation framework that offer drop-in objects for media playback and capture.

Getting Started with AV Foundation

While AV Foundation leverages some of the Quartz Core technologies outlined in Chapter 8, Core Animation, it is in fact a separate framework. The AV Foundation framework operates using a collection of Objective-C classes that facilitate the creation, management, and manipulation of audiovisual media. During playback, this media is referred to as an asset, or AVAsset, and can represent local media stored on an iOS device, a media asset progressively downloaded from the Internet, or an HTTP Live Stream referenced from the Internet. During capture, developers attach an AVCaptureDevice (one of the cameras, a microphone, or the like) to an AVCaptureSession. Once a session is established, media can be extracted through an AVCaptureOutput subclass such as AVCaptureStillImageOutput or AVCaptureVideoDataOutput, where individual video frames are output to a delegate so they can be processed or written to a file as necessary.
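To make the playback side of that description concrete, here is a minimal sketch (assuming a view controller and a local file named myMovie.mp4 in the app bundle, both purely illustrative) of how an AVAsset flows into an AVPlayerLayer. It plays the asset but provides no controls, which is exactly the point made later in this chapter:

// Minimal playback sketch: AVAsset -> AVPlayerItem -> AVPlayer -> AVPlayerLayer
// Requires AVFoundation.framework (and QuartzCore.framework for layer access)
NSString *moviePath = [[NSBundle mainBundle] pathForResource:@"myMovie"
                                                      ofType:@"mp4"];
NSURL *assetURL = [NSURL fileURLWithPath:moviePath];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

// Wrap the asset in a player item and hand that item to a player
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];

// AVPlayerLayer (a CALayer subclass) renders the video frames;
// it draws no buttons or scrubbers (those are up to you)
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];
[player play];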

Why Use AV Foundation?

The AV Foundation framework gives developers the power to create custom capture and playback solutions for media types such as audio, video, and still images. The advantage of AV Foundation over an out-of-the-box solution is that it gives developers access to the raw multimedia data throughout the capture and playback process. This means you can pre-process video frames and apply real-time effects either while the file is being recorded or while it's output to the preview that's being displayed to the user. Since you have access to each individual frame, you can easily apply some of the techniques we learned about in Chapter 7, Core Image, such as face detection or white balance adjustment, all in real time, in camera.

Like some of the other frameworks we've discussed, implementing AV Foundation directly can be a little daunting for new developers. The AV Foundation framework is extremely powerful in part because it assumes very little. When you use AV Foundation to play or capture media, you start with nothing. For example, when working with media playback, an app must create an AVPlayerLayer and render the video data to it. The AVPlayerLayer is a subclass of the CALayer class covered in Chapter 8, Core Animation, and is specifically designed to render video content. The AVPlayerLayer, however, does not contain any buttons, controls, or gestures; the player layer will only display the content of the media. If you want to pause, stop, or advance your media through on-screen controls, you must create your own buttons and controls to handle these actions.

You can almost think of the AV Foundation framework as a raw API—very low level compared to some of the other frameworks we’ve covered. AV Foundation gives you all of the tools and APIs you need to create your own solutions, but sometimes building a house from the ground up can be a lot more work than needed.

For those who don’t need a custom solution, Apple has created two standard media capture and media player view controllers, UIImagePickerController and MPMoviePlayerController. These classes are built on AV Foundation and operate as high-level abstractions, allowing the majority of developers to incorporate media playback and media capture into their apps without writing a lot of code (Figure 10.1).

Image

Figure 10.1. UIImagePickerController (left) as seen in the Messages app, and MPMoviePlayerController (right) as seen implemented in KelbyTraining.com.

While your control over these classes is significantly limited compared to AV Foundation, they do provide out-of-the-box functionality by automatically creating buttons, timeline scrubbers, and the APIs needed to present fullscreen videos with seamless animations (all of which must be created manually when implementing AV Foundation directly).


image Tip

While not covered in this chapter, you can download an example custom media player built in AV Foundation at iOSCoreFrameworks.com/download#chapter-10 as well as a full tutorial at iOSCoreFrameworks.com/tutorials#custom-media-player.


In this chapter, first we’ll cover some of the high-level abstractions of AV Foundation that provide easy access to audiovisual media through UIKit and the Media Player framework. Next, we’ll discuss how to implement AV Foundation directly by creating a custom image capture solution.


image Note

As an added bonus, these high-level abstractions of AV Foundation provide a sense of consistency across iOS apps by allowing developers to use the same visual styles for presenting media used in native iOS apps. Unless otherwise required by your app’s user experience, you should consider using these standard classes (or at least similar visual styles) to help maintain a user’s consistent experience from app to app.


AV Foundation and Other Media-based Frameworks

While AV Foundation is responsible for a large percentage of audio and video playback and capture, there are actually a variety of frameworks involved in the practical implementation of a custom solution. Because AV Foundation assumes nothing, when you create a custom solution you’ll need to incorporate additional frameworks to help you define things like video codecs, color spaces, and even the formatting of media timecodes.

Implementations of media in iOS apps typically fall into one of three scenarios:

• Capturing images and video using UIImagePickerController (defined in the UIKit framework).

• Playing video using MPMoviePlayerController and MPMoviePlayerViewController (defined in the Media Player framework).

• Creating custom solutions using AV Foundation that require custom UI elements or access to raw camera/frame data during playback or capture.

Most people are able to meet their audiovisual needs using either the first or second scenario. Only in cases where a custom capture or playback solution is needed should you implement AV Foundation directly (for example, with custom control layouts and styles, custom playhead scrubbers, or custom presentation animations).

Using Out-of-the-Box Solutions

Apple engineers wanted to ensure that any developer could have access to high-quality media playback and capture. Using native solutions to deal with media has its advantages. While your control over some of the finer points might be limited, these native solutions are extremely simple to implement and extremely efficient in power consumption and memory management. There are two primary media scenarios: capture and playback. Naturally, iOS provides a separate class for each: UIImagePickerController for media capture, and MPMoviePlayerController for media playback.

UIImagePickerController

Figure 10.3 illustrates the UIImagePickerController as seen in the native Messages app for iOS. The UIImagePickerController is a subclass of UINavigationController (which as you know is a subclass of UIViewController). The nicest thing about UIImagePickerController is that you don’t need to import any additional frameworks to use it in your applications because it exists as a part of UIKit.

Image

Figure 10.3. UIImagePickerController demonstrated in the native Messages app.

Characterized by its source type, the UIImagePickerController is used either to select media from the local device or to capture new media using the camera. When a user selects or captures new media, the UIImagePickerController calls imagePickerController:didFinishPickingMediaWithInfo: on its delegate.

Selecting Photos from the Photo Library

The following example demonstrates how to use the UIImagePickerController to present a user with the photo library on their local device.

 1   - (void)showPhotoLibrary{
 2     UIImagePickerController *p;
 3     p = [[UIImagePickerController alloc] init];
 4     [p setSourceType:UIImagePickerControllerSourceTypePhotoLibrary];
 5     [p setDelegate:self];
 6     [self presentViewController:p animated:YES completion:nil];
 7   }
 8
 9   - (void)imagePickerController:(UIImagePickerController *)picker
       didFinishPickingMediaWithInfo:(NSDictionary *)info{
10
11     UIImage *image;
12     image = [info objectForKey:UIImagePickerControllerOriginalImage];
13
14   }

This code block shows the implementation of two methods. The first method, showPhotoLibrary (lines 1 through 7), creates a new UIImagePickerController and presents it as a child of the current view controller. The most important line in this method is line 4, where we define the source type of the image picker as UIImagePickerControllerSourceTypePhotoLibrary.

There are three choices for the source type of a UIImagePickerController:

• UIImagePickerControllerSourceTypePhotoLibrary

• UIImagePickerControllerSourceTypeSavedPhotosAlbum

• UIImagePickerControllerSourceTypeCamera

Each of these source types presents the user with a different set of options. The first source type presents a user with their photo library (as seen in the code example). The photo library source type lets a user choose from their entire photo library, unlike the second source type that only lets a user choose from their Saved Photos album. The final source type is the camera. When this source type is set, users see the standard camera interface rather than a list of photos.
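Note that not every source type is available on every device (the camera, for example, is unavailable in the Simulator). Before presenting a picker, it's good practice to check availability with the class method isSourceTypeAvailable:. Here is a brief sketch; the fallback to the photo library is just one possible choice:

// Fall back to the photo library if the camera is unavailable
UIImagePickerControllerSourceType type =
    UIImagePickerControllerSourceTypeCamera;
if (![UIImagePickerController isSourceTypeAvailable:type]) {
    type = UIImagePickerControllerSourceTypePhotoLibrary;
}

UIImagePickerController *picker = [[UIImagePickerController alloc] init];
[picker setSourceType:type];
[picker setDelegate:self];
[self presentViewController:picker animated:YES completion:nil];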

Capture Media Using UIImagePickerController

If the source type of a UIImagePickerController is set to a user’s camera, then the UIImagePickerController allows an additional set of options including control over which camera is used (front or back), the type of media capture (still image or video), camera flash settings, and even the quality of media capture.

Additionally, the UIImagePickerController allows developers to provide an overlay view on top of the camera’s preview view. You can use this view to provide a custom viewfinder or any other custom UI elements of your application. Unlike AV Foundation, the UIImagePickerController does not give you access to raw pixels from the camera before the capture operation is completed. When using UIImagePickerController, you must wait until a user has finalized the camera operation before you get access to the image data.

The following code sample demonstrates how to create a camera recorder, as seen in Figure 10.4. Here we present the same UIImagePickerController, but this time we use the camera source type. Further, this code sample sets up a custom overlay view, giving the camera interface a unique look and feel.

Image

Figure 10.4. UIImagePickerController with a custom camera overlay.

 1   - (void)showCustomCamera{
 2
 3    // Create UIImagePickerController
 4    picker = [[UIImagePickerController alloc] init];
 5
 6    // Set the source type to the camera
 7    [picker setSourceType:UIImagePickerControllerSourceTypeCamera];
 8
 9    // Set ourselves as delegate
10    [picker setDelegate:self];
11
12    // Force the camera to only use the front facing camera
13    // Set the camera as front facing
14    // Disable camera controls (which prevents users from
15    // changing cameras)
16    [picker setCameraDevice:UIImagePickerControllerCameraDeviceFront];
17    [picker setShowsCameraControls:NO];
18
19     // Create our overlay view
20    UIView *myOverlay = [[UIView alloc]
                                   initWithFrame:self.view.bounds];
21    UIImage *overlayImg = [UIImage imageNamed:@"overlay.png"];
22    UIImageView *overlayBg;
23    overlayBg = [[UIImageView alloc] initWithImage:overlayImg];
24    [myOverlay addSubview:overlayBg];
25
26    // Add a custom snap button to our overlay view
27    UIButton *snapBtn = [UIButton buttonWithType:UIButtonTypeCustom];
28    [snapBtn setImage:[UIImage imageNamed:@"takePic.png"]
               forState:UIControlStateNormal];
29
30    // Add an action to our button
31    // The action will be called on self, which is this class
32    [snapBtn addTarget:self
                  action:@selector(pickerCameraSnap:)
        forControlEvents:UIControlEventTouchUpInside];
33    snapBtn.frame = CGRectMake(74, 370, 178, 37);
34    [myOverlay addSubview: snapBtn];
35
36    // Set the camera overlay view on our picker
37    [picker setCameraOverlayView:myOverlay];
38
39    // Present the picker
40    [self presentViewController:picker animated:YES completion:nil];
41   }
42
43   // Respond to our custom button by telling the picker
44   // to take a picture
45   - (void)pickerCameraSnap:(id)sender{
46     [picker takePicture];
47   }

This long code block might look intimidating, but if you walk through the setup one step at a time, it's really quite simple. In lines 1 through 10 we create our UIImagePickerController just as we did before, only this time (in line 7) we set the source type to the camera instead of the photo library. In lines 16 and 17 we set up some custom options for the camera. Specifically, line 16 tells the UIImagePickerController to launch using the front-facing camera. In line 17 we disable the normal camera controls. This hides the native camera heads-up display and toolbar buttons that allow the user to adjust the flash, switch between the front- and rear-facing cameras, and take a picture. This means that before we present our picker, we must add a way for the user to take a picture.

Next, in lines 20 through 34 we set up a simple UIView overlay. This overlay contains two subviews, a UIImageView—initialized with a PNG image file that has a transparent cutout for our viewfinder—and a UIButton that we configure to trigger our take-picture action. Once we set up the overlay subview, we add it as the camera overlay view in line 37. Finally in line 40 we present our picker as a child view controller to self, which is our window’s root view controller.

Notice lines 43 through 47. When we configured our custom capture button in line 32, we added a target to self (which is this view controller) for the pickerCameraSnap: selector. When the user taps our custom button in the overlay view, it triggers this method. So in line 46 we simply tell the picker to take a picture. Once the picture is finished capturing, the picker automatically calls imagePickerController:didFinishPickingMediaWithInfo: on the delegate, same as before.


image Tip

You should never assume that a user has specific hardware. In this example we force our camera to launch using the front-facing camera. If a user does not have a front-facing camera, this code sample would crash. To prevent this, you can configure your app's Info.plist to specify required device capabilities. If you add the front-facing camera required device capability, devices without a front-facing camera will not be able to install your app. You can read more about other required device capability keys at iOSCoreFrameworks.com/reference#required-capabilities.
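If you would rather keep your app installable everywhere and adapt at runtime instead, UIImagePickerController also provides the class method isCameraDeviceAvailable:. A brief sketch, reusing the picker variable from the previous example (the rear-camera fallback is just one possible choice):

// Only force the front camera if the device actually has one
if ([UIImagePickerController isCameraDeviceAvailable:
         UIImagePickerControllerCameraDeviceFront]) {
    [picker setCameraDevice:UIImagePickerControllerCameraDeviceFront];
} else {
    // Fall back to the default (rear-facing) camera
    [picker setCameraDevice:UIImagePickerControllerCameraDeviceRear];
}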


Working with Video

The UIImagePickerController can also be used to capture video. By default, the UIImagePickerController is configured for still images only, but by changing a few simple properties you can enable video capture as well. When working with video in the UIImagePickerController, all other operations remain the same as when working with still images. Simply configure the appropriate options in Table 10.1 (sketched briefly after the table) before you present your image picker and you're all set.

Table 10.1. UIImagePickerController Video-Specific Properties

Image
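As a rough illustration of the properties Table 10.1 covers, the sketch below assumes a picker whose source type is already set to the camera; mediaTypes, cameraCaptureMode, videoQuality, and videoMaximumDuration are the video-related UIImagePickerController properties, and the kUTTypeMovie constant requires linking MobileCoreServices:

#import <MobileCoreServices/MobileCoreServices.h>

// Allow movie capture (instead of, or in addition to, still images)
[picker setMediaTypes:[NSArray arrayWithObject:(NSString *)kUTTypeMovie]];

// Start in video mode and cap the recording quality and length
[picker setCameraCaptureMode:UIImagePickerControllerCameraCaptureModeVideo];
[picker setVideoQuality:UIImagePickerControllerQualityTypeMedium];
[picker setVideoMaximumDuration:60.0]; // seconds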

image Note

To download an example of UIImagePickerController demonstrating all available features and options, visit iOSCoreFrameworks.com/download#chapter-10.


Using MPMoviePlayerController

Like the UIImagePickerController, the MPMoviePlayerController is an out-of-the-box solution for working with media. In essence, it's the native iOS media player. MPMoviePlayerController is designed to give you easy access to media playback from either a local media source or a remote media source on the Internet. The MPMoviePlayerController is full featured: it will play any media type available on the iOS platform, gives you the APIs needed to present fullscreen media, and enables AirPlay control through a simple Boolean property (Figure 10.5).

Image

Figure 10.5. An example of MPMoviePlayerController as seen in KelbyTraining.com for iPad.


image Note

To play video using the MPMoviePlayerController, you must link MediaPlayer.framework to your project. You do not need to link AV Foundation if you only plan on using the pre-packaged MPMoviePlayerController class.


There are actually two important classes when working with the native iOS media player: MPMoviePlayerController and MPMoviePlayerViewController. The MPMoviePlayerController is your primary player, while MPMoviePlayerViewController is simply a subclass of UIViewController that contains a moviePlayer property of type MPMoviePlayerController. The MPMoviePlayerController by itself is very useful when you need to embed a movie player within an existing view controller. Simply create a new MPMoviePlayerController and add its view as a subview of your current view hierarchy.

The MPMoviePlayerViewController works very well as a standalone view controller. Additionally, the MPMoviePlayerViewController extends UIViewController to allow a special presentMoviePlayerViewControllerAnimated method used to animate videos to fullscreen. This method will animate the current window off screen by pushing it down, revealing the MPMoviePlayerViewController. When fullscreen is exited or playback is finished, the MPMoviePlayerViewController animates off screen by bringing the main window back up from the bottom.

Loading Content into the MPMoviePlayerController

Since the MPMoviePlayerViewController and MPMoviePlayerController are so closely related, both can be initialized with a simple content URL. This URL can represent the path to a local file in the app bundle, a reference to a progressive download video, or even an HTTP Live Stream dynamic-bitrate M3U8 index file. Once a content URL is loaded, you can quite easily change video files by calling setContentURL on the appropriate movie player object.

It’s very important to note that if your app loads video from the Internet, you must follow Apple’s video streaming requirements. Quoted from Apple’s App Store guidelines:

If your app delivers video over cellular networks, and the video exceeds either 10 minutes duration or 5 MB of data in a five minute period, you are required to use HTTP Live Streaming. (Progressive download may be used for smaller clips.)

If your app uses HTTP Live Streaming over cellular networks, you are required to provide at least one stream at 64 Kbps or lower bandwidth (the low-bandwidth stream may be audio-only or audio with a still image).

While it takes a little more work server side (on your end), setting up an HTTP Live Stream does have its advantages for users. For one, media delivered over HTTP Live Streaming is random access, meaning a user can scrub to anywhere in the clip before the clip is actually downloaded. Also, HTTP Live Streaming supports dynamic-bitrate switching, which means if a user’s Internet connection speed changes (either slows down or speeds up), iOS will automatically switch the video stream to the optimum bit rate (video quality) as defined by the M3U8 index file. All of this functionality is handled out of the box with the MPMoviePlayerController.
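For reference, a master M3U8 index file is just a plain-text list of variant streams tagged with their bandwidth; the paths and bit rates below are purely illustrative, but this structure is what lets iOS choose (and switch) the stream that best fits the current connection, including the 64 Kbps audio-only variant required by the guidelines quoted above:

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000,CODECS="mp4a.40.2"
audio_only/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000
low/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1200000
high/index.m3u8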


image Note

You can read more about the requirements of HTTP Live Streaming versus progressive download by visiting developer.apple.com or iOSCoreFrameworks.com/reference#http-live.


The following code block sets up a simple video player as seen in the example in Figure 10.6. Here, we want to show a movie player that’s embedded in our current UIViewController. When our segmented control is changed, we simply reload the content URL of our movie player with a new media source.

Image

Figure 10.6. MPMoviePlayerController with embedded and fullscreen control styles.


image Tip

Starting in iOS 4.3, the MPMoviePlayerController also provides AirPlay functionality through the simple Boolean property allowsAirPlay, which is enabled by default.


 1   - (void)viewDidLoad{
 2     [super viewDidLoad];
 3     // Create a new URL for our local video file
 4     NSString *moviePath = [[NSBundle mainBundle]
                                    pathForResource:@"myMovie"
                                             ofType:@"mp4"];
 5     NSURL *contentURL = [NSURL fileURLWithPath:moviePath];
 6
 7     // Create Movie Player with content URL
 8    moviePlayer = [[MPMoviePlayerController alloc]
                        initWithContentURL:contentURL];
 9
10     // Since we are embedding moviePlayer on our current view
11     // controller, set the control style to embedded
12     moviePlayer.controlStyle = MPMovieControlStyleEmbedded;
13
14
15     // Set up the frame of our movie player and add
16     // it as a subview of this view controller's associated view
17     CGFloat width = self.view.bounds.size.width;
18     moviePlayer.view.frame = CGRectMake(0, 0, width, 480);
19     moviePlayer.view.autoresizingMask =
                                    UIViewAutoresizingFlexibleWidth|
                             UIViewAutoresizingFlexibleBottomMargin;
20
21     [self.view addSubview:moviePlayer.view];
22   }
23
24   // Called when the value of our segmented control changes
25   - (void)segmentControlChange:(id)sender{
26
27     // Stop the current movie so we can load in a new one
28     // (You can load a new movie without stopping the current one,
29     // but this demonstrates where you could take action before
30     // changing clips)
31     [moviePlayer stop];
32
33     // Set the movie player's content URL based on the selected
34     // segment index of our UISegmentedControl
35    switch(segmentControl.selectedSegmentIndex){
36     case 0:{
37         NSString *moviePath = [[NSBundle mainBundle]
                                        pathForResource:@"myMovie"
                                                 ofType:@"mp4"];
38         NSURL *contentURL = [NSURL fileURLWithPath:moviePath];
39         [moviePlayer setContentURL:contentURL];
40         [moviePlayer play];
41         break;}
42       case 1:{
43         NSString *urlString = @"http://bit.ly/ProgressiveExample";
44         NSURL *contentURL = [NSURL URLWithString:urlString];
45         [moviePlayer setContentURL:contentURL];
46         [moviePlayer play];
47         break;}
48       case 2:{
49         NSString *urlString = @"http://bit.ly/HTTPLiveExample";
50         NSURL *contentURL = [NSURL URLWithString:urlString];
51         [moviePlayer setContentURL:contentURL];
52         [moviePlayer play];
53         break;}
54     }
55   }

As you can see in this code block, creating an MPMoviePlayerController is simple. The first section, lines 1 through 22, implements the viewDidLoad method of our custom UIViewController. Here we create a new MPMoviePlayerController in line 8 and set the control style to embedded in line 12. Next, we simply set up its frame and autoresizingMask (lines 17 through 19) and add its view as a subview of our view controller's associated view.

In Figure 10.6 you can see we have a UISegmentedControl set up to let us change between various media types. When the value of that segmented control is changed, the method defined in lines 25 through 55 is called. Here, we simply set up a switch statement based on the current selected index and then load a new content URL accordingly. In lines 37 through 40 we set up the same local video file, and in lines 42 through 53 we set up content URLs that reference remote files stored on the Internet. Case 1 (lines 43 through 47) simply defines a URL string that deep links to an MOV file stored on a web server. Case 2 (lines 49 through 53) sets up a content URL based on a URL string for an M3U8 index file. iOS will automatically detect that the content URL is an M3U8 index file and load the appropriate HTTP Live Stream accordingly. As an added bonus, if your Internet connection speed changes, the MPMoviePlayerController will automatically change the active HTTP Live Stream based on the bit rate conditions defined in the M3U8 index.

Presenting MPMoviePlayerViewController

As mentioned, the MPMoviePlayerViewController is simply a UIViewController subclass that manages its own MPMoviePlayerController. For added convenience, the MPMoviePlayerViewController can be initialized using the same content URL that you would use to initialize an MPMoviePlayerController. In this case, the MPMoviePlayerViewController will create its MPMoviePlayerController property using this content URL.

The added advantage of the MPMoviePlayerViewController is the fact that it has its own custom fullscreen presentation animation. The following code block demonstrates how to create a new MPMoviePlayerViewController and present it using the UIViewController presentMoviePlayerViewControllerAnimated API.

1   MPMoviePlayerViewController *mpvc;
2   mpvc = [[MPMoviePlayerViewController alloc]
                             initWithContentURL:contentURL];
3   [self presentMoviePlayerViewControllerAnimated:mpvc];

As you can see, it’s very easy to create a new MPMoviePlayerViewController and present it fullscreen. Remember, the MPMoviePlayerViewController contains an MPMoviePlayerController as a property that can be accessed through the property name moviePlayer. So, if in this example we wanted to turn off autoplay before we present the movie player view controller, we can do so by simply using [mpvc.moviePlayer setShouldAutoplay:NO].


image Note

To download a full project using the MPMoviePlayerController and MPMoviePlayerViewController, visit iOSCoreFrameworks.com/download#chapter-10.


Creating a Custom Media Capture Solution

Creating a custom media capture solution involves implementing AV Foundation directly. Unlike using UIImagePickerController, one of the benefits of a custom capture solution is that you have access to raw camera data before it’s processed into an image. This lets you perform additional in-camera operations like face detection and filter application. Before we begin, however, it’s important to understand what goes into a custom capture solution.


image Note

The following code samples require you to link the following frameworks to your project: AVFoundation.framework, QuartzCore.framework, AssetsLibrary.framework, MobileCoreServices.framework, and CoreMedia.framework.


The AVCaptureSession

The AVCaptureSession is used to control the flow of audio- and video-based data from an input device (AVCaptureDeviceInput) to an output buffer (AVCaptureOutput). The process for setting up an AVCaptureSession is as follows:

1. Create a new AVCaptureSession.

2. Set session presets for audio and video recording quality.

3. Add the necessary input capture devices (created from an AVCaptureDevice, which can be a camera, microphone, or the like).

4. Add the necessary data output buffers (such as AVCaptureStillImageOutput or AVCaptureVideoDataOutput).

5. Start the AVCaptureSession.

Once the AVCaptureSession is started, it collects information from the attached input devices and outputs information to the appropriate data buffers when necessary.

The AVCaptureVideoPreviewLayer

The next step in creating a custom capture solution is to create an AVCaptureVideoPreviewLayer. When initialized with an AVCaptureSession, the capture video preview layer renders the output of attached video devices. If you're making a custom camera application, the AVCaptureVideoPreviewLayer is used to show the viewfinder of your camera; it's simply a preview of what's seen by the video input devices.

By default, the AVCaptureVideoPreviewLayer displays the raw data from your input capture device; you don't have to do any additional work to set the contents of the layer from the input device. However, if you want to apply in-camera filters or draw additional objects on top of this layer (for example, boxes indicating detected faces), then you should do so by capturing the frame data from a video output buffer and processing it yourself. Once processed, you can output the pixel data either to a separate layer or to an OpenGL context.

Setting Up a Custom Image Capture

The following example demonstrates how to set up a custom image capture solution using AV Foundation, as seen in Figure 10.7. In this example, we set up an AVCaptureSession followed by attaching a camera device input and a still image output.

Image

Figure 10.7. Custom Image Capture Solution using AV Foundation.


image Tip

The following code sample demonstrates image capture only. To download a complete project with video capture as well, visit iOSCoreFrameworks.com/download#chapter-10.


 1   - (void)setupAVCapture{
 2       // Create a new AVCapture Session
 3       AVCaptureSession *capSession = [AVCaptureSession new];
 4
 5       // Set the capture session preset
 6       // If you use a higher quality capture setting, you need
 7       // to make sure your hardware supports it. For example,
 8       // on the iPhone 4, the max capture preset for the front
 9       // facing camera is only 640x480
10       [capSession setSessionPreset:AVCaptureSessionPreset640x480];
11
12       // Create a capture device that supports images and video
13       AVCaptureDevice *capDevice = [AVCaptureDevice
                           defaultDeviceWithMediaType:AVMediaTypeVideo];
14
15       // Create a capture device input from our capture device
16       // This input will be attached to our capture session
17       NSError *error = nil;
18       AVCaptureDeviceInput *capDeviceInput = [AVCaptureDeviceInput
                          deviceInputWithDevice:capDevice error:&error];
19       if(error!=nil)
20           NSLog(@"Bad Device Input:%@",[error localizedDescription]);
21       else{
22
23           // If our capture session can accept a new device input
24           // add the new capture device input
25           if([capSession canAddInput:capDeviceInput])
26               [capSession addInput:capDeviceInput];
27           else
28               NSLog(@"could not add input");
29
30           // Create a new still image output
31           stillImageOutput = [AVCaptureStillImageOutput new];
32
33           // Add a KVO observer for the property capturingStillImage
34           // When this value changes, KVO will notify us so we can
35           // simulate the camera flash
36           [stillImageOutput addObserver:self
                                forKeyPath:@"capturingStillImage"
                                   options:NSKeyValueObservingOptionNew
             context:@"AVCaptureStillImageIsCapturingStillImageContext"];
37
38           // Add our still image output to the capture session
39           if([capSession canAddOutput:stillImageOutput])
40              [capSession addOutput:stillImageOutput];
41
42
43           // Set up Preview Layer
44           preview = [[AVCaptureVideoPreviewLayer alloc]
                                           initWithSession:capSession];
45           preview.frame = videoPreview.bounds;
46           preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
47
48           // Add our video preview layer to our view's layer
49           [self.view.layer addSublayer:preview];
50
51           // Start the capture session
52           [capSession startRunning];
53       }
54   }

Recall from the previous section the steps to create a new capture session and apply those steps to the code block above:

1. In line 3, create a new AVCaptureSession.

2. In line 10, set session presets for audio and video recording quality.

3. In lines 13 through 28, add the necessary input capture devices (created from an AVCaptureDevice which can be a camera, microphone, or the like).

4. In lines 31 through 40, add the necessary data output buffers (such as AVCaptureStillImageOutput or AVCaptureVideoDataOutput).

5. In line 52, start the AVCaptureSession.

Additionally, we took this opportunity to set up our AVCaptureVideoPreviewLayer and add it as a sublayer to our view controller’s associated view (lines 44 through 46).

There are a few things to consider about this code block. First, in line 10 we set the capture preset to 640×480. This adjusts the quality of the video recorded by the capture session. It's important that when you set this option you're aware of your current hardware. The 640×480 preset is supported on all current hardware configurations, so it's a safe default. However, if we were to set this preset higher, such as an HD recording preset, the application would crash if the user tried to switch from the rear-facing camera to the front-facing camera (because the front-facing camera cannot record in HD). So be mindful of your specific hardware configurations when setting this preset.
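One way to stay on the safe side is to ask before you commit to a preset. AVCaptureSession offers canSetSessionPreset: and AVCaptureDevice offers supportsAVCaptureSessionPreset:, so a defensive sketch (reusing capSession and capDevice from the listing above) might look like this:

// Prefer a higher-quality preset, but fall back if the current
// session or camera can't supply it
NSString *preferred = AVCaptureSessionPreset1280x720;
if ([capSession canSetSessionPreset:preferred] &&
    [capDevice supportsAVCaptureSessionPreset:preferred]) {
    [capSession setSessionPreset:preferred];
} else {
    [capSession setSessionPreset:AVCaptureSessionPreset640x480];
}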


image Tip

If you decide you want to record HD when on the back camera and 640×480 when on the front camera, you can do so by changing your AVCaptureSession’s preset when you toggle between device inputs. This practice is demonstrated in the code sample available at iOSCoreFrameworks.com/download#chapter-10.


Next, line 36 does something special with Key Value Observing (KVO). KVO is a pretty simple concept: every object in iOS has a set of properties, such as the backgroundColor of a view or the selectedSegmentIndex of a UISegmentedControl. KVO allows developers to add observers to these properties using a simple key-based naming system. If the value of an observed property changes, we'll be notified. In this example, line 36 adds an observer for the property capturingStillImage on our AVCaptureStillImageOutput; when the still image output starts capturing an image (and this Boolean value changes to YES), we're notified. In this way, we can simulate a camera flash and provide the user with feedback that we're capturing an image.
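The notification arrives in the standard NSKeyValueObserving callback. The following is a minimal sketch of one way to respond; the flashView property is a hypothetical white overlay view used to fake the flash:

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context{
    if ([keyPath isEqualToString:@"capturingStillImage"]){
        BOOL capturing =
            [[change objectForKey:NSKeyValueChangeNewKey] boolValue];
        // Show the white "flash" overlay while the capture is in
        // flight and fade it out when the capture completes
        [UIView animateWithDuration:0.25 animations:^{
            self.flashView.alpha = capturing ? 1.0 : 0.0;
        }];
    }
}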

AVCaptureConnection

So how do we take an image? Now that our AVCaptureSession is running, we simply need to call captureStillImageAsynchronouslyFromConnection: on our AVCaptureStillImageOutput. When this operation is completed, it sends the image data from the camera to a completion handler where we can simply create a new UIImage and save it to our photo library.

The AVCaptureConnection is a class used to connect an AVCaptureInput with an AVCaptureOutput. While the AVCaptureSession coordinates the interaction, we have to actually pull data from the AVCaptureConnection.

 1   // Get our AVCapture Connection
 2   AVCaptureConnection *c;
 3   c = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
 4
 5   // Rotate the connection based on the device orientation
 6   // This makes sure images are rotated properly when saved
 7   c = [self setConnectionOrientation:c];
 8
 9    [stillImageOutput
       captureStillImageAsynchronouslyFromConnection:c
       completionHandler:
        ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
10       if(error)
11         NSLog(@"Take picture failed");
12       else{
13         NSData *jpegData = [AVCaptureStillImageOutput
              jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
14         UIImage *newImage = [UIImage imageWithData:jpegData];
15
16         // At this point we have our image from the camera.
17         // If you wanted to apply Core Image Filters, you could
18         // simply convert from UIImage to CIImage!
19
20         // Save the image to our photo library
21         UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
22       }
23     }
24   ];

This code block performs the following steps:

1. In lines 2 and 3, obtains a reference to the AVCaptureConnection between our AVCaptureInput and an AVCaptureOutput. In this case, we want a connection for video, but if we were recording audio we could just as easily create a connection with that type.

2. In line 7, rotates the connection based on the device orientation.

3. In lines 9 through 24, captures a still image from our AVCaptureConnection.


image Note

The rotation method used on line 7 is self-implemented; it is not a method included as part of the AV Foundation framework. In this method, we use device orientation to set the AVCaptureConnection video orientation. For full details on the rotation method, download the complete project at iOSCoreFrameworks.com/download#chapter-10.


Lines 9 through 24 may look confusing because a large percentage of the code actually takes place within a completion block (lines 10 through 24). Remember the block syntax from our discussion on GCD in Chapter 1. In this example, when captureStillImageAsynchronouslyFromConnection finishes, it passes the image data buffer and any error messages to the completion block provided. For our purposes, we simply create an NSData object from that image data buffer (line 13) and then create a new UIImage with the image data (line 14). Once we have a new UIImage we save it to the photo library in line 21. If we were creating an Instagram-style application, we could instead have applied various image filters to the image in lines 16 through 18 before saving it to the photo library.
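For example, the Instagram-style branch alluded to in lines 16 through 18 could look something like the following sketch; the sepia filter is an arbitrary choice used only to show the UIImage-to-CIImage round trip, and it assumes you've linked CoreImage.framework (a real app would also reuse a single CIContext rather than creating one per capture):

// Convert the captured UIImage to a CIImage and apply a filter
CIImage *inputImage = [CIImage imageWithCGImage:newImage.CGImage];
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:inputImage forKey:kCIInputImageKey];
[sepia setValue:[NSNumber numberWithFloat:0.8] forKey:@"inputIntensity"];

// Render the result back into a UIImage before saving it
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:sepia.outputImage
                                     fromRect:[sepia.outputImage extent]];
UIImage *filtered = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
UIImageWriteToSavedPhotosAlbum(filtered, nil, nil, nil);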

Remember we added our KVO observer to the AVCaptureStillImageOutput property capturingStillImage. Once we make the call to captureStillImageAsynchronouslyFromConnection:, we are notified through KVO and can simulate the camera flash. This technique is demonstrated in the full project download available at iOSCoreFrameworks.com/download#chapter-10.

In-Camera Effects and Working with Video

An example that demonstrates how to use AV Foundation to capture video and work with in-camera effects is available online at iOSCoreFrameworks.com/download#chapter-10. Additionally, Apple has created a couple of rock-solid examples with more complex techniques using image filters and OpenGL. You can view these examples in the iOS sample code by visiting developer.apple.com or iOSCoreFrameworks.com/reference#av-foundation.

In a nutshell, though, let me take a second and talk through what differs between capturing a still image and capturing video. The setup is almost identical. First you create an AVCaptureSession, and then you attach your inputs and outputs. An AVCaptureSession can have multiple AVCaptureOutputs associated with it; there's no reason why you can't add an AVCaptureVideoDataOutput to our existing project to capture video as well as still images (in fact, this is exactly what the sample on this book's website does). The difference is that when an AVCaptureVideoDataOutput is enabled, you do not have to pull the frame data from an AVCaptureConnection yourself. Instead, AV Foundation calls the following delegate method.

1   - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection{
2
3       // Handle Video Processing //
4
5   }

This delegate method is defined in the protocol AVCaptureVideoDataOutputSampleBufferDelegate, and allows you to process frame data in real time as it comes in from the capture device.

Remember, the video data output is not only designed to let you capture video-based media and store it to a file. Using AVCaptureVideoDataOutput you can process frame data as it comes in from the camera. If you're making a still image camera that uses face detection for auto-focus, you can use the AVCaptureVideoDataOutput to pre-process frames as they come off the capture device and draw the necessary face "boxes" on your preview layer, indicating the focus areas to the user. When the still image is taken, the AVCaptureStillImageOutput simply grabs the most recent frame data as we did before. Additionally, if you wanted to apply in-camera filters such as a white-balance adjustment using Core Image, you could grab the individual frame data from an AVCaptureVideoDataOutput buffer, process the frames with Core Image, and present them in your own custom preview layer. This is how Apple engineers created the Photo Booth application on Mac OS X with live image filter previews.
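As a rough sketch of that approach (reusing capSession from the earlier listing, and leaving the actual drawing or filtering to you), you could attach an AVCaptureVideoDataOutput, hand it a serial dispatch queue, and wrap each incoming sample buffer in a CIImage:

// In your session setup (for example, inside setupAVCapture):
// attach a video data output so we see every frame as it arrives
AVCaptureVideoDataOutput *videoOutput = [AVCaptureVideoDataOutput new];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
dispatch_queue_t frameQueue = dispatch_queue_create("frameQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:frameQueue];
if ([capSession canAddOutput:videoOutput])
    [capSession addOutput:videoOutput];

// Then, in the AVCaptureVideoDataOutputSampleBufferDelegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection{
    // Wrap the raw pixel buffer in a CIImage so Core Image filters
    // (or a CIDetector for faces) can run on this frame
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // ... filter, detect faces, or draw to your own preview here
}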

To download a full project using AV Foundation, visit iOSCoreFrameworks.com/download#chapter-10.

Wrapping Up

AV Foundation provides you with all of the tools you need for media playback and capture. The iOS SDK provides various options for implementing AV Foundation, including high-level abstractions available in the UIKit and Media Player frameworks. For those who need more control over data throughout playback and capture, AV Foundation allows developers to create their own solutions from the ground up.

The iOS SDK provides the UIImagePickerController for simple media capture. Using its sourceType and cameraCaptureMode properties, the UIImagePickerController can be configured to capture either still images or video at varying resolutions. For media playback, the iOS SDK provides the easy-to-use MPMoviePlayerController. This controller is initialized with a content URL and can be used to play various media types. iOS also provides a simple UIViewController subclass, MPMoviePlayerViewController, that is designed to manage a single MPMoviePlayerController.

Finally, developers can use AV Foundation to create their own custom capture solutions. The advantage of creating your own capture and playback solution is that you have access to frame data as soon as it comes off the capture device, instead of waiting for image data to be finalized. This allows developers to pre-process image data, offering in-camera effects and other filter operations.

Full project examples demonstrating the UIImagePickerController, the MPMoviePlayerController, a custom video/image capture with AV Foundation, and a custom media player built with AV Foundation can be downloaded at iOSCoreFrameworks.com/download#chapter-10.
