Chapter 15. Video

Video playback is performed using classes such as AVPlayer provided by the AV Foundation framework (import AVFoundation). An AVPlayer is not a view; rather, an AVPlayer’s content is made visible through a CALayer subclass, AVPlayerLayer, which can be added to your app’s interface.

An AV Foundation video playback interface can be wrapped in a simple view controller, AVPlayerViewController (introduced in iOS 8): you provide an AVPlayer, and the AVPlayerViewController automatically hosts an associated AVPlayerLayer in its own main view, providing standard playback transport controls so that the user can start and stop play, seek to a different frame, and so forth. AVPlayerViewController is provided by the AVKit framework; you’ll need to import AVKit.

Note

AVPlayerViewController effectively supersedes the Media Player framework’s MPMoviePlayerController and MPMoviePlayerViewController, which were deprecated in iOS 9 and are not discussed in this edition.

A simple interface for letting the user trim video (UIVideoEditorController) is also supplied. Sophisticated video editing can be performed through the AV Foundation framework, as I’ll demonstrate later in this chapter.

If an AVPlayer produces sound, you may need to concern yourself with your application’s audio session; see Chapter 14. AVPlayer deals gracefully with the app being sent into the background: it will pause when your app is backgrounded and resume when your app returns to the foreground.

A movie file can be in a standard movie format, such as .mov or .mp4, but it can also be a sound file. An AVPlayerViewController is thus an easy way to play a sound file, including a sound file obtained in real time over the Internet, along with standard controls for pausing the sound and moving the playhead — unlike AVAudioPlayer, which, as I pointed out in Chapter 14, lacks a user interface.

A mobile device does not have unlimited power for decoding and presenting video in real time. A video that plays on your computer might not play at all on an iOS device. See the “Media Layer” chapter of Apple’s iOS Technology Overview for a list of specifications and limits within which video is eligible for playing.

A web view (Chapter 11) supports the HTML5 <video> tag. This can be a simple lightweight way to present video and to allow the user to control playback. Both web view video and AVPlayer support AirPlay.
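
For instance, a hedged sketch (the movie URL here is just a placeholder) might hand a WKWebView a snippet of HTML containing a <video> tag:

import WebKit

let wv = WKWebView(frame: CGRect(x:10, y:10, width:320, height:240))
self.view.addSubview(wv)
// the src URL is a placeholder; "controls" gives the user play/pause and scrubbing
let html = "<video src=\"https://www.example.com/movie.mp4\" " +
    "width=\"320\" height=\"240\" controls></video>"
wv.loadHTMLString(html, baseURL: nil)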

AVPlayerViewController

An AVPlayerViewController is a view controller; thus, you already know (from Chapter 6) how to work with it. The only other thing you need to know, in order to get started, is that an AVPlayerViewController must be assigned a player, which is an AVPlayer, and that an AVPlayer can be initialized directly from the URL of the video it is to play, with init(url:). Thus, you’ll instantiate AVPlayerViewController, create and set its AVPlayer, and get the AVPlayerViewController into the view controller hierarchy; AVPlayerViewController adapts intelligently to its place in the hierarchy.

Tip

You can instantiate an AVPlayerViewController from a storyboard; look for the AVKit Player View Controller object in the Object library. However, you will then need to link your target manually to the AVKit framework: edit the target and add AVKit.framework under Linked Frameworks and Libraries in the General tab.

The absolute rock-bottom simplest approach is to use an AVPlayerViewController as a presented view controller. In this example, I present a video from the app bundle:

let av = AVPlayerViewController()
let url = Bundle.main.url(forResource:"ElMirage", withExtension: "mp4")!
let player = AVPlayer(url: url)
av.player = player
self.present(av, animated: true)

The AVPlayerViewController knows that it’s being shown as a fullscreen presented view controller, so it provides fullscreen video controls, including a Done button which automatically dismisses the presented view controller! Thus, there is literally no further work for you to do.

Figure 15-1 shows a fullscreen presented AVPlayerViewController. Exactly what controls you’ll see depends on the circumstances; in my case, at the top there’s the Done button and the current playhead position slider, and at the bottom there are the three standard transport buttons and a volume slider. (If my network were more interesting, we would also see an AirPlay button.) The user can hide or show the controls by tapping the video.

Figure 15-1. A presented AVPlayerViewController

If the movie file is in fact a sound file, the central region is replaced by a QuickTime symbol (Figure 15-2), and the controls can’t be hidden.

Figure 15-2. The QuickTime symbol

If you want the convenience and the control interface that come from using an AVPlayerViewController, while displaying its view as a subview of your own view controller’s view, make your view controller a parent view controller with the AVPlayerViewController as its child, adding the AVPlayerViewController’s view in good order (see “Container View Controllers”):

let url = Bundle.main.url(forResource:"ElMirage", withExtension:"mp4")!
let player = AVPlayer(url:url)
let av = AVPlayerViewController()
av.player = player
av.view.frame = CGRect(10,10,300,200)
self.addChildViewController(av)
self.view.addSubview(av.view)
av.didMove(toParentViewController:self)

Once again, the AVPlayerViewController behaves intelligently, reducing its controls to a minimum to adapt to the reduced size of its view. On my device, at the given view size, there is room for a play button, a playhead position slider, a full-screen button, and nothing else (Figure 15-3). However, the user can enter full-screen mode, either by tapping the full-screen button or by pinching outward on the video view, and now the full complement of controls is present.

Figure 15-3. An embedded AVPlayerViewController’s view

Other AVPlayerViewController Properties

An AVPlayerViewController has very few properties:

player

The view controller’s AVPlayer, whose AVPlayerLayer will be hosted in the view controller’s view. You can set the player while the view is visible, to change what video it displays (though you are more likely to keep the player and tell it to change the video). It is legal to assign an AVQueuePlayer, an AVPlayer subclass; an AVQueuePlayer has multiple items, and the AVPlayerViewController will treat these as chapters of the video. New in iOS 10, an AVPlayerLooper object can be used in conjunction with an AVQueuePlayer to repeat play automatically. (I’ll give an example of using an AVQueuePlayer in Chapter 16.)

showsPlaybackControls

If false, the controls are hidden. This could be useful, for example, if you want to display a video for decorative purposes, or if you are substituting your own controls.

contentOverlayView

A UIView to which you are free to add subviews. These subviews will appear overlaid in front of the video but behind the playback controls. This is a great way to cover that dreadful QuickTime symbol (Figure 15-2).

videoGravity

How the video should be positioned within the view. Possible values are:

  • AVLayerVideoGravityResizeAspect (the default)

  • AVLayerVideoGravityResizeAspectFill

  • AVLayerVideoGravityResize (fills the view, possibly distorting the video)

videoBounds
isReadyForDisplay

The video position within the view, and the ability of the video to display its first frame and start playing, respectively. If the video is not ready for display, we probably don’t yet know its bounds either. In any case, isReadyForDisplay will initially be false and the videoBounds will initially be reported as .zero. This is because, with video, things take time to prepare. I’ll explain in detail later in this chapter.
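
For instance, here’s a hedged sketch (av is an AVPlayerViewController already in the interface; the values and label text are arbitrary) exercising showsPlaybackControls, videoGravity, and contentOverlayView, using the overlay to cover the center of the view:

av.showsPlaybackControls = true
av.videoGravity = AVLayerVideoGravityResizeAspectFill
if let overlay = av.contentOverlayView {
    let label = UILabel()
    label.text = "Now Playing" // e.g. to hide the QuickTime symbol
    label.textColor = .white
    label.sizeToFit()
    label.center = CGPoint(x: overlay.bounds.midX, y: overlay.bounds.midY)
    label.autoresizingMask = [.flexibleTopMargin, .flexibleBottomMargin,
                              .flexibleLeftMargin, .flexibleRightMargin]
    overlay.addSubview(label)
}

And here’s a sketch of the player property’s support for repeating play, combining an AVQueuePlayer with an AVPlayerLooper (self.looper is an assumed AVPlayerLooper property; the looper must be retained or looping stops):

let url = Bundle.main.url(forResource:"ElMirage", withExtension:"mp4")!
let item = AVPlayerItem(url:url)
let qp = AVQueuePlayer()
self.looper = AVPlayerLooper(player: qp, templateItem: item)
av.player = qp // playback now repeats automatically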

Everything else there is to know about an AVPlayerViewController comes from its player, an AVPlayer. I’ll discuss AVPlayer in more detail in a moment.

Picture-in-Picture

Starting in iOS 9, an iPad that supports iPad multitasking also supports picture-in-picture video playback. This means that the user can move your video into a small system window that floats in front of everything else on the screen. This floating window persists even if your app is put into the background. Your iPad app will support picture-in-picture if it supports background audio, as I described in Chapter 14: you check the background Audio checkbox in the Capabilities tab of the target editor (Figure 14-3), and your audio session’s policy must be Playback, and the session must be active.
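
In code, the audio session half of that requirement might look something like this (a sketch; error handling omitted):

let session = AVAudioSession.sharedInstance()
try? session.setCategory(AVAudioSessionCategoryPlayback)
try? session.setActive(true)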

Tip

If you want to do those things without supporting picture-in-picture, set the AVPlayerViewController’s allowsPictureInPicturePlayback to false. Note that even if you do support picture-in-picture, the user can turn it off in the Settings app.

The result is that, on an iPad that supports picture-in-picture, an extra button appears among the lower set of playback controls (Figure 15-4). When the user taps this button, the video is moved into the system window (and the AVPlayerViewController’s view displays a placeholder). The user is now free to leave your app while continuing to see and hear the video. Moreover, if you are using a fullscreen AVPlayerViewController and the user leaves your app while the video is playing, the video is moved into the picture-in-picture system window automatically.

Figure 15-4. The picture-in-picture button appears

The user can move the system window to any corner. Buttons in the system window, which can be shown or hidden by tapping, allow the user to play and pause the video, to dismiss the system window, or to dismiss the system window plus return to your app.

If you’re using a presented AVPlayerViewController, and the user takes the video into picture-in-picture mode, then when the user taps the button that dismisses the system window and returns to your app, the presented view controller, by default, has also been dismissed — there is no AVPlayerViewController any longer. If that isn’t what you want, declare yourself the AVPlayerViewController’s delegate (AVPlayerViewControllerDelegate) and deal with it in a delegate method. You have two choices:

Don’t dismiss the presented view controller

Implement playerViewControllerShouldAutomaticallyDismissAtPictureInPictureStart(_:) to return false, as sketched below. Now the presented view controller remains, and the video has a place in your app to which it can be restored.

Recreate the presented view controller

Implement playerViewController(_:restoreUserInterfaceForPictureInPictureStopWithCompletionHandler:). Do what the name tells you: restore the user interface! The first parameter is your original AVPlayerViewController; all you have to do is get it back into the view controller hierarchy. At the end of the process, call the completion function.
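
The first approach is nearly a one-liner; a sketch:

func playerViewControllerShouldAutomaticallyDismissAtPictureInPictureStart(
    _ playerViewController: AVPlayerViewController) -> Bool {
        return false // keep the presented view controller alive
}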

I’ll demonstrate the second approach:

func playerViewController(_ pvc: AVPlayerViewController,
    restoreUserInterfaceForPictureInPictureStopWithCompletionHandler
    ch: @escaping (Bool) -> Void) {
        self.present(pvc, animated:true) {
            ch(true)
        }
}

Other delegate methods inform you of various stages as picture-in-picture mode begins and ends. Thus you could respond by rearranging the interface. There is good reason for being conscious that you’ve entered picture-in-picture mode: once that happens, you are effectively a background app, and you should reduce resources and activity so that playing the video is all you’re doing until picture-in-picture mode ends.
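
For example, a delegate might track those stages like this (a sketch; self.isInPicInPic is a hypothetical property you’d consult elsewhere in order to throttle your activity):

func playerViewControllerWillStartPictureInPicture(
    _ playerViewController: AVPlayerViewController) {
        self.isInPicInPic = true // hypothetical flag; behave like a background app
}
func playerViewControllerDidStopPictureInPicture(
    _ playerViewController: AVPlayerViewController) {
        self.isInPicInPic = false
}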

Introducing AV Foundation

The video display performed by AVPlayerViewController is supplied by classes from the AV Foundation framework. This is a big framework with a lot of classes; the AV Foundation Framework Reference lists about 150 classes and 20 protocols. This may seem daunting, but there’s a good reason for it: video has a lot of structure and can be manipulated in many ways, and AV Foundation very carefully and correctly draws all the distinctions needed for good object-oriented encapsulation.

Because AV Foundation is so big, all I can do here is introduce it. I’ll point out some of the principal classes, features, and techniques associated with video. Further AV Foundation examples will appear in Chapters 16 and 17. Eventually you’ll want to read Apple’s AV Foundation Programming Guide for a full overview.

Some AV Foundation Classes

The heart of AV Foundation video playback is AVPlayer. It is not a UIView; rather, it is the locus of video transport (and the actual video, if shown, appears in an AVPlayerLayer associated with the AVPlayer). For example, AVPlayerViewController provides a play button, but what if you wanted to start video playback in code? You’d tell the AVPlayerViewController’s player (an AVPlayer) to play or set its rate to 1.
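
For instance, assuming av is an AVPlayerViewController already configured and displayed:

av.player?.play()      // starts playback...
// av.player?.rate = 1 // ...as does setting the rate directly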

An AVPlayer’s video is its currentItem, an AVPlayerItem. This may come as a surprise, because in the examples earlier in this chapter we initialized an AVPlayer directly from a URL, with no reference to any AVPlayerItem. That, however, was just a shortcut. AVPlayer’s real initializer is init(playerItem:); when we called init(url:), the AVPlayerItem was created for us.

An AVPlayerItem, too, can be initialized from a URL with init(url:), but again, this is just a shortcut. AVPlayerItem’s real initializer is init(asset:), which takes an AVAsset. An AVAsset is an actual video resource, and it comes in one of two subclasses:

AVURLAsset

An asset specified through a URL.

AVComposition

An asset constructed by editing video in code. I’ll give an example later in this chapter.

Thus, to configure an AVPlayer using the complete “stack” of objects that constitute it, you could say something like this:

let url = Bundle.main.url(forResource:"ElMirage", withExtension:"mp4")!
let asset = AVURLAsset(url:url)
let item = AVPlayerItem(asset:asset)
let player = AVPlayer(playerItem:item)

Once an AVPlayer exists and has an AVPlayerItem, that player item’s tracks, as seen from the player’s perspective, are AVPlayerItemTrack objects, which can be individually enabled or disabled. That’s different from an AVAssetTrack, which is a fact about an AVAsset. This distinction is a good example of what I said earlier about how AV Foundation encapsulates its objects correctly: an AVAssetTrack is a hard and fast reality, but an AVPlayerItemTrack lets a track be manipulated for purposes of playback on a particular occasion.
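
For instance, here’s a sketch that mutes playback by disabling the current item’s audio tracks, leaving the underlying asset untouched (player is an assumed AVPlayer reference; an item’s tracks aren’t populated until it is ready to play):

if let item = player.currentItem {
    for t in item.tracks { // AVPlayerItemTrack objects
        if t.assetTrack.mediaType == AVMediaTypeAudio {
            t.isEnabled = false // affects this playback only
        }
    }
}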

Things Take Time

Working with video is time-consuming. Just because you give an AVPlayer a command or set a property doesn’t mean that reaction time is immediate. All sorts of operations, from reading a video file and learning its metadata to transcoding and saving a video file, take a significant amount of time. The user interface must not freeze while a video task is in progress, so AV Foundation relies heavily on threading (Chapter 24). In this way, AV Foundation copes with the complex and time-consuming nature of its operations; but your code must cooperate. You’ll frequently use key–value observing and callbacks to run your code at the right moment.

Here’s an example; it’s slightly artificial, but it illustrates the principles and techniques you need to know about. There’s an elementary interface flaw when we create an embedded AVPlayerViewController:

let url = Bundle.main.url(forResource:"ElMirage", withExtension:"mp4")!
let asset = AVURLAsset(url:url)
let item = AVPlayerItem(asset:asset)
let player = AVPlayer(playerItem:item)
let av = AVPlayerViewController()
av.view.frame = CGRect(10,10,300,200)
av.player = player
self.addChildViewController(av)
self.view.addSubview(av.view)
av.didMove(toParentViewController: self)

There are two issues here:

  • The AVPlayerViewController’s view is initially appearing empty in the interface, because the video is not yet ready for display. There is then a visible flash when the video appears, because now it is ready for display.

  • The proposed frame of the AVPlayerViewController’s view doesn’t fit the actual aspect ratio of the video, which results in the video being letterboxed within that frame (visible in Figure 15-3).

To prevent the flash, we can start out with the AVPlayerViewController’s view hidden, and not show it until isReadyForDisplay is true. But how will we know when that is? Not by repeatedly polling the isReadyForDisplay property! That sort of behavior is absolutely wrong. Rather, we should use KVO to register as an observer of this property:

// ... as before ...
self.addChildViewController(av)
av.view.isHidden = true // *
self.view.addSubview(av.view)
av.didMove(toParentViewController: self)
av.addObserver(self, // *
    forKeyPath: #keyPath(AVPlayerViewController.readyForDisplay),
    options: .new, context: nil)

Sooner or later, isReadyForDisplay will become true, and we’ll be notified. Now we can unregister from KVO and show the AVPlayerViewController’s view:

override func observeValue(forKeyPath keyPath: String?,
    of object: Any?, change: [NSKeyValueChangeKey : Any]?,
    context: UnsafeMutableRawPointer?) {
        let ready = #keyPath(AVPlayerViewController.readyForDisplay)
        guard keyPath == ready else {return}
        guard let vc = object as? AVPlayerViewController else {return}
        guard let ok = change?[.newKey] as? Bool else {return}
        guard ok else {return}
        vc.removeObserver(self, forKeyPath:ready)
        DispatchQueue.main.async {
            vc.view.isHidden = false
        }
}

Note that, in that code, I make no assumptions about what thread KVO calls me back on: I intend to operate on the interface, so I step out to the main thread.

Now let’s talk about setting the AVPlayerViewController’s view.frame in accordance with the video’s aspect ratio. An AVAsset has tracks (AVAssetTrack); in particular, an AVAsset representing a video has a video track. A video track has a naturalSize, which will give me the aspect ratio I need.

However, it turns out that, for the sake of efficiency, these properties are among the many AV Foundation object properties that are not even evaluated unless you specifically ask for them. How do you do that? Well, AV Foundation objects that behave this way conform to the AVAsynchronousKeyValueLoading protocol. You call loadValuesAsynchronously(forKeys:completionHandler:) ahead of time, for any properties you’re going to be interested in. When your completion function is called, you check the status of a key and, if its status is .loaded, you are now free to access it.

So let’s go all the way back to the beginning. I’ll start by creating the AVAsset and then stop, waiting to hear that its tracks property is ready:

let url = Bundle.main.url(forResource:"ElMirage", withExtension:"mp4")!
let asset = AVURLAsset(url:url)
let track = #keyPath(AVURLAsset.tracks)
asset.loadValuesAsynchronously(forKeys:[track]) {
    let status = asset.statusOfValue(forKey:track, error: nil)
    if status == .loaded {
        DispatchQueue.main.async {
            self.getVideoTrack(asset)
        }
    }
}

When the tracks property is ready, my getVideoTrack method is called. I obtain the video track and then stop once again, waiting to hear when the video track’s naturalSize property is ready:

func getVideoTrack(_ asset:AVAsset) {
    let visual = AVMediaCharacteristicVisual
    let vtrack = asset.tracks(withMediaCharacteristic: visual)[0]
    let size = #keyPath(AVAssetTrack.naturalSize)
    vtrack.loadValuesAsynchronously(forKeys: [size]) {
        let status = vtrack.statusOfValue(forKey: size, error: nil)
        if status == .loaded {
            DispatchQueue.main.async {
                self.getNaturalSize(vtrack, asset)
            }
        }
    }
}

When the naturalSize property is ready, my getNaturalSize method is called. I get the natural size and use it to finish constructing the AVPlayer and to set the AVPlayerViewController’s view frame:

func getNaturalSize(_ vtrack:AVAssetTrack, _ asset:AVAsset) {
    let sz = vtrack.naturalSize
    let item = AVPlayerItem(asset:asset)
    let player = AVPlayer(playerItem:item)
    let av = AVPlayerViewController()
    av.view.frame = AVMakeRect(
        aspectRatio: sz, insideRect: CGRect(10,10,300,200))
    av.player = player
    // ... and the rest is as before ...
}

AVPlayerItem provides another way of loading an asset’s properties: initialize it with init(asset:automaticallyLoadedAssetKeys:) and observe its status using KVO. When that status is .readyToPlay, you are guaranteed that the player item’s asset has attempted to load those keys, and you can query them just as you would in loadValuesAsynchronously.
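
A sketch of that alternative (removal of the observer is omitted for brevity):

let url = Bundle.main.url(forResource:"ElMirage", withExtension:"mp4")!
let asset = AVURLAsset(url:url)
let item = AVPlayerItem(asset: asset,
    automaticallyLoadedAssetKeys: [#keyPath(AVURLAsset.tracks)])
item.addObserver(self,
    forKeyPath: #keyPath(AVPlayerItem.status),
    options: .new, context: nil)
// in observeValue(forKeyPath:of:change:context:), once the item's status
// is .readyToPlay, asset.tracks has been loaded (or has failed to load)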

Time is Measured Oddly

Another peculiarity of AV Foundation is that time is measured in an unfamiliar way. This is necessary because calculations using an ordinary built-in numeric type such as CGFloat will always have slight rounding errors that quickly begin to matter when you’re trying to specify a time within a large piece of media.

Therefore, the Core Media framework provides the CMTime struct, which under the hood is a pair of integers; they are called the value and the timescale, but they are simply the numerator and denominator of a rational number. When you call the CMTime initializer init(value:timescale:) (equivalent to C CMTimeMake), that’s what you’re providing. The denominator represents the degree of granularity; a typical value is 600, sufficient to specify individual frames in common video formats.

However, in the convenience initializer init(seconds:preferredTimescale:) (equivalent to C CMTimeMakeWithSeconds), the two arguments are not the numerator and denominator; they are the time’s equivalent in seconds and the denominator. For example, CMTime(seconds:2.5, preferredTimescale:600) yields the CMTime (1500,600).
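
To illustrate the difference:

let t1 = CMTime(value: 1500, timescale: 600)           // 1500/600, i.e. 2.5 seconds
let t2 = CMTime(seconds: 2.5, preferredTimescale: 600) // also (1500,600)
let sum = CMTimeAdd(t1, t2)                            // (3000,600), i.e. 5 seconds
print(CMTimeGetSeconds(sum))                           // 5.0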

Constructing Media

AV Foundation allows you to construct your own media asset in code as an AVComposition, an AVAsset subclass, using its subclass, AVMutableComposition. An AVMutableComposition is an AVAsset, so given an AVMutableComposition, we could make an AVPlayerItem from it (by calling init(asset:)) and hand it over to an AVPlayerViewController’s player; we will thus be creating and displaying our own movie.

Let’s try it! In this example, I start with an AVAsset (asset1, a video file) and assemble its first 5 seconds of video and its last 5 seconds of video into an AVMutableComposition (comp):

let type = AVMediaTypeVideo
let arr = asset1.tracks(withMediaType: type)
let track = arr.last!
let duration : CMTime = track.timeRange.duration
let comp = AVMutableComposition()
let comptrack = comp.addMutableTrack(withMediaType: type,
    preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
try! comptrack.insertTimeRange(CMTimeRange(
    start: CMTime(seconds:0, preferredTimescale:600),
    duration: CMTime(seconds:5, preferredTimescale:600)),
    of:track, at:CMTime(seconds:0, preferredTimescale:600))
try! comptrack.insertTimeRange(CMTimeRange(
    start: CMTimeSubtract(duration,
        CMTime(seconds:5, preferredTimescale:600)),
    duration: CMTime(seconds:5, preferredTimescale:600)),
    of:track, at:CMTime(seconds:5, preferredTimescale:600))

This works perfectly. We are not very good video editors, however, as we have forgotten the corresponding soundtrack from asset1. Let’s go back and get it and add it to our AVMutableComposition (comp):

let type2 = AVMediaTypeAudio
let arr2 = asset1.tracks(withMediaType: type2)
let track2 = arr2.last!
let comptrack2 = comp.addMutableTrack(withMediaType: type2,
    preferredTrackID:Int32(kCMPersistentTrackID_Invalid))
try! comptrack2.insertTimeRange(CMTimeRange(
    start: CMTime(seconds:0, preferredTimescale:600),
    duration: CMTime(seconds:5, preferredTimescale:600)),
    of:track2, at:CMTime(seconds:0, preferredTimescale:600))
try! comptrack2.insertTimeRange(CMTimeRange(
    start: CMTimeSubtract(duration,
        CMTime(seconds:5, preferredTimescale:600)),
    duration: CMTime(seconds:5, preferredTimescale:600)),
    of:track2, at:CMTime(seconds:5, preferredTimescale:600))

But wait! Now let’s overlay another audio track from another asset; this might be, for example, some additional narration:

let type3 = AVMediaTypeAudio
let s = Bundle.main.url(forResource:"aboutTiagol", withExtension:"m4a")!
let asset2 = AVURLAsset(url:s)
let arr3 = asset2.tracks(withMediaType: type3)
let track3 = arr3.last!
let comptrack3 = comp.addMutableTrack(withMediaType: type3,
    preferredTrackID:Int32(kCMPersistentTrackID_Invalid))
try! comptrack3.insertTimeRange(CMTimeRange(
    start: CMTime(seconds:0, preferredTimescale:600),
    duration: CMTime(seconds:10, preferredTimescale:600)),
    of:track3, at:CMTime(seconds:0, preferredTimescale:600))

You can also apply audio volume changes and video opacity and transform changes to the playback of individual tracks. I’ll continue from the previous example, applying a fadeout to the last three seconds of the narration track (comptrack3) by creating an AVAudioMix:

let params = AVMutableAudioMixInputParameters(track:comptrack3)
params.setVolume(1, at:CMTime(seconds:0, preferredTimescale:600))
params.setVolumeRamp(fromStartVolume: 1, toEndVolume:0,
    timeRange:CMTimeRange(
        start: CMTime(seconds:7, preferredTimescale:600),
        duration: CMTime(seconds:3, preferredTimescale:600)))
let mix = AVMutableAudioMix()
mix.inputParameters = [params]

The audio mix must be applied to a playback milieu, such as an AVPlayerItem. So when we make an AVPlayerItem out of our AVComposition, we can set its audioMix property to mix:

let item = AVPlayerItem(asset:comp)
item.audioMix = mix

Similar to AVAudioMix, you can use AVVideoComposition to dictate how video tracks are to be composited. Starting in iOS 9, you can easily add a CIFilter (Chapter 2) to be applied to your video.
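
For instance, here’s a hedged sketch applying a sepia filter to every frame of our composition; like the audio mix, the resulting video composition is applied to the AVPlayerItem (comp and item are from the preceding code; the filter choice is arbitrary):

let filter = CIFilter(name: "CISepiaTone")!
let vidcomp = AVVideoComposition(asset: comp) { request in
    filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
    let output = filter.outputImage ?? request.sourceImage
    request.finish(with: output, context: nil)
}
item.videoComposition = vidcomp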

Synchronizing Animation with Video

An intriguing feature of AV Foundation is AVSynchronizedLayer, a CALayer subclass that effectively crosses the bridge between video time (the CMTime within the progress of a movie) and Core Animation time (the time within the progress of an animation). This means that you can coordinate animation in your interface (Chapter 4) with the playback of a movie. You attach an animation to a layer in more or less the usual way, but the animation takes place in movie playback time: if the movie is stopped, the animation is stopped; if the movie is run at double rate, the animation runs at double rate; and the current “frame” of the animation always corresponds to the current frame, within its entire duration, of the video.

The synchronization is performed with respect to an AVPlayer’s AVPlayerItem. To demonstrate, I’ll draw a long thin gray rectangle containing a little black square; the horizontal position of the black square within the gray rectangle will be synchronized to the movie playhead position:

let vc = self.childViewControllers[0] as! AVPlayerViewController
let p = vc.player!
// create synch layer, put it in the interface
let item = p.currentItem!
let syncLayer = AVSynchronizedLayer(playerItem:item)
syncLayer.frame = CGRect(10,220,300,10)
syncLayer.backgroundColor = UIColor.lightGray.cgColor
self.view.layer.addSublayer(syncLayer)
// give synch layer a sublayer
let subLayer = CALayer()
subLayer.backgroundColor = UIColor.black.cgColor
subLayer.frame = CGRect(0,0,10,10)
syncLayer.addSublayer(subLayer)
// animate the sublayer
let anim = CABasicAnimation(keyPath:#keyPath(CALayer.position))
anim.fromValue = subLayer.position
anim.toValue = CGPoint(295,5)
anim.isRemovedOnCompletion = false
anim.beginTime = AVCoreAnimationBeginTimeAtZero // important trick
anim.duration = CMTimeGetSeconds(item.asset.duration)
subLayer.add(anim, forKey:nil)
Figure 15-5. The black square’s position is synchronized to the movie

The result is shown in Figure 15-5. The gray rectangle is the AVSynchronizedLayer, tied to our movie. The little black square inside it is its sublayer; when we animate the black square, that animation will be synchronized to the movie, changing its position from the left end of the gray rectangle to the right end, starting at the beginning of the movie and with the same duration as the movie. Thus, although we attach this animation to the black square layer in the usual way, that animation is frozen: the black square doesn’t move until we start the movie playing. Moreover, if we pause the movie, the black square stops. The black square is thus automatically representing the current play position within the movie. This may seem a silly example, but if you were to suppress the video controls it could prove downright useful.

AVPlayerLayer

An AVPlayer is not an interface object. The corresponding interface object — an AVPlayer made visible, as it were — is an AVPlayerLayer (a CALayer subclass). It has no controls for letting the user play and pause a movie and visualize its progress; it just shows the movie, acting as a bridge between the AV Foundation world of media and the CALayer world of things the user can see.

An AVPlayerViewController’s view hosts an AVPlayerLayer for you automatically; otherwise you would not see any video in the AVPlayerViewController’s view. But there may certainly be situations where you find AVPlayerViewController too heavyweight, where you don’t need the standard transport controls, where you don’t want the video to be expandable or to have a fullscreen mode — you just want the simple direct power that can be obtained only by putting an AVPlayerLayer into the interface yourself. And you are free to do so!

Here, I’ll display the same movie as before, but without an AVPlayerViewController:

let m = Bundle.main.url(forResource:"ElMirage", withExtension:"mp4")!
let asset = AVURLAsset(url:m)
let item = AVPlayerItem(asset:asset)
let p = AVPlayer(playerItem:item)
self.player = p // might need a reference later
let lay = AVPlayerLayer(player:p)
lay.frame = CGRect(10,10,300,200)
self.playerLayer = lay // might need a reference later
self.view.layer.addSublayer(lay)

As before, if we want to prevent a flash when the video becomes ready for display, we can postpone adding the AVPlayerLayer to our interface until its isReadyForDisplay property becomes true — which we can learn through KVO.

In a WWDC 2016 video, Apple suggests an interesting twist on the preceding code: create the AVPlayer without an AVPlayerItem, create the AVPlayerLayer, and then assign the AVPlayerItem to AVPlayer, like this:

let m = Bundle.main.url(forResource:"ElMirage", withExtension:"mp4")!
let asset = AVURLAsset(url:m)
let item = AVPlayerItem(asset:asset)
let p = AVPlayer() // *
self.player = p
let lay = AVPlayerLayer(player:p)
lay.frame = CGRect(10,10,300,200)
self.playerLayer = lay
p.replaceCurrentItem(with: item) // *
self.view.layer.addSublayer(lay)

Apparently, there is some increase in efficiency if you do things in this order. The reason, it turns out, is that when an AVPlayerItem is assigned to an AVPlayer that doesn’t have an associated AVPlayerLayer, the AVPlayer assumes that only the audio track of the AVAsset is important — and then, when an AVPlayerLayer is assigned, it must scramble to pick up the video track as well.

The movie is now visible in the interface, but it isn’t doing anything. We haven’t told our AVPlayer to play, and there are no transport controls, so the user can’t tell the video to play either. This is why I kept a reference to the AVPlayer in a property! We can start play either by calling play or by setting the AVPlayer’s rate. Here, I imagine that we’ve provided a simple play/pause button that toggles the playing status of the movie by changing its rate:

@IBAction func doButton (_ sender: Any!) {
    let rate = self.player.rate
    self.player.rate = rate < 0.01 ? 1 : 0
}

Without trying to replicate the transport controls, we might also like to give the user a way to jump the playhead back to the start of the movie. The playhead position is a feature, not of an AVPlayer, but of an AVPlayerItem:

@IBAction func restart (_ sender: Any!) {
    let item = self.player.currentItem!
    item.seek(to:CMTime(seconds:0, preferredTimescale:600))
}

If we want our AVPlayerLayer to support picture-in-picture, then (in addition to making the app itself support picture-in-picture, as I’ve already described) we need to call upon AVKit to supply us with an AVPictureInPictureController. This is not a view controller; it merely endows our AVPlayerLayer with picture-in-picture behavior. You create the AVPictureInPictureController (checking first to see whether the environment supports picture-in-picture in the first place), initialize it with the AVPlayerLayer, and retain it:

if AVPictureInPictureController.isPictureInPictureSupported() {
    let pic = AVPictureInPictureController(playerLayer: self.playerLayer)
    self.pic = pic
}

There are no transport controls, so there is no picture-in-picture button. Supplying one is up to you. Don’t forget to hide the button if picture-in-picture isn’t supported! When the button is tapped, tell the AVPictureInPictureController to startPictureInPicture:

@IBAction func doPicInPic(_ sender: Any) {
    if self.pic.isPictureInPicturePossible {
        self.pic.startPictureInPicture()
    }
}

You might also want to set yourself as the AVPictureInPictureController’s delegate (AVPictureInPictureControllerDelegate). This is very similar to the AVPlayerViewController delegate, and serves the same purpose: you are informed of stages in the life of the picture-in-picture window so that you can adjust your interface accordingly. When the user taps the button that dismisses the system window and returns to your app, then if the AVPlayerLayer is still sitting in your interface, there may be no work to do. If you removed the AVPlayerLayer from your interface, and you now want to restore it, implement picture(_:restoreUserInterfaceForPictureInPictureStopWithCompletionHandler:). In your implementation, be sure that the AVPlayerLayer that you now put into your interface is the same one that was removed earlier; in other words, your player layer must continue to be the same as the AVPictureInPictureController’s playerLayer.
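
A sketch of that restoration, using the delegate method name given above (self.playerLayer is the same layer we created earlier):

func picture(_ pic: AVPictureInPictureController,
    restoreUserInterfaceForPictureInPictureStopWithCompletionHandler
    ch: @escaping (Bool) -> Void) {
        if self.playerLayer.superlayer == nil {
            self.view.layer.addSublayer(self.playerLayer) // same layer as before
        }
        ch(true)
}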

Further Exploration of AV Foundation

Here are some other things you can do with AV Foundation:

  • Extract single images (“thumbnails”) from a movie (AVAssetImageGenerator).

  • Export a movie in a different format (AVAssetExportSession), or read/write raw uncompressed data through a buffer to or from a track (AVAssetReader, AVAssetReaderOutput, AVAssetWriter, AVAssetWriterInput, and so on).

  • Capture audio, video, and stills through the device’s hardware (AVCaptureSession and so on). I’ll say more about this in Chapter 17.

  • Tap into video and audio being captured or played, including capturing video frames as still images (AVPlayerItemVideoOutput, AVCaptureVideoDataOutput, and so on; and see Apple’s Technical Q&A QA1702).
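
As a taste of the first of those, here’s a sketch extracting a thumbnail from an existing asset (asset is an assumed AVAsset; the time chosen is arbitrary):

let gen = AVAssetImageGenerator(asset: asset)
gen.appliesPreferredTrackTransform = true // respect the video's orientation
let t = CMTime(seconds: 2, preferredTimescale: 600)
if let cg = try? gen.copyCGImage(at: t, actualTime: nil) {
    let thumbnail = UIImage(cgImage: cg)
    // use the thumbnail, e.g. in an image view
}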

UIVideoEditorController

UIVideoEditorController is a view controller that presents an interface where the user can trim video. Its view and internal behavior are outside your control, and you’re not supposed to subclass it. You are expected to treat the view controller as a presented view controller on the iPhone or as a popover on the iPad, and respond by way of its delegate.

Warning

UIVideoEditorController is one of the creakiest pieces of interface in iOS. It dates back to iOS 3.1, and hasn’t been revised since its inception — and it looks and feels like it. It has never worked properly on the iPad, and still doesn’t. I’m going to show how to use it, but I’m not going to explore its bugginess in any depth or we’d be here all day.

Before summoning a UIVideoEditorController, be sure to call its class method canEditVideo(atPath:). (This call can take some noticeable time to return.) If it returns false, don’t instantiate UIVideoEditorController to edit the given file. Not every video format is editable, and not every device supports video editing. You must also set the UIVideoEditorController instance’s delegate and videoPath before presenting it; the delegate should adopt both UINavigationControllerDelegate and UIVideoEditorControllerDelegate. You must manually set the video editor controller’s modalPresentationStyle to .popover on the iPad (a good instance of the creakiness I was just referring to):

let path = Bundle.main.path(forResource:"ElMirage", ofType: "mp4")!
let can = UIVideoEditorController.canEditVideo(atPath:path)
if !can {
    print("can't edit this video")
    return
}
let vc = UIVideoEditorController()
vc.delegate = self
vc.videoPath = path
if UIDevice.current.userInterfaceIdiom == .pad {
    vc.modalPresentationStyle = .popover
}
self.present(vc, animated: true)
if let pop = vc.popoverPresentationController {
    let v = sender as! UIView
    pop.sourceView = v
    pop.sourceRect = v.bounds
    pop.delegate = self
}

The view’s interface (on the iPhone) contains Cancel and Save buttons, a trimming box displaying thumbnails from the movie, a play/pause button, and the movie itself. The user slides the ends of the trimming box to set the beginning and end of the saved movie. The Cancel and Save buttons do not dismiss the presented view; you must do that in your implementation of the delegate methods. There are three of them, and you should implement all three and dismiss the presented view in all of them:

  • videoEditorController(_:didSaveEditedVideoToPath:)

  • videoEditorControllerDidCancel(_:)

  • videoEditorController(_:didFailWithError:)

Implementing the second two delegate methods is straightforward:

func videoEditorControllerDidCancel(_ editor: UIVideoEditorController) {
    self.dismiss(animated:true)
}
func videoEditorController(_ editor: UIVideoEditorController,
    didFailWithError error: Error) {
        self.dismiss(animated:true)
}

Saving the trimmed video is more involved. Like everything else about a movie, it takes time. When the user taps Save, there’s a progress view while the video is trimmed and compressed. By the time the delegate method videoEditorController(_:didSaveEditedVideoToPath:) is called, the trimmed video has already been saved to a file in your app’s temporary directory.

Doing something useful with the saved file at this point is up to you; if you merely leave it in the temporary directory, you can’t rely on it to persist. In this example, I copy the edited movie into the Camera Roll album of the user’s photo library, by calling UISaveVideoAtPathToSavedPhotosAlbum. For this to work, our app’s Info.plist must have a “Privacy — Photo Library Usage Description” entry (NSPhotoLibraryUsageDescription) so that the runtime can ask for the user’s permission on our behalf. Saving takes time too, so I configure a callback to a method that dismisses the editor after the saving is over:

func videoEditorController(_ editor: UIVideoEditorController,
    didSaveEditedVideoToPath path: String) {
        if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
            UISaveVideoAtPathToSavedPhotosAlbum(path, self,
                #selector(savedVideo), nil)
        } else {
            // can't save to photo album, try something else
        }
}

The function reference #selector(savedVideo) in that code refers to a callback method that must take three parameters: a String (the path), an Optional wrapping an Error, and an UnsafeMutableRawPointer. It’s important to check for errors, because things can still go wrong. In particular, the user could deny us access to the photo library (see Chapter 17 for more about that). If that’s the case, we’ll get an Error whose domain is ALAssetsLibraryErrorDomain:

func savedVideo(at path:String, withError error:Error?,
    ci:UnsafeMutableRawPointer) {
        if let error = error {
            print("error: (error)")
        }
        self.dismiss(animated:true)
}