Chapter 17. Photo Library and Camera

The photos and videos accessed by the user through the Photos app constitute the device’s photo library.

Your app can give the user an interface for exploring the photo library through the UIImagePickerController class. In addition, the Photos framework lets you access the photo library and its contents programmatically — including the ability to modify a photo’s image. You’ll need to import Photos.

The UIImagePickerController class can also be used to give the user an interface similar to the Camera app, letting the user capture photos and videos. And of course, having allowed the user to capture an image, you can store it in the photo library, just as the Camera app does. At a deeper level, the AV Foundation framework (Chapter 15) provides direct control over the camera hardware. You’ll need to import AVFoundation.

Constants such as kUTTypeImage, referred to in this chapter, are provided by the Mobile Core Services framework; you’ll need to import MobileCoreServices.

Access to the photo library requires user authorization. You’ll use the PHPhotoLibrary class for this (part of the Photos framework; import Photos). To learn what the current authorization status is, call the class method authorizationStatus. To ask the system to put up the authorization request alert if the status is .notDetermined, call the class method requestAuthorization(_:). The Info.plist must contain some text that the system authorization request alert can use to explain why your app wants access. For the photo library, the relevant key is “Privacy — Photo Library Usage Description” (NSPhotoLibraryUsageDescription). See “Music Library Authorization” for detailed consideration of authorization strategy and testing.
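
Here's a minimal sketch of that authorization dance; the method name checkPhotoLibraryAccess and its completion parameter are my own:

func checkPhotoLibraryAccess(andThen f: (()->())? = nil) {
    let status = PHPhotoLibrary.authorizationStatus()
    switch status {
    case .authorized:
        f?()
    case .notDetermined:
        PHPhotoLibrary.requestAuthorization { status in
            if status == .authorized {
                DispatchQueue.main.async {
                    f?() // the handler may arrive on a background thread
                }
            }
        }
    default:
        break // .restricted or .denied; could urge the user to visit Settings
    }
}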

Browsing with UIImagePickerController

UIImagePickerController is a view controller providing an interface in which the user can choose an item from the photo library, similar to the Photos app. You are expected to treat the UIImagePickerController as a presented view controller. You can use a popover on the iPad, but a fullscreen presentation looks fine as well.

Warning

The documentation claims that a fullscreen presented view is forbidden on the iPad; this is not true (though it was true in early versions of iOS).

To let the user choose an item from the photo library, instantiate UIImagePickerController and assign its sourceType one of these values (UIImagePickerControllerSourceType):

.photoLibrary

The user is shown a table of all albums, and can navigate into any of them.

.savedPhotosAlbum

In theory, the user is supposed to be confined to the contents of the Camera Roll album. Instead, ever since iOS 8, the user sees the Moments interface and all photos are shown; I regard this as an atrocious bug.

You should call the class method isSourceTypeAvailable(_:) beforehand; if it doesn’t return true, don’t present the controller with that source type.

You’ll probably want to specify an array of mediaTypes you’re interested in. This array will usually contain kUTTypeImage, kUTTypeMovie, or both; or you can specify all available types by calling the class method availableMediaTypes(for:).

After doing all of that, and having supplied a delegate (adopting UIImagePickerControllerDelegate and UINavigationControllerDelegate), present the picker:

let src = UIImagePickerControllerSourceType.photoLibrary
guard UIImagePickerController.isSourceTypeAvailable(src)
    else {return}
guard let arr = UIImagePickerController.availableMediaTypes(for:src)
    else {return}
let picker = UIImagePickerController()
picker.sourceType = src
picker.mediaTypes = arr
picker.delegate = self
self.present(picker, animated: true)

The delegate will receive one of these messages:

imagePickerController(_:didFinishPickingMediaWithInfo:)

The user selected an item from the photo library. The info: parameter describes it; I’ll give details in a moment.

imagePickerControllerDidCancel(_:)

The user tapped Cancel.

If a UIImagePickerControllerDelegate method is not implemented, the view controller is dismissed automatically at the point where that method would be called; but rather than relying on this, you should probably implement both delegate methods and dismiss the view controller yourself in each.

The info in the first delegate method is a dictionary of information about the chosen item. The keys in this dictionary depend on the media type:

An image

The keys are:

UIImagePickerControllerMediaType

A UTI; probably "public.image", which is the same as kUTTypeImage.

UIImagePickerControllerReferenceURL

An asset URL pointing to the image file in the photo library.

UIImagePickerControllerOriginalImage

A UIImage. You might display it in a UIImageView.

A movie

The keys are:

UIImagePickerControllerMediaType

A UTI; probably "public.movie", which is the same as kUTTypeMovie.

UIImagePickerControllerReferenceURL

An asset URL pointing to the movie file in the photo library.

UIImagePickerControllerMediaURL

A file URL to a copy of the movie saved into a temporary directory. You might display it in an AVPlayerViewController’s view or an AVPlayerLayer (Chapter 15).

Optionally, you can set the view controller’s allowsEditing to true. In the case of an image, the interface then allows the user to scale the image up and to move it so as to be cropped by a preset rectangle; the dictionary will include two additional keys:

UIImagePickerControllerCropRect

An NSValue wrapping a CGRect.

UIImagePickerControllerEditedImage

A UIImage. This becomes the image you are expected to use.

In the case of a movie, if the view controller’s allowsEditing is true, the user can trim the movie just as with a UIVideoEditorController (Chapter 15). The dictionary keys are the same as before.

Since the advent of devices with a 12 megapixel camera, a photo may be a live photo. When the user opts to capture a live photo, the device keeps a record of the last couple of seconds of video visible to the camera; when the user actually snaps a photo, the device briefly keeps recording video, to create a proprietary combination of a still photo backed by a roughly 3-second snippet of video.

UIImagePickerController can return a live photo as a live photo (as opposed to a simple image) if the mediaTypes includes both kUTTypeLivePhoto and kUTTypeImage, and if allowsEditing is false. In that case, in the delegate method, the info dictionary will include a UIImagePickerControllerLivePhoto key whose value is a PHLivePhoto (supplied by the Photos framework), and the media type will be reported as kUTTypeLivePhoto.

To display a PHLivePhoto in your interface, use a PHLivePhotoView (supplied by the Photos UI framework; import PhotosUI). This view has many powerful properties and delegate methods, but you don’t need any of them just to display the live photo; the live photo is shown as a live photo automatically, meaning that the user can use force touch on it (or long press on a device without 3D touch) to show the accompanying movie. The only properties you really need to set are the PHLivePhotoView’s frame and its contentMode (similar to a UIImageView).
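
Here's a sketch of the sort of showLivePhoto(_:) method that the next example calls; self.v is assumed to be some container view in my interface:

func showLivePhoto(_ live: PHLivePhoto) {
    let lpv = PHLivePhotoView(frame: self.v.bounds)
    lpv.contentMode = .scaleAspectFit
    lpv.livePhoto = live
    self.v.addSubview(lpv)
}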

Here’s an example implementation of imagePickerController(_:didFinishPickingMediaWithInfo:) that covers the fundamental cases:

func imagePickerController(_ picker: UIImagePickerController,
    didFinishPickingMediaWithInfo info: [String : Any]) {
        let url = info[UIImagePickerControllerMediaURL] as? URL
        var im = info[UIImagePickerControllerOriginalImage] as? UIImage
        if let ed = info[UIImagePickerControllerEditedImage] as? UIImage {
            im = ed
        }
        let live = info[UIImagePickerControllerLivePhoto] as? PHLivePhoto
        self.dismiss(animated:true) {
            if let mediatype = info[UIImagePickerControllerMediaType],
                let type = mediatype as? NSString {
                    switch type {
                    case kUTTypeLivePhoto:
                        if live != nil {
                            self.showLivePhoto(live!)
                        }
                    case kUTTypeImage:
                        if im != nil {
                            self.showImage(im!)
                        }
                    case kUTTypeMovie:
                        if url != nil {
                            self.showMovie(url:url!)
                        }
                    default:break
                    }
                }
        }
}
Warning

UIImagePickerController provides no way to govern its supported interface orientations (rotation). The delegate method navigationControllerSupportedInterfaceOrientations(_:) is ineffective. My solution is to subclass.

Photos Framework

The Photos framework (import Photos), also known as Photo Kit, does for the photo library roughly what the Media Player framework does for the music library (Chapter 16), letting your code explore the library’s contents — and then some. You can manipulate albums, add photos, and even perform edits on the user’s photos.

Note

The Assets Library framework from iOS 7 and before is deprecated, and is not discussed in this edition.

The photo library itself is represented by the PHPhotoLibrary class, and by its shared instance, which you can obtain through the shared method. You will not often need to use this class, however, and you do not need to retain the shared photo library instance. More important are the classes representing the kinds of things that inhabit the library (the photo entities):

PHAsset

A single photo or video file.

PHCollection

An abstract class representing collections of all kinds. Its concrete subclasses are:

PHAssetCollection

A collection of photos; albums and moments are PHAssetCollections.

PHCollectionList

A collection of asset collections. For example, a year of moments is a collection list; a folder of albums is a collection list.

Finer typological distinctions are drawn, not through subclasses, but through a system of types and subtypes. For example, a PHAsset might have a type of .image and a subtype of .photoPanorama; a PHAssetCollection might have a type of .album and a subtype of .albumRegular; and so on.

The photo entity classes are actually all subclasses of PHObject, an abstract class that endows them with a localIdentifier property that functions as a persistent unique identifier.

Querying the Photo Library

When you want to know what’s in the photo library, start with one of the photo entity classes — the one that represents the type of entity you want to know about. The photo entity class will supply class methods whose names begin with fetch; you’ll pick the class method that expresses the kind of criteria you’re starting with. For example, to fetch one or more PHAssets, you’ll call a PHAsset fetch method; you can fetch by containing asset collection, by local identifier, by media type, or by asset URL (such as you might get from a UIImagePickerController). Similarly, you can fetch PHAssetCollections by identifier, by URL, by whether they contain a given PHAsset; you can fetch PHCollectionLists by identifier, by whether they contain a given PHAssetCollection; and so on.

Many of these fetch methods have parameters that help to limit and define the search. In addition, you can supply a PHFetchOptions object letting you refine the results even further: you can set its predicate to limit your request results, and its sortDescriptors to determine the results order. Starting in iOS 9, a PHFetchOptions fetchLimit can limit the number of results returned, and its includeAssetSourceTypes can specify where the results should come from, such as eliminating cloud items (as we did with MPMediaItems in Chapter 16).

What you get back from a fetch method query is not images or videos but information. A fetch method returns a collection of PHObjects of the type to which you sent the fetch method originally; these refer to entities in the photo library, rather than handing you an entire file (which would be huge). The collection itself is expressed as a PHFetchResult, which behaves very like an array: you can ask for its count, obtain the object at a given index (possibly by subscripting), look for an object within the collection, and enumerate the collection with an enumerate method.

Warning

You cannot enumerate a PHFetchResult with for...in in Swift, even though you can do so in Objective-C. I regard this as a bug (caused by the fact that in Swift 3, PHFetchResult is a generic).

For example, let’s say we want to know how moments are divided into years. A clump of moments grouped by year is a PHCollectionList, so the relevant class is PHCollectionList. This code is a fairly standard template for any sort of information fetching:

let opts = PHFetchOptions()
let desc = NSSortDescriptor(key: "startDate", ascending: true)
opts.sortDescriptors = [desc]
let result = PHCollectionList.fetchCollectionLists(with: .momentList,
    subtype: .momentListYear, options: opts)
for ix in 0..<result.count {
    let list = result[ix]
    let f = DateFormatter()
    f.dateFormat = "yyyy"
    print(f.string(from:list.startDate!))
}
/*
1987
1988
1989
1990
...
*/

Each resulting list object in the preceding code is a PHCollectionList comprising a list of moments. Let’s dive into that object to see how those moments are clumped into clusters. A cluster of moments is a PHAssetCollection, so the relevant class is PHAssetCollection:

let result = PHAssetCollection.fetchMoments(inMomentList:list, options:nil)
for ix in 0 ..< result.count {
    let coll = result[ix]
    if ix == 0 {
        print("======= \(result.count) clusters")
    }
    f.dateFormat = "yyyy-MM-dd"
    print("starting \(f.string(from:coll.startDate!)): " +
        "\(coll.estimatedAssetCount)")
}
/*
======= 12 clusters
starting 1987-05-15: 2
starting 1987-05-16: 6
starting 1987-05-17: 2
starting 1987-05-20: 4
....
*/

Observe that in that code we can learn how many moments are in each cluster only as its estimatedAssetCount. This is probably the right answer, but to obtain the real count, we’d have to dive one level deeper and fetch the cluster’s actual moments.

Next, let’s list all albums that have been synced onto the device from iPhoto. An album is a PHAssetCollection, so the relevant class is PHAssetCollection:

let result = PHAssetCollection.fetchAssetCollections(with: .album,
    subtype: .albumSyncedAlbum, options: nil)
for ix in 0 ..< result.count {
    let album = result[ix]
    print("\(album.localizedTitle): " +
        "approximately \(album.estimatedAssetCount) photos")
}

Again, let’s dive further: given an album, let’s fetch its contents. An album’s contents are its assets (photos and videos), so the relevant class is PHAsset:

let result = PHAsset.fetchAssets(in:album, options: nil)
for ix in 0 ..< result.count {
    let asset = result[ix]
    print(asset.localIdentifier)
}

If you don’t find the fetch method you need, don’t forget about PHFetchOptions. For example, there is no PHAsset fetch method for fetching from a certain collection all assets of a certain type — for example, to specify that you want all photos (but no videos) from the user’s Camera Roll. You can perform such a fetch, but to do so, you need to form an NSPredicate and use a PHFetchOptions object.
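
For instance, here's a sketch of fetching only the still photos (no videos) from a given asset collection; album is assumed to be a PHAssetCollection obtained from an earlier fetch:

let opts = PHFetchOptions()
opts.predicate = NSPredicate(format: "mediaType == %d",
    PHAssetMediaType.image.rawValue)
let photos = PHAsset.fetchAssets(in: album, options: opts)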

Modifying the Library

Structural modifications to the photo library are performed through a change request class corresponding to the class of photo entity we wish to modify. The name of the change request class is the name of a photo entity class followed by “ChangeRequest.” Thus, for PHAsset, there’s the PHAssetChangeRequest class — and so on.

A change request is usable only by calling a performChanges method on the shared photo library. Typically, the method you’ll call will be performChanges(_:completionHandler:), which takes two functions. The first function, the changes function, is where you describe the changes you want performed; the second function, the completion function, is called back after the changes have been performed. The reason for this peculiar structure is that the photo library is a live database. While we are working, the photo library can change. Therefore, the changes function is used to batch our desired changes and send them off as a single transaction to the photo library, which responds by calling the completion function when the outcome of the entire batch is known.

Each change request class comes with methods that ask for a change of some particular type. Here are some examples:

PHAssetChangeRequest

Class methods include deleteAssets(_:), creationRequestForAssetFromImage(atFileURL:), and so on.

PHAssetCollectionChangeRequest

Class methods include deleteAssetCollections(_:) and creationRequestForAssetCollection(withTitle:). In addition, there are initializers like init(for:), which takes an asset collection, along with instance methods addAssets(_:), removeAssets(_:), and so on.

The creationRequest class methods also return change request instances, but you won’t need them unless you plan to perform further changes as part of the same batch.

For example, let’s create an album called “TestAlbum.” An album is a PHAssetCollection, so we start with the PHAssetCollectionChangeRequest class and call its creationRequestForAssetCollection(withTitle:) class method in the performChanges function. This method returns a PHAssetCollectionChangeRequest instance, but we don’t need that instance for anything, so we simply throw it away:

PHPhotoLibrary.shared().performChanges({
    let t = "TestAlbum"
    typealias Req = PHAssetCollectionChangeRequest
    Req.creationRequestForAssetCollection(withTitle:t)
})

(The class name PHAssetCollectionChangeRequest is very long, so purely as a matter of style and presentation I’ve shortened it with a type alias.)

It may appear, in that code, that we didn’t actually do anything — we asked for a creation request, but we didn’t tell it to do any creating. Nevertheless, that code is sufficient; generating the creation request for a new asset collection in the performChanges function constitutes an instruction to create an asset collection.

That code, however, is rather silly. The album was created asynchronously, so to use it, we need a completion function (see Appendix C). Moreover, we’re left with no reference to the album we created. For that, we need a PHObjectPlaceholder. This minimal PHObject subclass has just one property — localIdentifier, which it inherits from PHObject. But this is enough to permit a reference to the created object to survive into the completion function, where we can do something useful with it:

var ph : PHObjectPlaceholder?
PHPhotoLibrary.shared().performChanges({
    let t = "TestAlbum"
    typealias Req = PHAssetCollectionChangeRequest
    let cr = Req.creationRequestForAssetCollection(withTitle:t)
    ph = cr.placeholderForCreatedAssetCollection
}) { ok, err in
    if ok, let ph = ph {
        self.newAlbumId = ph.localIdentifier
    }
}

Now suppose we subsequently want to populate our newly created album. For example, let’s say we want to make the first asset in the user’s Recently Added smart album a member of our new album as well. No problem! First, we need a reference to the Recently Added album; then we need a reference to its first asset; and finally, we need a reference to our newly created album (whose identifier we’ve already captured as self.newAlbumId). Those are all basic fetch requests, which we can perform in succession — and we then use their results to form the change request:

// find Recently Added smart album
let result = PHAssetCollection.fetchAssetCollections(with: .smartAlbum,
    subtype: .smartAlbumRecentlyAdded, options: nil)
guard let rec = result.firstObject else { return }
// find its first asset
let result2 = PHAsset.fetchAssets(in:rec, options: nil)
guard let asset1 = result2.firstObject else { return }
// find our newly created album by its local id
let result3 = PHAssetCollection.fetchAssetCollections(
    withLocalIdentifiers: [self.newAlbumId!], options: nil)
guard let alb2 = result3.firstObject else { return }
// ready to perform the change request
PHPhotoLibrary.shared().performChanges({
    typealias Req = PHAssetCollectionChangeRequest
    let cr = Req(for: alb2)
    cr?.addAssets([asset1] as NSArray)
})

A PHObjectPlaceholder has another use. Think about this problem: What if we created, say, an asset collection and wanted to add it to something (presumably to a PHCollectionList), all in one batch request? Requesting the creation of an asset collection gives us a PHAssetCollectionChangeRequest instance; you can’t add that to a collection. And the requested PHAssetCollection itself hasn’t been created yet! The solution would be to obtain a PHObjectPlaceholder. Because it is a PHObject, it can be used in the argument of change request methods such as addChildCollections(_:).
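
Here's a sketch of that technique: we create an album and add it to an existing folder in a single batch, by way of the placeholder. The folder, list, is assumed to be a PHCollectionList fetched earlier:

PHPhotoLibrary.shared().performChanges({
    typealias Req = PHAssetCollectionChangeRequest
    let cr = Req.creationRequestForAssetCollection(withTitle: "TestAlbum")
    let ph = cr.placeholderForCreatedAssetCollection
    // the placeholder stands in for the not-yet-created album
    let listReq = PHCollectionListChangeRequest(for: list)
    listReq?.addChildCollections([ph] as NSArray)
})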

A PHAssetChangeRequest is a little different from a collection change request: you can create an asset or delete an asset, but you obviously are not going to add or remove anything from an asset. You can, however, change the asset’s features, such as its creation date or its associated geographical location. By default, creating a PHAsset puts it into the user’s Camera Roll album immediately. I’ll give an example later in this chapter.
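
For example, here's a sketch of changing an asset's metadata; asset is assumed to be a PHAsset fetched earlier, and CLLocation comes from the Core Location framework:

PHPhotoLibrary.shared().performChanges({
    let req = PHAssetChangeRequest(for: asset)
    req.creationDate = Date()
    req.location = CLLocation(latitude: 34.0, longitude: -118.0) // arbitrary coordinates
})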

Being Notified of Changes

When the library is modified, either by your code or by some other means while your app is running, any information you’ve collected about the library — information which you may even be displaying in your interface at that very moment — may become out of date. To cope with this possibility, you should, perhaps very early in the life of your app, register a change observer (adopting the PHPhotoLibraryChangeObserver protocol) with the photo library:

PHPhotoLibrary.shared().register(self)

The outcome is that, whenever the library changes, the observer’s photoLibraryDidChange(_:) method is called, with a PHChange object encapsulating a description of the change. The observer can then probe the PHChange object by calling changeDetails(for:). The parameter can be one of two types:

A PHObject

The parameter is a single PHAsset, PHAssetCollection, or PHCollectionList you’re interested in. The result is a PHObjectChangeDetails object, with properties like objectBeforeChanges, objectAfterChanges, and objectWasDeleted.

A PHFetchResult

The result is a PHFetchResultChangeDetails object, with properties like fetchResultBeforeChanges, fetchResultAfterChanges, removedObjects, insertedObjects, and so on.

The idea is that if you’re hanging on to information in an instance property, you can use what the PHChange object tells you to modify that information (and possibly your interface).

For example, suppose my interface is displaying a list of album names, which I obtained originally through a PHAssetCollection fetch request. And suppose that, at the time that I performed the fetch request, I also retained, as an instance property (self.albums), the fetch result that it returned. Then if my photoLibraryDidChange(_:) method is called, I can update the fetch result and change my interface accordingly:

func photoLibraryDidChange(_ changeInfo: PHChange) {
    if let albums = self.albums,
        let details = changeInfo.changeDetails(for: albums) {
            self.albums = details.fetchResultAfterChanges
            // ... and adjust interface if needed ...
    }
}

Displaying Images

Sooner or later, you’ll probably want to go beyond information about the structure of the photo library and fetch an actual photo or video for display in your app. This is surprisingly tricky, because the process of obtaining an image can be time-consuming: not only may the image data be large, but it may also be stored in the cloud. Thus, there has to be a way in which you can be called back asynchronously with the data.

To obtain an image, you’ll need an image manager, which you’ll get by calling the PHImageManager default class method. You then call a method whose name starts with request, supplying a completion function. For an image, you can ask for a UIImage or its Data; for a video, you can ask for an AVPlayerItem, an AVAsset, or an AVAssetExportSession suitable for exporting the video file to a new location (see Chapter 15). The result comes back to you as a parameter passed into your completion function. Thus, to use the result, you do not proceed after your request call; rather, you proceed inside the completion function.

If you’re asking for a UIImage, information about the image may increase in accuracy and detail in the course of time — with the curious consequence that your completion function may be called multiple times. The idea is to give you some image to display as fast as possible, with better versions of the image arriving later. If you would rather receive just one version of the image, you can, by passing into your call an appropriate PHImageRequestOptions object (as I’ll explain in a moment).

You can specify details of the data-retrieval process by using the parameters of the method that you call. For example, when asking for a UIImage, you supply these parameters:

targetSize:

The size of the desired image. It is a waste of memory to ask for an image larger than you need for actual display, and a larger image may take longer to supply (and a photo, remember, is a very large image). The image retrieval process performs the desired downsizing so that you don’t have to. For the largest possible size, pass PHImageManagerMaximumSize.

contentMode:

A PHImageContentMode, either .aspectFit or .aspectFill, with respect to your targetSize. With .aspectFill, the image retrieval process does any needed cropping so that you don’t have to.

options:

A PHImageRequestOptions object. This is a value class representing a grab-bag of additional tweaks, such as:

  • Do you want the original image or the edited image?

  • Do you want one call to your completion function or many, and if one, do you want a degraded thumbnail (which will arrive quickly) or the best possible quality (which may take some considerable time)?

  • Do you want custom cropping?

  • Do you want the image fetched over the network if necessary, and if so, do you want to install a progress handler?

  • Do you want the image fetched synchronously? If you do, you will get only one call to your completion function — but then you must make your call on a background thread, and the image will arrive on that same background thread (see Chapter 24).
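
By way of illustration, here's a sketch of a PHImageRequestOptions object expressing some of those choices (the original image, a single callback at best quality, and permission to use the network):

let opts = PHImageRequestOptions()
opts.version = .original               // original rather than edited image
opts.deliveryMode = .highQualityFormat // one callback, best possible quality
opts.isNetworkAccessAllowed = true     // fetch from the cloud if necessary
opts.progressHandler = { progress, err, stop, info in
    print("download progress: \(progress)") // e.g. update a progress view
}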

In this simple example, I have a view controller called DataViewController, good for displaying one photo in an image view (self.iv). It has a PHAsset property, self.asset, which is assumed to have been set when this DataViewController instance was created. In viewDidLoad, I call my setUpInterface utility method to populate the interface:

func setUpInterface() {
    guard let asset = self.asset else { return }
    PHImageManager.default().requestImage(for: asset,
        targetSize: CGSize(width:300, height:300), contentMode: .aspectFit,
        options: nil) { im, info in
            if let im = im {
                self.iv.image = im
            }
    }
}

This may result in the image view’s image being set multiple times as the requested image is supplied repeatedly, with its quality improving each time; but there is nothing wrong with that. Using this technique with a UIPageViewController, you can easily write an app that allows the user to browse photos one at a time.

The info parameter in an image request’s completion function is a dictionary whose elements may be useful in a variety of circumstances. Among the possible keys are:

PHImageResultRequestIDKey

Uniquely identifies a single image request for which this result function is being called multiple times. This value is also returned by the original request method call (I didn’t bother to capture it in the previous example). You can also use this identifier to call cancelImageRequest(_:) if it turns out that you don’t need this image after all.

PHImageCancelledKey

Reports that an attempt to cancel an image request with cancelImageRequest(_:) succeeded.

PHImageResultIsInCloudKey

Warns that the image is in the cloud and that your request must be resubmitted with explicit permission to use the network.

If you imagine that your interface is a table view or collection view, you can see why the asynchronous, time-consuming nature of image fetching can be of importance. As the user scrolls, a cell comes into view and you request the corresponding image. But as the user keeps scrolling, that cell goes out of view, and now the requested image, if it hasn’t arrived, is no longer needed, so you cancel the request. (I’ll tackle the same sort of problem with regard to Internet-based images in a table view in Chapter 23.)

There is also a PHImageManager subclass, PHCachingImageManager, that can help you do the opposite: you can prefetch some images before the user scrolls to view them, thus improving response time. For an example that displays photos in a UICollectionView, look at Apple’s SamplePhotosApp sample code (also called “Example app using Photos framework”). It uses the PHImageManager class to fetch individual photos, but for the UICollectionViewCell thumbnails it uses PHCachingImageManager.
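
Here's a sketch of the basic caching calls; assets is assumed to be an array of PHAssets that are about to scroll into view:

let cacher = PHCachingImageManager()
let size = CGSize(width: 100, height: 100)
// warm the cache before the corresponding cells appear
cacher.startCachingImages(for: assets,
    targetSize: size, contentMode: .aspectFill, options: nil)
// later, when these assets are no longer likely to be needed
cacher.stopCachingImages(for: assets,
    targetSize: size, contentMode: .aspectFill, options: nil)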

A PHAsset represents a live photo if its mediaSubtypes includes .photoLive. To ask for the live photo, there’s a PHImageManager requestLivePhoto method, parallel to requestImage; what you get in the completion function is a PHLivePhoto (and see earlier in this chapter on how to display it in your interface).
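
The call is parallel to the requestImage call shown earlier; here's a sketch, where asset is assumed to be a PHAsset whose mediaSubtypes contains .photoLive:

PHImageManager.default().requestLivePhoto(for: asset,
    targetSize: CGSize(width:300, height:300), contentMode: .aspectFit,
    options: nil) { live, info in
        if let live = live {
            self.showLivePhoto(live) // display in a PHLivePhotoView, as earlier
        }
}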

Fetching a video resource is far simpler, and there’s little to say about it. In this example, I fetch a reference to the first video in the user’s photo library and display it in the interface (using an AVPlayerViewController); the only tricky bit is that, unlike an image, I am not guaranteed that the result will arrive on the main thread, so I must step out to the main thread before interacting with the app’s user interface:

func fetchMovie() {
    let opts = PHFetchOptions()
    opts.fetchLimit = 1
    let result = PHAsset.fetchAssets(with: .video, options: opts)
    guard result.count > 0 else {return}
    let asset = result[0]
    PHImageManager.default().requestPlayerItem(
        forVideo: asset, options: nil) { item, info in
        if let item = item {
            DispatchQueue.main.async {
                self.display(item:item)
            }
        }
    }
}
func display(item:AVPlayerItem) {
    let player = AVPlayer(playerItem: item)
    let vc = AVPlayerViewController()
    vc.player = player
    vc.view.frame = self.v.bounds
    self.addChildViewController(vc)
    self.v.addSubview(vc.view)
    vc.didMove(toParentViewController: self)
}

Starting in iOS 9, you can access an asset’s various kinds of data directly through the PHAssetResource and PHAssetResourceManager classes. Conversely, PHAssetChangeRequest has a subclass PHAssetCreationRequest, which allows you to supply the asset’s Data directly (I’ll give an example later in this chapter). For a list of the data types we’re talking about here, see the documentation on the PHAssetResourceType enum.
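
Here's a sketch of listing an asset's backing resources and streaming the data of the first one; asset is assumed to be a PHAsset fetched earlier:

let resources = PHAssetResource.assetResources(for: asset)
for r in resources {
    print(r.type.rawValue, r.originalFilename)
}
if let r = resources.first {
    PHAssetResourceManager.default().requestData(for: r, options: nil,
        dataReceivedHandler: { chunk in
            // the data arrives in chunks; accumulate it or write it out
        }, completionHandler: { err in
            // called once, when all chunks have arrived (or on failure)
    })
}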

Editing Images

Astonishingly, Photo Kit allows you to change an image in the user’s photo library. Why is this even legal? There are two reasons:

  • The user will have to give permission every time your app proposes to modify a photo in the library (and will be shown the proposed modification).

  • Changes to library photos are undoable, because the original image remains in the database along with the changed image that the user sees (and the user can revert to that original at any time).

How to change a photo image

To change a photo image is a three-step process:

  1. You send a PHAsset the requestContentEditingInput(with:completionHandler:) message. Your completion function is called, and is handed a PHContentEditingInput object. This object wraps some image data which you can display to the user (displaySizeImage), along with a pointer to the real image data on disk (fullSizeImageURL).

  2. You create a PHContentEditingOutput object by calling init(contentEditingInput:), handing it the PHContentEditingInput object. This PHContentEditingOutput object has a renderedContentURL property, representing a URL on disk. Your mission is to write the edited photo image data to that URL. Thus, what you’ll typically do is:

    1. Fetch the image data from the PHContentEditingInput object’s fullSizeImageURL.

    2. Process the image.

    3. Write the resulting image data to the PHContentEditingOutput object’s renderedContentURL.

  3. You notify the photo library that it should pick up the edited version of the photo. To do so, you call performChanges(_:completionHandler:) and, inside the changes function, create a PHAssetChangeRequest and set its contentEditingOutput property to the PHContentEditingOutput object. This is when the user will be shown the alert requesting permission to modify this photo; your completion function is then called, with a first parameter of false if the user refuses (or if anything else goes wrong).

Handling the adjustment data

So far, so good. However, if you do only what I have just described, your attempt to modify the photo will fail. The reason is that I have omitted something: before the third step, you must set the PHContentEditingOutput object’s adjustmentData property to a newly instantiated PHAdjustmentData object. The initializer is init(formatIdentifier:formatVersion:data:). What goes into these parameters is completely up to you; the idea, however, is to send a message to your future self in case you are called upon to edit the same photo again on some later occasion. In that message, you describe to yourself how you edited the photo on this occasion.

Your handling of the adjustment data works in three steps, interwoven with the three steps I already outlined:

  1. When you call the requestContentEditingInput(with:completionHandler:) method, the options: argument should be a PHContentEditingInputRequestOptions object. You are to create this object and set its canHandleAdjustmentData property to a function that takes a PHAdjustmentData and returns a Bool. This Bool will be based mostly on whether you recognize this PHAdjustmentData as yours — typically because you recognize its formatIdentifier. That determines what image you’ll get when you receive your PHContentEditingInput object:

    Your canHandleAdjustmentData function returns false

    The image you’ll be editing is the edited image displayed in the Photos app.

    Your canHandleAdjustmentData function returns true

    The image you’ll be editing is the original image, stripped of your edits. This is because, by returning true, you are asserting that you can recreate the content of your edits based on what’s in the PHAdjustmentData’s data.

  2. When your completion function is called and you receive your PHContentEditingInput object, it has (you guessed it) an adjustmentData property, which is an Optional wrapping a PHAdjustmentData object. If this isn’t nil, and if you edited this image previously, its data is the data you put in the last time you edited this image, and you are expected to extract it and use it to recreate the edited state of the image.

  3. When you prepare the PHContentEditingOutput, you give it a new PHAdjustmentData object, as I already explained. If you are performing edits, the data of this new PHAdjustmentData object can be a summary of the edited state of the photo from your point of view — and so the whole cycle can start again if the same photo is to be edited again later.

Example: Before editing

An actual implementation is quite straightforward and almost pure boilerplate. The details will vary only in regard to the actual editing of the photo and the particular form of the data by which you’ll summarize that editing — so, in constructing an example, I’ll keep that part very simple. Recall, from Chapter 2 (“CIFilter and CIImage”), my example of a custom “vignette” CIFilter called MyVignetteFilter. I’ll provide an interface whereby the user can apply that filter to a photo. My interface will include a slider that allows the user to set the degree of vignetting that should be applied (MyVignetteFilter’s inputPercentage). Moreover, my interface will include a button that lets the user remove all vignetting from the photo, even if that vignetting was applied in a previous editing session.

First, I’ll plan the structure of the PHAdjustmentData:

The formatIdentifier

This can be any unique string; I’ll use "com.neuburg.matt.PhotoKitImages.vignette", a constant that I’ll store in a property (self.myidentifier).

The formatVersion

This is likewise arbitrary; I’ll use "1.0".

The data

This will express the only thing about my editing that is adjustable — the inputPercentage. The data will wrap an NSNumber which itself wraps a Double whose value is the inputPercentage.

As editing begins, I construct the PHContentEditingInputRequestOptions object that expresses whether a photo’s most recent editing belongs to me. I then obtain the photo that is to be edited (a PHAsset) and ask for the PHContentEditingInput object:

let options = PHContentEditingInputRequestOptions()
options.canHandleAdjustmentData = { adjustmentData in
    return adjustmentData.formatIdentifier == self.myidentifier
}
var id : PHContentEditingInputRequestID = 0
id = self.asset.requestContentEditingInput(with: options) { input, info in
    // ....
}

I have omitted the content of the completion function. What should we do here? First, I receive my PHContentEditingInput object as a parameter (input). I’m going to need this object later when editing ends, so I immediately store it in a property. I then unwrap its adjustmentData, extract the data, and construct the editing interface; in this case, that happens to be a presented view controller, but the details are irrelevant and are omitted here:

guard let input = input else {
    self.asset.cancelContentEditingInputRequest(id)
    return
}
self.input = input
let im = input.displaySizeImage! // show this to user during editing
let adj : PHAdjustmentData? = input.adjustmentData
if let adj = adj,
    adj.formatIdentifier == self.myidentifier {
        if let vigAmount =
            NSKeyedUnarchiver.unarchiveObject(with: adj.data) as? Double {
                // ... store vigAmount ...
        }
}
// ... present editing interface, passing it the vigAmount ...

The important thing about that code is how we deal with the adjustmentData and its data. The question is whether we have data, and whether we recognize this as our data from some previous edit on this image. This will affect how our editing interface needs to behave. There are two possibilities:

It’s our data

If we were able to extract a vigAmount from the adjustmentData, then the displaySizeImage is the original, unvignetted image. Meanwhile, our editing interface itself initially applies the vigAmount of vignetting to this image — thus reconstructing the vignetted state of the photo as shown in the Photos app, while allowing the user to change the amount of vignetting, or even to remove all vignetting entirely.

It’s not our data

On the other hand, if we weren’t able to extract a vigAmount from the adjustmentData, then there is nothing to reconstruct; the displaySizeImage is just the photo image from the Photos app, and our editing interface will apply vignetting to it directly.

Example: After editing

Let’s skip ahead now to the point where the user’s interaction with our editing interface comes to an end. If the user cancelled, that’s all; the user doesn’t want to modify the photo after all. Otherwise, the user either asked to apply a certain amount of vignetting (vignette) or asked to remove all vignetting; in the latter case, I use an arbitrary vignette value of -1 as a signal.

The time has now come to do what the user is asking us to do. But we do not want to do this to the PHContentEditingInput’s displaySizeImage. That was merely the image we displayed in our editing interface; we showed the user the effect of applying vignetting to this image, but that was merely for illustrative purposes. Now, however, we must apply this amount of vignetting to the real photo image, which has been sitting waiting for us all this time, untouched, at the PHContentEditingInput’s fullSizeImageURL. This is a much bigger image, which will take significant time to load, to alter, and to save — which is why we haven’t been working with it live in the editing interface.

So, depending on the value of vignette requested by the user, I either run the input image from the fullSizeImageURL through my vignette filter or I don’t; either way, I write a JPEG to the PHContentEditingOutput’s renderedContentURL:

let inurl = self.input.fullSizeImageURL!
let inorient = self.input.fullSizeImageOrientation
let output = PHContentEditingOutput(contentEditingInput:self.input)
let outurl = output.renderedContentURL
var ci = CIImage(contentsOf: inurl)!.applyingOrientation(inorient)
let space = ci.colorSpace!
if vignette >= 0.0 {
    let vig = MyVignetteFilter()
    vig.setValue(ci, forKey: "inputImage")
    vig.setValue(vignette, forKey: "inputPercentage")
    ci = vig.outputImage!
}
try! CIContext().writeJPEGRepresentation(
    of: ci, to: outurl, colorSpace: space)
Note

The CIContext method called in the last line (new in iOS 10, and apparently provided exactly for this situation) is time-consuming. The preceding code should therefore probably be run on a background thread, with a UIActivityIndicatorView or similar to let the user know that work is being done.

But we are not quite done. It is crucial that we set the PHContentEditingOutput’s adjustmentData, and the goal here is to send a message to myself, in case I am asked later to edit this same image again, stating what amount of vignetting is already applied to the image. That amount is represented by vignette — so that’s the value I store in the adjustmentData:

let data = NSKeyedArchiver.archivedData(withRootObject: vignette)
output.adjustmentData = PHAdjustmentData(
    formatIdentifier: self.myidentifier, formatVersion: "1.0", data: data)

We conclude by telling the photo library to retrieve the edited image. This will cause the alert to appear, asking the user whether to allow us to modify this photo. If the user taps Modify, the modification is made, and if we are displaying the image, we should get onto the main thread and redisplay it:

PHPhotoLibrary.shared().performChanges({
    typealias Req = PHAssetChangeRequest
    let req = Req(for: self.asset)
    req.contentEditingOutput = output // triggers alert
}) { ok, err in
    if ok {
        // if we are displaying image, redisplay it — on main thread
    } else {
        // user refused to allow modification, do nothing
    }
}
Tip

You can also edit a live photo, using a PHLivePhotoEditingContext: you are handed each frame of the video as a CIImage, making it easy, for example, to apply a CIFilter. For a demonstration, see Apple’s Photo Edit sample app (also known as Sample Photo Editing Extension).

Photo Editing Extension

A photo editing extension is photo-modifying code supplied by your app that is effectively injected into the Photos app. When the user edits a photo from within the Photos app, your extension appears as an option and can modify the photo being edited.

To make a photo editing extension, create a new target in your app, specifying iOS → Application Extension → Photo Editing Extension. The template supplies a storyboard containing one scene, along with the code file for a corresponding UIViewController subclass. This file imports not only the Photos framework but also the Photos UI framework, which supplies the PHContentEditingController protocol, to which the view controller conforms. This protocol specifies the methods through which the runtime will communicate with your extension’s code.

A photo editing extension works almost exactly the same way as modifying photo library assets in general, as I described in the preceding section. The chief differences are:

  • You don’t put a Done or a Cancel button into your editing interface. The Photos app will wrap your editing interface in its own interface, which supplies them when it presents your view.

  • You must situate the pieces of your code in such a way that those pieces respond to the calls that will come through the PHContentEditingController methods.

The PHContentEditingController methods are as follows:

canHandle(_:)

You will not be instantiating PHContentEditingInput; the runtime will do it for you. Therefore, instead of configuring a PHContentEditingInputRequestOptions object and setting its canHandleAdjustmentData, you implement this method; you’ll receive the PHAdjustmentData and return a Bool.

startContentEditing(with:placeholderImage:)

The runtime has obtained the PHContentEditingInput object for you. Now it supplies that object to you, along with a very temporary initial version of the image to be displayed in your interface; you are expected to replace this with the PHContentEditingInput object’s displaySizeImage. Just as in the previous section’s code, you should retain the PHContentEditingInput object in a property, as you will need it again later.

cancelContentEditing

The user tapped Cancel. You may well have nothing to do here.

finishContentEditing(completionHandler:)

The user tapped Done. In your implementation, you get onto a background thread (the template configures this for you) and do exactly the same thing you would do if this were not a photo editing extension — get the PHContentEditingOutput object and set its adjustmentData; get the photo from the PHContentEditingInput object’s fullSizeImageURL, modify it, and save the modified image as a full-quality JPEG at the PHContentEditingOutput object’s renderedContentURL. When you’re done, don’t notify the PHPhotoLibrary; instead, call the completionHandler that arrived as a parameter, handing it the PHContentEditingOutput object.

During the time-consuming part of this method, the Photos app puts up a UIActivityIndicatorView, just as I suggested you might want to do in your own app. When you call the completionHandler, there is no alert asking the user to confirm the modification of the photo; the user is already in the Photos app and has explicitly asked to edit the photo, so no confirmation is needed — and moreover, the user will have one more chance to remove all changes made in the editing interface.

Using the Camera

Use of the camera requires user authorization. You’ll use the AVCaptureDevice class for this (part of the AV Foundation framework; import AVFoundation). To learn what the current authorization status is, call the class method authorizationStatus(forMediaType:). To ask the system to put up the authorization request alert if the status is .notDetermined, call the class method requestAccess(forMediaType:completionHandler:). The media type will be AVMediaTypeVideo; this embraces both stills and movies. The Info.plist must contain some text that the system authorization request alert can use to explain why your app wants camera use; the relevant key is “Privacy — Camera Usage Description” (NSCameraUsageDescription).

If your app will let the user capture movies (as opposed to stills), you will also need to obtain permission from the user to access the microphone. The same methods apply, but with argument AVMediaTypeAudio. You should modify the body of the authorization alert by setting the “Privacy — Microphone Usage Description” key (NSMicrophoneUsageDescription) in your app’s Info.plist.
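
Here's a sketch of the camera authorization check, parallel to the photo library check earlier in this chapter; the microphone check is identical except for the media type. The method name checkCameraAccess is my own:

func checkCameraAccess(andThen f: (()->())? = nil) {
    let status = AVCaptureDevice.authorizationStatus(forMediaType: AVMediaTypeVideo)
    switch status {
    case .authorized:
        f?()
    case .notDetermined:
        AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
            if granted {
                DispatchQueue.main.async {
                    f?() // the handler may arrive on a background thread
                }
            }
        }
    default:
        break // .restricted or .denied; could urge the user to visit Settings
    }
}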

See “Music Library Authorization” for detailed consideration of authorization strategy and testing.

Warning

Use of the camera is greatly curtailed, and is interruptible, under iPad multitasking. Watch WWDC 2015 video 211 for details.

Capture with UIImagePickerController

The simplest way to prompt the user to take a photo or video is to use our old friend UIImagePickerController, which provides an interface that is effectively a limited subset of the Camera app.

The procedure is astonishingly similar to what you do when you use UIImagePickerController to browse the photo library, described earlier in this chapter. First check isSourceTypeAvailable(_:) for .camera; it will be false if the user’s device has no camera or the camera is unavailable. If it is true, call availableMediaTypes(for:.camera) to learn whether the user can take a still photo (kUTTypeImage), a video (kUTTypeMovie), or both. Now instantiate UIImagePickerController, set its source type to .camera, and set its mediaTypes in accordance with which types you just learned are available; if your setting is an array of both kUTTypeImage and kUTTypeMovie, the user will see a Camera-like interface allowing a choice of either one. Finally, set a delegate (adopting UINavigationControllerDelegate and UIImagePickerControllerDelegate), and present the picker:

let src = UIImagePickerControllerSourceType.camera
guard UIImagePickerController.isSourceTypeAvailable(src)
    else {return}
guard let arr = UIImagePickerController.availableMediaTypes(for:src)
    else {return}
let picker = UIImagePickerController()
picker.sourceType = src
picker.mediaTypes = arr
picker.delegate = self
self.present(picker, animated: true)

For video, you can also specify the videoQuality and videoMaximumDuration. Moreover, these additional properties and class methods allow you to discover the camera capabilities:

isCameraDeviceAvailable:

Checks to see whether the front or rear camera is available, using one of these values as argument (UIImagePickerControllerCameraDevice):

  • .front

  • .rear

cameraDevice

Lets you learn and set which camera is being used.

availableCaptureModes(for:)

Checks whether the given camera can capture still images, video, or both. You specify the front or rear camera, and an array of integers is returned. Possible modes are (UIImagePickerControllerCameraCaptureMode):

  • .photo

  • .video

cameraCaptureMode

Lets you learn and set the capture mode (still or video).

isFlashAvailable(for:)

Checks whether flash is available.

cameraFlashMode

Lets you learn and set the flash mode (or, for a movie, toggles the LED “torch”). Your choices are (UIImagePickerControllerCameraFlashMode):

  • .off

  • .auto

  • .on

When the view controller’s view appears, the user will see the interface for taking a picture, familiar from the Camera app, possibly including flash options, camera selection button, photo/video option (if your mediaTypes setting allows both), and Cancel and shutter buttons. If the user takes a picture, the presented view offers an opportunity to use the picture or to retake it.

Allowing the user to edit the captured image or movie (allowsEditing), and handling the outcome with the delegate messages, is the same as I described earlier for dealing with an image or movie selected from the photo library, with these additional points regarding the info dictionary delivered to the delegate:

  • There won’t be any UIImagePickerControllerReferenceURL key, because the image isn’t in the photo library.

  • A still image might report a UIImagePickerControllerMediaMetadata key containing the metadata for the photo.

The photo library was not involved in the process of media capture, so no user permission to access the photo library is needed; of course, if you now propose to save the media into the photo library, you will need permission. Suppose, for example, that the user takes a still image, and you now want to save it into the user’s Camera Roll album; this is as simple as can be — creating the PHAsset is sufficient:

func imagePickerController(_ picker: UIImagePickerController,
    didFinishPickingMediaWithInfo info: [String : Any]) {
        var im = info[UIImagePickerControllerOriginalImage] as? UIImage
        if let ed = info[UIImagePickerControllerEditedImage] as? UIImage {
            im = ed
        }
        self.dismiss(animated:true) {
            let mediatype = info[UIImagePickerControllerMediaType]
            guard let type = mediatype as? NSString else {return}
            switch type {
            case kUTTypeImage:
                if im != nil {
                    let lib = PHPhotoLibrary.shared()
                    lib.performChanges({
                        typealias Req = PHAssetChangeRequest
                        let req = Req.creationRequestForAsset(from: im!)
                        // apply metadata info here, as desired
                    })
                }
            default:break
            }
        }
}

You can customize the UIImagePickerController interface. If you need to do that, you should probably consider dispensing with UIImagePickerController altogether and designing your own image capture interface from scratch, based around AV Foundation and AVCaptureSession, which I’ll introduce in the next section. Still, it may be that a modified UIImagePickerController is all you need.

In the image capture interface, you can hide the standard controls by setting showsCameraControls to false, replacing them with your own overlay view, which you supply as the value of the cameraOverlayView. In this case, you’re probably going to want some means in your overlay view to allow the user to take a picture! You can do that through these methods:

  • takePicture

  • startVideoCapture

  • stopVideoCapture
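
For instance, here's a sketch of a bare-bones overlay consisting of just a shutter button; the action method name doSnap is my own, and self.picker is assumed to be a property retaining the presented picker:

picker.showsCameraControls = false
let overlay = UIView(frame: picker.view.bounds)
let b = UIButton(type: .system)
b.setTitle("Snap", for: .normal)
b.sizeToFit()
b.center = CGPoint(x: overlay.bounds.midX, y: overlay.bounds.maxY - 60)
b.addTarget(self, action: #selector(doSnap), for: .touchUpInside)
overlay.addSubview(b)
picker.cameraOverlayView = overlay

The button's action method simply tells the picker to take the picture:

func doSnap(_ sender: Any) {
    self.picker.takePicture()
}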

The UIImagePickerController is a UINavigationController, so if you need additional interface — for example, to let the user vet the captured picture before dismissing the controller — you can push it onto the navigation interface.

Capture with AV Foundation

Instead of using UIImagePickerController, you can control the camera and capture images directly using the AV Foundation framework (Chapter 15). You get no help with interface, but you get vastly more detailed control than UIImagePickerController can give you. For example, for stills, you can control focus and exposure directly and independently, and for video, you can determine the quality, size, and frame rate of the resulting movie.

To understand how AV Foundation classes are used for image capture, imagine how the Camera app works. When you are running the Camera app, you have, at all times, a “window on the world” — the screen is showing you what the camera sees. At some point, you might tap the button to take a still image or start taking a video; now what the camera sees also goes into a file.

Think of all that as being controlled by an engine. This engine, the heart of all AV Foundation capture operations, is an AVCaptureSession object. It has inputs (such as a camera) and outputs (such as a file). It also has an associated layer in your interface. When you start the engine running, by calling startRunning, data flows from the input through the engine; that is how you get your “window on the world,” displaying on the screen what the camera sees.

As a rock-bottom example, let’s implement just that much of the engine. We need a special CALayer that will display what the camera is seeing — namely, an AVCaptureVideoPreviewLayer. This layer is not really an AVCaptureSession output; rather, the layer receives its imagery by association with the AVCaptureSession. Our capture session’s input is the default camera. We have no intention, as yet, of capturing anything to a file, so no output is needed:

self.sess = AVCaptureSession()
let cam = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
guard let input = try? AVCaptureDeviceInput(device:cam) else {return}
self.sess.addInput(input)
let lay = AVCaptureVideoPreviewLayer(session:self.sess)!
lay.frame = // ... some reasonable frame ...
self.view.layer.addSublayer(lay)
self.sess.startRunning()

Presto! Our interface now displays a “window on the world,” showing what the camera sees.

Suppose now that our intention is that, while the engine is running and the “window on the world” is showing, the user is to be allowed to tap a button that will capture a still photo. Now we do need an output for our AVCaptureSession. New in iOS 10, this will be an AVCapturePhotoOutput instance. We should also configure the session with a preset to match our intended use of it; in this case, that will be AVCaptureSessionPresetPhoto.

So let’s modify the preceding code to give the session an output and a preset. We can do this directly before we start the session running. We can also do it while the session is already running (and in general, if you want to reconfigure a running session, doing so while it is running is far more efficient than stopping the session and starting it again), but then we must wrap our configuration changes in beginConfiguration and commitConfiguration:

self.sess.beginConfiguration()
let preset = AVCaptureSessionPresetPhoto
guard self.sess.canSetSessionPreset(preset)
    else {return}
self.sess.sessionPreset = preset
let output = AVCapturePhotoOutput()
guard self.sess.canAddOutput(output)
    else {return}
self.sess.addOutput(output)
self.sess.commitConfiguration()

The session is now running and is ready to capture a photo. The user taps the button that asks to capture a photo, and we respond by telling the session’s photo output to capturePhoto(with:delegate:). The first parameter is an AVCapturePhotoSettings object. It happens that for a standard JPEG photo a default AVCapturePhotoSettings instance will do, but to make things more interesting I’ll specify explicitly that I want the camera to use automatic flash and automatic image stabilization:

let settings = AVCapturePhotoSettings()
settings.flashMode = .auto
settings.isAutoStillImageStabilizationEnabled = true
guard let output = self.sess.outputs[0] as? AVCapturePhotoOutput
    else {return}
output.capturePhoto(with: settings, delegate: self)

In that code, I specified self as the delegate (an AVCapturePhotoCaptureDelegate adopter). Functioning as the delegate, we will now receive a sequence of events. The exact sequence depends on what sort of capture we’re doing; in this case, it will be:

  1. capture(_:willBeginCaptureForResolvedSettings:)

  2. capture(_:willCapturePhotoForResolvedSettings:)

  3. capture(_:didCapturePhotoForResolvedSettings:)

  4. capture(_:didFinishProcessingPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error:)

  5. capture(_:didFinishCaptureForResolvedSettings:)

The resolvedSettings: throughout are the settings actually used during the capture; for example, we could find out whether flash was actually used. The delegate event of interest to our example is obviously the fourth one. This is where we receive the photo! It will arrive in the second parameter as a CMSampleBuffer. We can turn this into a Data object corresponding to a JPEG image by calling this photo output class method:

  • jpegPhotoDataRepresentation(forJPEGSampleBuffer:previewPhotoSampleBuffer:)

We can then do what we like with that Data object: we might write it to disk as a file, or store it as a PHAsset in the photo library. We could also transform it into a UIImage for display in our interface. But if we’re going to display the image in the interface, there’s a better way: when we configure the AVCapturePhotoSettings, we ask for a preview image, which then arrives as the previewPhotoSampleBuffer in the fourth delegate method. The reason this is better is that it’s a lot more efficient for AV Foundation to create an uncompressed thumbnail image of the correct size than for us to try to display or downsize a huge photo image.

Here’s how we might ask for the preview image:

let pbpf = settings.availablePreviewPhotoPixelFormatTypes[0]
let len = // desired maximum dimension
settings.previewPhotoFormat = [
    kCVPixelBufferPixelFormatTypeKey as String : pbpf,
    kCVPixelBufferWidthKey as String : len,
    kCVPixelBufferHeightKey as String : len
]

In this example, we implement the fourth delegate method to save the actual image as a PHAsset in the user’s photo library, and we also store the preview image as a property, for display in our interface later:

func capture(_ output: AVCapturePhotoOutput,
    didFinishProcessingPhotoSampleBuffer sampleBuffer: CMSampleBuffer?,
    previewPhotoSampleBuffer: CMSampleBuffer?,
    resolvedSettings: AVCaptureResolvedPhotoSettings,
    bracketSettings: AVCaptureBracketedStillImageSettings?,
    error: Error?) {
        if let prev = previewPhotoSampleBuffer {
            if let buff = CMSampleBufferGetImageBuffer(prev) {
                let cim = CIImage(cvPixelBuffer: buff)
                self.previewImage = UIImage(ciImage: cim)
            }
        }
        if let buff = sampleBuffer {
            if let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(
                forJPEGSampleBuffer: buff,
                previewPhotoSampleBuffer: previewPhotoSampleBuffer) {
                    let lib = PHPhotoLibrary.shared()
                    lib.performChanges({
                        let req = PHAssetCreationRequest.forAsset()
                        req.addResource(with: .photo,
                            data: data, options: nil)
                    })
            }
        }
}

Image capture with AV Foundation is a huge subject, and our example of a simple photo capture has barely scratched the surface. AVCaptureVideoPreviewLayer provides methods for converting between layer coordinates and capture device coordinates; without such methods, this can be a very difficult problem to solve. You can scan bar codes, shoot video at 60 frames per second (on some devices), and more. You can turn on the LED “torch” by setting the back camera’s torchMode to AVCaptureTorchModeOn, even if no AVCaptureSession is running. You get direct hardware-level control over the camera focus, manual exposure, and white balance. You can capture bracketed images; new in iOS 10, you can capture live images on some devices, and you can capture RAW images on some devices. There are very good WWDC videos about all this, stretching back over the past several years; there’s the “Media Capture” chapter of the AV Foundation Programming Guide; and the AVCam and AVCamManual sample code examples are absolutely superb, demonstrating how to deal with tricky issues such as orientation that would otherwise be very difficult to figure out.
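
By way of one small illustration from that list, here's a sketch of toggling the torch directly, with no capture session involved; lockForConfiguration can throw, so it's wrapped in do/catch:

let back = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
if let back = back, back.hasTorch {
    do {
        try back.lockForConfiguration()
        back.torchMode = (back.torchMode == .on) ? .off : .on // .on is AVCaptureTorchModeOn
        back.unlockForConfiguration()
    } catch {
        print("could not configure torch: \(error)")
    }
}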
