Chapter 24. Threads

A thread is a subprocess of your app that can execute even while other subprocesses are also executing. Such simultaneous execution is called concurrency. The iOS frameworks use threads all the time; if they didn’t, your app would be less responsive to the user — perhaps even completely unresponsive. For the most part, however, they do this behind the scenes on your behalf; you don’t have to worry about threads because the frameworks are worrying about them for you.

For example, suppose your app is downloading something from the network (Chapter 23). This download doesn’t happen all by itself; somewhere, someone is running code that interacts with the network and obtains data. Yet none of that interferes with your code, or prevents the user from tapping and swiping things in your interface. The networking code runs “in the background.” That’s concurrency in action.

This chapter discusses concurrency that involves your code in deliberate use of background threading. It would have been nice to dispense with this topic altogether. Background threads can be tricky and are always potentially dangerous, and should be avoided if possible. However, sometimes you can’t avoid them. So this chapter introduces threads. But beware: background threads entail complications and subtle pitfalls, and can make your code hard to debug. There is much more to threading, and especially to making your threaded code safe, than this chapter can possibly touch on. For detailed information about the topics introduced in this chapter, read Apple’s Concurrency Programming Guide and Threading Programming Guide.

Main Thread

Distinguish between the main thread and all other threads. There is only one main thread; other threads are background threads. All your code must run on some thread, but you are not usually conscious of this fact, because that thread is the main thread. The reason your code runs on the main thread is that the Cocoa frameworks ensure that this is so. How? Well, the only reason your code ever runs is that Cocoa calls into it. When Cocoa does this, it is generally careful to call your code on the main thread. Whenever code calls a method, that method runs on the same thread as the code that called it. Thus, your code runs on the main thread.

The main thread is the interface thread. This means that the main thread is the meeting-place between you and your user. When the user interacts with the interface, those interactions are reported as events — on the main thread. When your code interacts with the interface, it must do so on the main thread. Of course that will usually happen automatically, because your code normally runs on the main thread. But when you are involved with background threads, you must be careful.

So pretend now that I’m banging the table and shouting: If your code touches the interface, it must do so on the main thread. Don’t fetch any interface-related values on a background thread. Don’t set any interface-related values on a background thread. Whenever you use background threads, there is a chance you might touch the interface on a background thread. Don’t!

Warning

Touching the interface on a background thread is a very common beginner mistake. Don’t do it. A typical sign of trouble in this regard is an unaccountable delay of several seconds. In some cases, the console will also help with a warning.
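The remedy, when you find yourself in a completion handler or delegate method that might arrive on a background thread, is to step out to the main thread before touching the interface. Here's a minimal sketch using Grand Central Dispatch (discussed later in this chapter); the fetchData call and the statusLabel property are hypothetical:

fetchData { result in
    // this completion handler might arrive on a background thread...
    DispatchQueue.main.async {
        // ...so step out to the main thread before touching the interface
        self.statusLabel.text = "Finished"
    }
}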

Since you and the user are both using the main thread, the main thread is a very busy place. Picture how things proceed in your app:

  1. An event arrives — on the main thread. The user has tapped a button, for example, and this is reported to your app as a UIEvent and to the button through the touch delivery mechanism (Chapter 5) — on the main thread.

  2. The control event causes your code (the button’s action method) to be called — on the main thread. Your code now runs — on the main thread. While your code runs, nothing else can happen on the main thread. Your code might perform some changes in the interface; this is safe, because your code is running on the main thread.

  3. Your code finishes. The main thread’s run loop is now free to report more events, and the user is free to interact with the interface once again.

The bottleneck here is obviously step 2, the running of your code. Your code runs on the main thread. That means the main thread can’t do anything else while your code is running. No events can arrive while your code is running. The user can’t interact with the interface while your code is running. But this is usually no problem, because:

  • Your code executes really fast. It’s true that the user can’t interact with the interface while your code runs, but this is such a tiny interval of time that the user will probably never even notice.

  • Your code, as it runs, blocks the user from interacting with the interface. As long as your code finishes very quickly, that’s actually a good thing. Your code, in response to what the user does, might update the interface; it would be insane if the user could do something else in the interface while you’re in the middle of updating it.

The iOS frameworks, on the other hand, frequently operate on secondary threads. This usually doesn’t affect you, because the frameworks usually talk to your code on the main thread. You have seen many examples of this in the preceding chapters:

  • During an animation (Chapter 4), the interface remains responsive to the user, and it is possible for your code to run. The Core Animation framework is running the animation and updating the presentation layer on a background thread. But your delegate methods or completion functions are called on the main thread.

  • A web view’s fetching and loading of its content is asynchronous (Chapter 11); that means the work is done in a background thread. But your delegate methods are called on the main thread.

  • Sounds are played asynchronously (Chapters 14 and 16). But your delegate methods are called on the main thread. Similarly, loading, preparation, and playing of movies happens asynchronously (Chapter 15). But your delegate methods are called on the main thread.

  • Saving a movie file takes time (Chapters 15 and 17). So the saving takes place on a background thread. Similarly, UIDocument saves and reads on a background thread (Chapter 22). But your delegate methods or completion functions are called on the main thread.

Thus, you can (and should) usually ignore the existence of background threads and just keep plugging away on the main thread.

Nevertheless, there are two kinds of situation in which your code will need to be explicitly aware of background threads:

Your code is called back, but not on the main thread

Some frameworks explicitly inform you in their documentation that callbacks are not guaranteed to take place on the main thread. For example, the documentation on CATiledLayer (Chapter 7) warns that draw(_:in:) is called on a background thread. By implication, our draw(_:) code, triggered by CATiledLayer to update tiles, is running on a background thread. (Fortunately, drawing into the current context is thread-safe.)

Similarly, the documentation on AV Foundation (Chapters 15 and 17) warns that its completion functions and notifications can arrive on a background thread. So if you intend to update the user interface, or use a value that might also be used by your main-thread code, you’ll need to be thread-conscious.

Your code takes significant time

If your code takes significant time to run, you might need to run that code on a background thread, rather than letting it block the main thread and prevent anything else from happening there. For example:

During startup and other app state transitions

You want your app to launch as quickly as possible. In Chapter 22, I called URL(forUbiquityContainerIdentifier:) during app launch. The documentation told me to call this method on a background thread, because it can take some time to return; we don’t want to block the main thread waiting for it, because the app is trying to launch on the main thread, and the user won’t see our interface until the launch process is over.

Similarly, when your app is in the process of being suspended into the background, or resumed from the background, your app should not occupy the main thread for too long; it must act quickly and get out of the way.

When the user can see or interact with the app

In a table view data source, tableView(_:cellForRowAt:) needs to be fast (as I warned in Chapter 8) because the user needs to see this cell now; your code must not perform any time-consuming work here, but must return a configured cell immediately. Otherwise, the user won’t be able to scroll the table view; you’ll be freezing the interface because you are blocking the main thread.

Similarly, in Chapter 19, I called enumerateEvents(matching:using:) on a background thread, because it can take some time to run. If I were to call this method on the main thread, then when the user taps the button that triggers this call, the button might stay highlighted for a significant amount of time, during which the interface will be completely frozen. I would be perceptibly blocking the main thread.

Warning

Moving time-consuming code off the main thread, so that the main thread is not blocked, isn’t just a matter of aesthetics or politeness: the system “watchdog” will summarily kill your app if it discovers that the main thread is blocked for too long.

Why Threading Is Hard

The one certain thing about computer code is that it just clunks along the path of execution, one statement at a time. Lines of code, in effect, are performed in the order in which they appear. With threading, that certainty goes right out the window. If you have code that can be performed on a background thread, then you don’t know when your code will be performed. Your code is now concurrent. This means that any line of your background-thread code could be interleaved between any two lines of your main-thread code. Indeed, under certain circumstances, your background-thread code can be called multiple times on multiple background threads, meaning that any line of your background-thread code could be interleaved between any two lines of itself. (I’ll discuss later in this chapter a situation in which this very thing does happen.)

Now, you might say: So what? That’s just a matter of timing, isn’t it? The threads are still separate things, running code with separate paths of execution. But you’d be wrong. The reason: shared data.

There are variables in your app, such as instance properties, that persist and can be accessed from multiple places. Background threads mean that such variables can be accessed from multiple places at the same time. That is a really scary thought. Suppose two concurrent threads were to get hold of the same variable and change it. Who knows what horrors might result? Objects in general have state. That state resides in their instance properties. If multiple threads are permitted to access your objects and their instance properties, your objects can be put into an indeterminate or nonsensical state.

This problem cannot be solved by simple logic. For example, suppose you try to make access to a variable safe with a condition, as in this pseudocode:

if no other thread is touching this variable {
    ... do something to the variable ...
}

Such logic is utterly specious. Suppose the condition succeeds: no other thread is touching this variable. But between the time when that condition is evaluated and the time when the next line executes and you start to do something to the variable, another thread can still come along and start touching the variable!

It is possible to request assistance at a deeper level to ensure that a section of code is not run by two threads simultaneously. For example, you can implement a lock around a section of code. But locks generate an entirely new level of potential pitfalls. In general, a lock is an invitation to forget to use the lock, or to forget to remove the lock after you’ve set it. And threads can end up contending for a lock in a way that permits neither thread to proceed.
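To make the idea of a lock concrete, here's a minimal sketch using Foundation's NSLock; the Counter class is purely illustrative:

import Foundation

class Counter {
    private let lock = NSLock()
    private var value = 0
    func increment() {
        self.lock.lock()
        defer { self.lock.unlock() } // defer guards against forgetting to unlock
        self.value += 1
    }
}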

Another problem has to do with thread lifetimes. The lifetime of a thread is independent of the lifetimes of other objects in your app. When an object is about to go out of existence and its deinit has been called and executed, you are supposed to be guaranteed that none of your code in that object will ever run again. But a thread might still be running, and might try to talk to your object, even after your object has supposedly gone out of existence. This can result in a crash, if you’re lucky; if you’re not lucky, your object might become a kind of zombie.

Not only is threaded code hard to get right; it’s also hard to test and hard to debug. It introduces indeterminacy, so you can easily make a mistake that never appears in your testing, but that does appear for some user. The real danger is that the user’s experience will consist only of distant consequences of your mistake, long after the point where you made it, making the true cause of the problem extraordinarily difficult to track down.

Perhaps you think I’m trying to scare you away from using threads. You’re right! For an excellent (and suitably frightening) account of some of the dangers and considerations that threading involves, see Apple’s technical note Simple and Reliable Threading with NSOperation. If terms like race condition and deadlock don’t strike fear into your heart, look them up on Wikipedia.

Tip

Xcode’s Debug navigator distinguishes threads; you can even see pending calls and learn when a call was enqueued. Also, when you call NSLog, the output in the console displays a number (in square brackets, after the colon) identifying the thread on which it was called — a simple but unbelievably helpful way of distinguishing threads.
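You can also ask in code what thread you're on; here's a minimal sketch:

print(Thread.isMainThread) // true if this line runs on the main thread
print(Thread.current)      // a description of the current thread
NSLog("where am I?")       // the console output includes a thread-identifying number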

Blocking the Main Thread

To illustrate making your code multithreaded, I need some code that is worth making multithreaded. I’ll use as my example an app that draws the Mandelbrot set. (This code is adapted from a small open source project I downloaded from the Internet.) All it does is draw the basic Mandelbrot set in black and white, but that’s a sufficiently elaborate calculation to introduce a significant delay, especially on an older, slower device. The idea is then to see how we can safely get that delay off the main thread.

The app contains a UIView subclass, MyMandelbrotView, which has one property, a CGContext called bitmapContext. Here’s MyMandelbrotView:

let MANDELBROT_STEPS = 1000 // determines how long the calculation takes
var bitmapContext: CGContext!
// jumping-off point: draw the Mandelbrot set
func drawThatPuppy () {
    self.makeBitmapContext(size: self.bounds.size)
    let center = CGPoint(x: self.bounds.midX, y: self.bounds.midY)
    self.draw(center: center, zoom:1)
    self.setNeedsDisplay()
}
// create bitmap context
func makeBitmapContext(size:CGSize) {
    var bitmapBytesPerRow = Int(size.width * 4)
    bitmapBytesPerRow += (16 - (bitmapBytesPerRow % 16)) % 16
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let prem = CGImageAlphaInfo.premultipliedLast.rawValue
    let context = CGContext(data: nil,
        width: Int(size.width), height: Int(size.height),
        bitsPerComponent: 8, bytesPerRow: bitmapBytesPerRow,
        space: colorSpace, bitmapInfo: prem)
    self.bitmapContext = context
}
// draw pixels of bitmap context
func draw(center:CGPoint, zoom:CGFloat) {
    func isInMandelbrotSet(_ re:Float, _ im:Float) -> Bool {
        var fl = true
        var (x, y, nx, ny) : (Float,Float,Float,Float) = (0,0,0,0)
        for _ in 0 ..< MANDELBROT_STEPS {
            nx = x*x - y*y + re
            ny = 2*x*y + im
            if nx*nx + ny*ny > 4 {
                fl = false
                break
            }
            x = nx
            y = ny
        }
        return fl
    }
    self.bitmapContext.setAllowsAntialiasing(false)
    self.bitmapContext.setFillColor(red: 0, green: 0, blue: 0, alpha: 1)
    var re : CGFloat
    var im : CGFloat
    let maxi = Int(self.bounds.size.width)
    let maxj = Int(self.bounds.size.height)
    for i in 0 ..< maxi {
        for j in 0 ..< maxj {
            re = (CGFloat(i) - 1.33 * center.x) / 160
            im = (CGFloat(j) - 1.0 * center.y) / 160
            re /= zoom
            im /= zoom
            if (isInMandelbrotSet(Float(re), Float(im))) {
                self.bitmapContext.fill(
                    CGRect(x: CGFloat(i), y: CGFloat(j), width: 1.0, height: 1.0))
            }
        }
    }
}
// turn pixels of bitmap context into CGImage, draw into ourselves
override func draw(_ rect: CGRect) {
    if self.bitmapContext != nil {
        let context = UIGraphicsGetCurrentContext()!
        let im = self.bitmapContext.makeImage()
        context.draw(im!, in: self.bounds)
    }
}

The draw(center:zoom:) method, which calculates the pixels of self.bitmapContext, is time-consuming, and we can see this by running the app on a device. If the entire process is kicked off by tapping a button whose action method calls drawThatPuppy, there is a significant delay before the Mandelbrot graphic appears in the interface, during which time the button remains highlighted. This is a sure sign that we are blocking the main thread.

We need to move the calculation-intensive part of this code onto a background thread, so that the main thread is not blocked by the calculation. In doing so, we have two chief concerns:

Synchronization of threads

The button is tapped, and drawThatPuppy is called, on the main thread. setNeedsDisplay is thus also called on the main thread — and rightly so, since this affects the interface — and so draw(_:) is rightly called on the main thread as well. In between, however, the calculation-intensive draw(center:zoom:) is to be called on a background thread. Yet these three methods must still run in order: drawThatPuppy on the main thread, then draw(center:zoom:) on a background thread, then draw(_:) on the main thread. But threads are concurrent, so how will we ensure this?

Shared data

The property self.bitmapContext is referred to in three different methods — in makeBitmapContext(size:), and in draw(center:zoom:), and in draw(_:). But we have just said that those three methods involve two different threads; they must not be permitted to touch the same property in a way that might conflict or clash. Indeed, because draw(center:zoom:) is run on a background thread, it might run on multiple background threads simultaneously; the access to self.bitmapContext by draw(center:zoom:) must not be permitted to conflict or clash with itself. How will we ensure this?

Manual Threading

A naïve way of dealing with our time-consuming code would involve spawning off a background thread as we reach the calculation-intensive part of the procedure, by calling performSelector(inBackground:with:). This is a very bad idea, and you should not imitate the code in this section. I’m showing it to you only to demonstrate how horrible it is.

It is not at all simple to adapt your code to use performSelector(inBackground:with:). There is additional work to do:

Pack the arguments

The method designated by the selector in performSelector(inBackground:with:) can take only one parameter, whose value you supply as the second argument. So if you want to pass more than one piece of information into the thread, you’ll need to pack it into a single object. Typically, this will be a dictionary.

Set up an autorelease pool

Secondary threads don’t participate in the global autorelease pool. So the first thing you must do in your threaded code is to wrap everything in an autorelease pool. Otherwise, you’ll probably leak memory as autoreleased objects are created behind the scenes and are never released.

We’ll rewrite MyMandelbrotView to use manual threading. Our draw(center:zoom:) method takes two parameters, so the argument that we pass into the thread will have to pack that information into a dictionary. Once inside the thread, we’ll set up our autorelease pool and unpack the dictionary. This will all be made much easier if we interpose a trampoline method between drawThatPuppy and draw(center:zoom:). So our implementation now starts like this:

func drawThatPuppy () {
    self.makeBitmapContext(size:self.bounds.size)
    let center = CGPoint(x: self.bounds.midX, y: self.bounds.midY)
    let d : [AnyHashable:Any] = ["center":center, "zoom":CGFloat(1)]
    self.performSelector(inBackground: #selector(reallyDraw), with: d)
}
// trampoline, background thread entry point
@objc func reallyDraw(_ d: [AnyHashable:Any]) {
    autoreleasepool {
        self.draw(center:d["center"] as! CGPoint,
            zoom: d["zoom"] as! CGFloat)
        // ... ??? ...
    }
}

The comment with the question marks indicates a missing piece of functionality: we have yet to call setNeedsDisplay, which will cause the actual drawing to take place. This call used to be in drawThatPuppy, but that is now too soon; the call to performSelector(inBackground:with:) launches the thread and returns immediately, so our bitmapContext property isn’t ready yet. Clearly, we need to call setNeedsDisplay after draw(center:zoom:) has finished generating the pixels of the graphics context. We can do this at the end of our trampoline method reallyDraw(_:).

But we must remember that reallyDraw(_:) runs in a background thread. Because setNeedsDisplay is a form of communication with the interface, we should call it on the main thread, with performSelector(onMainThread:with:waitUntilDone:). For maximum flexibility, it will probably be best to implement a second trampoline method:

// trampoline, background thread entry point
@objc func reallyDraw(_ d: [AnyHashable:Any]) {
    autoreleasepool {
        self.draw(center:d["center"] as! CGPoint,
            zoom: d["zoom"] as! CGFloat)
        self.performSelector(onMainThread: #selector(allDone), with: nil,
            waitUntilDone: false)
    }
}
// called on main thread! background thread exit point
@objc func allDone() {
    self.setNeedsDisplay()
}

This works, in the sense that when we tap the button, it is highlighted momentarily and then immediately unhighlighted; the time-consuming calculation is taking place on a background thread. But the code is specious; the seeds of nightmare are already sown:

  • We now have a single object, MyMandelbrotView, some of whose methods are to be called on the main thread and some on a background thread; this invites us to become confused at some later time.

  • The main thread and the background thread are constantly sharing a piece of data, the instance property self.bitmapContext; this is messy and fragile. And what’s to stop some other code from coming along and triggering draw(_:) while draw(center:zoom:) is in the middle of manipulating the bitmap context that draw(_:) draws?

To solve these problems, we might need to use locks, and we would probably have to manage the thread more explicitly. Such code can become quite elaborate and difficult to understand; guaranteeing its integrity is even more difficult. There are much better ways, and I will now demonstrate two of them.

Operation

An excellent strategy is to turn to a brilliant pair of classes, Operation and OperationQueue. The essence of Operation is that it encapsulates a task, not a thread. The operation described by an Operation object may be performed on a background thread, but you don’t have to concern yourself with that directly; the threading is determined for you by an OperationQueue. You describe the operation as an Operation, and add it to an OperationQueue to set it going. You arrange to be notified when the operation ends, typically by the Operation posting a notification. (You can also query both the queue and its operations from outside with regard to their state.)

We’ll rewrite MyMandelbrotView to use Operation and OperationQueue. We need an OperationQueue property; we’ll call it queue, and we’ll create the OperationQueue and configure it in the property’s initializer:

let queue : OperationQueue = {
    let q = OperationQueue()
    // ... further configurations can go here ...
    return q
}()

We also have a new class, MyMandelbrotOperation, an Operation subclass. (It is possible to take advantage of a built-in Operation subclass such as BlockOperation, but I’m deliberately illustrating the more general case by subclassing Operation itself.) Our implementation of drawThatPuppy creates an instance of MyMandelbrotOperation, configures it, registers for its notification (called .mandelOpFinished and already defined elsewhere), and adds it to the queue:

func drawThatPuppy () {
    let center = CGPoint(x: self.bounds.midX, y: self.bounds.midY)
    let op = MyMandelbrotOperation(
        size: self.bounds.size, center: center, zoom: 1)
    NotificationCenter.default.addObserver(self,
        selector: #selector(operationFinished),
        name: .mandelOpFinished, object: op)
    self.queue.addOperation(op)
}

Our time-consuming calculations are performed by MyMandelbrotOperation. An Operation subclass, such as MyMandelbrotOperation, will typically have at least two methods:

A designated initializer

The Operation may need some configuration data. Once the Operation is added to a queue, it’s too late to talk to it, so you’ll usually hand it this configuration data as you create it, in its designated initializer.

A main method

This method will be called (with no parameters) automatically by the OperationQueue when it’s time for the Operation to start.

MyMandelbrotOperation has three private properties for configuration (size, center, and zoom), to be set in its initializer; it must be told MyMandelbrotView’s geometry explicitly because it is completely separate from MyMandelbrotView. MyMandelbrotOperation also has its own CGContext property, bitmapContext; it must be publicly gettable so that MyMandelbrotView can retrieve the finished graphics context. Note that this is different from MyMandelbrotView’s bitmapContext, thus helping to solve the problem of sharing data promiscuously between threads:

private let size : CGSize
private let center : CGPoint
private let zoom : CGFloat
private(set) var bitmapContext : CGContext! = nil
init(size sz:CGSize, center c:CGPoint, zoom z:CGFloat) {
    self.size = sz
    self.center = c
    self.zoom = z
    super.init()
}

All the calculation work has been transferred from MyMandelbrotView to MyMandelbrotOperation without change; the only difference is that self.bitmapContext now means MyMandelbrotOperation’s property. The only method of real interest is main. First, we check the Operation property isCancelled to make sure we haven’t been cancelled while sitting in the queue; this is good practice. Then, we do exactly what drawThatPuppy used to do, initializing our graphics context and drawing into its pixels:

let MANDELBROT_STEPS = 1000
func makeBitmapContext(size:CGSize) {
    // ... same as before
}
func draw(center:CGPoint, zoom:CGFloat) {
    // ... same as before
}
override func main() {
    guard !self.isCancelled else {return}
    self.makeBitmapContext(size:self.size)
    self.draw(center: self.center, zoom: self.zoom)
    if !self.isCancelled {
        NotificationCenter.default.post(
            name: .mandelOpFinished, object: self)
    }
}

When main ends, the calculation is over and it’s time for MyMandelbrotView to come and fetch our data. There are two ways in which MyMandelbrotView can learn this; either main can post a notification through the NotificationCenter, or MyMandelbrotView can use key–value observing to be notified when our isFinished property changes. We’ve chosen the former approach; observe that we check one more time to make sure we haven’t been cancelled.
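For completeness, here's a minimal sketch of what the key–value observing alternative might look like; this is not the approach taken here, and the details are my own assumption:

// in drawThatPuppy, register before adding the operation to the queue
op.addObserver(self, forKeyPath: "isFinished", options: [], context: nil)
self.queue.addOperation(op)
// and elsewhere in MyMandelbrotView
override func observeValue(forKeyPath keyPath: String?, of object: Any?,
    change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
    if keyPath == "isFinished", let op = object as? MyMandelbrotOperation {
        op.removeObserver(self, forKeyPath: "isFinished")
        // warning! possibly called on a background thread; step out to the
        // main thread before retrieving op.bitmapContext and redrawing
    }
}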

Now we are back in MyMandelbrotView, hearing through the notification that MyMandelbrotOperation has finished. We must immediately pick up any required data, because the OperationQueue is about to release this Operation. However, we must be careful; the notification may have been posted on a background thread, in which case our method for responding to it will also be called on a background thread. We are about to set our own graphics context and tell ourselves to redraw; those are things we want to do on the main thread. So we immediately step out to the main thread (using Grand Central Dispatch, described more fully in the next section). We remove ourselves as notification observer for this operation instance, copy the operation’s bitmapContext into our own bitmapContext, and now we’re ready for the last step, drawing ourselves:

// warning! called on background thread
@objc func operationFinished(_ n:Notification) {
    if let op = n.object as? MyMandelbrotOperation {
        DispatchQueue.main.async {
            NotificationCenter.default.removeObserver(self,
                name: .mandelOpFinished, object: op)
            self.bitmapContext = op.bitmapContext
            self.setNeedsDisplay()
        }
    }
}

Adapting our code to use Operation has involved some work, but the result has many advantages that help to ensure that our use of multiple threads is coherent and safe:

The operation is encapsulated

Because MyMandelbrotOperation is an object, we’ve been able to move all the code having to do with drawing the pixels of the Mandelbrot set into it. The only MyMandelbrotView method that can be called in the background is operationFinished(_:), and that’s a method we’d never call explicitly ourselves, so we won’t misuse it accidentally — and it immediately steps out to the main thread in any case.

The data sharing is rationalized

Because MyMandelbrotOperation is an object, it has its own bitmapContext property. The only moment of data sharing comes in operationFinished(_:), when we must set MyMandelbrotView’s bitmapContext to MyMandelbrotOperation’s bitmapContext. Even if multiple MyMandelbrotOperation objects are added to the queue, they are separate objects with separate bitmapContext properties, which MyMandelbrotView retrieves only on the main thread, so there is no conflict.

The threads are synchronized

The calculation-intensive operation doesn’t start until MyMandelbrotView tells it to start (self.queue.addOperation(op)). MyMandelbrotView then takes its hands off the steering wheel and makes no attempt to draw itself. If draw(_:) is called by the runtime, self.bitmapContext will be nil, or will contain the results of an earlier calculation operation, and no harm done. Nothing else happens until the operation ends and the notification arrives (operationFinished(_:)); then and only then does MyMandelbrotView update the interface — on the main thread.

If we are concerned with the possibility that more than one instance of MyMandelbrotOperation might be added to the queue and executed concurrently, we have a further line of defense — we can set the OperationQueue’s maximum concurrency level to 1:

let q = OperationQueue()
q.maxConcurrentOperationCount = 1

This turns the OperationQueue into a serial queue; every operation on the queue must be completely executed before the next can begin. This might cause an operation added to the queue to take longer to execute, if it must wait for another operation to finish before it can even get started; however, this delay might not be important. What is important is that by executing the operations on this queue separately from one another, we guarantee that only one operation at a time can do any data sharing. A serial queue is thus a form of data locking.

Because MyMandelbrotView can be destroyed (if, for example, its view controller is destroyed), there is still a risk that it will create an operation that will outlive it and will try to access it after it has been destroyed. We can reduce that risk by canceling all operations in our queue before releasing it:

deinit {
    self.queue.cancelAllOperations()
}

There is more to know about Operation; it’s a powerful tool. One Operation can have another Operation as a dependency, meaning that the former cannot start until the latter has finished, even if they are in different OperationQueues. Moreover, the behavior of an Operation can be customized; an Operation subclass can redefine what isReady means and thus can control when it is capable of execution. Thus, Operations can be combined to express your app’s logic, guaranteeing that one thing happens before another (cogently argued in a WWDC 2015 video).
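For instance, here's a minimal sketch of a dependency between two hypothetical operations; opB won't start until opA has finished, even if they are added to different queues:

let opA = BlockOperation { print("first") }
let opB = BlockOperation { print("second") }
opB.addDependency(opA) // opB cannot start until opA is finished
let queue = OperationQueue()
queue.addOperations([opA, opB], waitUntilFinished: false)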

Grand Central Dispatch

Grand Central Dispatch, or GCD, is a sort of low-level analogue to Operation and OperationQueue (in fact, OperationQueue uses GCD under the hood). When I say GCD is low-level, I’m not kidding; it is effectively baked into the operating system kernel. Thus it can be used by any code whatsoever and is tremendously efficient.

GCD is like OperationQueue in that it uses queues: you express a task and add it to a queue, and the task is executed on a thread as needed. Moreover, by default these queues are serial queues, with each task on a queue finishing before the next is started, which, as I’ve already said, is a form of data locking. There is no need to create any Operation objects, so all your code to be executed on different threads can appear in the same place.

We’ll rewrite MyMandelbrotView to use GCD. We have a new property to hold our queue, which is a dispatch queue; a dispatch queue (DispatchQueue) is a lightweight opaque pseudo-object consisting essentially of a list of functions to be executed:

let MANDELBROT_STEPS = 1000
var bitmapContext: CGContext!
let draw_queue : DispatchQueue = {
    let q = DispatchQueue(label: "com.neuburg.mandeldraw")
    return q
}()

Our makeBitmapContext(size:) method now returns a graphics context rather than setting a property directly:

func makeBitmapContext(size:CGSize) -> CGContext {
    // ... as before ...
    let context = CGContext(data: nil,
        width: Int(size.width), height: Int(size.height),
        bitsPerComponent: 8, bytesPerRow: bitmapBytesPerRow,
        space: colorSpace, bitmapInfo: prem)
    return context!
}

Also, our draw(center:zoom:) method now takes an additional context: parameter, the graphics context to draw into:

func draw(center:CGPoint, zoom:CGFloat, context:CGContext) {
    // ... as before, but we refer to local context, not self.bitmapContext
}

Now for the implementation of drawThatPuppy. This is where all the action is! Here it is:

func drawThatPuppy () {
    let center = CGPoint(x: self.bounds.midX, y: self.bounds.midY) // 1
    self.draw_queue.async { // 2
        let bitmap = self.makeBitmapContext(size: self.bounds.size) // 3
        self.draw(center: center, zoom:1, context:bitmap)
        DispatchQueue.main.async { // 4
            self.bitmapContext = bitmap // 5
            self.setNeedsDisplay()
        }
    }
}

That’s all there is to it: all our app’s multithreading is concentrated in those few lines! There are no notifications; there is no sharing of data between threads; and the synchronization of our threads is expressed as the sequential order of the code.

Observe that our code consists of two calls to the DispatchQueue async method, which takes as its parameter a function — usually, an anonymous function — expressing what we want done on this queue. This is the GCD method you’ll use most, because asynchronous execution will be your primary reason for using GCD in the first place. It has several optional parameters, but we don’t need any of them here; we simply supply an anonymous function. Our code contains two calls to async and thus two anonymous functions, one nested within the other.

Here’s how drawThatPuppy works:

  1. We begin by calculating our center, as before. This value will be visible within the subsequent anonymous functions, because the rules of scope say that function body code can see its surrounding context and capture it.

  2. Now comes our task to be performed in a separate background thread on our queue, self.draw_queue. We specify this task with the async method. We describe what we want to do on the background thread in an anonymous function.

  3. In the function, we begin by declaring bitmap as a variable local to the function. We then call makeBitmapContext(size:) to create the graphics context bitmap, and then call draw(center:zoom:context:) to set its pixels. Bear in mind that those calls are made on a background thread, because self.draw_queue is a background queue.

  4. Now we need to step back out to the main thread. How do we do that? With the async method again! This time, we specify the main queue (which is effectively the main thread), whose name is DispatchQueue.main. We describe what we want to do on the main queue in another anonymous function.

  5. Here we are in the second function. Because the first function is part of the second function’s surrounding context, the second function can see the first function’s local bitmap variable! Using it, we set our bitmapContext property and call setNeedsDisplay — on the main thread! — and we’re done.

The benefits and elegance of GCD as a form of concurrency management are simply stunning. There is no data sharing, because the bitmap variable is not shared; it is local to each individual call to drawThatPuppy. The threads are synchronized, because the nested anonymous functions are executed in succession, so any instance of bitmap must be completely filled with pixels before being used to set the bitmapContext property. Moreover, the background operation is performed on a serial queue, and bitmapContext is touched only from code running on the main thread; thus there is no possibility of conflict. Our code is also highly maintainable, because the entire task on all threads is expressed within the single drawThatPuppy method; indeed, the code is only very slightly modified from the original, nonthreaded version.

You might object that we still have methods makeBitmapContext(size:) and draw(center:zoom:context:) hanging around MyMandelbrotView, and that we must therefore still be careful not to call them on the main thread, or indeed from anywhere except from within drawThatPuppy. If that were true, we could at this point destroy makeBitmapContext(size:) and draw(center:zoom:context:) and move their functionality completely into drawThatPuppy. But it isn’t true, because these methods are now thread-safe: they are self-contained utilities that touch no properties or persistent objects, so it doesn’t matter what thread they are called on. Still, I’ll demonstrate in a moment how we can intercept an accidental attempt to call a method on the wrong thread.

The two most important DispatchQueue methods are:

async(execute:)

Push a function onto the end of a queue for later execution, and proceed immediately with our own code. Thus, we can finish our own execution without waiting for the function to execute. You might use async to execute code in a background thread or, conversely, from within a background thread as a way of stepping back onto the main thread in order to talk to the interface. Also, it can be useful to call async to step out to the main thread even though you’re already on the main thread, as a way of waiting for the run loop to complete and for the interface to settle down — a minimal form of delayed performance. Examples of all these uses have already appeared throughout this book.
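To illustrate that last use, here's a minimal sketch; the label property is hypothetical:

// we are already on the main thread; async onto the main queue defers this
// work until the current run loop pass has finished and the interface has settled
DispatchQueue.main.async {
    self.label.text = "Ready"
}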

sync(execute:)

Push a function onto the end of a queue for later execution, and wait until the function has executed before proceeding with our own code — because, for example, you intend to use a result that the function is to provide. The purpose of the queue would be, once again, as a lightweight, reliable version of a lock, mediating access to a shared resource. Here’s a case in point, adapted from Apple’s own code:

func asset() -> AVAsset? {
    var theAsset : AVAsset!
    self.assetQueue.sync {
        theAsset = self.getAssetInternal().copy() as! AVAsset
    }
    return theAsset
}

Any thread might call the asset method; to avoid problems with shared data, we require that only functions that are executed from a particular queue (self.assetQueue) may touch an AVAsset, so when we call getAssetInternal we do it on self.assetQueue. But we also need the result returned by our call to getAssetInternal; hence the call to sync.

In Chapter 7 I encountered a problem where I discovered that the runtime was calling my CATiledLayer’s draw(_:) simultaneously on multiple threads — a rare example of Apple’s code involving me in unexpected complications of threading. To close the door to this sort of behavior, it suffices to wrap the interior of my draw(_:) implementation in a call to sync. This is a safe and reliable mode of locking: once any thread has started to run my draw(_:), no other thread can start to run it until the first thread has finished with it. Thus my draw(_:), though it can be (and will be) run on a background thread, is immune to being run on simultaneous background threads. I have defined a property to hold a dedicated serial queue:

let drawQueue = DispatchQueue(label: "drawQueue")

And here’s how draw(_:) is structured:

override func draw(_ rect: CGRect) {
    self.drawQueue.sync {
        // ... draw here ...
    }
}

An interesting and useful exercise is to revise the MyDownloader class from Chapter 23 so that the delegate methods are run on a background thread, thus taking some strain off the main thread (and hence the user interface) while these messages are flying around behind the scenes. This looks like a reasonable and safe thing to do, because the URLSession and its delegate are packaged inside the MyDownloader object, isolated from our view controller.

To do this, we’ll need our own background OperationQueue, which we can maintain as a property:

let q = OperationQueue()

Our session is now configured and created using this background queue:

lazy var session : URLSession = {
    return URLSession(configuration:self.config,
        delegate:MyDownloaderDelegate(), delegateQueue:self.q)
}()

We must now give some thought to what will happen in urlSession(_:downloadTask:didFinishDownloadingTo:) when we call back into the client through the completion function that the client handed us at the outset. My idea here is that there is no need to involve the client in threading issues; I’d like to isolate such issues within MyDownloader itself. So I want to step out to the main thread as I call the completion function. But we cannot do this by calling async:

let ch = URLProtocol.property(forKey:"ch", in:req)
    as! MyDownloaderCompletion
DispatchQueue.main.async { // bad idea! bad idea!
    ch(url)
}

The reason is that the downloaded file is slated to be destroyed as soon as we return from urlSession(_:downloadTask:didFinishDownloadingTo:) — and if we call async, we will return immediately. Thus the downloaded file will be destroyed, and url will end up pointing at nothing by the time the client receives it! The solution is to use sync instead:

let ch = URLProtocol.property(forKey:"ch", in:req)
    as! MyDownloaderCompletion
DispatchQueue.main.sync {
    ch(url)
}

That code steps out to the main thread and also postpones returning from urlSession(_:downloadTask:didFinishDownloadingTo:) until the client has had an opportunity to do something with the file pointed to by url. We are blocking our background OperationQueue, but this is legal, and in any case we’re blocking very briefly and in a coherent manner. Again, our real purpose in using sync is to lock down some shared data — in this case, the downloaded file.

It is also good to know about the asyncAfter method; this is a way of performing a completion function after a certain amount of time has been permitted to elapse (delayed performance). Many examples in this book have made use of it, by way of my utility method delay (see Appendix B).
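Here's a minimal sketch of what such a delay utility might look like; this is an approximation, not necessarily the exact implementation in Appendix B:

func delay(_ seconds: Double, closure: @escaping () -> ()) {
    DispatchQueue.main.asyncAfter(
        deadline: .now() + seconds, execute: closure)
}
// usage: delay(0.4) { self.setNeedsDisplay() }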

In Objective-C, a useful GCD function is dispatch_once, a thread-safe way of ensuring that code is run only once; this is often used, for example, to help vend a singleton. In Swift, dispatch_once is unavailable (because it can’t be implemented in a thread-safe way); the workaround is to use the built-in lazy initialization feature of global and static variables. In this example, my view controller has a constant property oncer whose value is an instance of a struct Oncer that has a doThisOnce method; the actual functionality of that method is embedded in the initializer of a private static property once. The result is that, no matter how many times we call self.oncer.doThisOnce() in the course of this view controller’s lifetime, that functionality will be performed only once:

class ViewController: UIViewController {
    struct Oncer {
        private static var once : Void = {
            print("I did it!")
        }()
        func doThisOnce() {
            _ = Oncer.once
        }
    }
    let oncer = Oncer()
    override func viewDidLoad() {
        super.viewDidLoad()
        self.oncer.doThisOnce() // I did it!
        self.oncer.doThisOnce() // nothing
        self.oncer.doThisOnce() // nothing
        self.oncer.doThisOnce() // nothing
    }
}

To change the temporal scope of the “onceness,” change the semantic scope of oncer. If oncer is defined at the top level of a file, its once functionality can be performed only once in the entire lifetime of the app.

Another useful GCD feature to know about is dispatch groups. A dispatch group effectively combines independent tasks into a single task; we proceed only when all of them have completed. Its usage is structured as in Example 24-1.

Example 24-1. Dispatch group usage
let group = DispatchGroup()
// here we go...
group.enter()
queue1.async {
    // ... do task here ...
    group.leave()
}
group.enter()
queue2.async {
    // ... do task here ...
    group.leave()
}
group.enter()
queue3.async {
    // ... do task here ...
    group.leave()
}
// ... more as needed ...
group.notify(queue: DispatchQueue.main) {
    // finished!
}

In Example 24-1, each task performed asynchronously is preceded by a call to our dispatch group’s enter and is followed by a call to our dispatch group’s leave. The queues on which the tasks are performed do not have to be different queues; the point is that it doesn’t matter if they are. Only when every enter has been balanced by a leave will the completion function in our dispatch group’s notify be called. Thus, this is effectively a way of waiting until all the tasks have completed independently, before proceeding with whatever the notify completion function says to do.

Besides serial dispatch queues, there are also concurrent dispatch queues. A concurrent queue’s functions are started in the order in which they were submitted to the queue, but a function is allowed to start while another function is still executing. Obviously, you wouldn’t want to submit to a concurrent queue a task that touches a shared resource! The advantage of concurrent queues is a possible speed boost when you don’t care about the order in which multiple tasks are finished — for example, when you want to do something with regard to every element of an array.
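The simplest way to take advantage of this is DispatchQueue's class method concurrentPerform(iterations:), which runs a function many times in parallel on a concurrent queue and returns only when every iteration has finished. Here's a minimal sketch; the squaring work is purely illustrative, and each iteration writes only into its own slot of a preallocated buffer, so no shared resource is touched:

let input = Array(1...1000)
var output = [Int](repeating: 0, count: input.count)
output.withUnsafeMutableBufferPointer { buffer in
    let base = buffer.baseAddress!
    DispatchQueue.concurrentPerform(iterations: input.count) { i in
        base[i] = input[i] * input[i] // each index is written by exactly one iteration
    }
}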

The built-in global queues, available by calling DispatchQueue.global(qos:), are concurrent. You specify which built-in global queue you want by means of the qos: argument; qos stands for “quality of service,” and its value is a QoSClass, one of:

  • .userInteractive

  • .userInitiated

  • .default

  • .utility

  • .background

You can also create a concurrent queue yourself by calling the DispatchQueue initializer init(label:attributes:) with a .concurrent attribute.
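For example (the label here is hypothetical):

// a privately created concurrent queue
let workQueue = DispatchQueue(
    label: "com.myapp.concurrentWork", attributes: .concurrent)
// one of the built-in global concurrent queues
DispatchQueue.global(qos: .utility).async {
    // ... do something time-consuming that touches no shared data ...
}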

The question sometimes arises of how to make certain that a method is called only on the correct queue. Recall that in our Mandelbrot-drawing example, we may be concerned that a method such as makeBitmapContext(size:) might be called on some other queue than the background queue that we created for this purpose. This sort of problem can be elegantly solved by calling the dispatchPrecondition(condition:) global function. It takes a DispatchPredicate enum, whose cases are:

  • .onQueue

  • .onQueueAsBarrier

  • .notOnQueue

These cases each take an associated value which is a DispatchQueue. (I told you it was elegant!) Thus, to assert that we are on our draw_queue queue, we would say:

dispatchPrecondition(condition: .onQueue(self.draw_queue))

The outcome is similar to Swift’s native precondition function: if our assertion is false, we’ll crash.

Threads and App Backgrounding

When your app is backgrounded and suspended, a problem arises if your code is running. The system doesn’t want to stop your code while it’s executing; on the other hand, some other app may need to be given the bulk of the device’s resources now. So as your app goes into the background, the system waits a very short time for your app to finish doing whatever it may be doing, and it then suspends your app.

This shouldn’t be a problem from your main thread’s point of view, because your app shouldn’t have any time-consuming code on the main thread in the first place; you now know that you can avoid this by using a background thread. On the other hand, it could be a problem for lengthy background operations, including asynchronous tasks performed by the frameworks. You can request extra time to complete a lengthy task (or at least to abort it yourself coherently) in case your app is backgrounded, by wrapping it in calls to UIApplication’s beginBackgroundTask(expirationHandler:) and endBackgroundTask(_:). Here’s how you do it:

  1. You call beginBackgroundTask(expirationHandler:) to announce that a lengthy task is beginning; it returns an identification number. This tells the system that if your app is backgrounded, you’d like to be granted some extra time in the background, before your app is suspended, so that you can complete the task.

  2. At the end of your lengthy task, you call endBackgroundTask(_:), passing in the same identification number that you got from your call to beginBackgroundTask(expirationHandler:). This tells the system that your lengthy task is over and that there is no need to grant you any more background time.

The function that you pass as argument to beginBackgroundTask(expirationHandler:) does not express the lengthy task. It expresses what you will do if your extra time expires before you finish your lengthy task. This is a chance for you to clean up. At the very least, your expiration function must call endBackgroundTask(_:)! Otherwise, the runtime won’t know that you’ve run your expiration function, and your app may be killed, as a punishment for trying to use too much background time. If your expiration function is called, you should make no assumptions about what thread it is running on.

Let’s use MyMandelbrotView as an example. Let’s say that if drawThatPuppy is started, we’d like it to be allowed to finish, even if the app is suspended in the middle of it, so that our bitmapContext property is updated as requested. To try to ensure this, we call beginBackgroundTask(expirationHandler:) before doing anything else, and we call endBackgroundTask(_:) at the very end:

func drawThatPuppy () {
    // prepare for background task
    var bti : UIBackgroundTaskIdentifier = UIBackgroundTaskInvalid
    bti = UIApplication.shared.beginBackgroundTask {
        UIApplication.shared.endBackgroundTask(bti)
    }
    guard bti != UIBackgroundTaskInvalid else { return }
    // now do our task as before
    let center = CGPoint(x: self.bounds.midX, y: self.bounds.midY)
    self.draw_queue.async {
        let bitmap = self.makeBitmapContext(size: self.bounds.size)
        self.draw(center: center, zoom:1, context:bitmap)
        DispatchQueue.main.async {
            self.bitmapContext = bitmap
            self.setNeedsDisplay()
            UIApplication.shared.endBackgroundTask(bti) // *
        }
    }
}

If our app is backgrounded while drawThatPuppy is in progress, it will (we hope) be given enough background time to run so that it can eventually proceed all the way to the end. Thus, self.bitmapContext will be updated, and setNeedsDisplay will be called, while we are still in the background. Our draw(_:) will not be called until our app is brought back to the front, but there’s nothing wrong with that. If we are not given enough time, there is no cleanup to do, so we just call endBackgroundTask(_:).
