Threads and Multiprocessing

Simple applications tend to be very focused, concentrating on one task at a time. However, as your programs become more complex or sophisticated, you may want your application to start doing multiple things at once. Say you have an application that, among other things, periodically downloads information over a network and updates a window with the new information. For the best user experience, you would like the information retrieval to be independent of any other actions the application can take. Otherwise, the retrieval process will “lock up” your application for the duration, and your user will be left twiddling their thumbs (or, more likely, cursing, as shown in Figure 14-4) until the download is complete. The best way to achieve this is to create a separate thread to handle the download.


Figure 14-4. Single-threaded versus multithreaded applications

Threads, which are sometimes called tasks, are independent execution paths within your application. They have their own stack and function almost like mini-applications. However, they share the same address space as the main application, which means they can share memory. This makes it easy to pass information between the thread and the main application. For example, if an application wants to spawn a thread to process an image in the background, it can simply pass a pointer to the image rather than copying the image itself.

Typically you create threads to perform background processing that doesn’t require interaction from the user. For example, time-intensive calculations or file transfers are good candidates for background threads. Saving a file, however, may not be, as you probably don’t want the user to be able to continue modifying a file while it is being saved. In some cases, it’s advantageous to have your main application thread handle only the user interface and delegate all other actions to background threads.

Most asynchronous communication between the thread and the main application (or between multiple threads) occurs by sending simple messages. For example, a thread may set a particular bit high to indicate “Okay, I’m ready for your data.” After it receives the data, it resets the bit and begins processing. When it finishes, it may set a different bit that indicates “Data is processed. Come pick it up.” Things can quickly get more sophisticated than that, but the general idea is the same.

Carbon has two programming interfaces that let you create independent threads:

  • Multiprocessing Services lets you create preemptively scheduled threads (called tasks in Apple’s documentation). These threads are given processor time just as if they were separate processes in the system. Better yet, if you are running on a multiprocessor Macintosh, threads created with Multiprocessing Services automatically take advantage of the extra processors. This is the interface of choice for most threaded applications. The only restriction is that some system software calls are not reentrant (that is, they cannot be called by multiple threads at the same time) and therefore cannot be called directly from a preemptive thread. (You can, however, use a callback mechanism called a remote procedure call to call nonreentrant functions.)

  • The Thread Manager lets you create cooperatively scheduled threads. Just as in a cooperative-tasking runtime environment (as used in older Mac OS operating systems), each thread must voluntarily cede processor time to the main application (and other threads). While not as flexible as Multiprocessing Services, the Thread Manager is a good tool to use if you need more control over when your threads are running.

Multiprocessing Services in a Peanut Shell

Let’s see how you’d go about using Multiprocessing Services to create a thread that handles a lengthy calculation in the background. You would need to create the thread, pass it the data to be processed, then wait for it to complete. We’ll take a closer look at each step.

Note

The Multiprocessing Services documentation in Carbon Help refers to preemptive threads as tasks, mostly to distinguish them from the Thread Manager’s cooperative threads. However, because most programmers are more familiar with the term thread, this book will adhere to that term instead.

First, you need to create the thread. You do so by calling the Multiprocessing Services function MPCreateTask. You pass a number of parameters indicating such items as the desired stack size for the thread and a queue to notify when the thread terminates. The latter is important because the thread is not synchronized with your main application; the application can terminate the thread (by calling MPTerminateTask), but it has no way of knowing when the thread has actually terminated unless it receives confirmation of some kind.

You can think of the preemptively scheduled thread as being a field agent for some spy organization. The spy works independently and is essentially out of reach of headquarters. If headquarters wants some mission accomplished, it places a message (or a signal) at some prearranged location. After sending this notice, headquarters must monitor another prearranged location for a response message, which the spy sends when the task is accomplished.

The thread itself is just defined as a function, which may or may not take any parameters. Typically, the parameters define information useful over the life of the thread, such as the locations of “dropboxes” for messages to and from the main application. These dropboxes are called notification methods in multitasking parlance. Two common notification methods are message queues and semaphores:

  • Message queue. A first-in, first-out list of messages, each 96 bits long. A message can contain any sort of information, such as a pointer to the data to process or computation instructions. A message queue is analogous to a specific drop site where the spy can pick up an envelope of instructions.

  • Semaphore. A simple state variable whose value can range from zero up to some specified maximum. For example, setting a binary semaphore (that is, a semaphore with two states, zero and one) to a value of one could indicate that the data is ready for the thread to process. Semaphores cannot communicate as much information as message queues, but they incur less processing overhead.

Typically, your application sets up two queues or semaphores for a thread: one to send notifications and one to receive them. However, it’s possible for more than one thread to use the same queue. This arrangement can be useful for distributing work in multiprocessor systems: you can create multiple instances of the same function, one for each available processor.

Implementing a notification method is fairly straightforward. Each particular method has its own creation, termination, signaling, and waiting functions. For example, message queues use the following Multiprocessing Services functions:

  • MPCreateQueue. Creates a message queue and gives you a pointer to the queue object.

  • MPDeleteQueue. Disposes of a queue object. Typically you do this after you terminate the associated thread.

  • MPNotifyQueue. Places a 96-bit message on the specified queue.

  • MPWaitOnQueue. Stops execution of the thread that calls it and waits for a message on the specified queue. When a message appears (or if one was already present when the function was called), it removes the message from the queue and lets the thread continue execution.

Multiprocessing Services threads are usually written as an endless loop, waiting for an appropriate signal, executing the task when the signal arrives, sending a response signal, and then waiting again. This implementation ensures that no processor time is wasted; if there is nothing for the thread to do, it does nothing.

Similarly, after signaling the thread, your main application must wait for a “task completed” notification so it knows the thread has finished processing. However, most likely your application can’t afford to just wait, as it may be interacting with the user, updating windows, or even signaling another thread.

One solution is to poll the queue periodically by calling MPWaitOnQueue with a zero timeout (that is, by passing kDurationImmediate as the timeout constant). However, polling on preemptive multitasking systems is generally regarded as a bad thing, because it means that your application can use up valuable processor time doing essentially nothing. Fortunately, the Carbon Event Manager comes to the rescue by allowing you to send yourself a custom event instead of polling.

In addition to all the event kinds defined by the Carbon Event Manager, you can define your own. Doing so can be a useful way of notifying yourself that some special event has occurred. For example, your thread can send an event to the main application to say, “there’s a message on the result queue.” On the application end, you need to register an event handler to handle your custom event. This handler can call MPWaitOnQueue with a wait time of kDurationImmediate (because it knows that a message exists) and then process the message appropriately. To implement this mechanism, you will need the following Carbon Event Manager functions:

  • CreateEvent. Creates a custom event. You must specify an event class and kind, as well as any attributes (if desired). On return, you get an event reference that defines the event.

  • GetMainEventQueue. Returns a pointer to the main application event queue. This is the queue that contains all events your application wants to process.

  • PostEventToQueue. Places an event on the specified queue.

Figure 14-5 illustrates the basic operation sequence for working with a Multiprocessing Services thread:

  • First, you perform any application initialization, create the In and Out message queues, and create the custom event. You register your custom event handler as you would any other, by calling InstallEventHandler. The event target can be arbitrary, as you control who sends the event. Your application calls MPCreateTask when it needs to spawn the preemptive thread.

  • If your application has work for the thread to do, it signals the thread by calling MPNotifyQueue to place a message in the In message queue.

  • When your thread has finished processing, it places a message in the Out message queue. Then it calls GetMainEventQueue to get the main event queue and PostEventToQueue to place your custom event on the queue. After placing the event on the queue, the thread returns to its MPWaitOnQueue function, remaining blocked until a new message appears in the In queue.

  • Back in the main application thread, when the Carbon Event Manager pulls the custom event off the queue, your event handler gets called. It can then call MPWaitOnQueue to get the message on the Out message queue and take any additional actions.


Figure 14-5. Working with a Multiprocessing Services thread.

Note that one advantage of using Carbon events for communicating with your main application is that you don’t need to worry about interrupting actions that shouldn’t be interrupted. For example, you wouldn’t want your application to start processing data it received from the thread while the user was pulling down a menu.

If you want your application to do several different tasks simultaneously, you should consider creating independent execution threads. You can use either the Thread Manager or Multiprocessing Services to create your threads, but in most cases you will want to use the latter. Multiprocessing Services threads are preemptively scheduled, and they can automatically take advantage of multiple processors.
