Asynchronous Programming

In the previous section, we were explicitly creating threads and managing their lifetime. In this section, we let the common language runtime do the grungy work for us.

Under the .NET asynchronous programming model, when a call is made to a .NET class method, the call returns immediately. The common language runtime arranges for the actual method to be executed on a different thread, which makes it possible for the calling thread to continue with its own work. Contrast this with the synchronous programming model, in which the call blocks until the method has been completely processed.

Once the method execution completes, the common language runtime provides two ways to obtain the results of the execution. You can call a specific method to obtain the results. You can also provide a callback function with your initial call so that the common language runtime can automatically invoke the callback function after the method execution is completed.

Asynchronous programming is supported in many areas of the .NET Framework, including:

  • Asynchronous delegates

  • Web services

  • File I/O, socket I/O

  • Networking (HTTP, TCP)

Let's look at a few important areas of asynchronous programming.

Asynchronous Delegates

Asynchronous delegates provide the ability to call a synchronous method in an asynchronous manner. Consider the following code excerpt:

// Project AsyncProgramming/AsyncDelegate

public class Foo {
     public String GetGreeting(String user) {
       String retVal = "Hello " + user;
       return retVal;
     }
}

Let's see how we can invoke GetGreeting on the Foo object using an asynchronous delegate. The first step is to declare a delegate for the method, as shown here:

public delegate String GreetingProc(String user);

When compiled, this declaration results in a class that looks as follows:

public class GreetingProc : MulticastDelegate {
     public GreetingProc(Object o, IntPtr method);
     public IAsyncResult BeginInvoke(String user,
       AsyncCallback callback, Object o);
     public String EndInvoke(IAsyncResult result);
     public String Invoke(String user);
}

The methods that can be used for asynchronous programming are BeginInvoke and EndInvoke.

BeginInvoke is used to begin the asynchronous call. The method definition contains two more parameters than the original method to be invoked. The second-to-last parameter optionally specifies a callback method. When the invoked method completes, the runtime automatically calls the specified callback method. The last parameter is used to pass state information: any information you deem appropriate, wrapped as an object. This state object is simply made available to the callback method.

BeginInvoke returns a value of type IAsyncResult. This return value can be used later to obtain the outcome of the invoked method.

The return value of the invoked method, along with any output parameters, can be obtained by calling EndInvoke. The method definition contains all the ref and out parameters from the original method, plus a parameter that takes the IAsyncResult returned from BeginInvoke.
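To see how ref and out parameters flow through, consider a hypothetical delegate with an out parameter (this delegate is not part of the sample project; it is shown only to illustrate the generated signatures):

public delegate int ParseProc(String text, out int value);

When compiled, this declaration results in a class that looks roughly as follows:

public class ParseProc : MulticastDelegate {
     public ParseProc(Object o, IntPtr method);
     public IAsyncResult BeginInvoke(String text, out int value,
       AsyncCallback callback, Object o);
     public int EndInvoke(out int value, IAsyncResult result);
     public int Invoke(String text, out int value);
}

Note that BeginInvoke carries all of the original parameters, whereas EndInvoke carries only the out parameter plus the IAsyncResult.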

Note that EndInvoke is a blocking call. It returns only when the invoked method completes.

If the invoked method throws an exception, you can catch the exception by putting a try-catch block around EndInvoke. You do not have to call EndInvoke unless you are interested in processing the outcome of the invoked method.

Using BeginInvoke and EndInvoke, here is one way to call the method GetGreeting asynchronously:

public class SimpleDemo {
    public static void DoIt() {
      Foo f = new Foo();
      GreetingProc proc = new GreetingProc(f.GetGreeting);
      IAsyncResult iar = proc.BeginInvoke("Jay", null, null);

      String greeting = proc.EndInvoke(iar);
      Console.WriteLine(greeting);
    }
}

Note that we passed in null for the callback function because the main thread explicitly calls EndInvoke to get back the results. The call to EndInvoke blocks until the common language runtime finishes processing the delegate method (Foo.GetGreeting in our case). However, it is also possible to specify a callback function that the common language runtime can call after it has processed the delegate method. This is illustrated in the following code:

public class AsyncCBDemo {
    public static void FinishProcessing(IAsyncResult iar) {
      // Get the state object, if need be
      Object stateObj = iar.AsyncState;

      // Get the delegate object
      GreetingProc proc =
        (GreetingProc)((AsyncResult)iar).AsyncDelegate;

      // Call EndInvoke on the delegate object
      String greeting = proc.EndInvoke(iar);
      Console.WriteLine(greeting);
    }

    public static void DoIt() {
      Foo f = new Foo();

      GreetingProc proc = new GreetingProc(f.GetGreeting);

      // The example uses a dummy state object.
      // You should pass in a more meaningful object.
      Object stateObj = new Object();
      IAsyncResult iar = proc.BeginInvoke("Jay",
        new AsyncCallback(AsyncCBDemo.FinishProcessing),
        stateObj);

      Console.WriteLine(
        "Waiting for FinishProcessing to be called...");
      WaitHandle wh = iar.AsyncWaitHandle;
      wh.WaitOne();
    }
}

After the common language runtime finishes processing Foo.GetGreeting, it stores the return results in an object of type AsyncResult (namespace System.Runtime.Remoting.Messaging). Because our main method DoIt specifies FinishProcessing as the callback method, the common language runtime then invokes FinishProcessing and passes the AsyncResult object as the parameter.

FinishProcessing needs to call EndInvoke on the delegate object, which can be obtained from the AsyncDelegate property on AsyncResult, as highlighted in the code.

When FinishProcessing calls EndInvoke on the delegate, this time EndInvoke returns immediately with the results, as the delegate method (Foo.GetGreeting) has already been processed.

Note the extra logic in the main thread for asynchronous call completion. When BeginInvoke is called, the common language runtime invokes the original method using a thread from an internal thread pool. The threads in the pool are marked as background threads. Recall that the common language runtime does not wait for background threads to complete when the application quits. Therefore, we need a way to ensure that the asynchronous call does not get aborted. Fortunately, the IAsyncResult interface that is returned from BeginInvoke provides a wait handle by means of the AsyncWaitHandle property. We can wait on this handle by calling our familiar method WaitOne (or any of its variations).

There is still one problem with waiting on AsyncWaitHandle. The wait state is signaled when the asynchronous call has been completed, not when the callback method finishes. As a result, there is no guarantee that the callback method has completed when the wait state is signaled.

A common technique to deal with this problem is to raise an event before returning from the callback method. The main thread can wait on this event instead of AsyncWaitHandle. The modified code is shown here:

// Project AsyncProgramming/AsyncDelegate

class ProperAsyncCBDemo {
    private AutoResetEvent m_Event = new AutoResetEvent(false);

    public void FinishProcessing(IAsyncResult iar) {
      GreetingProc proc =
        (GreetingProc)((AsyncResult)iar).AsyncDelegate;
      String greeting = proc.EndInvoke(iar);
      Console.WriteLine(greeting);
      m_Event.Set(); // Let the main thread know
    }

    public void DoIt() {
      Foo f = new Foo();
      GreetingProc proc = new GreetingProc(f.GetGreeting);
      IAsyncResult iar = proc.BeginInvoke("Jay",
        new AsyncCallback(this.FinishProcessing), null);

      Console.WriteLine("Waiting for the callback to
        complete...");
      m_Event.WaitOne();
    }
}

What would happen if the delegate method throws an exception while processing? Where does the exception go? The common language runtime catches and stores this exception. Whenever you call EndInvoke, the exception is re-thrown. Therefore, if you expect your delegate method to throw an exception, you must set up a try-catch block around the call to EndInvoke, as shown in the following sketch.
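Here is a minimal sketch of that pattern, assuming a variant of Foo.GetGreeting that validates its argument and throws (the GetGreeting shown earlier does not throw on a null user):

public class ExceptionDemo {
    public static void DoIt() {
      Foo f = new Foo();
      GreetingProc proc = new GreetingProc(f.GetGreeting);
      IAsyncResult iar = proc.BeginInvoke(null, null, null);

      try {
        String greeting = proc.EndInvoke(iar);
        Console.WriteLine(greeting);
      }
      catch (Exception e) {
        // Any exception thrown by the delegate method is re-thrown here
        Console.WriteLine("GetGreeting failed: " + e.Message);
      }
    }
}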

What about one-way methods; that is, methods marked with OneWayAttribute (Chapter 6)? Asynchronous delegates work equally well with one-way methods. Of course, calling EndInvoke for one-way methods is redundant; the runtime discards return values, return parameters, or exceptions thrown from one-way methods.

We are done with asynchronous delegates. At this point, it is worth noting that the .NET Framework extends this model seamlessly to .NET remoting. The preceding asynchronous delegate example can very easily be extended to .NET remoting. It is left as an exercise for you to make class Foo a remote object.

Web Service Clients

The .NET Framework also makes it possible to call a Web service method asynchronously. Given a WSDL description, the framework generates appropriate client-side proxy code containing the necessary BeginXXX and EndXXX methods for each Web service method where XXX is the name of the method.

Let's look at using the Calculator Web service that we developed in Chapter 6. For your convenience, I have listed the Web service code here:

public class MyCalculator : WebService {
    [WebMethod]
    public int Add(int a, int b) {
      return (a+b);
    }
}

Recall that the client-side proxy code for a Web service is generated using the tool wsdl.exe. For my setup, the proxy code is generated using the following command line:

wsdl.exe -o:Proxy.cs
    http://localhost/WSRemoting/Calculator.asmx?wsdl

The wsdl.exe tool generates a class that looks as follows:

public class MyCalculator {
  public int Add(int a, int b);
  public IAsyncResult BeginAdd(int a, int b,
    AsyncCallback callback, object state);
  public int EndAdd(IAsyncResult asyncResult);
}

Why do I get this feeling that you already know how to call the Web method Add asynchronously? Your smile probably gave it away.

Although the client-side implementation of an asynchronous Web method call is similar to that of an asynchronous delegate, there is one difference. For an asynchronous Web method, the runtime passes a WebClientAsyncResult object to the callback (recall that for asynchronous delegates, the object passed to the callback is of type AsyncResult). AsyncResult supports the AsyncDelegate property for obtaining the original delegate object and calling EndInvoke on it. However, WebClientAsyncResult currently does not provide any property that lets you obtain the original Web service object and call EndXXX on it. The workaround that I use is to pack the Web service object into the state object and pass it to BeginXXX. This is illustrated in the following client-side code for the MyCalculator Web service.

// Project AsyncProgramming/AsyncWebServiceClient

class MyClient {
    class MyStateObject {
      public AutoResetEvent Event = new AutoResetEvent(false);
      public MyCalculator Calc = new MyCalculator();
    }

    public static void MyFinishProc(IAsyncResult iar) {
      MyStateObject o = (MyStateObject) iar.AsyncState;
      int val = o.Calc.EndAdd(iar);
      Console.WriteLine(val);
      o.Event.Set(); // let the main thread know
    }

    public static void Main() {
      MyStateObject stateObject = new MyStateObject();

      IAsyncResult iar = stateObject.Calc.BeginAdd(10, 20,
        new AsyncCallback(MyFinishProc), stateObject);

      Console.WriteLine("Waiting for the callback to
        complete...");
      stateObject.Event.WaitOne();
    }
}

Thread Pooling

Many applications use multiple threads, but quite often these threads spend a great deal of time in a sleeping state waiting for an event to occur. Other times, threads might enter a sleeping state and wake up only periodically to do some processing and then go back to sleep again. Such applications can benefit from using a thread pool, which maintains a pool of worker threads. A thread pool is best suited for small tasks that require multiple threads. Using a thread pool has many advantages:

  • The management of the thread pool is usually abstracted away from you so that you can focus on application tasks rather than pool management.

  • A well-written thread pool class can optimize throughput and thread time slices based on available system resources.

The .NET Framework uses thread pools for several purposes. One that we have already seen is for asynchronous calls. Other uses include socket connections and asynchronous I/O completion.

Under .NET, the thread pool is implemented by the class ThreadPool. There is one ThreadPool object per AppDomain. However, at the physical level, there is only one thread pool per process (at least in the first version of the framework).

Here is the definition of the class ThreadPool. For simplicity, I have shown only those methods that are relevant for our current discussion:

public sealed class ThreadPool {
    // Add items to the queue
    public static bool
      QueueUserWorkItem(WaitCallback cb, object stateObj);
    public static RegisteredWaitHandle
      RegisterWaitForSingleObject(WaitHandle wh,
        WaitOrTimerCallback callback, Object state,
          int timeOut, bool justOnce);

    // Thread pool status
    public static void GetAvailableThreads(
      out int workerThreads, out int completionPortThreads);
    public static void GetMaxThreads(
      out int workerThreads, out int completionPortThreads);
    ...
}

The static method QueueUserWorkItem can be used to add a task (called a work item in .NET) to the thread pool. This method takes as a parameter a delegate of type WaitCallback. The definition of this delegate type is shown here:

public delegate void WaitCallback(object stateObject);

Essentially, each work item method takes a state object as parameter and has a void return value. The state object is any object that you want your queued method to have access to.

Here is a simple code excerpt that adds a work item, MyTask, to the thread pool:

// Project Threads/ThreadPool

class Foo {
    private AutoResetEvent m_Event = new AutoResetEvent(false);

    void MyTask(Object stateObject) {
      Console.WriteLine("In");
      m_Event.Set(); // let the main thread know
    }

    public void DoIt() {
      ThreadPool.QueueUserWorkItem(new WaitCallback(MyTask));

      // Wait for completion (if need be)
      m_Event.WaitOne();
    }
}

The common language runtime uses one of the threads from the pool to invoke the work item MyTask. The implementation of the MyTask method uses an event to signal its completion to the main thread, a technique that we have seen and used in the past.

In the preceding code, the state object provided to the callback is null. However, an overloaded version of QueueUserWorkItem lets you specify a state object that the runtime passes over to the work item.
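A minimal sketch of that overload follows. The class and member names are hypothetical, but the two-argument form of QueueUserWorkItem is the one defined on ThreadPool; the second argument becomes the stateObject parameter of the work item:

class StatefulFoo {
    private AutoResetEvent m_Event = new AutoResetEvent(false);

    void MyTask(Object stateObject) {
      String user = (String) stateObject; // the object passed below
      Console.WriteLine("Hello " + user);
      m_Event.Set(); // let the main thread know
    }

    public void DoIt() {
      ThreadPool.QueueUserWorkItem(new WaitCallback(MyTask), "Jay");
      m_Event.WaitOne();
    }
}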

The thread pool class also lets you associate a wait handle with a work item such that the work item is executed if the wait handle is signaled. Further, you can also configure it such that the work item is executed if the wait handle is not signaled within a certain timeout period. All this is made possible through the static method ThreadPool.RegisterWaitForSingleObject. The signature for this method was shown earlier. The delegate type WaitOrTimerCallback that this method uses is defined as follows:

public delegate void WaitOrTimerCallback(Object state,
    bool timedOut);

The first parameter, state, is the state object as set by the caller to RegisterWaitForSingleObject. The second parameter, timedOut, indicates the reason the callback was invoked: a value of false means the wait handle was signaled within the specified timeout interval, whereas true means the interval elapsed without the handle being signaled.

Here is a sample code excerpt that illustrates the use of this method:

// Project Threads/ThreadPool

class Bar {
    void MyTask(Object stateObject, bool timedOut) {
      Console.WriteLine("In: {0}", timedOut);
    }

    public void DoIt() {
      AutoResetEvent e = new AutoResetEvent(false);

      ThreadPool.RegisterWaitForSingleObject(
        e,
        new WaitOrTimerCallback(MyTask),
        null,
        -1,
        true);

      Thread.Sleep(5*1000);
      e.Set();
    }
}

The roles of the first three parameters to RegisterWaitForSingleObject are obvious from the signature of the method and need no further explanation. The fourth parameter, timeOut, defines the time period in milliseconds to wait for the signal. A value of –1 indicates an indefinite wait. The fifth parameter, justOnce, indicates whether to wait just once or reset the wait timer each time the callback is called.

By setting the fourth parameter to a suitable timeout period and the fifth parameter to false, as shown in the following code excerpt, you can create a simple scheduler where a method is automatically called periodically.

ThreadPool.RegisterWaitForSingleObject(
  e,
  new WaitOrTimerCallback(MyTask),
  null,
  10*1000, // wait for 10 seconds
  false);

Thread Pool Internals

There is only one ThreadPool object per application domain. The thread pool is created the first time you call ThreadPool.QueueUserWorkItem, or when a registered wait operation queues a callback method. Once submitted, a work item cannot be canceled.

The initial size of the pool (i.e., the number of worker threads in the pool) is one. As each item is queued, the thread pool checks if any thread in the pool is available for reuse. If not, it spawns a new worker thread and adds it to the pool. Each worker thread runs with the default stack size and priority, and in the multithreaded apartment. When processing the work item or the callback, the worker thread switches to the correct application domain.

The .NET Framework defines a limit on the maximum number of worker threads per process (not per application domain). The limit is determined based on the number of CPUs in the machine. Currently, this limit is defined as 25 per CPU. However, a runtime host is allowed to change the limit to a more suitable value (using the API CorSetMaxThreads).
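You can inspect these limits at runtime by means of the status methods shown earlier. Here is a small sketch; the exact numbers reported depend on the machine and the runtime version:

class PoolStatus {
    public static void Show() {
      int workerThreads, completionPortThreads;

      ThreadPool.GetMaxThreads(out workerThreads,
        out completionPortThreads);
      Console.WriteLine("Max: {0} worker, {1} completion port",
        workerThreads, completionPortThreads);

      ThreadPool.GetAvailableThreads(out workerThreads,
        out completionPortThreads);
      Console.WriteLine("Available: {0} worker, {1} completion port",
        workerThreads, completionPortThreads);
    }
}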

Besides spawning the worker threads, the thread pool may also spawn up to two more threads for internal housekeeping functions.

The restriction on the number of worker threads does not impose a limit on the number of work items that can be added. These work items are limited only by the amount of available memory. If a work item is added, and all the worker threads are busy, then the work item is just queued until a worker thread becomes available.

Finally, it is worth mentioning that there are times when you may want to create your own thread pool mechanism instead of using the system-provided ThreadPool class. Here are some reasons:

  • You want to place a thread into a single-threaded apartment (all ThreadPool threads are placed in the multithreaded apartment).

  • You need to run a task at a particular priority.

  • You need to dedicate a specific thread in the thread pool for certain tasks.
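As a rough illustration of the last two points, here is a minimal sketch of a dedicated worker thread that processes queued work items at a caller-chosen priority. The class and member names are hypothetical, and a production-quality pool would also need shutdown logic and error handling:

// Requires System.Collections and System.Threading
class DedicatedWorker {
    private Queue m_Queue = Queue.Synchronized(new Queue());
    private AutoResetEvent m_ItemQueued = new AutoResetEvent(false);
    private Thread m_Thread;

    public DedicatedWorker(ThreadPriority priority) {
      m_Thread = new Thread(new ThreadStart(this.Run));
      m_Thread.IsBackground = true;
      m_Thread.Priority = priority; // run tasks at a specific priority
      // (the thread's apartment state could also be set here, if needed)
      m_Thread.Start();
    }

    public void QueueWorkItem(WaitCallback cb, Object state) {
      m_Queue.Enqueue(new Object[] { cb, state });
      m_ItemQueued.Set();
    }

    private void Run() {
      while (true) {
        m_ItemQueued.WaitOne();
        // Drain everything queued so far; only this thread dequeues
        while (m_Queue.Count > 0) {
          Object[] item = (Object[]) m_Queue.Dequeue();
          WaitCallback cb = (WaitCallback) item[0];
          cb(item[1]);
        }
      }
    }
}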
