CHAPTER 8


Additional Concurrency Utilities

Chapters 5 through 7 introduced you to the concurrency utilities: executors (along with callables and futures), synchronizers, and the Locking Framework. In this chapter, I complete my coverage of the concurrency utilities by introducing you to concurrent collections, atomic variables, the Fork/Join Framework, and completion services.

Note  Lack of time prevented my also covering completable futures. If you’re interested in this topic, I recommend that you check out Tomasz Nurkiewicz’s excellent blog post titled “Java 8: Definitive guide to CompletableFuture” at http://www.nurkiewicz.com/2013/05/java-8-definitive-guide-to.html.

Concurrent Collections

Java’s Collections Framework provides interfaces and classes that are located in the java.util package. Interfaces include List, Set, and Map; classes include ArrayList, TreeSet, and HashMap.

ArrayList, TreeSet, HashMap, and other classes that implement these interfaces are not thread-safe. However, you can make them thread-safe by using the synchronized wrapper methods located in the java.util.Collections class. For example, you can pass an ArrayList instance to Collections.synchronizedList() to obtain a thread-safe variant of ArrayList.
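
For example, the following minimal sketch (the class name SyncWrapperDemo is mine, not from one of this book’s listings) wraps an ArrayList via Collections.synchronizedList(). Note that, per the wrapper’s Javadoc, you must still synchronize on the returned list while iterating over it:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncWrapperDemo
{
   public static void main(String[] args)
   {
      // Obtain a thread-safe view of an ArrayList.
      List<String> names = Collections.synchronizedList(new ArrayList<>());
      names.add("A");
      names.add("B");
      // Individual calls such as add() are synchronized by the wrapper, but
      // iteration must be guarded manually by locking the wrapper itself.
      synchronized(names)
      {
         for (String name: names)
            System.out.println(name);
      }
   }
}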

Although they’re often needed to simplify code in a multithreaded environment, there are a couple of problems with thread-safe collections:

  • It’s necessary to acquire a lock before iterating over a collection that might be modified by another thread during the iteration. If a lock isn’t acquired and the collection is modified, it’s highly likely that java.util.ConcurrentModificationException will be thrown. This happens because Collections Framework classes return fail-fast iterators, which are iterators that throw ConcurrentModificationException when collections are modified during iteration. Fail-fast iterators are often inconvenient to concurrent applications.
  • Performance suffers when synchronized collections are accessed frequently from multiple threads. This performance problem ultimately impacts an application’s scalability.

The concurrency utilities address these problems by including concurrent collections, which are concurrency performant and highly-scalable collections-oriented types that are stored in the java.util.concurrent package. Its collections-oriented classes return weakly-consistent iterators, which are iterators that have the following properties:

  • An element that’s removed after iteration starts but hasn’t yet been returned via the iterator’s next() method won’t be returned.
  • An element that’s added after iteration starts may or may not be returned.
  • No element is returned more than once during the iteration of a collection, regardless of changes made to the collection during iteration.

The following list offers a short sample of concurrency-oriented collection types that you’ll find in the java.util.concurrent package:

  • BlockingQueue is a subinterface of java.util.Queue that also supports blocking operations that wait for the queue to become nonempty before retrieving an element and wait for space to become available in the queue before storing an element. Each of the ArrayBlockingQueue, DelayQueue, LinkedBlockingQueue, PriorityBlockingQueue, and SynchronousQueue classes implements this interface directly. The LinkedBlockingDeque and LinkedTransferQueue classes implement this interface via BlockingQueue subinterfaces.
  • ConcurrentMap is a subinterface of java.util.Map that declares additional indivisible putIfAbsent(), remove(), and replace() methods. The ConcurrentHashMap class (the concurrent equivalent of java.util.HashMap) implements this interface directly; the ConcurrentSkipListMap class implements it via the ConcurrentNavigableMap subinterface.

Oracle’s Javadoc for BlockingQueue, ArrayBlockingQueue, and other concurrency-oriented collection types identifies these types as part of the Collections Framework.

Using BlockingQueue and ArrayBlockingQueue

BlockingQueue’s Javadoc reveals the heart of a producer-consumer application that’s vastly simpler than the equivalent application shown in Chapter 3 (see Listing 3-1) because it doesn’t have to deal with synchronization. Listing 8-1 uses BlockingQueue and its ArrayBlockingQueue implementation class in a high-level producer-consumer equivalent.

Listing 8-1 uses BlockingQueue’s put() and take() methods, respectively, to put an object on the blocking queue and to remove an object from the blocking queue. put() blocks when there’s no room to put an object; take() blocks when the queue is empty.
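
The following condensed sketch (mine, not necessarily identical to Listing 8-1) conveys the same structure: a producer thread put()s the letters A through Z onto a shared ArrayBlockingQueue while a consumer thread take()s them:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PC
{
   public static void main(String[] args)
   {
      final BlockingQueue<Character> queue = new ArrayBlockingQueue<>(26);
      Runnable producer = () ->
      {
         for (char ch = 'A'; ch <= 'Z'; ch++)
            try
            {
               queue.put(ch);   // blocks while the queue is full
               System.out.println(ch + " produced by producer.");
            }
            catch (InterruptedException ie)
            {
               Thread.currentThread().interrupt();
            }
      };
      Runnable consumer = () ->
      {
         for (int i = 0; i < 26; i++)
            try
            {
               char ch = queue.take();   // blocks while the queue is empty
               System.out.println(ch + " consumed by consumer.");
            }
            catch (InterruptedException ie)
            {
               Thread.currentThread().interrupt();
            }
      };
      new Thread(producer).start();
      new Thread(consumer).start();
   }
}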

Although BlockingQueue ensures that a character is never consumed before it’s produced, this application’s output may indicate otherwise. For example, here’s a portion of the output from one run:

Y consumed by consumer.
Y produced by producer.
Z consumed by consumer.
Z produced by producer.

Chapter 3’s PC application in Listing 3-2 overcame this incorrect output order by introducing an extra layer of synchronization around setSharedChar()/System.out.println() and an extra layer of synchronization around getSharedChar()/System.out.println(). Chapter 7’s PC application in Listing 7-2 overcame this incorrect output order by placing these method calls between lock()/unlock() method calls.

Learning More About ConcurrentHashMap

The ConcurrentHashMap class behaves like HashMap but has been designed to work in multithreaded contexts without the need for explicit synchronization. For example, you often need to check if a map contains a specific value and, when this value is absent, put this value into the map:

if (!map.containsKey("some string-based key"))
   map.put("some string-based key", "some string-based value");

Although this code is simple and appears to do the job, it isn’t thread-safe. Between the call to map.containsKey() and map.put(), another thread could insert this entry, which would then be overwritten. To fix this race condition, you must explicitly synchronize this code, which I demonstrate here:

synchronized(map)
{
   if (!map.containsKey("some string-based key"))
      map.put("some string-based key", "some string-based value");
}

The problem with this approach is that you’ve locked the entire map for read and write operations while checking for key existence and adding the entry to the map when the key doesn’t exist. This locking affects performance when many threads are trying to access the map.

The generic ConcurrentHashMap<K,V> class addresses this problem by providing the V putIfAbsent(K key, V value) method, which introduces a key/value entry into the map when key is absent and returns the previous value associated with key (or null when there was no mapping). This method is equivalent to the following code fragment but offers better performance:

synchronized(map)
{
   if (!map.containsKey(key))
      return map.put(key, value);
   else
      return map.get(key);
}

Using putIfAbsent(), the earlier code fragment translates into the following simpler code fragment:

map.putIfAbsent("some string-based key", "some string-based value");

Note  Java 8 has improved ConcurrentHashMap by adding more than 30 new methods, which largely support lambda expressions and the Streams API via aggregate operations. Methods that perform aggregate operations include forEach() methods (forEach(), forEachKey(), forEachValue(), and forEachEntry()), search methods (search(), searchKeys(), searchValues(), and searchEntries()), and reduction methods (reduce(), reduceToDouble(), reduceToLong(), and so on). Miscellaneous methods (such as mappingCount() and newKeySet()) have been added as well. As a result of the JDK 8 changes, ConcurrentHashMaps (and classes built from them) are now more useful as caches. Cache-improvement changes include methods to compute values for keys, plus improved support for scanning (and possibly evicting) entries, as well as better support for maps with large numbers of elements.
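
As a brief illustration of a few of these additions (this fragment is my own sketch; the keys and values are arbitrary), the following code caches computed values with computeIfAbsent() and then scans and reduces the map:

import java.util.concurrent.ConcurrentHashMap;

public class CHMDemo
{
   public static void main(String[] args)
   {
      ConcurrentHashMap<String, Integer> wordLengths = new ConcurrentHashMap<>();

      // Atomically compute and cache a value when the key is absent.
      wordLengths.computeIfAbsent("concurrency", String::length);
      wordLengths.computeIfAbsent("atomic", String::length);

      // mappingCount() is preferred over size() for maps that may contain
      // more than Integer.MAX_VALUE entries.
      System.out.println(wordLengths.mappingCount());

      // Scan all entries; the first argument is the parallelism threshold.
      wordLengths.forEach(1, (key, value) ->
                             System.out.println(key + " -> " + value));

      // Reduce all values to their sum (null is returned for an empty map).
      System.out.println(wordLengths.reduceValues(1, Integer::sum));
   }
}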

Atomic Variables

The intrinsic locks that are associated with object monitors have historically suffered from poor performance. Although performance has improved, they still present a bottleneck when creating web servers and other applications that require high scalability and performance in the presence of significant thread contention.

A lot of research has gone into creating nonblocking algorithms that can radically improve performance in synchronization contexts. These algorithms offer increased scalability because threads don’t block when multiple threads contend for the same data. Also, threads don’t suffer from deadlock and other liveness problems.

Java 5 provided the ability to create efficient nonblocking algorithms by introducing the java.util.concurrent.atomic package. According to this package’s JDK documentation, java.util.concurrent.atomic provides a small toolkit of classes that support lock-free, thread-safe operations on single variables.

The classes in the java.util.concurrent.atomic package extend the notion of volatile values, fields, and array elements to those that also provide an atomic conditional update so that external synchronization isn’t required. In other words, you get atomic (indivisible) updates along with the memory-visibility semantics associated with volatile variables, without resorting to external synchronization.

Note  The terms atomic and indivisible are widely considered to be equivalent even though we can split the atom.

Some of the classes located in java.util.concurrent.atomic are described here:

  • AtomicBoolean: A boolean value that may be updated atomically.
  • AtomicInteger: An int value that may be updated atomically.
  • AtomicIntegerArray: An int array whose elements may be updated atomically.
  • AtomicLong: A long value that may be updated atomically.
  • AtomicLongArray: A long array whose elements may be updated atomically.
  • AtomicReference: An object reference that may be updated atomically.
  • AtomicReferenceArray: An object reference array whose elements may be updated atomically.

Atomic variables are used to implement counters, sequence generators (such as java.util.concurrent.ThreadLocalRandom), and other constructs that require mutual exclusion without performance problems under high thread contention. For example, consider Listing 8-2’s ID class whose getNextID() class method returns unique long integer identifiers.

Although the code is properly synchronized (and visibility is accounted for), the intrinsic lock associated with synchronized can hurt performance under heavy thread contention. Furthermore, liveness problems such as deadlock can occur. Listing 8-3 shows you how to avoid these problems by replacing synchronized with an atomic variable.

In Listing 8-3, I’ve converted nextID from a long to an AtomicLong instance, initializing this object to 1. I’ve also refactored the getNextID() method to call AtomicLong’s getAndIncrement() method, which increments the AtomicLong instance’s internal long integer variable by 1 and returns the previous value in one indivisible step. There is no explicit synchronization.
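
Neither listing is reproduced here, but the following sketch (mine; the book uses the single class name ID for both versions) contrasts the two approaches:

// In the spirit of Listing 8-2: properly synchronized, but the intrinsic
// lock can become a bottleneck under heavy thread contention.
class SynchronizedID
{
   private static long nextID = 1;

   static synchronized long getNextID()
   {
      return nextID++;
   }
}

// In the spirit of Listing 8-3: the same contract, with no explicit
// synchronization.
class AtomicID
{
   private static final java.util.concurrent.atomic.AtomicLong nextID =
      new java.util.concurrent.atomic.AtomicLong(1);

   static long getNextID()
   {
      return nextID.getAndIncrement();   // one indivisible read-modify-write
   }
}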

Note  The java.util.concurrent.atomic package includes DoubleAccumulator, DoubleAdder, LongAccumulator, and LongAdder classes that address a scalability problem in the context of maintaining a single count, sum, or some other value with the possibility of updates from many threads. These new classes “internally employ contention-reduction techniques that provide huge throughput improvements as compared to atomic variables. This is made possible by relaxing atomicity guarantees in a way that is acceptable in most applications.”

Understanding the Atomic Magic

Java’s low-level synchronization mechanism, which enforces mutual exclusion (the thread holding the lock that guards a set of variables has exclusive access to them) and visibility (changes to the guarded variables become visible to other threads that subsequently acquire the lock), impacts hardware utilization and scalability in the following ways:

  • Contended synchronization (multiple threads constantly competing for a lock) is expensive and throughput suffers as a result. This expense is caused mainly by the frequent context switching (switching the central processing unit from one thread to another) that occurs. Each context switch operation can take many processor cycles to complete. In contrast, modern Java virtual machines (JVMs) make uncontended synchronization inexpensive.
  • When a thread holding a lock is delayed (because of a scheduling delay, for example), no thread that requires that lock makes any progress; the hardware isn’t utilized as well as it might be.

Although you might believe that you can use volatile as a synchronization alternative, this won’t work. Volatile variables only solve the visibility problem. They cannot be used to safely implement the atomic read-modify-write sequences that are necessary for implementing thread-safe counters and other entities that require mutual exclusion. However, there is an alternative that’s responsible for the performance gains offered by the concurrency utilities (such as the java.util.concurrent.Semaphore class). This alternative is known as compare-and-swap.

Compare-and-swap (CAS) is the generic term for an uninterruptible microprocessor-specific instruction that reads a memory location, compares the read value with an expected value, and stores a new value in the memory location when the read value matches the expected value. Otherwise, nothing is done. Modern microprocessors offer variations of CAS. For example, Intel microprocessors provide the cmpxchg family of instructions, whereas the older PowerPC microprocessors provide equivalent load-link (such as lwarx) and store-conditional (such as stwcx) instructions.

CAS supports atomic read-modify-write sequences. You typically use CAS as follows:

  1. Read value x from address A.
  2. Perform a multistep computation on x to derive a new value called y.
  3. Use CAS to change the value of A from x to y. CAS succeeds when A’s value hasn’t changed while performing these steps.

To understand CAS’s benefit, consider Listing 8-2’s ID class, which returns a unique identifier. Because this class declares its getNextID() method synchronized, high contention for the monitor lock results in excessive context switching that can delay all of the threads and result in an application that doesn’t scale well.

Assume the existence of a CAS class that stores an int-based value in value. Furthermore, it offers atomic methods int getValue() for returning value and int compareAndSwap(int expectedValue, int newValue) for implementing CAS. (Behind the scenes, CAS relies on the Java Native Interface [JNI] to access the microprocessor-specific CAS instruction.)

The compareAndSwap() method executes the following instruction sequence atomically:

int readValue = value;            // Obtain the stored value.
if (readValue == expectedValue)   // If stored value not modified ...
   value = newValue;              // ... change to new value.
return readValue;                 // Return value before a potential change.

Listing 8-4 presents a new version of ID that uses the CAS class to obtain a unique identifier in a highly performant manner. (Forget about the performance ramifications of using the JNI and assume that we had direct access to the microprocessor-specific CAS instruction.)

ID encapsulates a CAS instance initialized to int-value 1 and declares a getNextID() method for retrieving the current identifier value and then incrementing this value with help from this instance. After retrieving the instance’s current value, getNextID() repeatedly invokes compareAndSwap() until curValue’s value hasn’t changed (by another thread). This method is then free to change this value, after which it returns the previous value. When no lock is involved, contention is avoided along with excessive context switching. Performance improves and the code is more scalable.
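
The following sketch (mine, not necessarily identical to Listing 8-4) fleshes out both the hypothetical CAS class and the CAS-based ID class; for simplicity it uses synchronized inside CAS as a stand-in for the microprocessor-specific CAS instruction:

// A sketch of the hypothetical CAS class; in real code, compareAndSwap()'s
// atomicity would come from the processor's CAS instruction, not from
// synchronized.
class CAS
{
   private int value;

   CAS(int initialValue)
   {
      value = initialValue;
   }

   synchronized int getValue()
   {
      return value;
   }

   synchronized int compareAndSwap(int expectedValue, int newValue)
   {
      int readValue = value;            // Obtain the stored value.
      if (readValue == expectedValue)   // If stored value not modified ...
         value = newValue;              // ... change to new value.
      return readValue;                 // Return value before a potential change.
   }
}

// A sketch in the spirit of Listing 8-4: retry the CAS until no other thread
// has modified the value between the read and the swap.
class ID
{
   private static final CAS value = new CAS(1);

   static int getNextID()
   {
      int curValue = value.getValue();
      // compareAndSwap() returns the value it read; when that differs from
      // curValue, another thread intervened, so re-read and try again.
      while (value.compareAndSwap(curValue, curValue + 1) != curValue)
         curValue = value.getValue();
      return curValue;   // the value before the successful increment
   }
}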

As an example of how CAS improves the concurrency utilities, consider java.util.concurrent.locks.ReentrantLock. This class offers better performance than synchronized under high thread contention. To boost performance, ReentrantLock’s synchronization is managed by a subclass of the abstract java.util.concurrent.locks.AbstractQueuedSynchronizer class. In turn, this class leverages the undocumented sun.misc.Unsafe class and its compareAndSwapInt() CAS method.

The atomic variable classes also leverage CAS. Furthermore, they provide a method that has the following form:

boolean compareAndSet(expectedValue, updateValue)

This method (which varies in argument types across different classes) atomically sets a variable to the updateValue when it currently holds the expectedValue, reporting true on success.
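
For example, the following fragment (a sketch of mine) uses AtomicInteger’s compareAndSet() method in a retry loop to atomically double a counter, re-reading the current value whenever another thread gets in first:

import java.util.concurrent.atomic.AtomicInteger;

public class CompareAndSetDemo
{
   public static void main(String[] args)
   {
      AtomicInteger counter = new AtomicInteger(21);
      int oldValue, newValue;
      do
      {
         oldValue = counter.get();
         newValue = oldValue * 2;                        // the multistep computation
      }
      while (!counter.compareAndSet(oldValue, newValue)); // retry on interference
      System.out.println(counter.get());                  // 42
   }
}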

Fork/Join Framework

There is always a need for code to execute faster. Historically, this need was met by increasing microprocessor speeds and/or by supporting multiple processors. However, somewhere around 2003, microprocessor speeds stopped increasing because of natural limits. To compensate, processor manufacturers started to add multiple processing cores to their processors, to increase speed through massive parallelism.

Note  Parallelism refers to running threads simultaneously through some combination of multiple processors and cores. In contrast, concurrency is a more generalized form of parallelism in which threads run simultaneously or appear to run simultaneously through context switching, also known as virtual parallelism. Some people further characterize concurrency as a property of a program or operating system and parallelism as the runtime behavior of executing multiple threads simultaneously.

Java supports concurrency via its low-level threading features and higher-level concurrency utilities such as thread pools. The problem with concurrency is that it doesn’t maximize the use of available processor/core resources. For example, suppose you’ve created a sorting algorithm that divides an array into two halves, assigns two threads to sort each half, and merges the results after both threads finish.

Let’s assume that each thread runs on a different processor. Because different amounts of element reordering may occur in each half of the array, it’s possible that one thread will finish before the other thread and must wait before the merge can happen. In this case, a processor resource is wasted.

This problem (and the related problems of the code being verbose and harder to read) can be solved by recursively breaking a task into subtasks and combining results. These subtasks run in parallel and complete at approximately the same time (if not at the same moment), at which point their results are merged and passed up the stack to the previous layer of subtasks. Hardly any processor time is wasted through waiting, and the recursive code is less verbose and (usually) easier to understand. Java provides the Fork/Join Framework to implement this scenario.

Fork/Join consists of a special executor service and thread pool. The executor service makes a task available to the framework, and this task is broken into smaller tasks that are forked (executed by different threads) from the pool. A task waits until joined (its subtasks finish).

Fork/Join uses work stealing to minimize thread contention and overhead. Each worker thread in a pool of worker threads has its own double-ended work queue (deque) and pushes new subtasks onto this queue. It takes tasks from the head of its own queue. If that queue is empty, the worker thread tries to steal a task from the tail of another worker’s queue. Stealing is infrequent because worker threads put tasks into their queues in a last-in, first-out (LIFO) order, and the size of work items gets smaller as a problem is divided into subproblems. You start by giving a task to a central worker, which keeps dividing it into smaller tasks. Eventually all of the workers have something to do with minimal synchronization.

Fork/Join largely consists of the java.util.concurrent package’s ForkJoinPool, ForkJoinTask, ForkJoinWorkerThread, RecursiveAction, RecursiveTask, and CountedCompleter classes:

  • ForkJoinPool is a java.util.concurrent.ExecutorService implementation for running ForkJoinTasks. A ForkJoinPool instance provides the entry point for submissions from non-ForkJoinTask clients, as well as providing management and monitoring operations.
  • ForkJoinTask is the abstract base class for tasks that run in a ForkJoinPool context. A ForkJoinTask instance is a thread-like entity that is much lighter weight than a normal thread. Huge numbers of tasks and subtasks may be hosted by a small number of actual threads in a ForkJoinPool, at the price of some usage limitations.
  • ForkJoinWorkerThread describes a thread managed by a ForkJoinPool instance, which executes ForkJoinTasks.
  • RecursiveAction describes a recursive result-less ForkJoinTask.
  • RecursiveTask describes a recursive result-bearing ForkJoinTask.
  • CountedCompleter describes a ForkJoinTask with a completion action (code that completes a fork/join task) performed when triggered and there are no remaining pending actions.

The Java documentation provides examples of RecursiveAction-based tasks (such as sorting) and RecursiveTask-based tasks (such as computing Fibonacci numbers). You can also use RecursiveAction to accomplish matrix multiplication (see http://en.wikipedia.org/wiki/Matrix_multiplication). For example, suppose that you’ve created Listing 8-5’s Matrix class to represent a matrix consisting of a specific number of rows and columns.
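
Listing 8-5 isn’t shown here; a minimal Matrix class along the lines the text describes might look like the following sketch (the accessor names getRows(), getCols(), getValue(), and setValue() are my guesses, not necessarily the book’s):

public class Matrix
{
   private final int[][] matrix;   // element storage: rows by columns

   public Matrix(int nrows, int ncols)
   {
      matrix = new int[nrows][ncols];
   }

   public int getRows()
   {
      return matrix.length;
   }

   public int getCols()
   {
      return matrix[0].length;
   }

   public int getValue(int row, int col)
   {
      return matrix[row][col];
   }

   public void setValue(int row, int col, int value)
   {
      matrix[row][col] = value;
   }
}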

Listing 8-6 demonstrates the single-threaded approach to multiplying two Matrix instances.

Listing 8-6’s MatMult class declares a multiply() method that demonstrates matrix multiplication. After verifying that the number of columns in the first Matrix (a) equals the number of rows in the second Matrix (b), which is essential to the algorithm, multiply() creates a result Matrix and enters a sequence of nested loops to perform the multiplication.

The essence of these loops is as follows: For each row in a, multiply each of that row’s column values by the corresponding column’s row values in b. Add the results of the multiplications and store the overall total in result at the location specified via the row index (i) in a and the column index (j) in b.
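
Expressed in code, the nested loops might look like the following sketch (which assumes the hypothetical Matrix accessors from the previous sketch rather than Listing 8-5’s actual API):

public static Matrix multiply(Matrix a, Matrix b)
{
   if (a.getCols() != b.getRows())
      throw new IllegalArgumentException("a's column count must equal b's row count");
   Matrix result = new Matrix(a.getRows(), b.getCols());
   for (int i = 0; i < a.getRows(); i++)        // for each row in a ...
      for (int j = 0; j < b.getCols(); j++)     // ... and each column in b ...
      {
         int sum = 0;
         for (int k = 0; k < a.getCols(); k++)  // ... sum the pairwise products
            sum += a.getValue(i, k) * b.getValue(k, j);
         result.setValue(i, j, sum);            // store the total at (i, j)
      }
   return result;
}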

Compile Listing 8-6 and Listing 8-5, which must be in the same directory, as follows:

javac MatMult.java

Run the resulting application as follows:

java MatMult

You should observe the following output, which indicates that a 1-row-by-3-column matrix multiplied by a 3-row-by-2-column matrix results in a 1-row-by-2-column matrix:

1 2 3

4 7
5 8
6 9

32 50

Computer scientists classify this algorithm as O(n*n*n), which is read “big-oh of n-cubed” or “approximately n-cubed.” This notation is an abstract way of classifying the algorithm’s performance (without being bogged down in specific details such as microprocessor speed). An O(n*n*n) classification indicates very poor performance, and this performance worsens as the sizes of the matrixes being multiplied increase.

The performance can be improved (on multiprocessor and/or multicore platforms) by assigning each row-by-column multiplication task to a separate thread-like entity. Listing 8-7 shows you how to accomplish this scenario in the context of the Fork/Join Framework.

Listing 8-7 presents a MatMult class that extends RecursiveAction. To accomplish meaningful work, RecursiveAction’s void compute() method is overridden.

Note  Although compute() is normally used to subdivide a task into subtasks recursively, I’ve chosen to handle the multiplication task somewhat differently (for brevity and simplicity).

After creating Matrixes a and b, Listing 8-7’s main() method creates Matrix c and instantiates ForkJoinPool. It then instantiates MatMult, passing these three Matrix instances as arguments to the MatMult(Matrix a, Matrix b, Matrix c) constructor, and calls ForkJoinPool’s T invoke(ForkJoinTask<T> task) method to start running this initial task. This method doesn’t return until the initial task and all of its subtasks complete.

The MatMult(Matrix a, Matrix b, Matrix c) constructor invokes the MatMult(Matrix a, Matrix b, Matrix c, int row) constructor, specifying -1 as row’s value. This value is used by compute(), which is invoked as a result of the aforementioned invoke() method call, to distinguish between the initial task and subtasks.

When compute() is initially called (row equals -1), it creates a List of MatMult tasks and passes this List to the Collection<T> invokeAll(Collection<T> tasks) method that RecursiveAction inherits from ForkJoinTask. This method forks all of the List collection’s tasks, which start to execute, and doesn’t return until every task has completed; that is, it joins to all of these tasks, returning when the boolean isDone() method (also inherited from ForkJoinTask) returns true for each task.

Notice the tasks.add(new MatMult(a, b, c, row)); method call. This call assigns a specific row value to a MatMult instance. When invokeAll() is called, each task’s compute() method is called and detects a different value (other than -1) assigned to row. It then executes multiplyRowByColumn(a, b, c, row); for its specific row.
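
The following sketch (my own condensation of what the text describes, reusing the hypothetical Matrix accessors from the earlier sketches) shows the overall shape of this RecursiveAction subclass:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class MatMult extends RecursiveAction
{
   private final Matrix a, b, c;
   private final int row;

   MatMult(Matrix a, Matrix b, Matrix c)
   {
      this(a, b, c, -1);   // -1 identifies the initial (parent) task
   }

   MatMult(Matrix a, Matrix b, Matrix c, int row)
   {
      this.a = a;
      this.b = b;
      this.c = c;
      this.row = row;
   }

   @Override
   protected void compute()
   {
      if (row == -1)
      {
         // Initial task: create one subtask per row of a, then fork them all
         // and wait for them to complete.
         List<MatMult> tasks = new ArrayList<>();
         for (int row = 0; row < a.getRows(); row++)
            tasks.add(new MatMult(a, b, c, row));
         invokeAll(tasks);
      }
      else
         multiplyRowByColumn(a, b, c, row);   // subtask: handle a single row
   }

   static void multiplyRowByColumn(Matrix a, Matrix b, Matrix c, int row)
   {
      for (int j = 0; j < b.getCols(); j++)
      {
         int sum = 0;
         for (int k = 0; k < a.getCols(); k++)
            sum += a.getValue(row, k) * b.getValue(k, j);
         c.setValue(row, j, sum);
      }
   }

   public static void main(String[] args)
   {
      Matrix a = new Matrix(2, 3);
      Matrix b = new Matrix(3, 2);
      // ... populate a and b (omitted) ...
      Matrix c = new Matrix(2, 2);
      ForkJoinPool pool = new ForkJoinPool();
      pool.invoke(new MatMult(a, b, c));   // returns when all subtasks finish
      // c now holds the product of a and b.
   }
}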

Compile Listing 8-7 (javac MatMult.java) and run the resulting application (java MatMult). You should observe the following output:

1 2 3
4 5 6

7 1
8 2
9 3

50 14
122 32

Completion Services

A completion service is an implementation of the java.util.concurrent.CompletionService<V> interface that decouples the production of new asynchronous tasks (a producer) from the consumption of the results of completed tasks (a consumer). V is the type of a task result.

A producer submits a task for execution (via a worker thread) by calling one of the submit() methods: one method accepts a callable argument and the other method accepts a runnable argument along with a result to return upon task completion. Each method returns a Future<V> instance that represents the pending completion of the task. You can then call a poll() method to poll for the task’s completion or call the blocking take() method.

A consumer takes a completed task by calling the take() method. This method blocks until a task has completed. It then returns a Future<V> object that represents the completed task. You would call Future<V>’s get() method to obtain this result.

Along with CompletionService<V>, Java 5 introduced the java.util.concurrent.ExecutorCompletionService<V> class to support task execution via a provided executor. This class ensures that, when submitted tasks are complete, they are placed on a queue that’s accessible to take().

To demonstrate CompletionService and ExecutorCompletionService, I’m revisiting the application for calculating Euler’s number that I first presented in Chapter 5. Listing 8-8 presents the source code to a new application that submits two callable tasks to calculate this number to different accuracies.

Listing 8-8 presents two classes: CSDemo and CalculateE. CSDemo drives the application and CalculateE describes the Euler’s number calculation task.

CSDemo’s main() method first creates an executor service that will execute a task. It then creates a completion service for completing the task. Two calculation tasks are subsequently submitted to the completion service, which runs each task asynchronously. For each task, the completion service’s take() method is called to return the task’s future, whose get() method is called to obtain the task result, which is then output.

CalculateE contains code that’s nearly identical to what was presented in Chapter 5 (see Listing 5-1). The only difference is the change from a LASTITER constant to a lastIter variable that records the last iteration to execute (and determines the number of digits of precision).
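
Listing 8-8 isn’t shown here; the following sketch (mine, with a simplified CalculateE whose precision handling almost certainly differs from the book’s) demonstrates the submit-then-take pattern that CSDemo follows:

import java.math.BigDecimal;
import java.math.MathContext;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CSDemo
{
   public static void main(String[] args) throws Exception
   {
      ExecutorService es = Executors.newFixedThreadPool(2);
      CompletionService<BigDecimal> cs = new ExecutorCompletionService<>(es);

      // Submit two tasks that calculate e to different accuracies.
      cs.submit(new CalculateE(17));
      cs.submit(new CalculateE(100));

      // Consume the results in completion order.
      for (int i = 0; i < 2; i++)
      {
         Future<BigDecimal> result = cs.take();   // blocks until a task completes
         System.out.println(result.get());
         System.out.println();
      }
      es.shutdown();
   }
}

class CalculateE implements Callable<BigDecimal>
{
   private final int lastIter;   // last iteration to execute (controls accuracy)

   CalculateE(int lastIter)
   {
      this.lastIter = lastIter;
   }

   @Override
   public BigDecimal call()
   {
      // e = 1/0! + 1/1! + 1/2! + ... + 1/lastIter!
      MathContext mc = new MathContext(120);
      BigDecimal e = BigDecimal.ZERO;
      BigDecimal factorial = BigDecimal.ONE;
      for (int n = 0; n <= lastIter; n++)
      {
         if (n > 0)
            factorial = factorial.multiply(BigDecimal.valueOf(n));
         e = e.add(BigDecimal.ONE.divide(factorial, mc));
      }
      return e;
   }
}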

Compile Listing 8-8 as follows:

javac CSDemo.java

Run the resulting application as follows:

java CSDemo

You should observe the following output:

2.718281828459045070516047795848605061178979635251032698900735004065225042504843314055887974344245741730039454062711

2.71828182845904523536028747135266249775724709369995957496696762772407663035
3547594571382178525166427463896116281654124813048729865380308305425562838245
9134600326751445819115604942105262868564884769196304284703491677706848122126
6648385500451288419298517722688532167535748956289403478802971332967547449493
7583500554228384631452841986384050112497204406928225548432766806207414980593
2978161481951711991448146506

Note  If you’re wondering about the difference between an executor service and a completion service, consider that, with an executor service, after writing the code to submit the tasks, you need to write code to efficiently retrieve task results. With a completion service, this job is pretty much automated. Another way to look at these constructs is that an executor service provides an incoming queue for tasks and provides worker threads, whereas a completion service provides an incoming queue for tasks, worker threads, and an output queue for storing task results.

EXERCISES

The following exercises are designed to test your understanding of Chapter 8’s content:

  1. Identify the two problems with thread-safe collections.
  2. Define concurrent collection.
  3. What is a weakly-consistent iterator?
  4. Describe the BlockingQueue interface.
  5. Describe the ConcurrentMap interface.
  6. Describe the ArrayBlockingQueue and LinkedBlockingQueue BlockingQueue-implementation classes.
  7. True or false: The concurrency-oriented collection types are part of the Collections Framework.
  8. Describe the ConcurrentHashMap class.
  9. Using ConcurrentHashMap, how would you check if a map contains a specific value and, when this value is absent, put this value into the map without relying on external synchronization?
  10. Define atomic variable.
  11. What does the AtomicIntegerArray class describe?
  12. True or false: volatile supports atomic read-modify-write sequences.
  13. What’s responsible for the performance gains offered by the concurrency utilities?
  14. Describe the Fork/Join Framework.
  15. Identify the main types that comprise the Fork/Join Framework.
  16. To accomplish meaningful work via RecursiveAction, which one of its methods would you override?
  17. Define completion service.
  18. How do you use a completion service?
  19. How do you execute tasks via a completion service?
  20. Convert the following expressions to their atomic variable equivalents:
    int total = ++counter;
    int total = counter--;

Summary

This chapter completed my tour of the concurrency utilities by introducing concurrent collections, atomic variables, the Fork/Join Framework, and completion services.

A concurrent collection is a concurrency performant and highly-scalable collections-oriented type that is stored in the java.util.concurrent package. It overcomes the ConcurrentModificationException and performance problems of thread-safe collections.

An atomic variable is an instance of a class that encapsulates a single variable and supports lock-free, thread-safe operations on that variable, for example, AtomicInteger.

The Fork/Join Framework consists of a special executor service and thread pool. The executor service makes a task available to the framework, and this task is broken down into smaller tasks that are forked (executed by different threads) from the pool. A task waits until it’s joined (its subtasks finish).

A completion service is an implementation of the CompletionService<V> interface that decouples the production of new asynchronous tasks (a producer) from the consumption of the results of completed tasks (a consumer). V is the type of a task result.

Appendix A presents the answers to each chapter’s exercises.
