
A First Look at JSR 166: Concurrency Utilities

March 1, 2004

Contents
Ambitious Goals
Innovative Process
Don't Reinvent that Wheel
JVM Improvements
New Classes and Methods
Concurrent Versus Synchronized
Look Ma, No Synchronization
Go Check It Out!

JSR 166, charged with creating a set of standardized concurrency utilities for JDK 1.5, has recently emerged from public review. Over the course of the last two years, the JSR 166 Expert Group has worked on reformulating Doug Lea's popular util.concurrent library and packaging it as a standard part of the 1.5 Java Class Library, to become the java.util.concurrent package. The result is a scalable, robust, high-performance set of standardized concurrency utilities, intended to greatly simplify the development of concurrent applications.

Ambitious Goals

While Java has included facilities for concurrency and multithreading from day one, writing concurrent applications in Java has not always been as easy as one might like. This is because Java provides only low-level facilities for concurrency -- synchronized, volatile, wait(), notify(), and notifyAll() -- while developers generally need higher-level constructs, such as concurrent collections, thread pools, semaphores, and explicit lock and condition objects. While the Java language includes all of the tools necessary to build these, it's not as easy as it looks -- even experts sometimes get it wrong -- and it's very difficult to test that you've done it right without having the code audited by concurrency experts.
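
To see why, consider what even a simple hand-rolled component looks like when built directly on those primitives. The following sketch (the class and field names are purely illustrative) is a bounded buffer written with nothing but synchronized, wait(), and notifyAll() -- note how easy it would be to get the while-loop guard or the notification wrong:

import java.util.LinkedList;

// Illustrative only: a bounded buffer hand-built from the low-level primitives
class HandRolledBoundedBuffer<E> {
    private final LinkedList<E> items = new LinkedList<E>();
    private final int capacity;

    HandRolledBoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(E e) throws InterruptedException {
        while (items.size() == capacity)   // must be a loop, not an if
            wait();
        items.addLast(e);
        notifyAll();                       // wake any waiting takers
    }

    public synchronized E take() throws InterruptedException {
        while (items.isEmpty())
            wait();
        E e = items.removeFirst();
        notifyAll();                       // wake any waiting putters
        return e;
    }
}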

The goal of JSR 166 is, immodestly, to do for concurrency what the Java Collections Framework did for data structures. Given the inherent complexity of writing concurrent classes, that's an ambitious undertaking! But the JSR 166 group had a lot to work with. Because it started with the util.concurrent library, which has been used extensively by thousands of developers, and refactored it based on four years of real-world feedback from that user base, the resulting proposal reflects the actual needs and experiences of developers writing concurrent applications, rather than being extrapolated from theory and a small set of examples.

Innovative Process

JSR 166 also took an innovative approach to involving the public in the process. As with all JSRs, the expert group had a private mailing list for discussion and decision-making, but there was also a public mailing list, concurrency-interest, to which frequent API snapshots were posted and from which opinions were solicited. This allowed knowledgeable outsiders to comment and make suggestions -- many of which were very useful -- without bogging down the process, and it allowed the expert group to consider public input far earlier, before decisions were set in stone. This combination of public involvement and private decision-making almost certainly improved the result over the more typical private-only approach.

In the JCP case study entitled "A Transparent Expert Group," this open model combining private discussion with public review was deemed such a success that the next version of the Java Community Process, 2.6, will require a public Early Draft Review (concurrent with Community Review), during which both the public and the community can comment. All JSRs currently under JCP 2.5 are expected to move to 2.6 when it becomes final early this year.

Don't Reinvent that Wheel

Using the concurrency utilities will, in most cases, make your programs clearer, shorter, faster, easier to write, easier to read, more reliable, and more scalable. This is possible because most concurrent applications use the same common building blocks -- concurrent collections, thread pools, semaphores, and the like. These classes therefore present a prime opportunity for standardization, just as the basic building blocks of the Collections framework (Map, Set, and List) did. In fact, the payoff for standardizing implementations of the concurrency utilities may be even higher, because building them yourself on top of Java's low-level concurrency primitives is difficult, error-prone, and apt to produce inefficient implementations. The implementations in java.util.concurrent were written and peer-reviewed by concurrency and performance experts. They are based on the battle-tested, industrial-strength components of util.concurrent, with many improvements and refactorings, and have been updated to use Java generics where appropriate.
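
For comparison with the hand-rolled buffer shown earlier, here is a rough sketch of the same producer-consumer plumbing using one of the standardized building blocks, ArrayBlockingQueue (the class name, capacity of 10, and strings are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ReuseTheWheel {
    public static void main(String[] args) throws InterruptedException {
        // A bounded, thread-safe FIFO queue -- no hand-written wait/notify needed
        BlockingQueue<String> buffer = new ArrayBlockingQueue<String>(10);

        buffer.put("task-1");          // blocks if the queue is full
        String next = buffer.take();   // blocks if the queue is empty
        System.out.println(next);
    }
}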

JVM Improvements

In addition to the classes in java.util.concurrent, JSR 166 includes a number of improvements to the JVM itself, including nanosecond-precision timing, and exposure of a low-level compare-and-swap operation to facilitate implementation of more efficient and scalable concurrent classes.

The System class now includes a nanoTime() method. Unlike System.currentTimeMillis(), which returns the time elapsed since a fixed epoch, System.nanoTime() is useful only for measuring relative (elapsed) time. Not all operating systems can provide the JVM with a nanosecond-precision clock, but where one is available, the JVM can now offer applications much finer timer granularity than was previously available. All java.util.concurrent classes that take timeout parameters can specify timeouts in seconds, milliseconds, microseconds, or nanoseconds, through the use of the new TimeUnit enum.
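
As a rough sketch of how the two fit together (the queue and the 50-millisecond timeout are illustrative), an application can time an operation with nanoTime() and express a timeout with TimeUnit:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class NanoTiming {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<String>(1);

        long start = System.nanoTime();
        // Timed operations take a value plus a TimeUnit; poll() returns null on timeout
        String result = q.poll(50, TimeUnit.MILLISECONDS);
        long elapsedNanos = System.nanoTime() - start;

        System.out.println("waited roughly "
                + TimeUnit.NANOSECONDS.toMillis(elapsedNanos) + " ms, got " + result);
    }
}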

The low-level compare-and-swap facility is not intended to be used directly by most developers, but it enabled the java.util.concurrent classes to be developed using lock-free algorithms, a class of algorithms that has much more attractive scalability characteristics than those based on using locking (e.g., via synchronized) to protect shared data. Modern processors designed for multiprocessor environments provide hardware primitives to support synchronization (often called "read-modify-write operations"), such as compare-and-swap (CAS), or load-linked/store-conditional (LL/SC). (CAS is supported by Sparc and Intel processors, and LL/SC by PowerPC.) Prior to JDK 1.5, these low-level facilities were not available to Java classes. By exposing a compare-and-swap operation that the JVM will implement according to what concurrency features the hardware provides (CAS, LL/SC, or in the worst case, locking), it becomes practical to write efficient lock-free algorithms in Java, opening the door to higher-performance, more scalable implementations for a number of the classes in java.util.concurrent.
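
To give a feel for the pattern (this is a sketch, not how the library classes are actually written), here is the classic CAS retry loop expressed with AtomicInteger: read the current value, compute a new one, and try to swap it in, retrying if another thread won the race.

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next))   // the CAS step
                return next;
            // CAS failed: another thread updated the value first; retry
        }
    }
}

In practice you would simply call AtomicInteger's incrementAndGet() method, which performs the same loop internally.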

New Classes and Methods

The java.util.concurrent package contains a host of new goodies, in a number of categories, most of which have some analogue in util.concurrent. All have been designed for scalability and good concurrent performance:

  • Concurrent queue classes: ArrayBlockingQueue (a fixed-size, bounded FIFO queue), PriorityBlockingQueue (a priority queue), and ConcurrentLinkedQueue (an unbounded non-blocking FIFO queue)

  • Concurrent replacements for existing synchronized collections (Hashtable and Vector): ConcurrentHashMap (scalable replacement for Hashtable) and CopyOnWriteArrayList (List implementation optimized for the case where iterations greatly outnumber insertions or removals)

  • A flexible task dispatching framework based on util.concurrent.Executor, including a flexible thread pool and a scheduling service (see the sketch after this list)

  • A new, high-performance Lock class, which supports timed lock waits, interruptible lock attempts, lock polling, and multiple wait sets via the Condition class

  • Atomic variables (AtomicInteger, AtomicLong, AtomicReference): Higher-performance analogues of util.concurrent.SynchronizedInt and friends

  • Several general-purpose synchronization utilities, such as Semaphore (Dijkstra counting semaphore), CountDownLatch (allows one thread to wait for a set of operations in other threads to complete), CyclicBarrier (allows multiple threads to wait until they all reach a common barrier point), and Exchanger (allows two threads to rendezvous and exchange information)

  • Nanosecond-granularity time support: System.nanoTime()

  • Uncaught exception handlers: The Thread class now supports setting an uncaught exception handler (which previously was only available through ThreadGroup)
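
To illustrate the task dispatching framework mentioned in the list above, here is a minimal sketch (the pool size and task count are arbitrary) that submits work to a fixed-size thread pool obtained from the Executors factory class:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolExample {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);   // four worker threads

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("task " + taskId + " on "
                            + Thread.currentThread().getName());
                }
            });
        }

        pool.shutdown();   // accept no new tasks; let the queued ones finish
    }
}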

Concurrent Versus Synchronized

The Hashtable class is thread-safe, whereas the HashMap class is not. So why was another thread-safe map -- ConcurrentHashMap -- needed, when we already have Hashtable (and Collections.synchronizedMap())? While both Hashtable and ConcurrentHashMap are thread-safe implementations of Map, Hashtable and synchronizedMap get their thread safety from synchronizing every method. (Note that simply synchronizing every method does not, in general, render a class thread-safe.) The result of synchronizing every method is that no operations on a Hashtable in different threads can overlap each other -- access to the Hashtable is effectively serialized. Such classes can become a scalability bottleneck, because one thread performing a long-running operation on a Hashtable can stall many other threads until that operation is finished.

By contrast, classes such as ConcurrentHashMap are designed not only for thread safety, but for highly concurrent access. This means that multiple operations can overlap each other without waiting for a lock. In the case of ConcurrentHashMap, an unbounded number of read operations can overlap each other, reads can overlap writes, and up to 16 write operations can overlap each other. In most cases, read operations (Map.get) can proceed with no locking at all! (This is the result of some extremely careful coding, deep understanding of the Java Memory Model, and extensive peer review -- don't try this at home.) The naming convention ConcurrentXxx indicates a class that has been designed not only for thread safety, but for high performance and scalability under concurrent access.
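
By way of example (the keys and values here are illustrative), ConcurrentHashMap also implements the new ConcurrentMap interface, whose atomic putIfAbsent() method replaces the check-then-act idiom that would require locking the entire table with Hashtable:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapExample {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<String, Integer>();

        // Atomic "insert if no mapping exists" -- no client-side locking required
        hits.putIfAbsent("home", 0);

        // Reads do not block other readers, and usually need no locking at all
        Integer count = hits.get("home");
        System.out.println(count);
    }
}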

Look Ma, No Synchronization

With a small number of exceptions, there are no synchronized blocks or methods anywhere in the java.util.concurrent package. How can this be? Instead of synchronization, these classes use the new Lock classes (sometimes called try-locks), which have semantics similar to synchronized but offer higher performance and additional features, such as the ability to interrupt a thread waiting for a lock, to wait for a lock for a specified length of time, to poll for a lock's availability, and to structure lock acquisition and release in a non-lexically-scoped manner. The Lock classes are in turn implemented using the low-level compare-and-swap functionality exposed by the JVM in JDK 1.5. While the new locks are more powerful, they are slightly more complicated to use, requiring a finally block to release the lock:

Lock l = ...;
l.lock();
try {
    // access the resource protected by this lock
} finally {
    l.unlock();   // always release the lock, even if an exception was thrown
}
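
The additional acquisition modes mentioned above look similar. The following sketch (the one-second timeout and method name are illustrative) uses ReentrantLock, the standard Lock implementation, to wait for a lock for a bounded time rather than blocking indefinitely:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLocking {
    private final Lock lock = new ReentrantLock();

    public boolean updateIfAvailable() throws InterruptedException {
        // Wait up to one second for the lock; give up (or respond to interruption)
        // instead of blocking forever
        if (!lock.tryLock(1, TimeUnit.SECONDS))
            return false;                 // could not acquire the lock in time
        try {
            // access the resource protected by this lock
            return true;
        } finally {
            lock.unlock();                // always release, even on exception
        }
    }
}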

Go Check It Out!

The fruits of JSR 166 are built into the JDK 1.5 beta that is already available. You might also want to browse the concurrency-interest web site, which includes links to the concurrency-interest mailing list as well as other information on JSR 166 and util.concurrent. Go check it out!

Brian Goetz has been a professional software developer for the past 17 years.