At some point you will learn about concurrent programming. The Java Memory Model is the essential concept you need to understand. But it's not easy to grasp as a beginner, so in this post I try to explain some of its most important aspects.
What this Blog Entry is all about
You will not learn programming from my blog or any other blog. You need a text book for that. So I write this to give you some more information about concurrent programming in Java and the Java Memory Model (JMM). This is just an addition to your text book. Instead of a book you can use the lesson on concurrency in the official tutorial:
The Java™ Tutorials : Lesson: Concurrency
It’s not about the Class
Many books and tutorials start the chapter on concurrent programming by explaining Threads. They explain how to create and start them. But that's just basic OOP. If you have difficulties creating instances of Thread or any other class then you should go back and learn that first. The API is rather easy, and start() is just a regular method.
The class Thread only needs to be explained so that you know which class actually represents a thread. But if you ever design concurrent systems you will rarely use it directly. Instead you will use java.util.concurrent.ExecutorService or some framework.
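A minimal sketch of that style, with illustrative names: the task is submitted to a thread pool, and the pool owns and manages the threads.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        // The executor creates and reuses its threads; we never call start() ourselves.
        ExecutorService executor = Executors.newFixedThreadPool(2);
        Future<Integer> result = executor.submit(() -> 21 + 21);
        System.out.println(result.get()); // blocks until the task has finished
        executor.shutdown();
    }
}
```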
It’s all about Atomicity, Visibility, and Ordering
When you have more than one thread (the JVM stops if there are none) you need to make sure the visibility of your data is always consistent. If two variables are somehow connected then you need to make sure each thread sees all changes to both of them. Your text book should explain how to use synchronized. It explains that you need to provide some object so the JVM can use its monitor (all objects have a monitor, sometimes called a mutex). But it might not explain the exact semantics of it. The book should also mention the keyword volatile.
Before Java 5 those were the only two keywords for concurrent programming. Since the JMM of Java 5 they are very similar: operations on them are atomic and visibility is guaranteed. But what does that actually mean?
An operation is atomic when it appears to the rest of the system to occur instantaneously. Nothing can interrupt it. It is indivisible.
A synchronized block is atomic because no other thread can execute a block that is synchronized on the same object.
A volatile variable is atomic because all other threads will always see the complete value/object. That even holds for long (64 bit), which on many systems really is just two int variables (32 bit each). And references are only written after the object was fully constructed. However, compound operations such as increment and decrement are not atomic.
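A small sketch (class name illustrative) of why increments even on a volatile field lose updates: two threads race on the same read-modify-write.

```java
public class LostUpdates {
    static volatile int counter = 0; // volatile guarantees visibility, not atomicity

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter++; // read, modify, write: three steps that can interleave
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // Usually prints less than 20000 because some increments were lost.
        System.out.println(counter);
    }
}
```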
Sometimes the effects of an operation of some thread must be seen by another thread. In other situations they are irrelevant for other threads.
If thread A executes some operation and then thread B wants to use the result, it is important that thread B sees all values that thread A had read or written.
Inside a synchronized block the thread must see the same values as the other thread had seen when it finished executing a synchronized block using the same monitor.
When thread B reads the value of a volatile field it must see the same state that thread A saw when it updated the value of that field.
A compiler will often change the order of operations for optimization. The results are the same, but the code can be executed faster. In a concurrent system the ordering of operations is very important and some optimizations need to be prevented. This is explained in more detail below (happens before).
How it is done
The trick is actually very easy. The JMM defines some rules that make sure the JVM (if it correctly implements the JMM) will execute all operations in a predictable manner. This allows you to write concurrent systems with these basic language features. And it also allows you to use high level abstractions to easily define concurrent systems without complex and hard to maintain code.
Your text book should introduce you to some of the high level types inside java.util.concurrent and its subpackages.
Java code is executed by a JVM. But underneath there is some actual hardware, and it has its own memory model. The JMM is a simple abstraction over all those hardware memory models. Java threads just read and write variables, while the hardware has many different kinds of memory (CPU caches, registers, main memory). So the JMM is much simpler and worthwhile to learn.
The JMM defines visibility of shared memory, defines how a single thread can optimize code, and defines happens before relations. I will explain all that in more detail.
Some key ideas of the JMM are:
- All threads share the same heap.
- Each thread has a local working memory.
- Several cache levels are used, but this is hidden by the abstraction.
- Operations must comply with the JMM.
The JMM does not guarantee sequential consistency. This gives the compiler the freedom to reorder some operations for better performance. Within one thread everything behaves as if it were executed serially. Values can be stored in a processor-local cache that is not visible to other processors. Caches can optimize how they commit changes to main memory. Processors can execute instructions in parallel. Everything is still consistent as long as there is only one thread.
The JMM demands that some kind of monitor is used if consistency is required over more than one thread. Each synchronized block uses some object as a monitor.
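A minimal sketch of a class that uses a dedicated object as its monitor (names are illustrative):

```java
public class BankAccount {
    private final Object lock = new Object(); // dedicated monitor object
    private int balance = 0;

    public void deposit(int amount) {
        synchronized (lock) { // only one thread at a time past this point
            balance = balance + amount;
        } // unlock: all pending writes become visible to the next locker
    }

    public int getBalance() {
        synchronized (lock) { // same monitor, so we see all prior deposits
            return balance;
        }
    }
}
```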
Happens Before Relations
The JMM defines happens before relations for these actions:
- read / write on variables
- lock / unlock on a monitor
- start / join of threads
This allows you to define that a thread executing action B sees all results of action A. To do that you need a happens before relationship, which can easily be defined with a synchronized block or volatile variable. Those relationships are transitive: If A happens before B and B happens before C then A happens before C.
Everything that happens before an unlock is visible to everything after a lock on the same object. Even changes to fields that are not volatile will be visible to the other thread. Obtaining a lock forces the thread to get fresh values from memory. Releasing a lock forces the thread to flush out all pending writes.
The same rules apply to the other actions. Changes made in the starting thread are visible in the started thread. When a thread ends it will flush all values before waiting threads are notified. The thread waiting to join will see those changes.
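A sketch of the start/join guarantee, with illustrative names: no volatile or synchronized is needed, because start() and join() create the happens-before edges.

```java
public class StartJoinVisibility {
    static int result; // plain field: no volatile, no synchronized

    public static void main(String[] args) throws InterruptedException {
        result = 1; // written before start(): visible inside the new thread
        Thread worker = new Thread(() -> result = result * 42);
        worker.start();
        worker.join(); // join() happens-before: we see the worker's write
        System.out.println(result); // guaranteed to print 42
    }
}
```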
And access to volatile variables works the same way. A read of a volatile variable forces the thread to get fresh values from memory. A write to a volatile variable forces the thread to flush out all pending writes. This is similar to wrapping each read/write access in a synchronized block using the same lock object, except that it doesn't actually enforce mutual exclusion of the code that produces the value to be written or consumes the value that was read.
Consider these shared fields:
int value;
volatile boolean flag = false;
Thread A executes: value = 0; flag = false;
Thread B executes: value = 42; flag = true;
Thread C executes: if (flag) System.out.println(value);
Thread C might see flag == false and not print anything. But if it sees true then it must also see value == 42 and print it, because the volatile write to flag came after the write to value.
Now consider this shared field:
volatile Car car = null;
Thread A executes: car = null;
Thread B executes: car = new Car(); car.setOwner(new Person("Jane"));
Thread C executes: if (car != null) System.out.println(car.getOwner());
Thread C might see null and not print anything. If it sees a car it might not see the owner, because the owner was assigned after the volatile write to car. That is true even if the field for the owner is volatile or the getter and setter methods are synchronized.
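One way to repair that sketch is to finish initializing the object before publishing it through the volatile field, so the happens-before edge of the volatile write covers the owner as well (the Car and Person classes here are minimal stand-ins for the ones in the example):

```java
public class SafePublication {
    static class Person {
        final String name;
        Person(String name) { this.name = name; }
    }

    static class Car {
        private Person owner;
        void setOwner(Person p) { owner = p; }
        Person getOwner() { return owner; }
    }

    static volatile Car car = null;

    static void publish() {
        Car c = new Car();
        c.setOwner(new Person("Jane")); // initialize completely first...
        car = c;                        // ...then publish via the volatile write
    }

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(SafePublication::publish);
        writer.start();
        writer.join();
        // Any thread that sees a non-null car now also sees its owner.
        if (car != null) System.out.println(car.getOwner().name);
    }
}
```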
Tips and Tricks
Volatile vs Synchronized
Access to a volatile field is faster than entering a synchronized block, because the thread never has to wait for a lock. That makes volatile useful for double checked locking: the pattern still works correctly but is a bit faster. However, repeated write access inside a loop could be more expensive than synchronizing the whole loop.
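A sketch of the double checked locking idiom under the Java 5 JMM (class name illustrative); the volatile modifier on the field is what makes it correct:

```java
public class Lazy {
    private static volatile Lazy instance; // volatile is essential here

    private Lazy() { }

    public static Lazy getInstance() {
        Lazy result = instance;         // first check: just a volatile read, no lock
        if (result == null) {
            synchronized (Lazy.class) { // slow path: taken at most a few times
                result = instance;
                if (result == null) {   // second check, now under the lock
                    instance = result = new Lazy();
                }
            }
        }
        return result;
    }
}
```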
Independence of Values
The value written to a volatile field must not depend on the old value. Even a simple increment (x++) is not atomic: it is a read, followed by a modification, followed by a write.
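When the new value does depend on the old one, the classes in java.util.concurrent.atomic provide atomic read-modify-write operations instead; a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger counter = new AtomicInteger(0);

    public int increment() {
        // One indivisible read-modify-write, safe without any lock.
        return counter.incrementAndGet();
    }

    public int get() {
        return counter.get();
    }
}
```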
Variables that will only change once (false → true; null → object; -1 → value) can be defined as volatile. (See DCL)
Class initialisation is always thread-safe and can be used for lazy initialisation of objects. So no volatile fields and no synchronized blocks are needed if you simply write something like this:
private static Foo foo = new Foo();
The object is created when the class Foo is initialized, but before any thread reads any of its fields. All threads will see the fully initialized instance of Foo when they access the static field foo. It doesn't even have to be final, because all threads will see the initial value.
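If the instance should be created lazily, the initialization-on-demand holder idiom relies on the same class-initialization guarantee; a sketch, reusing the Foo name from above:

```java
public class Foo {
    private Foo() { }

    // The holder class is initialized (and FOO created) only on first access.
    private static class Holder {
        static final Foo FOO = new Foo();
    }

    public static Foo getInstance() {
        return Holder.FOO; // class initialization makes this thread-safe
    }
}
```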
Final fields get special guarantees from the JMM. When the JVM allocates the memory it writes zeroes to all bytes (which will be interpreted as false, 0, null etc.), so no thread can ever observe garbage values. The constructor then initializes the final fields, and once it has finished, all threads are guaranteed to see those values without any synchronization (as long as the object does not escape during construction).
Arrays can be tricky. You might see the array but not the newest values in it. Arrays are objects, so an array variable is of reference type. A volatile variable references the array; the elements of the array are not volatile! Writing to an array element never gets you any ordering or visibility guarantees. Use a thread-safe collection instead.
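If individual elements need volatile semantics, the atomic array classes in java.util.concurrent.atomic provide that; a minimal sketch (names illustrative):

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class SharedSlots {
    // Each element behaves like a volatile int, unlike the elements of a volatile int[] field.
    private final AtomicIntegerArray slots = new AtomicIntegerArray(8);

    public void put(int index, int value) {
        slots.set(index, value); // volatile-write semantics per element
    }

    public int take(int index) {
        return slots.get(index); // volatile-read semantics per element
    }
}
```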
Low Level vs High Level
The JMM defines how low level concurrent programming is done in Java. But you will want to avoid low level code as much as possible. Instead you will use frameworks that handle it for you. And it can be even more basic than using frameworks: if you only use immutable types then you don't even have to think about visibility, because there are nothing but initial values, and those are the same for all threads.
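A minimal sketch of an immutable type (names illustrative): all fields are final, and a "change" produces a new object, so there is no mutable state for another thread to see stale.

```java
public final class Point {
    // Final fields are safely visible to all threads after construction,
    // as long as 'this' does not escape the constructor.
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int x() { return x; }
    public int y() { return y; }

    // "Mutation" returns a new object instead of changing this one.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```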