Deadlock
Deadlock can only happen when all four deadlock conditions are satisfied. One would expect that satisfying all of them isn't common, but that would be a wrong assumption. Deadlocks happen. While in regular code it is fine to avoid them by eliminating just one of the deadlock conditions, the situation in APIDesign is a bit different. It is a paradox of APIDesign that there is just a single good solution to avoiding deadlocks when designing an API:
Never hold a lock when calling a foreign code!
What is foreign code in this context? Anything written by users of your API: callbacks, listeners, overridden methods, etc. None of these must be called under any lock used in your API. The rule is that simple. Memorize it, stick to it, and your APIs will be deadlock free!
It is easy to follow the rule when dealing with listeners - they are just simple callbacks that notify about a change. They don't return any value and as such they can be called asynchronously, which is exactly what UI frameworks like AWT or Swing do: they notify the callbacks later - of course, without holding any locks and thus without risking deadlocks.
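The listener case can be sketched as follows. This is a minimal illustration, not code from any real framework - the class and method names (ObservableValue, ChangeListener, setValue) are made up. The key point is that the lock protects only the state change; the listeners run after the lock has been released:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical observable value - names are illustrative only.
class ObservableValue {
    interface ChangeListener {
        void valueChanged(int newValue);
    }

    private final List<ChangeListener> listeners = new CopyOnWriteArrayList<>();
    private int value;

    public void addListener(ChangeListener l) {
        listeners.add(l);
    }

    public void setValue(int newValue) {
        synchronized (this) {
            value = newValue;
        } // the lock is released here...
        // ...so the foreign code below runs outside any lock
        // and cannot deadlock against this object's monitor
        for (ChangeListener l : listeners) {
            l.valueChanged(newValue);
        }
    }

    public synchronized int getValue() {
        return value;
    }
}
```

A listener is now free to call back into the API - e.g. invoke getValue() - without any risk of lock-ordering problems, because no lock is held while it runs.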
However, what should one do if the foreign code needs to be called synchronously, in a critical section? Imagine:
public abstract class Helper {
    private int value1;
    private int value2;
    private int value3;

    protected abstract int combine(int x, int y);

    public final void update() {
        synchronized (this) {
            int v1 = value1;
            value1 = combine(value1, value2);
            value2 = combine(value2, value3);
            value3 = combine(value3, v1);
        }
    }
}
Placing such a class into an API is directly asking for trouble. First of all, it synchronizes on a publicly reachable object - this - see the Java_Monitor page for details. Yet the biggest problem is the call to unknown code while holding the lock. The unknown code is represented by the abstract combine method - it can do anything wild. For example, it can ask for another lock - and that is just a step away from causing a deadlock.
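The hazard can be reconstructed deterministically. In the sketch below (all names are illustrative), lockA plays the role of the Helper's monitor and lockB a lock that a subclass's combine implementation might ask for; two threads take the two locks in opposite order and block each other forever:

```java
import java.util.concurrent.CountDownLatch;

// Deterministic reconstruction of a lock-ordering deadlock.
// lockA ~ the Helper's monitor, lockB ~ a lock taken inside combine.
class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    // returns true if the two threads are still deadlocked after the timeout
    static boolean demonstrate() {
        CountDownLatch bothHoldFirstLock = new CountDownLatch(2);
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                bothHoldFirstLock.countDown();
                await(bothHoldFirstLock);
                synchronized (lockB) { } // blocks forever: t2 holds lockB
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                bothHoldFirstLock.countDown();
                await(bothHoldFirstLock);
                synchronized (lockA) { } // blocks forever: t1 holds lockA
            }
        });
        t1.setDaemon(true); // let the JVM exit despite the stuck threads
        t2.setDaemon(true);
        t1.start();
        t2.start();
        try {
            t1.join(1000);
            t2.join(1000);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return t1.isAlive() && t2.isAlive();
    }

    private static void await(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The latch only makes the interleaving reproducible; in real code the same two threads may run for days before hitting the bad schedule, which is what makes such deadlocks so hard to debug.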
How can one prevent that? Of course, the easiest solution is to blame the user of the API and possibly put a note into the javadoc of the combine method asking users not to deadlock. While possible, that is not a solution TheAPIBook would recommend - we know users of our APIs are clueless, they don't read javadoc unless inevitable and they don't like to be blamed. It is the API writer who shall be blamed and who shall work hard to allow clueless usage of the API. As such, let's rewrite the code to take all the pain:
import java.util.concurrent.atomic.AtomicReference;

public abstract class Helper {
    private static final class Data {
        final int value1;
        final int value2;
        final int value3;

        Data(int value1, int value2, int value3) {
            this.value1 = value1;
            this.value2 = value2;
            this.value3 = value3;
        }
    }

    private final AtomicReference<Data> data =
        new AtomicReference<>(new Data(0, 0, 0));

    protected abstract int combine(int x, int y);

    public final void update() {
        while (true) {
            Data current = data.get();
            int r1 = combine(current.value1, current.value2);
            int r2 = combine(current.value2, current.value3);
            int r3 = combine(current.value3, current.value1);
            if (data.compareAndSet(current, new Data(r1, r2, r3))) {
                return;
            }
        }
    }
}
A neat usage of immutability. The data that are supposed to stay consistent are final fields of the Data class. The mutation is handled all at once by AtomicReference's compareAndSet method: either it fails and the computation runs again, or it succeeds and atomically replaces the data with a newly created, consistent value. Most importantly, combine is now called without any lock held.
Avoiding deadlocks this way in a sense trades deadlocks for livelocks. There is no limit on the number of times the while cycle executes; under some bad scheduling it might even run indefinitely. It helps if there is some invariant that gets stabilized with every change of the data field. For example AbstractLookup uses the same approach, and with every update of the data field its internal data structure moves towards an optimum - as a result it is clear that after some number of loops there will be no clash and the update succeeds.
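To make the stabilizing invariant concrete, here is a minimal sketch - GrowOnlySet is a made-up name, not AbstractLookup's real code. The published set only ever grows, so a failed compareAndSet means another thread already added something; the retry starts from that larger set and the loop cannot undo progress:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative CAS loop with a stabilizing invariant:
// the set only grows, so every retry starts closer to the goal.
class GrowOnlySet<T> {
    private final AtomicReference<Set<T>> items =
        new AtomicReference<>(Collections.emptySet());

    public void add(T item) {
        while (true) {
            Set<T> current = items.get();
            if (current.contains(item)) {
                return; // the invariant already holds, nothing to retry
            }
            Set<T> next = new HashSet<>(current);
            next.add(item);
            if (items.compareAndSet(current, Collections.unmodifiableSet(next))) {
                return;
            }
            // another thread won the race; its addition is preserved,
            // so the next iteration works on a strictly larger set
        }
    }

    public Set<T> snapshot() {
        return items.get();
    }
}
```

With n concurrent writers each retry loses the race to at most one winner, so every add terminates after a bounded number of iterations - the livelock risk of the general CAS loop disappears when such a monotonic invariant exists.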