Fundamentals of Concurrency: Deadlocks and Object Monitors (sections 1 and 2) (translated article)

Source article: http://www.javacodegeeks.com/2015/09/concurrency-fundamentals-deadlocks-and-object-monitors.html
Posted by Martin Mois. This article is part of our Java Concurrency Fundamentals course. In this course you will dive into the magic of concurrency: you will learn the basics of concurrency and parallel code and become familiar with concepts such as atomicity, synchronization, and thread safety. Take a look at it here!

Content

1. Liveness
 1.1 Deadlock
 1.2 Starvation
2. Object monitors with wait() and notify()
 2.1 Nested synchronized blocks with wait() and notify()
 2.2 Conditions in synchronized blocks
3. Design for multi-threading
 3.1 Immutable objects
 3.2 API design
 3.3 Thread-local storage
1. Liveness
When developing applications that use concurrency to achieve their goals, you may encounter situations in which different threads block each other. If the application then runs slower than expected, we would say that it does not behave as expected. In this section we take a closer look at the problems that can threaten the liveness of a multi-threaded application.
1.1 Deadlock
The term deadlock is well known among software developers, and even most ordinary users use it from time to time, although not always in the correct sense. Strictly speaking, the term means that each of two (or more) threads is waiting for the other thread to release a resource it has locked, while the waiting thread itself holds a lock on a resource the other thread is waiting for:

```
Thread 1: locks resource A, waits for resource B
Thread 2: locks resource B, waits for resource A
```

To gain a better understanding of the problem, take a look at the following code:

```java
import java.util.Random;

public class Deadlock implements Runnable {
    private static final Object resource1 = new Object();
    private static final Object resource2 = new Object();
    private final Random random = new Random(System.currentTimeMillis());

    public static void main(String[] args) {
        Thread myThread1 = new Thread(new Deadlock(), "thread-1");
        Thread myThread2 = new Thread(new Deadlock(), "thread-2");
        myThread1.start();
        myThread2.start();
    }

    public void run() {
        for (int i = 0; i < 10000; i++) {
            boolean b = random.nextBoolean();
            if (b) {
                System.out.println("[" + Thread.currentThread().getName() + "] Trying to lock resource 1.");
                synchronized (resource1) {
                    System.out.println("[" + Thread.currentThread().getName() + "] Locked resource 1.");
                    System.out.println("[" + Thread.currentThread().getName() + "] Trying to lock resource 2.");
                    synchronized (resource2) {
                        System.out.println("[" + Thread.currentThread().getName() + "] Locked resource 2.");
                    }
                }
            } else {
                System.out.println("[" + Thread.currentThread().getName() + "] Trying to lock resource 2.");
                synchronized (resource2) {
                    System.out.println("[" + Thread.currentThread().getName() + "] Locked resource 2.");
                    System.out.println("[" + Thread.currentThread().getName() + "] Trying to lock resource 1.");
                    synchronized (resource1) {
                        System.out.println("[" + Thread.currentThread().getName() + "] Locked resource 1.");
                    }
                }
            }
        }
    }
}
```

As you can see from the code above, two threads are started and try to lock the two static resources. But for a deadlock we need a different locking sequence in the two threads, so we use an instance of Random to choose which resource a thread locks first. If the boolean variable b is true, resource1 is locked first and the thread then tries to acquire the lock for resource2. If b is false, the thread locks resource2 first and then tries to acquire resource1. This program does not need to run long to reach its first deadlock, i.e. to hang forever unless we interrupt it:

```
[thread-1] Trying to lock resource 1.
[thread-1] Locked resource 1.
[thread-1] Trying to lock resource 2.
[thread-1] Locked resource 2.
[thread-2] Trying to lock resource 1.
[thread-2] Locked resource 1.
[thread-1] Trying to lock resource 2.
[thread-1] Locked resource 2.
[thread-2] Trying to lock resource 2.
[thread-1] Trying to lock resource 1.
```

In this run, thread-1 has acquired the lock on resource2 and is waiting for the lock on resource1, while thread-2 holds the lock on resource1 and is waiting for resource2. If we set the boolean variable b in the code above to a fixed value of true, we would not observe any deadlock, because the sequence in which thread-1 and thread-2 request the locks would always be the same. In that situation, one of the two threads obtains the first lock and then requests the second, which is still available because the other thread is still waiting for the first lock.

In general, the following necessary conditions for a deadlock can be distinguished:

- Mutual exclusion: there is a resource that can be accessed by only one thread at any point in time.
- Hold and wait: while holding one resource, a thread tries to acquire another lock on some exclusive resource.
- No preemption: there is no mechanism that releases a resource when a thread has held the lock for a certain period of time.
- Circular wait: during execution a set of threads arises in which two (or more) threads each wait for the other to release a resource it has locked.

Although the list of conditions looks long, it is not uncommon for production multi-threaded applications to have deadlock problems. But you can prevent deadlocks if you can remove one of the conditions above:

- Mutual exclusion: this condition often cannot be removed, because the resource must be used by only one thread at a time. But this does not have to be the case. When working with DBMS systems, a possible solution, instead of a pessimistic lock on a table row that has to be updated, is a technique called Optimistic Locking.
- A way to avoid holding one resource while waiting for another exclusive resource is to lock all the needed resources at the beginning of the algorithm and release them all if they cannot all be locked at once. Of course this is not always possible; the resources to be locked may not be known in advance, or this approach may simply waste resources.
- If a lock cannot be acquired immediately, a way around a possible deadlock is to introduce a timeout. For example, the ReentrantLock class from the SDK provides the ability to specify a timeout when acquiring a lock.
- As the example above showed, a deadlock does not occur if the locking sequence does not differ between threads. This is easy to control if you can put all the locking code into one method that all threads must pass through.

In more advanced applications you might even consider implementing a deadlock detection system. You would need to implement some kind of thread monitoring, in which each thread reports the successful acquisition of a lock and its attempts to acquire one. If threads and locks are modeled as a directed graph, you can detect the situation in which two different threads hold resources while simultaneously trying to acquire other, already locked resources. If you could then force the blocking threads to release the resources they hold, you could resolve the deadlock situation automatically.
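The timeout technique mentioned above can be sketched with ReentrantLock.tryLock(long, TimeUnit). The following is a minimal illustration, not code from the original article; the class and method names (TryLockExample, transferWithTimeout) and the retry count are invented for the example:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockExample {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    // Try to acquire both locks; if the second one cannot be obtained within
    // the timeout, release the first and retry, so two threads taking the
    // locks in opposite order cannot block each other forever.
    static boolean transferWithTimeout() throws InterruptedException {
        for (int attempt = 0; attempt < 3; attempt++) {
            if (lockA.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    if (lockB.tryLock(100, TimeUnit.MILLISECONDS)) {
                        try {
                            return true; // both locks held: the critical work would go here
                        } finally {
                            lockB.unlock();
                        }
                    }
                } finally {
                    lockA.unlock();
                }
            }
        }
        return false; // give up after a few attempts instead of deadlocking
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(transferWithTimeout());
    }
}
```

Releasing the first lock before retrying removes the "hold and wait" condition for the duration of the back-off, which is exactly what breaks the deadlock cycle.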
1.2 Starvation
The scheduler decides which of the threads in state RUNNABLE to execute next. The decision is based on the thread's priority; threads with lower priority therefore receive less CPU time than threads with higher priority. What sounds like a reasonable feature can also cause problems when abused. If high-priority threads are executing most of the time, low-priority threads seem to starve, because they get too little time to do their work properly. It is therefore recommended to set thread priorities only when there is a compelling reason to do so.

A non-obvious example of thread starvation is given by the finalize() method. It provides a way for the Java language to execute code before an object is garbage collected. But if you look at the priority of the finalizer thread, you will notice that it does not run with the highest priority. Consequently, thread starvation can occur when your objects' finalize() methods spend too much time compared to the rest of the code.

Another execution-time problem arises from the fact that the order in which threads pass through a synchronized block is not defined. When many parallel threads pass through code framed by a synchronized block, it may happen that some threads have to wait longer than others before entering the block; in theory they may never get in. The solution to this problem is the so-called "fair" lock. Fair locks take the threads' waiting times into account when choosing which thread to admit next. An example implementation of a fair lock is available in the Java SDK: java.util.concurrent.locks.ReentrantLock. If the constructor with the boolean flag set to true is used, the ReentrantLock grants access to the thread that has been waiting longest. This guarantees the absence of starvation but, at the same time, leads to the problem that thread priorities are ignored. Because of this, lower-priority threads that often wait at this barrier may execute more frequently.
Last but not least, the ReentrantLock class can only consider threads that are actually waiting for the lock, i.e. threads that were scheduled often enough to reach the barrier. If a thread's priority is too low, this will rarely happen for it, and high-priority threads will therefore still pass the lock more often.
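As a minimal sketch of the fair-lock constructor described above (the class name FairLockExample and the counter workload are invented for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockExample {
    // Passing true to the constructor requests the fair ordering policy:
    // the longest-waiting thread is granted the lock next.
    private static final ReentrantLock fairLock = new ReentrantLock(true);
    private static int counter = 0;

    static int runContended() throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                fairLock.lock();
                try {
                    counter++; // protected update; fairness orders the waiters
                } finally {
                    fairLock.unlock();
                }
            }
        };
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runContended()); // prints 2000
    }
}
```

Note that fairness costs throughput: the Javadoc of ReentrantLock points out that fair locks are generally slower than the default non-fair ones, so the flag should be used only when starvation is a real concern.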
2. Object monitors together with wait() and notify()
In multi-threaded computing a common situation is to have worker threads waiting for their producer to create work for them. But, as we have learned, actively waiting in a loop and checking some value is not a good option in terms of CPU time. Using the Thread.sleep() method in this situation is not particularly suitable either, if we want to start our work immediately after it arrives. For this purpose the Java programming language provides another construct that can be used in this scheme: wait() and notify().

The wait() method, which every object inherits from java.lang.Object, can be used to suspend the current thread and wait until another thread wakes us up via notify(). To work correctly, the thread that calls wait() must hold a lock that it previously acquired with the synchronized keyword. When wait() is called, the lock is released and the thread waits until another thread, which now holds the lock, calls notify() on the same object instance.

In a multi-threaded application there may naturally be more than one thread waiting for a notification on some object. Therefore there are two different methods for waking threads up: notify() and notifyAll(). While the first method wakes one of the waiting threads, notifyAll() wakes all of them. But be aware that, as with the synchronized keyword, there is no rule determining which thread is woken next when notify() is called. In a simple producer-consumer example this does not matter, since we do not care which thread is woken up.
The following code shows how wait() and notify() can be used to let consumer threads wait for new work to be queued by a producer thread:

```java
package a2;

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConsumerProducer {
    private static final Queue<Integer> queue = new ConcurrentLinkedQueue<>();
    private static final long startMillis = System.currentTimeMillis();

    public static class Consumer implements Runnable {
        public void run() {
            while (System.currentTimeMillis() < (startMillis + 10000)) {
                synchronized (queue) {
                    try {
                        queue.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                if (!queue.isEmpty()) {
                    Integer integer = queue.poll();
                    System.out.println("[" + Thread.currentThread().getName() + "]: " + integer);
                }
            }
        }
    }

    public static class Producer implements Runnable {
        public void run() {
            int i = 0;
            while (System.currentTimeMillis() < (startMillis + 10000)) {
                queue.add(i++);
                synchronized (queue) {
                    queue.notify();
                }
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            synchronized (queue) {
                queue.notifyAll();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] consumerThreads = new Thread[5];
        for (int i = 0; i < consumerThreads.length; i++) {
            consumerThreads[i] = new Thread(new Consumer(), "consumer-" + i);
            consumerThreads[i].start();
        }
        Thread producerThread = new Thread(new Producer(), "producer");
        producerThread.start();
        for (int i = 0; i < consumerThreads.length; i++) {
            consumerThreads[i].join();
        }
        producerThread.join();
    }
}
```

The main() method starts five consumer threads and one producer thread and then waits for them to finish. The producer thread adds a new value to the queue and notifies the waiting threads that something has happened. The consumers acquire the queue lock (i.e. one arbitrary consumer at a time) and then go to sleep, to be woken up later when the queue is filled again. When the producer finishes its work, it notifies all consumers to wake them up.
If we did not do that last step, the consumer threads would wait forever for the next notification, since we did not specify a timeout for the wait. Instead, we could use the wait(long timeout) method to be woken up at the latest after the given time has elapsed.
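A minimal sketch of the timeout variant (the class and method names are invented for illustration): wait(long timeout) returns on its own once the timeout elapses, even though nobody ever calls notify():

```java
public class WaitTimeoutExample {

    // No other thread ever calls notify() on this monitor, so without the
    // timeout argument the wait() below would block forever. With the
    // timeout, the call returns once the time has elapsed.
    static boolean waitReturns(long timeoutMillis) throws InterruptedException {
        Object monitor = new Object();
        synchronized (monitor) {
            monitor.wait(timeoutMillis);
        }
        return true; // reached once wait() has returned (here, via the timeout)
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitReturns(200));
    }
}
```

Keep in mind that wait() may also return spuriously, which is one more reason to re-check the awaited condition in a loop, as section 2.2 discusses.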
2.1 Nested synchronized blocks with wait() and notify()
As stated in the previous section, calling wait() on an object's monitor releases only the lock on that monitor. Other locks held by the same thread are not released. As is easy to see, in everyday work it may happen that the thread calling wait() continues to hold further locks. If other threads are also waiting for those locks, a deadlock situation can occur. Let's look at the locking in the following example:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SynchronizedAndWait {
    private static final Queue<Integer> queue = new ConcurrentLinkedQueue<>();

    public synchronized Integer getNextInt() {
        Integer retVal = null;
        while (retVal == null) {
            synchronized (queue) {
                try {
                    queue.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                retVal = queue.poll();
            }
        }
        return retVal;
    }

    public synchronized void putInt(Integer value) {
        synchronized (queue) {
            queue.add(value);
            queue.notify();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final SynchronizedAndWait queue = new SynchronizedAndWait();
        Thread thread1 = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 10; i++) {
                    queue.putInt(i);
                }
            }
        });
        Thread thread2 = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 10; i++) {
                    Integer nextInt = queue.getNextInt();
                    System.out.println("Next int: " + nextInt);
                }
            }
        });
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
    }
}
```

As we learned earlier, adding synchronized to a method signature is equivalent to wrapping the method body in a synchronized(this){} block. In the example above, the synchronized keyword was accidentally added to the method; the method then synchronizes on the queue object's monitor to put the current thread to sleep while it waits for the next value from the queue. The current thread thereby releases the lock on queue, but not the lock on this. The putInt() method notifies the sleeping thread that a new value has been added, but by accident the synchronized keyword was added to this method as well.
Now that the second thread has gone to sleep, it still holds the lock on this. The first thread therefore cannot enter putInt() while that lock is held by the second thread. As a result we have a deadlock and a frozen program. If you run the code above, this happens right after the program starts. In everyday life, the situation may not be so obvious. The locks a thread holds may depend on runtime parameters and conditions, and the synchronized block causing the problem may not be so close in the code to the place where we put the wait() call. That makes such problems hard to find, especially since they may occur only over time or under high load.
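One possible fix is to drop synchronized from the method signatures, so that the queue monitor is the only lock involved and wait() releases exactly the lock the thread holds. The sketch below is not from the original article (the class name FixedSynchronizedAndWait is invented); it also polls before waiting, which avoids missing a value that was added before the consumer started to wait:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class FixedSynchronizedAndWait {
    private final Queue<Integer> queue = new ConcurrentLinkedQueue<>();

    // No synchronized on the method signature: the queue monitor is the only
    // lock this thread holds, and wait() releases exactly that lock, so
    // putInt() can always make progress.
    public Integer getNextInt() throws InterruptedException {
        synchronized (queue) {
            Integer retVal;
            while ((retVal = queue.poll()) == null) {
                queue.wait(); // releases the queue lock while sleeping
            }
            return retVal;
        }
    }

    public void putInt(Integer value) {
        synchronized (queue) {
            queue.add(value);
            queue.notify(); // wake up one waiting consumer
        }
    }
}
```

Usage: after `putInt(42)`, a subsequent `getNextInt()` returns 42 immediately without blocking, and concurrent producers and consumers no longer deadlock on a second lock.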
2.2 Conditions in synchronized blocks
Often you need to check that some condition holds before performing an action on a synchronized object. When you have a queue, for example, you may want to wait until it is filled. Hence you can write a method that checks whether the queue contains anything. If it is still empty, you put the current thread to sleep until it is woken up:

```java
public Integer getNextInt() {
    Integer retVal = null;
    synchronized (queue) {
        try {
            while (queue.isEmpty()) {
                queue.wait();
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    synchronized (queue) {
        retVal = queue.poll();
        if (retVal == null) {
            System.err.println("retVal is null");
            throw new IllegalStateException();
        }
    }
    return retVal;
}
```

The code above synchronizes on queue before calling wait() and then waits in a while loop until the queue contains at least one element. The second synchronized block again uses queue as the object monitor. It calls the queue's poll() method to retrieve a value. For demonstration purposes, an IllegalStateException is thrown when poll() returns null, which happens when the queue has no elements to fetch.

When you run this example, you will see that the IllegalStateException is thrown very often, even though we synchronized correctly on the queue monitor. The reason is that we have two separate synchronized blocks. Imagine two threads that have arrived at the first synchronized block. The first thread enters the block and goes to sleep because the queue is empty; the same happens to the second thread. Now, when both threads are woken up (by a notifyAll() call issued by another thread on the monitor), they both see the one element in the queue added by the producer. Then both arrive at the second block. Here the first thread enters and retrieves the value from the queue. When the second thread enters, the queue is already empty. Hence it receives null as the value returned by poll() and throws the exception.
To prevent such situations, you need to perform all operations that depend on the monitor's state within the same synchronized block:

```java
public Integer getNextInt() {
    Integer retVal = null;
    synchronized (queue) {
        try {
            while (queue.isEmpty()) {
                queue.wait();
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        retVal = queue.poll();
    }
    return retVal;
}
```

Here we execute the poll() method in the same synchronized block as the isEmpty() check. Thanks to the synchronized block, we can be sure that only one thread executes a method on this monitor at any given time. Hence no other thread can remove elements from the queue between the calls to isEmpty() and poll().

The translation is continued here.