Java concurrency: lock-related problems


There are two ways to lock in Java: one is the synchronized keyword; the other is an implementation class of the Lock interface.

If you just want a simple lock and have no special performance requirements, the synchronized keyword is enough. Since Java 5, the java.util.concurrent.locks package has offered another way to implement locks: Lock. In other words, synchronized is a built-in keyword of the Java language, while Lock is an interface whose implementation classes provide locking at the code level.


ReentrantLock, ReadLock, and WriteLock are the three most important implementation classes of the Lock interface, corresponding to the "reentrant lock", "read lock", and "write lock".

ReadWriteLock is actually a factory interface, and ReentrantReadWriteLock is its implementation class, containing two static inner classes, ReadLock and WriteLock. Each of these static inner classes implements the Lock interface.

Pessimistic lock vs optimistic lock? How is CAS (the basis of optimistic locking) implemented, and what problems does it have?

Pessimistic lock and optimistic lock are a macro-level classification. They do not refer to a specific lock, but to two different strategies for handling concurrency. Division basis: whether a thread locks the synchronized resource, and whether a thread blocks when it fails to acquire the lock (a spin lock does not block).

  • Pessimistic lock: very pessimistic. Every time it reads the data, it assumes someone else will modify it, so it locks the data on every access. Any other thread that wants the data then blocks until it can acquire the lock (the shared resource is used by only one thread at a time; other threads block and receive the resource only after the holder finishes). Traditional relational databases use many such locking mechanisms: row locks, table locks, read locks, and write locks are all acquired before the operation. In Java, exclusive locks such as synchronized and ReentrantLock are implementations of pessimistic locking.
  • Optimistic lock: very optimistic. Every time it reads the data, it assumes no one else will modify it, so it does not lock at all. When it wants to update the data, however, it first checks whether anyone modified the data between the read and the update. If the data was modified, it reads again and retries the update, looping until the update succeeds (a thread that fails to update may also simply give up). This is the CAS approach: the atomic variable classes in the java.util.concurrent.atomic package are implemented with CAS, one realization of optimistic locking. Optimistic locking can also be implemented with a version number mechanism; the write_condition mechanism provided by databases is essentially an optimistic lock of this kind.

Application scenario

  • Pessimistic locks suit write-heavy workloads: locking first guarantees the data stays correct during writes.
  • Optimistic locks suit read-heavy workloads: they avoid the cost of locking and improve the system's throughput and performance.

In short: a pessimistic lock blocks the transaction; an optimistic lock rolls back and retries.

Optimistic lock implementation

  • CAS implementation. CAS stands for compare and swap, a lock-free algorithm: it synchronizes a variable between multiple threads without using locks, so no thread is blocked. The atomic classes in the java.util.concurrent package implement optimistic locking through CAS.

    CAS algorithm involves three operands:

    • The memory value V to be read and written.
    • The expected value A to compare against.
    • The new value B to write.

    If and only if the value of V equals A, CAS atomically updates V to the new value B (the compare and the update together form one atomic operation); otherwise it does nothing. In general the update is retried in a loop until it succeeds. Most operations in the java.util.concurrent package are implemented with CAS.
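The retry loop described above can be sketched with AtomicInteger, whose compareAndSet is the JDK's CAS primitive. The class and method names here (CasDemo, incrementWithCas) are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private static final AtomicInteger counter = new AtomicInteger(0);

    // Increment using an explicit CAS retry loop:
    // V = the memory value, A = the expected value, B = the new value.
    static int incrementWithCas() {
        int expected; // A: the value we read
        int updated;  // B: the value we want to write
        do {
            expected = counter.get(); // read V
            updated = expected + 1;
            // compareAndSet succeeds only if V still equals A
        } while (!counter.compareAndSet(expected, updated));
        return updated;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) incrementWithCas();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // 4000: no increments lost
    }
}
```

No thread ever blocks here; a thread that loses the race simply loops and tries again.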

  • Version number mechanism: a version field is added to the data table to record how many times the data has been modified; every successful modification increments the version by one. When thread A wants to update the value, it reads the version along with the data. At commit time, the update is applied only if the version it read still equals the version currently in the database; otherwise it retries the update until it succeeds.
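A minimal in-memory sketch of the version-number scheme. The VersionedRecord class is hypothetical; synchronized stands in here for the database's atomic "UPDATE ... WHERE version = ?" step, while the caller's retry loop is the optimistic part:

```java
public class VersionedRecord {
    private int value;
    private int version; // incremented on every successful update

    public synchronized int[] read() { // returns {value, version}
        return new int[] { value, version };
    }

    // Simulates the database's atomic conditional update:
    // succeeds only if the version has not changed since we read it.
    public synchronized boolean tryUpdate(int expectedVersion, int newValue) {
        if (version != expectedVersion) return false; // someone updated first
        value = newValue;
        version++;
        return true;
    }

    // Optimistic update loop: read, compute, retry on conflict.
    public void addOne() {
        while (true) {
            int[] snapshot = read(); // {value, version}
            if (tryUpdate(snapshot[1], snapshot[0] + 1)) return;
        }
    }

    public int currentValue() {
        return read()[0];
    }
}
```

No lock is held between the read and the update; the version check catches any concurrent modification.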

What problems can CAS cause?

  • ABA problem: CAS checks whether the memory value has changed, and updates it only if it has not. But if the value was A, then became B, then became A again, CAS concludes that nothing changed, even though it actually did. The solution is to attach a version number to the variable and increment it on every update, so the sequence "A-B-A" becomes "1A-2B-3A".

    Since JDK 1.5, the AtomicStampedReference class has been available to solve the ABA problem; the logic is encapsulated in compareAndSet(). compareAndSet() first checks whether the current reference and current stamp equal the expected reference and expected stamp; only if both match does it atomically set the reference and stamp to the given new values.
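A small sketch of how AtomicStampedReference exposes an A→B→A change that a plain CAS would miss. The class and method names (AbaDemo, staleCasSucceeds) are illustrative; note that compareAndSet compares references with ==, which works here because string literals are interned:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // Returns the result of a CAS made with a stale stamp after an
    // A -> B -> A change; with AtomicStampedReference this is false.
    static boolean staleCasSucceeds() {
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 0);
        int startStamp = ref.getStamp(); // 0

        // Simulate another thread doing A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet("B", "A", ref.getStamp(), ref.getStamp() + 1);

        // The value is "A" again, but the stamp is now 2, so this CAS fails:
        // the hidden A -> B -> A change has been detected.
        return ref.compareAndSet("A", "C", startStamp, startStamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(staleCasSucceeds()); // false: ABA detected
    }
}
```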

  • Long spin time, high overhead: under heavy contention (serious thread conflict), a CAS that keeps failing spins continuously, which burns CPU and ends up less efficient than synchronized.

  • Atomicity is guaranteed for only one shared variable: a CAS loop guarantees atomicity for operations on a single shared variable. When operating on several shared variables at once, a CAS loop cannot make the combined operation atomic; a lock can be used instead.

    Starting with Java 1.5, the JDK provides the AtomicReference class to guarantee atomicity across referenced objects: several variables can be placed in one object and CASed together.
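One way to apply this: wrap the related fields in an immutable holder and CAS the holder reference as a whole. The Range and MultiFieldCas names are illustrative:

```java
import java.util.concurrent.atomic.AtomicReference;

public class MultiFieldCas {
    // Immutable holder so both fields can be swapped in a single CAS
    static final class Range {
        final int low, high;
        Range(int low, int high) { this.low = low; this.high = high; }
    }

    private final AtomicReference<Range> range =
            new AtomicReference<>(new Range(0, 10));

    // Atomically widen the range by one on each side; retries on conflict
    public void widen() {
        while (true) {
            Range current = range.get();
            Range wider = new Range(current.low - 1, current.high + 1);
            if (range.compareAndSet(current, wider)) return;
        }
    }

    public int span() {
        Range r = range.get();
        return r.high - r.low;
    }
}
```

Because the holder is immutable, either both fields are updated together or neither is; no thread can ever observe a half-updated pair.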

Synchronized lock upgrading (multiple threads competing for a synchronized resource): biased lock → lightweight lock → heavyweight lock

Division basis: the details of how multiple threads compete for the synchronized resource.

These four states (lock-free, biased, lightweight, heavyweight) describe the state of the lock and apply specifically to synchronized. Before introducing them, some background is needed.

First, why can synchronized achieve thread synchronization at all? To answer that, two important concepts are needed: the "Java object header" and the "monitor".

  • Java object header: synchronized is a pessimistic lock that must lock the synchronized resource before operating on it, and that lock lives in the Java object header. What is the object header? Taking the HotSpot virtual machine as an example, HotSpot's object header mainly contains two parts of data: the Mark Word and the Klass Pointer.

    Mark Word: stores the object's hash code, generational age, and lock flag bits by default. This information is independent of the object's own definition, so the Mark Word is designed as a non-fixed data structure that stores as much data as possible in a very small memory space. It reuses its own storage space according to the object's state: at run time, the data stored in the Mark Word changes as the lock flag bits change.

    Klass Pointer: the object's pointer to its class metadata. The virtual machine uses this pointer to determine which class the object is an instance of.

  • Monitor: a monitor can be understood as a synchronization tool or mechanism, usually described as an object. Every Java object carries an invisible lock, called the intrinsic lock or monitor lock.

    A monitor is a thread-private data structure. Each thread has a list of available monitor records, plus a global available list. Every locked object is associated with a monitor, and an owner field in the monitor stores the unique identifier of the thread holding the lock, indicating that the lock is occupied by that thread.

    Back to synchronized: synchronized achieves thread synchronization through the monitor, and the monitor in turn relies on the mutex lock of the underlying operating system.
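In code, any object can serve as the monitor; the compiler translates a synchronized block into monitorenter/monitorexit instructions on that object's monitor. A minimal illustration (the MonitorDemo class is ours):

```java
public class MonitorDemo {
    private int count = 0;
    private final Object lock = new Object(); // any object can be a monitor

    public void increment() {
        // javac compiles this block into monitorenter/monitorexit
        // instructions on lock's monitor
        synchronized (lock) {
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorDemo demo = new MonitorDemo();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) demo.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(demo.get()); // 4000: the monitor serialized the updates
    }
}
```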

Why are there three types of synchronized locks?

Blocking or waking a Java thread requires the operating system to switch CPU state, which costs processor time. If the content of a synchronized block is trivial, that state transition can take longer than executing the user code itself. This was the original behavior of synchronized and the reason it was slow before JDK 6: a lock that depends on an operating-system mutex is called a "heavyweight lock". To reduce the cost of acquiring and releasing locks, JDK 6 introduced the "biased lock" and the "lightweight lock".

  • Biased lock: a biased lock assumes a piece of synchronized code is always accessed by the same thread, which then acquires the lock automatically, lowering the cost of acquisition.

    • In most cases a lock is acquired repeatedly by the same thread with no multi-thread contention; the biased lock exists for this case. The goal is to improve performance when only one thread executes the synchronized block.
    • When a thread first accesses the synchronized block and acquires the lock, the biased thread ID is stored in the Mark Word. Afterwards, when that thread enters and exits the block, it no longer locks and unlocks via CAS; it merely checks whether the Mark Word holds a bias pointing to the current thread. The biased lock minimizes the unnecessary lightweight-lock path in the absence of contention: acquiring and releasing a lightweight lock needs several CAS atomic instructions, while a biased lock needs a CAS only once, when installing the thread ID.
    • Biased locking is enabled by default in JDK 6 and later JVMs. It can be disabled with the JVM flag -XX:-UseBiasedLocking; the program then enters the lightweight-lock state by default.
  • Lightweight lock: when a biased lock is accessed by a second thread, it is upgraded to a lightweight lock. Other threads then try to acquire the lock by spinning instead of blocking, which improves performance. If there is only one waiting thread, it waits by spinning (the CPU does useless work during the spin; typically a maximum wait is set, after which the thread stops spinning and blocks).

    • When code enters the synchronized block and the synchronization object's lock state is lock-free, the virtual machine first creates a space called the lock record in the current thread's stack frame to hold a copy of the lock object's current Mark Word, then copies the Mark Word from the object header into the lock record.
    • After the copy succeeds, the virtual machine uses a CAS operation to try to update the object's Mark Word to a pointer to the lock record, and points the owner pointer in the lock record at the object's Mark Word.
    • If the update succeeds, the thread owns the object's lock, and the lock flag bits in the object's Mark Word are set to "00", marking the object as lightweight-locked.
    • If the CAS update fails, the virtual machine first checks whether the object's Mark Word already points to the current thread's stack frame. If it does, the current thread already owns the lock and can enter the synchronized block directly; otherwise, multiple threads are competing for the lock.
  • Heavyweight lock: when a thread spins more than a certain number of times, or when one thread holds the lock, a second spins, and a third arrives, the lightweight lock is upgraded to a heavyweight lock.

    • On upgrade to a heavyweight lock, the lock flag bits change to "10" and the Mark Word stores a pointer to the heavyweight lock. Threads waiting for the lock now enter the blocked state.

Differences and summary of the three locks:

  • Biased lock: acquires the lock by comparing the Mark Word, avoiding the extra cost of CAS operations. However, if contention appears, revoking the bias costs extra, so it suits only the scenario where a single thread accesses the synchronized code.
  • Lightweight lock: acquires the lock with CAS operations and spinning, avoiding the performance hit of blocking and waking threads, so the program responds quickly. However, a thread that never gets the lock burns CPU while spinning, so it suits scenarios that pursue response time, with synchronized blocks that execute quickly.
  • Heavyweight lock: blocks every thread except the holder, so competing threads do not waste CPU on spinning. However, blocked threads respond slowly, so it suits scenarios that pursue throughput, with synchronized blocks that execute slowly.

PS: a synchronized code block (resource) is a statement block modified by the synchronized keyword; the modified block is guarded by an intrinsic lock to achieve synchronization.

Fair lock vs unfair lock?

Division basis: whether threads queue up when competing for the lock.

  • Fair lock: with a fair lock, threads acquire the lock in the order in which they requested it. Threads go straight into a queue, and only the first thread in the queue can obtain the lock.

    The advantage of a fair lock is that waiting threads never starve. The disadvantage is lower overall throughput than an unfair lock: every thread in the wait queue except the first is blocked, and waking blocked threads costs the CPU more than with an unfair lock.

  • Unfair lock: with an unfair lock, a thread tries to grab the lock directly; only if that fails does it join the tail of the wait queue.

    If the lock happens to be free at that moment, the thread obtains it without blocking at all, so with an unfair lock a late-arriving thread may get the lock first. The advantage is that fewer threads need to be woken, so overall throughput is higher: threads have a chance to take the lock immediately without blocking, and the CPU does not have to wake every waiter. The disadvantage is that threads in the wait queue may starve, or wait a long time before obtaining the lock.

public ReentrantLock() {
    sync = new NonfairSync();
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

// Create an unfair lock (the default)
Lock lock = new ReentrantLock();
Lock lock = new ReentrantLock(false);

// Create a fair lock by passing true to the constructor
Lock lock = new ReentrantLock(true);

ReentrantLock decides internally whether it is a fair or an unfair lock according to this constructor parameter. The only difference between the lock() paths of the fair and unfair lock is that the fair lock performs one extra check when acquiring the synchronization state: hasQueuedPredecessors() (see the source). This method reports whether some other thread is ahead of the current thread in the synchronization queue; the fair lock acquires only when no predecessor is queued.

  • For the ReentrantLock class, the constructor parameter specifies whether the lock is fair; the default is unfair. In general an unfair lock has higher throughput than a fair one, so unless fairness is specifically required, prefer the unfair lock.
  • synchronized is also an unfair lock, and there is no way to make it fair.

To sum up: a fair lock achieves fairness through the synchronization queue, handing the lock to threads in the order they requested it; an unfair lock lets a thread try to grab the lock directly, without regard to the queue.

Reentrant lock vs non reentrant lock? How is reentrant lock implemented?

Division basis: whether multiple flows of execution within one thread can obtain the same lock.

A reentrant lock, also known as a recursive lock, means that when a thread holds the lock in an outer method, an inner method of the same thread automatically acquires the lock too (provided the lock object is the same object or class); it is not blocked by the lock it already holds and has not yet released.

Both ReentrantLock and synchronized in Java are reentrant locks. One advantage of reentrant locks is that they avoid deadlock to some extent.
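A small illustration of reentrancy with ReentrantLock: outer() holds the lock and can still call inner(), which re-acquires the same lock without deadlocking (the class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public int outer() {
        lock.lock();
        try {
            // While holding the lock, calling inner() does not deadlock,
            // because ReentrantLock lets the owning thread re-acquire it.
            return inner() + 1;
        } finally {
            lock.unlock();
        }
    }

    public int inner() {
        lock.lock(); // hold count goes from 1 to 2 when called from outer()
        try {
            return 41;
        } finally {
            lock.unlock(); // hold count back to 1; lock is not yet released
        }
    }
}
```

With a non-reentrant lock, the inner lock() in this code would block forever: the thread would wait for a lock it itself holds.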

Why does a non-reentrant lock deadlock when the same synchronized resource is acquired repeatedly?

Take a reentrant lock versus a non-reentrant lock as examples. In both cases the lock's internal synchronizer extends the parent class AQS (AbstractQueuedSynchronizer), which maintains a synchronization state, state, that counts reentries; its initial value is 0.

  • When a thread tries to acquire the lock, a reentrant lock first reads and tries to update the state value. If state == 0, no other thread is executing the synchronized code, so it sets state to 1 and the current thread proceeds. If state != 0, it checks whether the current thread is the one holding the lock; if so, it performs state + 1 and the current thread acquires the lock again. A non-reentrant lock simply tries to update the value: if state != 0, the acquisition fails and the current thread blocks, even if it is the thread already holding the lock.
  • When releasing the lock, a reentrant lock first confirms that the current thread is the holder, then reads the state value. If state - 1 == 0, all of the thread's repeated acquisitions have been released, and only then is the lock really freed. A non-reentrant lock simply sets state to 0 and releases the lock once it confirms the current thread is the holder.
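The bookkeeping above can be sketched as a toy lock. This is not the real AQS code, just a simplified model of the state counter and owner check (no queueing or blocking):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ToyReentrantLock {
    private final AtomicInteger state = new AtomicInteger(0); // reentry count
    private volatile Thread owner; // thread currently holding the lock

    // Returns true if the lock was acquired (first time or reentrantly)
    public boolean tryAcquire() {
        Thread current = Thread.currentThread();
        if (state.get() == 0) {
            // No holder: try to take the lock with one CAS on state
            if (state.compareAndSet(0, 1)) {
                owner = current;
                return true;
            }
            return false;
        }
        if (owner == current) {
            state.incrementAndGet(); // reentrant acquire: just bump the count
            return true;
        }
        return false; // held by another thread; a real lock would queue/block here
    }

    public void release() {
        if (owner != Thread.currentThread())
            throw new IllegalMonitorStateException();
        if (state.decrementAndGet() == 0)
            owner = null; // last release: lock actually freed
    }

    public int holdCount() { return state.get(); }
}
```

A non-reentrant lock would drop the `owner == current` branch, so the second tryAcquire() by the same thread would fail, and a blocking variant would deadlock there.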

Read write lock (shared lock, exclusive lock)?

Division basis: whether multiple threads can share one lock. Exclusive and shared locks are also built on AQS; implementing different AQS methods yields an exclusive or a shared lock.

Read lock (shared lock): the lock can be held by multiple threads at once. If thread T holds a shared lock on data A, other threads can add only shared locks to A, not an exclusive lock. A thread holding the shared lock may read the data but not modify it.

Write lock (mutex/exclusive lock): the lock can be held by only one thread at a time. If thread T holds an exclusive lock on data A, no other thread can add any kind of lock to A. The thread holding the exclusive lock may both read and modify the data. synchronized in the JDK and the Lock implementation classes in JUC are exclusive locks.

Read-write locking is a pessimistic locking strategy: a read-write lock does not check before updating whether the value was modified; instead it decides before locking whether to take the read lock or the write lock.
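A usage sketch with ReentrantReadWriteLock, the JDK's standard read-write lock (the RwCache class is illustrative):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    // Many threads may hold the read lock at the same time
    public int read() {
        rw.readLock().lock();
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    // The write lock is exclusive: no readers or other writers may hold it
    public void write(int newValue) {
        rw.writeLock().lock();
        try {
            value = newValue;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

Readers never block each other, so this pattern pays off when reads greatly outnumber writes.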

What's the difference between Lock and synchronized?

  • Different nature: synchronized is a built-in Java keyword implemented at the JVM level; Lock is a Java interface.
  • Different locking targets: synchronized can lock classes, methods, and code blocks; Lock can only lock code blocks.
  • Deadlock on exceptions: synchronized acquires and releases the lock automatically and is simple to use; if an exception occurs, the lock is released automatically, so no deadlock arises from a missed release. Lock must be locked and released manually; if unlock() is never called, for example after an exception, the lock is never released.
  • Knowing whether acquisition succeeded: with Lock you can find out whether the lock was successfully acquired; with synchronized you cannot.
  • Interruptibility: Lock can be an interruptible lock; synchronized is non-interruptible, so a waiting thread must wait until the holder finishes executing and releases the lock.

PS: an interruptible lock is one that responds to interruption. Java provides no method to terminate a thread directly, only the interrupt mechanism.

A Java interrupt cannot terminate a thread directly; the interrupted thread decides how to respond. Suppose thread A holds a lock and thread B is waiting to acquire it. Because A holds the lock for a long time, B no longer wants to wait; B can interrupt itself, or another thread can interrupt it. A lock that supports this is an interruptible lock.
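A sketch of the scenario above using lockInterruptibly(): the caller plays thread A and holds the lock; B waits interruptibly and gives up when interrupted. The names are illustrative, and the 100 ms sleep is only to give B time to park on the lock:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleDemo {
    // Returns true if the waiting thread responded to the interrupt.
    static boolean waiterCanBeInterrupted() {
        ReentrantLock lock = new ReentrantLock();
        CountDownLatch started = new CountDownLatch(1);
        AtomicBoolean gaveUp = new AtomicBoolean(false);

        lock.lock(); // thread A (here: the caller) holds the lock and keeps it
        Thread b = new Thread(() -> {
            started.countDown();
            try {
                lock.lockInterruptibly(); // waits, but responds to interrupt
                lock.unlock();            // (not reached in this scenario)
            } catch (InterruptedException e) {
                gaveUp.set(true); // B stopped waiting instead of blocking forever
            }
        });
        b.start();
        try {
            started.await();
            Thread.sleep(100); // give B time to park on the lock
            b.interrupt();     // tell B to stop waiting
            b.join();
        } catch (InterruptedException e) {
            return false;
        } finally {
            lock.unlock();
        }
        return gaveUp.get();
    }

    public static void main(String[] args) {
        System.out.println(waiterCanBeInterrupted()); // true
    }
}
```

Had B used plain lock() or a synchronized block instead, the interrupt would merely set its interrupt flag and B would stay blocked until A released the lock.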

What's the difference between synchronized and ReentrantLock? How to choose?

ReentrantLock is the most common implementation of Lock. It is reentrant, like synchronized, but adds some advanced features:

  • Interruptible waiting: when the thread holding the lock does not release it for a long time, a waiting thread can choose to give up waiting and do something else.
  • Configurable fairness: synchronized is unfair; ReentrantLock is unfair by default, but a fair lock can be requested through the constructor. Note that a fair lock sharply reduces performance and throughput.
  • Binding multiple conditions: one ReentrantLock can bind several conditions at the same time. In synchronized, wait and notify on the lock object implement a single implicit condition; to work with several conditions you would need additional locks, whereas ReentrantLock can call newCondition() repeatedly to create several conditions.
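The classic use of multiple conditions is a bounded buffer with separate notFull/notEmpty wait sets on one ReentrantLock, so producers and consumers are woken selectively. This is a standard textbook sketch, not code from the original article:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    // Two independent wait sets on the same lock:
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity)
                notFull.await(); // producers wait only on notFull
            items.addLast(item);
            notEmpty.signal(); // wake one consumer, not the producers
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await(); // consumers wait only on notEmpty
            T item = items.removeFirst();
            notFull.signal(); // wake one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

With synchronized and its single implicit condition, notify might wake a producer when only consumers can make progress; the two Condition objects avoid that.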

In general, prefer synchronized:

  • synchronized is language-level synchronization and simple enough.
  • Lock must guarantee release in a finally block, otherwise a thrown exception may leave the lock held forever. With synchronized, the JVM guarantees the lock is released normally even when an exception occurs.
  • The JVM optimizes synchronized more easily, because it can record lock information for synchronized in the metadata of threads and objects; with Lock, the JVM has difficulty knowing which lock objects a particular thread holds.
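The lock-in-finally pattern from the second point, as a sketch (the class and field names are illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SafeLocking {
    private final Lock lock = new ReentrantLock();
    private int balance = 100;

    public void withdraw(int amount) {
        lock.lock();
        try {
            if (amount > balance)
                throw new IllegalArgumentException("insufficient funds");
            balance -= amount;
        } finally {
            lock.unlock(); // runs even when the exception above is thrown
        }
    }

    public int balance() { return balance; }
}
```

If unlock() sat after the critical section without the finally block, the exception path would leave the lock permanently held and later callers would block forever; synchronized never has this failure mode.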
