A Detailed Explanation of Java Concurrency: volatile

Time:2022-7-31

Read with these questions in mind:

1. Why is volatile needed? What problems does it solve?

2. How is volatile implemented?

3. What is happens-before?

4. Can volatile guarantee thread safety?

Java Memory Model JMM

Before introducing volatile, let's first look at the Java Memory Model. In languages such as C/C++, memory management works directly against the memory model of the physical hardware and the operating system, so programs are not fully portable across platforms. The Java Virtual Machine specification defines the Java Memory Model to shield programs from the memory-access differences between hardware and operating systems, so that Java programs behave consistently across platforms.

Defining the Java Memory Model is not easy. The model must be rigorous enough that memory accesses are unambiguous, yet loose enough that virtual machine implementations have the flexibility to exploit hardware features for faster memory operations. After a long period of verification and revision, the Java Memory Model did not mature until JDK 5.

JMM

The Java Memory Model specifies that all variables are stored in main memory (an abstraction, not physical memory), and that each thread has its own working memory holding copies of the variables that thread uses. All of a thread's operations on variables act on the copies in its working memory; it cannot operate on main memory directly. The working memories of different threads are isolated from each other, and a thread cannot directly access variables in another thread's working memory.

Introduction to what volatile does

volatile is the lightest-weight synchronization mechanism Java provides; it guarantees visibility and ordering.

Visibility

As the JMM introduction shows, each Java thread has its own working memory. If two threads share a variable, each copies it into its own working memory. When thread A modifies the variable, the change is not immediately visible to thread B; only after thread A flushes the variable to main memory and thread B reloads it from main memory does thread B see the value thread A wrote.

After a variable is marked volatile, every write to it is synchronized to main memory immediately, and every read re-reads it from main memory first, which guarantees the variable's visibility.
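As a minimal sketch of this guarantee (the class and field names are illustrative, not from the article): a reader thread spins on a volatile flag and is guaranteed to observe the writer's update.

```java
public class VisibilityDemo {
    // volatile: the writer's update is flushed to main memory and the
    // reader re-reads it, so the wait loop below is guaranteed to end.
    static volatile boolean stop = false;
    static volatile boolean readerFinished = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) { }   // spins until the volatile write becomes visible
            readerFinished = true;
        });
        reader.start();
        Thread.sleep(50);       // give the reader time to enter its loop
        stop = true;            // volatile write: made visible to the reader
        reader.join();
        System.out.println("reader finished: " + readerFinished);
    }
}
```

If stop were a plain field instead, the JIT would be free to hoist the read out of the loop and the reader could spin forever.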

When a synchronized lock is released, the modifications made inside the synchronized block are flushed to main memory, so synchronized also guarantees visibility.

Ordering

The natural ordering in Java programs can be summarized in one sentence: observed from within a thread, all operations are ordered; observed from one thread looking at another, all operations are disordered. The first half refers to "as-if-serial semantics within a thread"; the second half refers to "instruction reordering" and "the synchronization delay between working memory and main memory".

public class Singleton {
    private static volatile Singleton instance;
    private Singleton() {}
    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

Taking the standard double-checked singleton as an example: creating the singleton object actually consists of three steps: allocate memory, initialize the object, and assign the memory address to the reference. After instruction reordering, this may become: allocate memory, assign the reference, then initialize the object.

Now consider a multithreaded environment: thread A executes new Singleton() and has assigned the reference but not yet initialized the object, while thread B enters the method, finds instance == null to be false, and directly obtains an uninitialized object, which may cause unexpected errors. Adding volatile ensures that the creation of instance is not reordered.
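As a side note, a commonly used alternative that sidesteps the reordering problem without volatile is the initialization-on-demand holder idiom. This sketch is mine, not part of the original article:

```java
public class HolderSingleton {
    private HolderSingleton() {}

    // The JVM guarantees that class initialization is thread-safe, so
    // Holder.INSTANCE is fully constructed before any thread can see it.
    private static class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;  // triggers Holder's initialization lazily
    }
}
```

The first call to getInstance() triggers the initialization of Holder, and the class-loading machinery provides both the laziness and the safe publication.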

Because synchronized allows only one thread into the code block at a time, synchronized can also guarantee ordering.

Atomicity

volatile guarantees only the atomicity of a single read or write; it cannot guarantee the atomicity of compound operations.

static volatile int i = 0;

for (int n = 0; n < 1000; n++) {
    new Thread(() -> i++).start();  // i must be a field: a lambda cannot modify a local
}

A common misconception is that i will reach 1000 when incremented by 1000 threads. In fact, volatile does not guarantee the atomicity of i++, because i++ is split into three instructions: read, add 1, and write, and volatile guarantees atomicity only for the individual read and write. To solve this synchronization problem we still need synchronized or a Lock.
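To make the counter example above actually reach 1000, the read-add-write must be a single atomic operation. One way (a sketch, not from the original text) is java.util.concurrent.atomic.AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    // incrementAndGet() performs read, add, and write as one atomic CAS,
    // unlike i++ on a volatile int, which is three separate steps.
    static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[1000];
        for (int n = 0; n < 1000; n++) {
            threads[n] = new Thread(counter::incrementAndGet);
            threads[n].start();
        }
        for (Thread t : threads) {
            t.join();               // wait for every increment to complete
        }
        System.out.println(counter.get());  // prints 1000
    }
}
```

With a plain or even a volatile int, lost updates would make the final value nondeterministic; with AtomicInteger the result is always exactly 1000.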

Analysis of how volatile is implemented

Visibility principle

public class Test {
    private volatile int a;
    public void update() {
        a = 1;
    }
    public static void main(String[] args) {
        Test test = new Test();
        test.update();
    }
}

Using hsdis and JITWatch, we can view the compiled assembly code.

....
0x000000000295158c: lock cmpxchg %rdi, (%rdx)  // the volatile write adds a lock prefix
....

The lock instruction and the cache coherence protocol

Executing a lock-prefixed instruction triggers two things:

  • The data in the current processor's cache line is written back to main memory
  • The cache lines for that memory address in other processors' caches are invalidated

Suppose threads A and B have both loaded a variable var into their own working memory, and var is declared volatile. When thread A modifies var, the modification is immediately flushed to main memory, and the cached copies of var in other processors are invalidated. When thread B then reads the variable, it must reload it from main memory, which preserves visibility. Data consistency in this multi-cache scenario is guaranteed by the cache coherence protocol (MESI).

For cache coherence, see the third reference article, which includes a MESI demo.


Early implementations of the lock instruction locked the bus: while the lock was held, other processors' reads and writes to memory were blocked until it was released. Because locking the bus is too expensive, later implementations replaced bus locking with cache locking.

Ordering principle

Because of the lock, a cache read must come after the write performed by the lock instruction, so lock implies a certain degree of ordering. The ordering of the code's instructions overall, however, is guaranteed by memory barriers; lock itself does not provide a memory barrier.

JSR-133 divides reads and writes into:

  • Normal read: a read of a non-volatile field, such as getfield, getstatic, or an array load
  • Normal write: a write of a non-volatile field, such as putfield, putstatic, or an array store
  • Volatile read: a read of a volatile field accessible by multiple threads
  • Volatile write: a write of a volatile field accessible by multiple threads

The JMM has reordering rules for these four kinds of reads and writes:

[Table: volatile reordering rules]

From this table, the following semantics hold:

  • If the first operation is a volatile read, no subsequent read or write can be reordered before it
  • If the second operation is a volatile read, a preceding volatile read or write cannot be reordered after it
  • If the first operation is a volatile write, a subsequent volatile read or write cannot be reordered before it
  • If the second operation is a volatile write, no preceding read or write can be reordered after it

To implement these ordering semantics, the JMM uses four memory barriers:

Memory barrier | Instruction sequence | Explanation
StoreStore barrier | Store1; StoreStore; Store2 | Ensures Store1's data is visible to other processors before Store2 and all subsequent store instructions
StoreLoad barrier | Store1; StoreLoad; Load2 | Ensures Store1's data is visible to other processors before Load2 and all subsequent load instructions
LoadLoad barrier | Load1; LoadLoad; Load2 | Ensures Load1's data is loaded before Load2 and all subsequent load instructions
LoadStore barrier | Load1; LoadStore; Store2 | Ensures Load1's data is loaded before Store2 and all subsequent store instructions

volatile's prohibition of instruction reordering is based on these memory barriers. Since it is practically impossible for the compiler to compute the minimal set of barriers for every program, the JMM adopts a conservative strategy, mainly following these rules:

  • Insert a StoreStore barrier before each volatile write, ensuring that writes before the volatile write are not reordered after it
  • Insert a StoreLoad barrier after each volatile write, ensuring that reads and writes after the volatile write are not reordered before it
  • Insert a LoadLoad barrier and a LoadStore barrier after each volatile read, preventing all subsequent reads and writes from being reordered before the volatile read

Why is no LoadStore barrier inserted before a volatile write to prevent reads from being reordered after the write?

My personal understanding is that the lock instruction emitted for the volatile write already enforces that earlier reads complete before the write, so this barrier can be omitted.

In actual execution, the compiler omits unnecessary barriers depending on the situation, as in the following example:

int a;
volatile int v, u;

void f() {
    int i, j;
    i = a;          // load a
    i = v;          // load v
                    // LoadLoad: needed before load u; the LoadStore can be
                    // omitted because store a cannot move above load u anyway
    j = u;          // load u
                    // LoadStore: needed before store a; the LoadLoad can be
                    // omitted because there is no volatile load below
    a = i;          // store a
                    // StoreStore
    v = i;          // store v
                    // StoreStore: the StoreLoad is omitted here because the next
                    // operation is another volatile write; store u takes it instead
    u = j;          // store u
                    // StoreLoad
    i = u;          // load u
                    // LoadLoad: prevents the following load a from being reordered
                    // LoadStore: prevents the following store a from being reordered
    j = a;          // load a
    a = i;          // store a
}

The happens-before principle

If all ordering in Java had to be guaranteed by adding volatile and synchronized, programming would be very verbose. Yet we rarely feel this when writing Java programs, thanks to the Java language's happens-before principle.

Informally, happens-before means: if the effects of operation A must be visible to operation B, then A must happen-before B. The concept is not hard to understand, but what problems arise without the happens-before constraint?

boolean configured = false;

// Thread1
while (!configured) {}
doSomething();

// Thread2
loadConfig();
configured = true;

In this example, Thread1 must wait until Thread2 has loaded the configuration before proceeding, so logically Thread2's operations should happen-before Thread1's; within Thread2, loadConfig() should happen-before configured = true. Without the happens-before guarantee, instruction reordering might cause Thread2 to execute configured = true before loading the configuration; Thread1 could then read configured as true and proceed, producing unpredictable errors.

The Java Memory Model defines several natural happens-before relationships. If two operations are not covered by any of the following rules, their ordering is not guaranteed.

  • Program order rule. Within a thread, code written earlier happens-before code written later; more precisely, earlier operations in the program's control-flow order happen-before later ones, since branch structures exist.

  • Monitor lock rule. An unlock operation happens-before every subsequent lock operation on the same lock.

  • Volatile variable rule. For a volatile variable, a write to it happens-before every subsequent read of it.

  • Thread start rule. A call to a thread object's start() method happens-before every action of that thread.

  • Thread termination rule. All operations of a thread happen-before the detection of that thread's termination.

  • Thread interruption rule. A call to a thread's interrupt() method happens-before the point where the interrupted thread's code detects the interrupt.

  • Finalizer rule. The completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method.

  • Transitivity. If A happens-before B and B happens-before C, then A happens-before C.

volatile boolean configured = false;

// Thread1
while (!configured) {}  // operation 3
doSomething();          // operation 4

// Thread2
loadConfig();           // operation 1
configured = true;      // operation 2

In the example above, with configured now marked volatile, the happens-before rules give:

  • Operation 1 happens-before operation 2 (program order rule)
  • Operation 3 happens-before operation 4 (program order rule)
  • Operation 2 happens-before operation 3 (volatile variable rule)
  • Operation 1 happens-before operation 4 (transitivity)

This guarantees that the program executes according to its intended logic.
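The thread start and termination rules above can also be demonstrated directly; the class and field names in this sketch are illustrative:

```java
public class StartRuleDemo {
    static int data = 0;        // plain field: no volatile needed here
    static int seen = -1;

    public static void main(String[] args) throws InterruptedException {
        data = 42;              // happens-before t.start() (thread start rule)
        Thread t = new Thread(() -> seen = data);
        t.start();              // t is guaranteed to see data == 42
        t.join();               // t's writes happen-before join() returns
        System.out.println("seen = " + seen);  // prints: seen = 42
    }
}
```

Neither field needs volatile: the start rule makes the write of data visible inside the thread, and the termination rule makes the thread's write of seen visible after join().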

This article is also published on Juejin: https://juejin.cn/post/6998148242928042021/

References
