AQS source code analysis

Time: 2021-12-9
Classification of locks
Pessimistic lock and optimistic lock

Almost all the locks used in Java are pessimistic. synchronized is pessimistic in every one of its forms, from biased lock through lightweight lock to heavyweight lock, and the lock implementation classes provided by the JDK are pessimistic as well. In fact, as long as there is a "lock object", it is a pessimistic lock, because an optimistic lock is not really a lock at all but an algorithm that retries CAS in a loop.

In the JDK, optimistic locking appears as the atomic classes in the java.util.concurrent.atomic package.
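For example, AtomicInteger's increment is exactly such a CAS retry loop. A minimal sketch of the same idea (illustrative, not the JDK's exact implementation):

import java.util.concurrent.atomic.AtomicInteger;

// Optimistic "locking": no lock object at all, just retry CAS until it succeeds.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        for (;;) {
            int current = value.get();        // read the current value
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;                  // CAS succeeded: nobody changed the value in between
            }
            // CAS failed: another thread got there first, loop and retry
        }
    }
}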

Fair lock and unfair lock

When multiple threads request a fair lock, the lock is granted in request order as it is released: whoever asked first gets it first, which is fair. With an unfair lock, a thread that has just requested the lock may get it ahead of threads that have been waiting; the order is effectively arbitrary or driven by other priorities.

The difference between a fair lock and a non-fair lock shows up in lock(). This is the non-fair version:

final void lock() {
    if (this.compareAndSetState(0, 1)) {
        this.setExclusiveOwnerThread(Thread.currentThread());
    } else {
        this.acquire(1);
    }
}

When lock() is called on a non-fair lock, it first tries to jump the queue: it attempts to CAS state from 0 to 1 directly. If that succeeds, the owner thread is set to the current thread, meaning the lock has been acquired. If the barging attempt fails, it falls back to acquire(1). A fair lock never barges; it calls acquire(1) straight away.
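For comparison, the fair version (as it appears in JDK 8's FairSync, shown here for reference) has no barging CAS; its tryAcquire additionally checks hasQueuedPredecessors() before taking the lock:

final void lock() {
    acquire(1);
}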

Biased lock → lightweight lock → heavyweight lock

When a synchronized block is executed for the first time, the lock object becomes a biased lock (the lock flag in the object header is changed via CAS); literally, a lock "biased towards the first thread that acquired it". After the block finishes, the thread does not actively release the biased lock. When it reaches the block a second time, it checks whether the thread recorded as holding the lock is itself (the owning thread ID is also stored in the object header). If so, it simply proceeds; since the lock was never released, there is no need to lock again. If only one thread uses the lock from beginning to end, biased locking clearly adds very little overhead and performs well.

As soon as a second thread joins the competition, the biased lock is upgraded to a lightweight lock (a spin lock). While competition continues in the lightweight state, a thread that fails to grab the lock spins, repeatedly checking whether it can acquire the lock. Acquiring the lock means modifying the lock flag in the object header with CAS: compare whether the flag currently says "released", and if so set it to "locked"; the compare and the set are one atomic step. If the CAS succeeds the lock has been grabbed, and the thread then records itself as the current lock holder.

Spinning for a long time is very wasteful: one thread holds the lock while the others burn CPU on the spot without doing any useful work. This is called busy-waiting. If several threads share a lock but there is little or no real contention, synchronized uses the lightweight lock and tolerates short periods of busy-waiting. This is a trade-off: a short busy-wait is exchanged for avoiding the cost of switching between user mode and kernel mode.

The busy-wait is bounded (a counter records the number of spins; by default 10 iterations are allowed, and the limit can be changed with a JVM parameter). If contention is heavy, a thread that reaches the maximum spin count upgrades the lightweight lock to a heavyweight lock (still by CASing the lock flag, but without recording an owning thread ID). A thread that later tries to acquire the lock and finds it is a heavyweight lock suspends itself directly (instead of busy-waiting) and waits to be woken up.

Exclusive lock and shared lock

Exclusive lock: the node at the head of the queue holds the lock. When it releases, only the second node in the queue is woken up to tryAcquire; the third node and beyond keep sleeping.

Shared lock: where exclusive mode wakes only the second node, shared mode also wakes every following node whose mode is SHARED.

Exclusive versus shared is about acquiring the lock (whether several threads may hold it at the same time); there is no exclusive/shared distinction when releasing a lock.
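A concrete JDK example of the two modes is ReentrantReadWriteLock: its read lock is acquired in shared mode (many readers can hold it at once), while its write lock is exclusive. A minimal usage sketch:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedVsExclusive {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int data = 0;

    public int read() {
        rwLock.readLock().lock();      // shared mode: several readers may hold this at the same time
        try {
            return data;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int value) {
        rwLock.writeLock().lock();     // exclusive mode: one writer, no concurrent readers
        try {
            data = value;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}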

How to achieve synchronization

Reference article 1 proposes four ways of implementing synchronization:

1: Spin synchronization

Disadvantage: it burns CPU. Threads that fail to get the lock keep occupying the CPU doing CAS operations (see the sketch below).
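A minimal sketch of such a spin lock (illustrative only), which makes the drawback visible: every losing thread sits in the while loop burning CPU:

import java.util.concurrent.atomic.AtomicBoolean;

// Method 1: pure spin. Threads that fail the CAS keep looping.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            // busy-wait: nothing useful happens here, the CPU just spins
        }
    }

    public void unlock() {
        locked.set(false);
    }
}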

2: Yield + spin synchronization

Advantage: it addresses the spin lock's performance problem. A thread that fails to get the lock does not have to spin uselessly; it can give up the CPU instead, which is what Thread.yield() does. So when the thread fails to acquire the lock, it calls yield() to give up the CPU.

Disadvantage: spin + yield does not fully solve the problem. yield works well only when just two threads compete for the lock, and it only gives up the CPU for the moment; the operating system may well schedule the same thread again immediately (a sketch follows).
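Building on the SpinLock sketch above, method 2 only changes the failure branch: instead of spinning flat out, the losing thread yields the CPU:

public void lock() {
    while (!locked.compareAndSet(false, true)) {
        Thread.yield();   // give up this CPU slice; the scheduler may still pick this thread again right away
    }
}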

3: Sleep + spin synchronization

Disadvantage: it is hard to choose a suitable sleep time.

4: Park + spin synchronization

volatile int status = 0;   // 0 = unlocked, 1 = locked
Queue parkQueue;           // queue of waiting threads

void lock(){
    while(!compareAndSet(0,1)){
        // failed to grab the lock: park until someone wakes us up
        park();
    }
    // lock held here; the critical section may run for a long time (say, 10 minutes)
    unlock();
}

void unlock(){
    lock_notify();
}

void park(){
    // add the current thread to the waiting queue
    parkQueue.add(currentThread);
    // give up the CPU and block the current thread
    releaseCpu();
}

void lock_notify(){
    // take the thread at the head of the waiting queue
    Thread t = parkQueue.header();
    // wake up the waiting thread
    unpark(t);
}
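The pseudocode can be turned into a (deliberately simplified) runnable sketch using LockSupport and a concurrent queue. This only illustrates the park + spin idea; it is not how AQS implements it, and it glosses over the wake-up races that AQS handles with node states:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

// Method 4: threads that fail the CAS enqueue themselves and park;
// unlock() wakes the thread at the head of the queue.
public class ParkLock {
    private final AtomicInteger status = new AtomicInteger(0);
    private final Queue<Thread> parkQueue = new ConcurrentLinkedQueue<>();

    public void lock() {
        while (!status.compareAndSet(0, 1)) {
            Thread current = Thread.currentThread();
            parkQueue.add(current);
            // re-check after enqueueing so an unlock that just happened is not missed
            if (status.compareAndSet(0, 1)) {
                parkQueue.remove(current);
                return;
            }
            LockSupport.park(this);      // suspend until unpark (or a spurious wakeup)
            parkQueue.remove(current);   // woken up: leave the queue and retry the CAS
        }
    }

    public void unlock() {
        status.set(0);
        Thread next = parkQueue.peek();  // wake the longest-waiting thread, if any
        if (next != null) {
            LockSupport.unpark(next);
        }
    }
}

The re-check after enqueueing is what keeps an unlock that races with the enqueue from being missed; AQS solves the same problem more carefully with the SIGNAL state described later.
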
ReentrantLock source code analysis

ReentrantLock is built on the fourth approach: park + spin.

Let's start with a piece of code:

import android.util.Log;
import java.util.concurrent.locks.ReentrantLock;

public class MyRunnable implements Runnable {
    private int num = 0;
    private ReentrantLock lock = new ReentrantLock(true);
    @Override
    public void run() {
        while (num < 20){
            lock.lock();
            try{
                num++;
                Log.e("ZZF", Thread.currentThread().getName() + " got the lock, num is " + num);
            }catch (Exception e){
                e.printStackTrace();
            }finally {
                lock.unlock();
            }
        }
    }
}
Initialize lock instance

First, let's look at the constructors:

public ReentrantLock() {
    sync = new NonfairSync();
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

There are two constructors. A plain new ReentrantLock() creates a non-fair lock (NonfairSync) by default.

In our example we call the second constructor and pass in true, so we get a fair lock. (All of the following analysis is based on the fair lock.)

lock.lock()
public void lock() {
    sync.lock();
}

final void lock() {
    acquire(1);
}

public final void acquire(int arg) {
   if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
       selfInterrupt();
}

When lock() is called, it delegates to acquire(1), which splits into two parts: tryAcquire, and, if that fails, acquireQueued(addWaiter(Node.EXCLUSIVE), arg).

tryAcquire

This method tries to acquire the lock.

protected final boolean tryAcquire(int var1) {
    Thread var2 = Thread.currentThread();
    int var3 = this.getState();
    if (var3 == 0) {
        if (!this.hasQueuedPredecessors() && this.compareAndSetState(0, var1)) {
            this.setExclusiveOwnerThread(var2);
            return true;
        }
    } else if (var2 == this.getExclusiveOwnerThread()) {
        int var4 = var3 + var1;
        if (var4 < 0) {
            throw new Error("Maximum lock count exceeded");
        }

        this.setState(var4);
        return true;
    }
    return false;
}

To understand the code above we first need to look at Node, a static inner class of AQS:

static final class Node {
    static final AbstractQueuedSynchronizer.Node SHARED = new AbstractQueuedSynchronizer.Node();// Marker for shared mode
    static final AbstractQueuedSynchronizer.Node EXCLUSIVE = null;// Marker for exclusive mode
    static final int CANCELLED = 1;// The node was cancelled because of timeout or interruption. A cancelled node never competes for the lock again and never changes to another state; it will be unlinked from the queue and reclaimed by GC
    static final int SIGNAL = -1;// The successor of this node is (or soon will be) parked, so this node must unpark its successor when it releases the lock or is cancelled
    static final int CONDITION = -2;// The node sits in a condition queue, blocked waiting on a condition
    static final int PROPAGATE = -3;// In shared mode the head may be in this state, meaning the next shared acquire should propagate unconditionally
    volatile int waitStatus;// Holds one of the states above (0 is the initial state)
    volatile AbstractQueuedSynchronizer.Node prev;// Previous node in the queue
    volatile AbstractQueuedSynchronizer.Node next;// Next node in the queue
    volatile Thread thread;// The thread this node represents
    Node nextWaiter;// Used only by the condition queue
}

getState() reads the volatile int field state. Its default value 0 means unlocked, 1 means locked, and a value greater than 1 means the lock has been re-entered by its owner.

When the first thread comes in, getState() is 0, so it enters the first if branch.

public final boolean hasQueuedPredecessors() {
    AbstractQueuedSynchronizer.Node var1 = this.tail;
    AbstractQueuedSynchronizer.Node var2 = this.head;
    AbstractQueuedSynchronizer.Node var3;
    return var2 != var1 && ((var3 = var2.next) == null || var3.thread != Thread.currentThread());
}

This method decides whether the current thread needs to queue.

This needs to be discussed in three situations:

1: The queue has not been initialized

When the queue has not been initialized, var1 (tail) and var2 (head) are both null, so var2 != var1 is false and hasQueuedPredecessors() returns false. Negated in the first if, that means the thread does not need to queue: it can CAS the state directly, set itself as the owner thread of the node, and tryAcquire returns true. Back in acquire(), the negation makes the whole condition false, so the queueing code inside the if is never reached. This shows that the point of acquiring the lock is simply to let the winning thread run its logic without interference.

2: The queue is initialized and contains more than one node

When the queue is initialized, a dummy node is placed at the front, so the second node in the queue is really the first waiter; the dummy node can be thought of as a placeholder for the thread that currently holds the lock. With more than one node, var2 != var1 is true (var2 started out null, and with more than one node head and tail differ), and since the head has a successor, (var3 = var2.next) == null is false.

If var3.thread != Thread.currentThread() is true, someone else is already queuing ahead of me, so I have to queue as well: hasQueuedPredecessors() returns true, the first if is not entered, tryAcquire returns false, and acquire() goes on to evaluate acquireQueued(addWaiter(Node.EXCLUSIVE), arg), which is explained later.

If var3.thread != Thread.currentThread() is false, there is no need to queue. The simplest way to picture it: the person at the front of the ticket queue is my girlfriend, and her buying the ticket is the same as me buying it, so I do not have to queue again.

3: The queue is initialized and contains only one node

When there is only one node, it is just the dummy node created when the queue was initialized. At that moment both head and tail point to it. It is not a real waiter; it merely occupies the place of the thread that currently holds the lock.

acquireQueued(addWaiter(Node.EXCLUSIVE), arg)

In the second case above, this code is executed. addWaiter() wraps the current thread in a node and appends it to the AQS queue.

private Node addWaiter(Node mode) {
    Node node = new Node(mode);

    for (;;) {
        Node oldTail = tail;
        if (oldTail != null) {
            U.putObject(node, Node.PREV, oldTail);
            if (compareAndSetTail(oldTail, node)) {
                oldTail.next = node;
                return node;
            }
        } else {
            initializeSyncQueue();
        }
    }
}

The current thread is wrapped in a node whose mode is EXCLUSIVE.

Then an infinite loop runs. On the first pass oldTail is null, so the else branch initializes the queue. On the second pass oldTail is no longer null: the new node's prev is pointed at the old tail, the node is appended with a CAS on tail, and once that succeeds the old tail's next is pointed at the new node. So the AQS queue is not built up front; it is initialized lazily the first time a thread actually has to wait.

final boolean acquireQueued(final Node node, int arg) {
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                return interrupted;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } catch (Throwable t) {
        cancelAcquire(node);
        throw t;
    }
}

First get the predecessor node. Only when the predecessor is head is the current node the second element in the queue. AQS is FIFO: when the current holder releases the lock, only the second node in the queue may acquire it. If tryAcquire succeeds, the current node becomes the new head and the old head is unlinked. setHead() already nulls the new head's thread and prev, so setting the old head's next to null is enough to break it out of the linked list.

If the acquisition fails, the predecessor's waitStatus determines whether the thread should be suspended.

private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        return true;
    if (ws > 0) {
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        pred.compareAndSetWaitStatus(ws, Node.SIGNAL);
    }
    return false;
}

If the previous node's waitStatus is SIGNAL, the predecessor has promised to wake its successor, so the current thread can safely be parked; the method returns true.

If waitStatus > 0, the predecessor has been cancelled, so the method walks backwards past all cancelled nodes and links the current node behind the first non-cancelled one, effectively kicking the cancelled nodes out of the queue. In any other state, the predecessor's waitStatus is CASed to SIGNAL. In both of those cases false is returned and acquireQueued loops again; once true is returned, the code below runs.

private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this);
    return Thread.interrupted();
}

This suspends the current thread with LockSupport.park() and, when the thread wakes up, reports whether it was interrupted.

lock.unlock()
public void unlock() {
    sync.release(1);
}

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

When unlock() is called, it goes through release(), which starts with tryRelease():

protected final boolean tryRelease(int var1) {
    int var2 = this.getState() - var1;
    if (Thread.currentThread() != this.getExclusiveOwnerThread()) {
        throw new IllegalMonitorStateException();
    } else {
        boolean var3 = false;
        if (var2 == 0) {
            var3 = true;
            this.setExclusiveOwnerThread((Thread)null);
        }

        this.setState(var2);
        return var3;
    }
}

getState() - 1 allows for reentrancy: state may be greater than 1. Only when the result var2 is 0 is the owner thread cleared; then the lock is truly released and true is returned.
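A small illustration of the reentrancy this accounts for: each lock() by the owner increments state, so unlock() must be called the same number of times before the lock is really released.

ReentrantLock lock = new ReentrantLock();

lock.lock();            // state: 0 -> 1
try {
    lock.lock();        // same thread re-enters, state: 1 -> 2
    try {
        // critical section
    } finally {
        lock.unlock();  // state: 2 -> 1, tryRelease returns false, the lock is still held
    }
} finally {
    lock.unlock();      // state: 1 -> 0, owner cleared, queued threads may now acquire
}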

private void unparkSuccessor(Node node) {
    int ws = node.waitStatus;
    if (ws < 0)
        node.compareAndSetWaitStatus(ws, 0);
    Node s = node.next;
    if (s == null || s.waitStatus > 0) {
        s = null;
        for (Node p = tail; p != node && p != null; p = p.prev)
            if (p.waitStatus <= 0)
                s = p;
    }
    if (s != null)
        LockSupport.unpark(s.thread);
}

This method wakes up the successor. The head node is passed in: after the current thread has released the lock, it needs to wake the thread of the next usable node.

The loop walks backwards from the tail of the queue (via prev) looking for the node closest to the head whose waitStatus is <= 0. The backward direction is used because a newly enqueued node's next pointer may not be set yet, so walking forwards from the head is not reliable.

If a non-null successor is found, it is unparked so its thread can try to take the lock.

Condition

The synchronization queue of AQS is mentioned above. In addition to this synchronization queue, there is also a condition queue.

A thread can join the condition queue only if it currently holds the lock and is running. After joining the condition queue it releases the lock and blocks; at that point the second node of the synchronization queue is woken up to take the lock. The wake-up (signal) operation moves one node from the condition queue to the tail of the synchronization queue so that it parks there and competes for the lock again; it does not wake a random thread.

Flow

1. Create a node and add it to the end of the condition queue.

2. Release the lock held by the thread.

3. Check whether the node is already in the synchronization queue. If it is, compete for the lock with CAS; if it is not, park until signal() moves the node to the tail of the synchronization queue and wakes it up.

import java.util.PriorityQueue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {

    private int queueSize = 10;
    private PriorityQueue<Integer> queue = new PriorityQueue<Integer>(queueSize);

    private Lock lock = new ReentrantLock();
    private Condition full = lock.newCondition();
    private Condition empty = lock.newCondition();

    class Consumer implements Runnable{

        @Override
        public void run() {
            consume();
        }

        private void consume() {
            while (true){
                lock.lock();
                try {
                    while (queue.size() == 0){
                        try {
                            System.out.println("queue is empty, waiting for data");
                            empty.await();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    queue.poll();
                    full.signal();
                    System.out.println("took an element from the queue, " + queue.size() + " elements remain");
                }finally {
                    lock.unlock();
                }
            }
        }
    }

    class Producer implements Runnable{

        @Override
        public void run() {
            produce();
        }

        private void produce() {
            while (true){
                lock.lock();
                try {
                    while(queue.size()== queueSize){
                        try {
                            System.out.println("queue is full, waiting for free space");
                            full.await();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    queue.offer(1);
                    empty.signal();
                }finally {
                    lock.unlock();
                }
            }

        }
    }
}

The code above uses Condition to implement the producer-consumer pattern. When the queue size is 0, the consumer calls empty.await().
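One possible way to drive the demo (anything not in the original listing is just for illustration); Producer and Consumer are non-static inner classes, so they are created from a ConditionDemo instance:

public static void main(String[] args) {
    ConditionDemo demo = new ConditionDemo();
    new Thread(demo.new Producer(), "producer").start();
    new Thread(demo.new Consumer(), "consumer").start();
}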

await()
public final void await() throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    Node node = addConditionWaiter();
    int savedState = fullyRelease(node);
    int interruptMode = 0;
    while (!isOnSyncQueue(node)) {
        LockSupport.park(this);
        if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
            break;
    }
    if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
        interruptMode = REINTERRUPT;
    if (node.nextWaiter != null) // clean up if cancelled
        unlinkCancelledWaiters();
    if (interruptMode != 0)
        reportInterruptAfterWait(interruptMode);
}

ConditionObject contains two fields, one pointing to the first node of the condition queue and the other to the last:

private transient Node firstWaiter;
private transient Node lastWaiter;


private Node addConditionWaiter() {
    Node t = lastWaiter;
    if (t != null && t.waitStatus != Node.CONDITION) {
        unlinkCancelledWaiters();
        t = lastWaiter;
    }

    Node node = new Node(Node.CONDITION);

    if (t == null)
        firstWaiter = node;
    else
        t.nextWaiter = node;
    lastWaiter = node;
    return node;
}

addConditionWaiter() creates a node in CONDITION state. If the condition queue is empty, the new node becomes firstWaiter; otherwise it is linked after the current lastWaiter. Either way it becomes the new lastWaiter. The condition queue is also first-in-first-out, but it is a singly linked list.

final int fullyRelease(Node node) {
    try {
        int savedState = getState();
        if (release(savedState))
            return savedState;
        else    
            throw new IllegalMonitorStateException();
    } catch (Throwable t) {
        node.waitStatus = Node.CANCELLED;
        throw t;
    }
}

fullyRelease first reads the current state and then calls release() with that full value to release the lock.

The third reference blog says that fullyRelease releases the lock completely in one step, no matter how many times it has been re-entered. My understanding is that a reentrant acquisition is still the same thread holding the same lock, so releasing the whole state count simply releases that one lock.

After the code above, the node has been placed in the condition queue and its lock released, so next the thread has to park. But as noted earlier, it must not park if its node is already in the synchronization queue, so it first checks isOnSyncQueue().

final boolean isOnSyncQueue(Node node) {
    if (node.waitStatus == Node.CONDITION || node.prev == null)
        return false;
    if (node.next != null) // If has successor, it must be on queue
        return true;
    return findNodeFromTail(node);
}

private boolean findNodeFromTail(Node node) {
    for (Node p = tail;;) {
        if (p == node)
            return true;
        if (p == null)
            return false;
        p = p.prev;
    }
}

When the node's waitStatus is Node.CONDITION, or node.prev == null, it is not in the synchronization queue (prev and next are not used at all in the condition queue). If node.next is non-null, the node must already be in the synchronization queue; otherwise findNodeFromTail scans the synchronization queue from the tail to be sure.

private void unlinkCancelledWaiters() {
    Node t = firstWaiter;
    Node trail = null;
    while (t != null) {
        Node next = t.nextWaiter;
        if (t.waitStatus != Node.CONDITION) {
            t.nextWaiter = null;
            if (trail == null)
                firstWaiter = next;
            else
                trail.nextWaiter = next;
            if (next == null)
                lastWaiter = trail;
        } else
            trail = t;
        t = next;
    }
}

Any waiter whose status is no longer CONDITION (i.e. a cancelled waiter) is unlinked from the condition queue.

signal()
public final void signal() {
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    Node first = firstWaiter;
    if (first != null)
        doSignal(first);
}

private void doSignal(Node first) {
    do {
        if ((firstWaiter = first.nextWaiter) == null)
            lastWaiter = null;
        first.nextWaiter = null;
    } while (!transferForSignal(first) &&
             (first = firstWaiter) != null);
}

First check that the calling thread actually holds the lock; if not, an exception is thrown. Then take the first node of the condition queue; the subsequent operations work on that node.

final boolean transferForSignal(Node node) {
    if (!node.compareAndSetWaitStatus(Node.CONDITION, 0))
        return false;

    Node p = enq(node);
    int ws = p.waitStatus;
    if (ws > 0 || !p.compareAndSetWaitStatus(ws, Node.SIGNAL))
        LockSupport.unpark(node.thread);
    return true;
}

private Node enq(Node node) {
    for (;;) {
        Node oldTail = tail;
        if (oldTail != null) {
            U.putObject(node, Node.PREV, oldTail);
            if (compareAndSetTail(oldTail, node)) {
                oldTail.next = node;
                return oldTail;
            }
        } else {
            initializeSyncQueue();
        }
    }
}

enq() appends the node to the tail of the synchronization queue.

signal() only operates on the first node of the condition queue, while signalAll() moves every node into the synchronization queue.

The complete cooperation between synchronization queue and condition queue is shown in the following figure:

(figure: 1608883350(1).png — how the synchronization queue and the condition queue cooperate)
References

1. https://blog.csdn.net/java_lyvee/article/details/98966684

2. https://segmentfault.com/a/1190000017372067

3. https://segmentfault.com/a/1190000020345054