Common concurrency problems and event concurrency model

Time: 2021-4-21

Common concurrency problems

Over the years, researchers have spent a great deal of time and energy studying the defects of concurrent programming. Many common patterns of concurrency bugs have been identified, and they fall into two categories: non-deadlock defects and deadlock defects. Understanding these patterns is the first step toward writing robust, correct concurrent programs.

Non-deadlock defects

Research shows that non-deadlock problems account for the majority of concurrency bugs. How do they arise, and how can they be fixed? We focus on two types: atomicity-violation defects and order-violation defects.

Atomicity-violation defects

This is an example that appears in MySQL.

Thread 1::
if (thd->proc_info) {
  ...
  fputs(thd->proc_info, ...);
  ...
}

Thread 2::
thd->proc_info = NULL;

In this example, both threads access the proc_info member of the thd structure. The first thread checks that proc_info is non-NULL and then prints its value; the second thread sets it to NULL. Clearly, if the first thread is interrupted after the check but before the fputs() call, and the second thread then sets the pointer to NULL, the program will crash with a NULL-pointer dereference when the first thread resumes.

A more formal definition of an atomicity violation is: "the desired serializability among multiple memory accesses is violated (i.e., a code region is intended to be atomic, but the atomicity is not enforced during execution)."

The fix for this type of problem is usually straightforward: add locks around the accesses to the shared variable, so that every thread holds the lock whenever it touches the proc_info field. Of course, all other code that accesses the structure must acquire the lock as well.

pthread_mutex_t proc_info_lock = PTHREAD_MUTEX_INITIALIZER;

Thread 1::
pthread_mutex_lock(&proc_info_lock);
if (thd->proc_info) {
  ...
  fputs(thd->proc_info, ...);
  ...
}
pthread_mutex_unlock(&proc_info_lock);

Thread 2::
pthread_mutex_lock(&proc_info_lock);
thd->proc_info = NULL;
pthread_mutex_unlock(&proc_info_lock);

Order-violation defects

Here is a simple example.

Thread 1::
void init() {
    ...
    mThread = PR_CreateThread(mMain, ...);
    ...
}

Thread 2::
void mMain(...) {
    ...
    mState = mThread->State;
    ...
}

As you may have noticed, the code in thread 2 seems to assume that the variable mThread has already been initialized. However, if thread 1 does not run first, thread 2 may crash by dereferencing a NULL pointer (assuming mThread is initially NULL; if it is not, thread 2 could read from an arbitrary memory location and dereference it, causing even stranger problems).

A more formal definition of an order violation is: "the desired order between two memory accesses is broken (i.e., A should always execute before B, but that order is not enforced during execution)."

We can fix this type of defect by enforcing the order. Condition variables are a simple and reliable way to do so. In the example above, we can modify the code as follows:

pthread_mutex_t mtLock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t mtCond = PTHREAD_COND_INITIALIZER;
int mtInit            = 0;

Thread 1::
void init() {
   ...
   mThread = PR_CreateThread(mMain, ...);

   // signal that the thread has been created...
   pthread_mutex_lock(&mtLock);
   mtInit = 1;
   pthread_cond_signal(&mtCond);
   pthread_mutex_unlock(&mtLock);
   ...
}

Thread 2::
void mMain(...) {
   ...
   // wait for the thread to be initialized...
   pthread_mutex_lock(&mtLock);
   while (mtInit == 0)
       pthread_cond_wait(&mtCond, &mtLock);
   pthread_mutex_unlock(&mtLock);

   mState = mThread->State;
   ...
}

Deadlock defects

Besides the non-deadlock defects mentioned above, deadlock is a classic problem in many complex concurrent systems. For example, if thread 1 holds lock L1 and is waiting for lock L2, while thread 2 holds lock L2 and is waiting for L1 to be released, a deadlock occurs. This deadlock can arise in the following code fragment:

Thread 1:    Thread 2:
lock(L1);    lock(L2);
lock(L2);    lock(L1);

Note that running this code does not necessarily produce a deadlock. It occurs when, say, thread 1 acquires L1 and a context switch then hands the CPU to thread 2, which acquires L2 and tries to acquire L1; at that point the two threads wait for each other forever. As shown in the figure below, the cycle in the dependency graph indicates the deadlock.

[Figure: the deadlock dependency graph, in which the cycle between threads and locks indicates the deadlock]

Conditions for deadlock

Four conditions need to hold for a deadlock to occur:

  • Mutual exclusion: threads claim exclusive control of the resources they require (e.g., a lock).
  • Hold-and-wait: threads hold resources already allocated to them while waiting for additional resources.
  • No preemption: resources cannot be forcibly taken away from the threads that hold them.
  • Circular wait: there is a circular chain of threads in which each thread holds one or more resources that are being requested by the next thread in the chain.

If any one of these four conditions does not hold, deadlock cannot occur. This suggests an obvious set of solutions: prevent one of the conditions from arising.

Deadlock prevention
Circular wait

Perhaps the most practical prevention technique is to write the locking code so that circular waiting never arises. The most direct way is to impose a total ordering on lock acquisition. For example, if there are only two locks in the system (L1 and L2), always acquiring L1 before L2 avoids the problem: the strict ordering makes circular waiting impossible, so there can be no deadlock.

Of course, complex systems have far more than two locks, and a total order over all of them may be hard to achieve. Therefore, a partial ordering can be a useful way to structure lock acquisition and avoid deadlock. The memory-mapping code in Linux is an excellent example of partial lock ordering: the comment at the top of the source lists ten different groups of lock orderings, including simple ones such as "i_mutex before i_mmap_mutex" and more complex ones such as "i_mmap_mutex before private_lock before swap_lock before mapping->tree_lock".

However, both total and partial ordering require careful design of the locking strategy. Moreover, an ordering is only a convention; a careless programmer can easily ignore it and cause deadlock. Finally, ordered locking requires a deep understanding of the code base and of how the various routines call one another, and even a single mistake can have serious consequences.

Note: the acquisition order can be enforced by lock address, for example by always acquiring locks from the lower address to the higher one (or, consistently, from high to low).
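As a rough sketch (the function name do_both is an assumption, not from the original text), ordering by address might look like this:

// Acquire two mutexes in a fixed order based on their addresses, so every
// caller agrees on the order no matter how the arguments are passed in.
void do_both(pthread_mutex_t *m1, pthread_mutex_t *m2) {
    if (m1 > m2) {              // always grab the higher-address lock first
        pthread_mutex_lock(m1);
        pthread_mutex_lock(m2);
    } else {
        pthread_mutex_lock(m2);
        pthread_mutex_lock(m1);
    }
    // ... critical section ...
    pthread_mutex_unlock(m1);
    pthread_mutex_unlock(m2);
}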

Hold-and-wait

The hold-and-wait condition can be avoided by acquiring all locks at once, atomically. In practice, this can be achieved with code like the following:

lock(prevention);
lock(L1);
lock(L2);
...
unlock(prevention);

This code guarantees that once a thread has acquired the prevention lock, no other thread can sneak in and grab L1 or L2 in between, even if an untimely thread switch occurs.

However, the problems with this approach are also obvious. First, it does not work well with encapsulation, because it requires us to know exactly which locks will be needed and to acquire them all ahead of time. And because all locks are acquired up front, rather than when they are actually needed, concurrency may be reduced.

No preemption

Because a lock is considered held until unlock is called, acquiring multiple locks often gets us into trouble: while waiting for one lock, we may be holding another. Many thread libraries provide a more flexible interface to avoid this situation. Specifically, a trylock() routine attempts to acquire the lock; if the lock is already held, it returns -1 instead of making the thread wait.

This interface can be used to build a deadlock-free lock-acquisition protocol:

top:
    lock(L1);
    if (trylock(L2) == -1) {
        unlock(L1);
        goto top;
    }

Note that even if another thread follows the same protocol but acquires the locks in the opposite order (L2 then L1), the program still will not deadlock. However, a new problem arises: livelock. It is possible for two threads to repeat this sequence over and over, each failing to acquire both locks. The system is always running code, yet makes no progress, hence the name livelock. There are solutions to the livelock problem: for example, add a random delay at the end of the loop before retrying the whole sequence, which reduces the chance of repeated interference between competing threads.
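A minimal runnable sketch of this retry-with-random-back-off idea, using pthread_mutex_trylock() (the function name acquire_both and the back-off bound are assumptions, not from the original text):

#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;

// Try to acquire L1 and L2 without blocking on one while holding the other.
// If L2 cannot be taken, release L1, sleep a short random time, and retry.
void acquire_both(void) {
    while (1) {
        pthread_mutex_lock(&L1);
        if (pthread_mutex_trylock(&L2) == 0)
            return;                    // success: both locks are now held
        pthread_mutex_unlock(&L1);     // back off to avoid deadlock
        usleep(rand() % 1000);         // random delay reduces livelock
    }
}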

The trylock approach has other difficulties as well. The first is again encapsulation: if one of the locks is buried inside some routine that is being called, the jump back to the beginning is hard to implement. Also, if the code acquired other resources along the way, it must make sure to release them too; for example, if after acquiring L1 the code had allocated some memory, it would have to free that memory before the goto when the attempt on L2 fails. Of course, in some scenarios this approach works well.

Mutual exclusion

The final prevention technique is to avoid the need for mutual exclusion entirely. In general, code has critical sections, so this is hard to do. So what can we do? The idea is simple: using powerful hardware instructions, we can build data structures that do not require explicit locks.

For example, we can use the compare-and-swap instruction to build a list insertion that needs no lock-based synchronization. Here is the (unsynchronized) code to insert an element at the head of a linked list:

void insert(int value) {
    node_t *n = malloc(sizeof(node_t));
    assert(n != NULL);
    n->value = value;
    n->next = head;
    head = n;
}

One possible implementation is:

void insert(int value) {
    node_t *n = malloc(sizeof(node_t));
    assert(n != NULL);
    n->value = value;
    do {
        n->next = head;
    } while (CompareAndSwap(&head, n->next, n) == 0);
}

This code first points the new node's next pointer at the current head, then tries to atomically swap the new node in as the new head. If another thread has changed head in the meantime, the compare-and-swap fails and the thread simply retries.
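The CompareAndSwap routine itself is not shown above; as a minimal sketch (an assumption on my part, not the book's definition), it could be expressed with a compiler atomic builtin:

// Sketch of CompareAndSwap using the GCC/Clang builtin: atomically replaces
// *addr with new_node only if *addr still equals expected.
// Returns nonzero on success, 0 if another thread changed *addr first.
static int CompareAndSwap(node_t **addr, node_t *expected, node_t *new_node) {
    return __sync_bool_compare_and_swap(addr, expected, new_node);
}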

Deadlock avoidance

Besides deadlock prevention, some scenarios lend themselves to deadlock avoidance. Avoidance requires some global knowledge of which locks the various threads might acquire during their execution; with that knowledge, subsequent scheduling can guarantee that no deadlock occurs.

For example, suppose we need to schedule four threads on two processors, and suppose further that we know thread 1 (T1) acquires locks L1 and L2, T2 also acquires L1 and L2, T3 acquires only L2, and T4 acquires no locks at all. The table below shows each thread's lock requirements.

       T1     T2     T3     T4
L1     yes    yes    no     no
L2     yes    yes    yes    no

A feasible schedule follows from the observation that as long as T1 and T2 never run at the same time, no deadlock can occur. Here is one such schedule:

[Figure: a schedule on the two CPUs in which T1 and T2 are never run concurrently]

Dijkstra's Banker's Algorithm is a similar, well-known solution. However, such approaches are useful only in very limited environments, for example in an embedded system where the full set of tasks and the locks they need are known in advance. Moreover, this approach limits concurrency. Therefore, avoiding deadlock via scheduling is not widely used.

Deadlock detection and recovery

The last commonly used strategy is to allow deadlocks to occur occasionally and to take some action once one has been detected. If deadlocks are rare, such a lazy approach can be quite pragmatic.

Many database systems employ deadlock detection and recovery techniques. A deadlock detector runs periodically, building a resource graph and checking it for cycles. When a cycle (deadlock) is found, the system rolls back or even restarts according to an established policy; if more intricate repair of data structures is needed, a human may get involved.
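As a toy sketch of the cycle check (my simplification, assuming each thread waits for at most one other thread), a wait-for graph can be tested for cycles like this:

#include <stdbool.h>

#define N 4
// waits_for[i] is the thread that thread i is waiting on; -1 means "none".
// Example values: T0 waits for T1 and T1 waits for T0, so a cycle exists.
int waits_for[N] = { 1, 0, -1, -1 };

// Follow the wait-for chain from each thread with a slow and a fast pointer;
// if they ever meet, the chain loops back on itself and a deadlock exists.
bool has_deadlock(void) {
    for (int start = 0; start < N; start++) {
        int slow = start, fast = start;
        while (fast != -1 && waits_for[fast] != -1) {
            slow = waits_for[slow];
            fast = waits_for[waits_for[fast]];
            if (slow == fast)
                return true;           // cycle found: deadlock
        }
    }
    return false;
}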

Note: perhaps the best solution is to develop new concurrent programming models. In systems such as MapReduce, programs can perform certain kinds of parallel computation without any locks at all. Locks inevitably bring all kinds of trouble; we should avoid them wherever possible, unless we are sure they are truly required.

Event-based concurrency

So far, it may seem that the only way to build concurrent applications is with threads. That is not entirely true. Some GUI-based applications, as well as some kinds of network servers, use another style of concurrency, known as event-based concurrency, which is popular in some modern systems.

Event-based concurrency addresses two problems. First, correctly managing concurrency in multithreaded applications is difficult. Second, with threads the developer has little control over what gets scheduled at any given moment: programmers simply create threads and rely on the operating system to schedule them reasonably, but the OS schedule is not always optimal.

The basic idea: an event loop

The idea is simple: we wait for something (an "event") to occur; when it does, we check what type of event it is and do the small amount of work it requires (which may include issuing I/O requests, or scheduling other events for future handling).

Let's look at a typical event-based server. Such applications are built around a simple construct known as the event loop, whose pseudocode looks like this:

while (1) {
    events = getEvents(); 
    for (e in events)
        processEvent(e);
}

The main loop waits for events to occur and then processes them, one at a time. The code that processes each event is known as an event handler. While a handler is processing an event, it is the only activity taking place in the system; thus, deciding which event to handle next is equivalent to scheduling. This explicit control over scheduling is one of the fundamental advantages of the event-based approach.

But it also raises a bigger question: how exactly does an event-based server determine which events have occurred, particularly for network and disk I/O?

An important API: select() (or poll())

Now that we have the basic event loop, we must address the question of how to receive events. Most systems provide a basic API for this purpose, via the select() or poll() system calls. What these interfaces enable a program to do is simple: check whether there is any incoming I/O that should be attended to. For example, a network application (such as a web server) may wish to check whether any network packets have arrived, in order to service them.

Take select() as an example; its definition is as follows:

int select(int nfds,
           fd_set *restrict readfds, 
           fd_set *restrict writefds, 
           fd_set *restrict errorfds,
           struct timeval *restrict timeout);

select() examines the I/O descriptor sets whose addresses are passed in via readfds, writefds, and errorfds, to see whether some of their descriptors are ready for reading, ready for writing, or have an exceptional condition pending. The first nfds descriptors in each set are checked, and on return each set is replaced with the subset of descriptors that are ready for the given operation. select() returns the total number of ready descriptors across all the sets.

A common usage is to set the timeout to NULL, which causes select() to block indefinitely until some descriptor is ready. However, more robust servers usually specify a timeout; one common technique is to set the timeout to zero, so that the call to select() returns immediately.
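For example, a non-blocking poll of the descriptors might look like this (a small sketch; readFDs and maxFD are assumed to be set up as in the example that follows):

struct timeval tv = { 0, 0 };   // zero timeout: do not block
int rc = select(maxFD + 1, &readFDs, NULL, NULL, &tv);
if (rc == 0) {
    // no descriptors are ready right now; do other work and try again later
}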

Using select()

Let's see how to use select() to determine which network descriptors have incoming messages. Here is a simple example:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    // open and set up a bunch of sockets (not shown)
    // main loop
    while (1) {
        // initialize the fd_set to all zero
        fd_set readFDs;
        FD_ZERO(&readFDs);

        // now set the bits for the descriptors
        // this server is interested in
        // (for simplicity, all of them from min to max)
        int fd;
        for (fd = minFD; fd < maxFD; fd++)
            FD_SET(fd, &readFDs);

        // do the select
        int rc = select(maxFD+1, &readFDs, NULL, NULL, NULL);

        // check which actually have data using FD_ISSET()
        for (fd = minFD; fd < maxFD; fd++)
            if (FD_ISSET(fd, &readFDs))
                processFD(fd);
    }
}

This code is straightforward. After initialization, the server enters an infinite loop. Inside the loop, it uses the FD_ZERO() macro to clear the set of file descriptors, then uses FD_SET() to include every file descriptor from minFD to maxFD in the set. Finally, the server calls select() to see which of the connections have data available. By then using FD_ISSET() in a loop, the event server can see which descriptors have data ready and process the incoming data.

With a single CPU and an event-based application, the problems found in concurrent programs are no longer present. Because only one event is handled at a time, there is no need to acquire or release locks; the event-based server is single-threaded and cannot be interrupted by another thread.

Problem: blocking system calls

However, there is a problem: what if an event requires you to issue a system call that might block?

For example, suppose a request comes in from a client asking the server to read a file from disk and return its contents. To service such a request, some event handler will issue an open() system call to open the file, followed by a series of read() calls to read it. When the file has been read into memory, the server can start sending the results to the client.
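In a thread-based server this would just be ordinary blocking calls, roughly like the sketch below (the function name, path handling and error handling are simplifications of my own):

#include <fcntl.h>
#include <unistd.h>

// Blocking version (sketch): each call may put the caller to sleep until the
// disk I/O completes, which is fine with threads but fatal in an event loop.
ssize_t read_whole_request(const char *path, char *buffer, size_t size) {
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, buffer, size);
    close(fd);
    return n;
}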

Both open() and read() may issue I/O requests to the storage system and thus may take a long time to complete. With a thread-based server, this is not an issue: while the thread issuing the I/O request is suspended, other threads can run. With the event-based approach, however, there are no other threads to run: if an event handler makes a blocking call, the entire server blocks until the call completes. While the event loop is blocked, the system sits idle, which is a potentially huge waste of resources. We therefore have a rule that must be obeyed in event-based systems: no blocking calls are allowed.

Solution: asynchronous I / O

To overcome this limitation, many modern operating systems have introduced new ways to issue I/O requests to the disk system, referred to generically as asynchronous I/O. These interfaces enable an application to issue an I/O request and return control immediately to the caller, before the I/O has completed; additional interfaces let the application determine whether various I/Os have completed.

When a program needs to read a file, it calls the relevant asynchronous I/O interface; if the call succeeds, it returns immediately and the application can continue with its other work. For each outstanding asynchronous I/O, the application can later poll the system, via another interface, to determine whether that I/O has completed.
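A minimal sketch of this pattern using the POSIX AIO interface (aio_read(), aio_error(), aio_return()); the file name and buffer size here are illustrative assumptions:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BUFSZ 4096

int main(void) {
    char buffer[BUFSZ];
    int fd = open("file.txt", O_RDONLY);     // assumed input file

    // describe the request: which fd, where to put the data, how much, from where
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buffer;
    cb.aio_nbytes = BUFSZ;
    cb.aio_offset = 0;

    aio_read(&cb);                            // issue the read; returns immediately

    // periodically poll for completion while doing other work
    while (aio_error(&cb) == EINPROGRESS) {
        // ... handle other events here ...
    }

    ssize_t n = aio_return(&cb);              // bytes read, or -1 on error
    close(fd);
    return (n < 0);
}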

If a program has tens or hundreds of I/Os in flight at a given point in time, repeatedly checking whether each of them has completed is inefficient. To address this problem, some systems provide an interrupt-based approach, which uses UNIX signals to inform the application when an asynchronous I/O completes, removing the need to poll the system repeatedly.

Signals provide a way to communicate with a process. Specifically, a signal can be delivered to an application; doing so stops the application from whatever it is doing and runs a signal handler, i.e., some code in the application written to handle that signal. When the handler finishes, the process resumes its previous behavior.
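As a tiny sketch (the choice of SIGHUP and the printed message are illustrative assumptions), installing and catching a signal looks like this:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

// Runs whenever the process receives SIGHUP; normal execution resumes after
// it returns. (printf here is just for illustration.)
void handle(int sig) {
    printf("caught signal %d, resuming work\n", sig);
}

int main(void) {
    signal(SIGHUP, handle);   // register the handler
    while (1)
        pause();              // sleep until a signal arrives
    return 0;
}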

Another problem: state management

Another issue with the event-based approach is that when an event handler issues an asynchronous I/O, it must package up some program state for the next event handler to use when the I/O finally completes.

Let's look at a simple example, in which a server needs to read data from a file descriptor (fd) and, once the read completes, write the data to a network socket descriptor (sd).
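In a thread-based program this is just two calls in sequence (a minimal sketch; fd, sd, buffer and size are assumed to be set up elsewhere):

#include <unistd.h>

// Thread-based version (sketch): when read() returns, sd is still right here
// on the thread's stack, so the code knows exactly which socket to write to.
void copy_file_to_socket(int fd, int sd, char *buffer, size_t size) {
    ssize_t rc = read(fd, buffer, size);
    if (rc > 0)
        write(sd, buffer, (size_t)rc);
}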

Doing this in a multithreaded program is easy: when read() finally returns, the code immediately knows which socket to write to, because that information is sitting on the thread's stack. In an event-based system, to perform the same task, we first issue the read asynchronously with an AIO call and then periodically check whether it has completed. When it does, how does the event-based server know what to do? Which socket should the data be written to?

The solution is simple: record, in some data structure, the information needed to finish processing the event; when the event occurs (i.e., when the disk I/O completes), look up the needed information and process the event.

In this particular example, the solution is to record the socket descriptor (sd) in some data structure (e.g., a hash table) indexed by the file descriptor (fd). When the disk I/O completes, the event handler uses the file descriptor to look up that record, which hands the socket descriptor back to the caller. The server can then finish its work and write the data to the socket.
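A minimal sketch of that bookkeeping (the names continuation_t, pending, remember, on_read_complete and MAX_FD are illustrative assumptions; a simple array stands in for the hash table):

#include <stddef.h>
#include <unistd.h>

#define MAX_FD 1024

// State saved when the asynchronous read is issued, to be used by the
// handler that runs once the disk I/O completes.
typedef struct {
    int sd;          // socket to write to
    char *buffer;    // buffer the file data is being read into
    size_t size;     // how many bytes were requested
} continuation_t;

continuation_t pending[MAX_FD];   // indexed by file descriptor

// called when issuing the asynchronous read on fd
void remember(int fd, int sd, char *buffer, size_t size) {
    pending[fd] = (continuation_t){ .sd = sd, .buffer = buffer, .size = size };
}

// called by the event loop when the read on fd has completed with n bytes
void on_read_complete(int fd, ssize_t n) {
    continuation_t *c = &pending[fd];
    if (n > 0)
        write(c->sd, c->buffer, (size_t)n);
}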
