IO multiplexing models: synchronous, asynchronous, blocking and non-blocking, with detailed examples

Time: 2020-8-1

Introduction to the IO models

There are five commonly used IO models:

blocking IO
nonblocking IO
IO multiplexing
signal driven IO
asynchronous IO

Let's first talk about the objects and steps involved when an IO operation occurs.

For a network IO (let’s take read as an example), it involves two system objects:

  • the process (or thread) that makes the IO call
  • the kernel

When a read operation occurs, it will go through two stages:

  • Waiting for the data to be ready, e.g. accept() waiting for a connection or recv() waiting for incoming data
  • Copying the data from the kernel to the process, e.g. accept() returning the accepted connection or recv() returning the data sent over the connection; the data first arrives in a kernel buffer and is then copied from the kernel into the process's user space

For a socket stream, the data therefore goes through two stages:

  • The first step usually involves waiting for packets to arrive on the network, after which they are copied into a buffer in the kernel.
  • The second step is to copy the data from the kernel buffer to the application process buffer.

It is important to keep these two stages in mind, because the IO models differ precisely in how they behave during these two phases.

Blocking I/O (blocking IO)

In Linux, all sockets are blocking by default. A typical read operation proceeds as follows:

When the user process calls the recvfrom system call, the kernel starts the first stage of IO: preparing the data (for network IO, the data often has not arrived yet at the beginning; for example, a complete UDP packet has not been received, so the kernel has to wait for enough data to arrive). This takes time, because the data must first be copied into a buffer in the operating system kernel. During this period the user process is blocked (by its own choice, since it issued a blocking call). When the kernel has waited until the data is ready, it copies the data from the kernel into user memory and then returns the result; only then is the user process released from the blocked state and resumes running.

Therefore, blocking IO is characterized by blocking in both phases of IO execution.
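To make this concrete, here is a minimal blocking-IO sketch in C. The UDP socket and port 9000 are arbitrary choices for illustration, not part of the original text; the single recvfrom() call covers both phases, waiting for a datagram and then copying it into the user buffer.

```c
/* Minimal blocking-IO sketch: recvfrom() blocks through both phases
 * (waiting for a datagram, then copying it from kernel to user space).
 * The port number 9000 is an arbitrary choice for illustration. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);            /* blocking by default */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char buf[1024];
    /* The process is suspended here until a complete datagram has arrived
     * and has been copied into buf. */
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
    if (n >= 0)
        printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}
```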

Non-blocking I/O (nonblocking IO)

Under Linux, you can set a socket to non-blocking. When reading a non-blocking socket, the flow is as follows:

When the user process issues a read operation and the data in the kernel is not ready, the kernel does not block the user process; instead it immediately returns an error. From the user process's point of view, it does not have to wait after issuing a read; it gets a result immediately. When the user process sees that the result is an error, it knows the data is not ready yet, so it can issue the read operation again. Once the data in the kernel is ready and the kernel receives the system call from the user process again, it copies the data into user memory and returns.

Therefore, the defining feature of non-blocking IO is that the user process has to keep actively asking the kernel whether the data is ready.

It is worth noting that non-blocking behavior applies only to the waiting-for-data phase. Once data has actually arrived and recvfrom runs, the copy of data from kernel to user space is still a synchronous, blocking operation.
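A minimal sketch of the non-blocking pattern, under the same illustrative assumptions (a UDP socket on an arbitrary port): the socket is switched to non-blocking with fcntl(), and the process keeps re-issuing recvfrom(), treating EAGAIN/EWOULDBLOCK as "data not ready yet".

```c
/* Minimal non-blocking-IO sketch: the process itself polls the kernel.
 * Error handling for socket setup is omitted for brevity. */
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                        /* arbitrary example port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* Switch the socket to non-blocking mode. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    char buf[1024];
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n >= 0) {                                    /* phase 2 (the copy) happened here */
            printf("received %zd bytes\n", n);
            break;
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK) {   /* data not ready: ask again later */
            usleep(100 * 1000);                          /* do other work instead of busy-spinning */
            continue;
        }
        perror("recvfrom");                              /* a real error */
        break;
    }
    close(fd);
    return 0;
}
```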

I/O multiplexing

IO multiplexing is what we usually mean by select, poll and epoll; in some places it is also called event-driven IO. The advantage of select/epoll is that a single process can handle the IO of multiple network connections at the same time. The basic principle is that the select, poll or epoll function continuously checks all the sockets it is responsible for, and when data arrives on any of them, it notifies the user process.

In fact, this flow is not much different from blocking IO; in some ways it is even worse, because it requires two system calls (select and recvfrom), whereas blocking IO needs only one (recvfrom). The advantage of select, however, is that it can handle multiple connections at the same time.

Therefore, if the number of connections being handled is not very high, a web server using select/epoll does not necessarily perform better than one using multi-threading + blocking IO, and its latency may even be greater. The advantage of select/epoll is not that it handles a single connection faster, but that it can handle many more connections.

In the IO multiplexing model, in practice each socket is generally set to non-blocking, because only then can a single thread/process avoid getting stuck on one socket and keep processing the others. Note that the user process as a whole is still blocked the entire time; it is just blocked by the select function rather than by socket IO.

When a user process calls select, the entire process is blocked. At the same time, all incoming socket connections are added to select's watch list, and the kernel "monitors" all the sockets that select is responsible for. The select (or poll, epoll, etc.) function then continuously checks all of these sockets, which are non-blocking and sit in the watch list, using its monitoring mechanism to see whether any of them has data. When the data on any socket is ready, select returns, and the user process then calls the read operation to copy the data from the kernel into the user process.
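As a concrete illustration, here is a minimal select()-based sketch in C, a small echo server on an arbitrary port: the process blocks in select() rather than on any single socket, and only calls recv() on descriptors that select() has reported as readable.

```c
/* Minimal select()-based multiplexing sketch: one process watches a listening
 * TCP socket plus all accepted connections.  Port 9000 is an arbitrary choice. */
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    fd_set all_fds;
    FD_ZERO(&all_fds);
    FD_SET(listen_fd, &all_fds);
    int max_fd = listen_fd;

    for (;;) {
        fd_set read_fds = all_fds;            /* select() modifies the set, so copy it */
        /* The process blocks here, in select(), not in per-socket IO. */
        if (select(max_fd + 1, &read_fds, NULL, NULL, NULL) < 0) break;

        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &read_fds)) continue;
            if (fd == listen_fd) {            /* a new connection is ready to accept */
                int conn = accept(listen_fd, NULL, NULL);
                if (conn >= 0) {
                    FD_SET(conn, &all_fds);
                    if (conn > max_fd) max_fd = conn;
                }
            } else {                           /* data is ready: recv() only does the copy */
                char buf[1024];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) { close(fd); FD_CLR(fd, &all_fds); }
                else        write(fd, buf, n); /* echo back */
            }
        }
    }
    close(listen_fd);
    return 0;
}
```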

Comment:
I/O multiplexing is characterized by a mechanism in which a single process can wait on multiple file descriptors at the same time; if any of these file descriptors (socket descriptors) becomes read-ready, the select() function returns.
So IO multiplexing is, in essence, not concurrent, because only one process or thread is doing work at any moment. The reason it improves efficiency is that select/epoll puts the incoming sockets into its "watch" list and handles whichever socket becomes readable or writable as soon as it does. If select/epoll is watching many sockets at once, any activity on any of them is returned to the process for handling, which is always more efficient than dealing with the sockets one by one, blocking and waiting on each in turn.
Of course, you can also use a multi-thread/multi-process model, opening one process/thread per connection, but the memory consumed and the cost of process/context switching use up more system resources.
So we can combine IO multiplexing with multi-process/multi-thread to achieve high-performance concurrency: IO multiplexing is responsible for efficiently receiving socket readiness notifications, and once a request is received it is handed to a process pool/thread pool to handle the application logic.

Asynchronous I/O (asynchronous IO)

In fact, asynchronous IO is still rarely used under Linux. Let's take a look at its flow:

After the user process initiates the read operation, it can immediately go on to do other things. From the kernel's perspective, when it receives an asynchronous read, it returns immediately, so the user process is never blocked. The kernel then waits for the data to be ready and copies the data into the user's memory. When all this is done, the kernel sends a signal to the user process to tell it that the read operation is complete.
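As a rough sketch of this "submit now, be notified later" pattern, here is a POSIX AIO read in C. Note the assumptions: it reads an ordinary file rather than a socket (POSIX AIO is rarely used with sockets), the file path is hypothetical, and for brevity it polls for completion with aio_error() instead of using the signal notification described above.

```c
/* Rough POSIX AIO sketch (link with -lrt): aio_read() returns immediately,
 * the kernel does the wait+copy, and the process checks completion later.
 * The file path /tmp/example.txt is a hypothetical example. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/example.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[1024];
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    aio_read(&cb);                      /* submit the request; returns immediately */

    /* ... the process is free to do other work here ... */

    while (aio_error(&cb) == EINPROGRESS)
        usleep(1000);                   /* real code would use a signal or callback instead */

    ssize_t n = aio_return(&cb);        /* fetch the result of the completed operation */
    printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}
```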

The difference and relationship between blocking/non-blocking IO and synchronous/asynchronous IO

Blocking IO vs non-blocking IO:

Concept:
Blocking and non-blocking are concerned with the state of a program while it waits for the result of a call (a message or return value).
A blocking call means the current thread is suspended until the call result returns; the calling thread does not continue until it has the result. A non-blocking call means the call returns immediately even if the result is not yet available, without blocking the current thread.

Example: you phone the bookstore owner and ask whether there is a book called Distributed Systems. If you make a blocking call, you "hang" yourself on the phone until you get an answer about the book. If you make a non-blocking call, you go off and do something else regardless of whether the owner has answered yet, though you should of course check back every few minutes to see whether he has come back with a result. Here, blocking and non-blocking have nothing to do with synchronous or asynchronous; they have nothing to do with how the owner gets back to you.


Analysis:
Blocking IO blocks the corresponding process until the operation completes, while non-blocking IO returns immediately even while the kernel is still preparing the data.

Synchronous IO vs asynchronous IO:

Concept:
Synchronous vs asynchronous focuses on the message communication mechanism (synchronous communication vs asynchronous communication). A synchronous call does not return until the result has been obtained, but once the call returns, you have the return value; in other words, the caller actively waits for the result of the call. Asynchronous is the opposite: after the call is issued, it returns immediately without a result. In other words, when an asynchronous procedure call is issued, the caller does not get the result right away; after the call is sent, the callee notifies the caller through status, a notification, or a callback function.

Typical asynchronous programming models include Node.js. To take a popular example: you phone the bookstore owner and ask whether he has the book Distributed Systems. With a synchronous communication mechanism, the owner says "wait a moment, let me check" and does not get back to you until he has the answer (which may take 5 seconds or a whole day), at which point he tells you the result (the return value). With an asynchronous communication mechanism, the owner simply says "I'll check and call you back" and hangs up immediately (no result is returned). When he has finished checking, he calls you back on his own initiative; here the owner returns the result via a callback.


Analysis:
Before explaining the difference between synchronous IO and asynchronous IO, we need the definitions of both. Stevens's definition (actually the POSIX definition) is as follows:

A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes;
An asynchronous I/O operation does not cause the requesting process to be blocked;

The difference between the two is that synchronous IO blocks the process while performing the "IO operation". According to this definition, blocking IO, non-blocking IO and IO multiplexing all belong to synchronous IO.
Some may object that non-blocking IO is not blocked. Here is the "tricky" part: the "IO operation" in the definition refers to the real IO operation, which in our example is the recvfrom system call. With non-blocking IO, if the data in the kernel is not ready when recvfrom is executed, the process is not blocked. However, once the data in the kernel is ready, recvfrom copies the data from the kernel into the user's memory, and during that copy the process is blocked.

Asynchronous IO is different. When a process initiates an IO operation, the call returns immediately and the process can ignore it until the kernel sends a signal telling it that the IO is complete. Throughout the whole process, the process is never blocked.

Illustrative examples of the IO models

Finally, a few examples to illustrate four of the IO models.
Four people, A, B, C and D, go fishing:
A uses the most old-fashioned fishing rod, so he has to keep watching it until a fish bites (blocking IO);
B's fishing rod has an indicator that shows whether a fish is on the hook, so B chats with the girl next to him and glances at the indicator from time to time; if a fish is hooked, he quickly pulls up the rod (non-blocking IO);
C's fishing rod is much the same as B's, but he has a better idea: he puts out several fishing rods at the same time and stands by; as soon as any rod's indicator shows a fish is hooked, he pulls up that rod (IO multiplexing);
D is rich. He hires someone to fish for him; once that person catches a fish, he sends a message to D (asynchronous IO).

Select/poll/epoll polling mechanisms

Select, poll and epoll are all essentially synchronous I/O, because the process itself still has to do the reading and writing after the read/write event is ready; that is, the read/write itself is still blocking.

Select/poll/epoll are the concrete implementations of IO multiplexing. As mentioned above, with IO multiplexing the sockets are set to non-blocking and then put into the watch list of select/poll/epoll. So what mechanisms do they use to detect that data has arrived on a socket? How efficient are they? Which one should we use to get the best IO multiplexing? Their implementations, efficiency, advantages and disadvantages are discussed below:

(1) The implementations of select and poll need to poll the entire fd set until a device is ready, during which the process may alternate between sleeping and waking several times. Epoll also calls epoll_wait and may likewise alternate between sleeping and waking while it checks the ready list; however, when a device becomes ready, a callback function puts the ready fd into the ready list and wakes up the process sleeping in epoll_wait. So although both sleep and wake repeatedly, select and poll have to traverse the entire fd set when "awake", whereas epoll only has to check whether the ready list is empty, which saves a lot of CPU time. This is the performance gain brought by the callback mechanism.

(2) Every call to select and poll copies the fd set from user mode to kernel mode and puts the current process on the device wait queue once, while epoll copies the fds only once and puts the current process on the wait queue only once (at the start of epoll_wait; note that this wait queue is not a device wait queue, but a wait queue defined internally by epoll). This also saves a lot of overhead.
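To tie the comparison together, here is a minimal epoll sketch in C, equivalent to the earlier select example (same arbitrary port): the descriptors are registered with the kernel once via epoll_ctl(), and epoll_wait() returns only the descriptors that are actually ready, so no full-set scan is needed.

```c
/* Minimal epoll sketch: descriptors are registered once with epoll_ctl(),
 * and epoll_wait() returns only the ready ones.  Port 9000 is arbitrary. */
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define MAX_EVENTS 64

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);    /* fd copied to the kernel once */

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* Blocks until at least one registered fd is ready; no full-set scan needed. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {                      /* new connection ready to accept */
                int conn = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
            } else {                                    /* data ready: recv() only does the copy */
                char buf[1024];
                ssize_t r = recv(fd, buf, sizeof(buf), 0);
                if (r <= 0) { epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL); close(fd); }
                else        write(fd, buf, r);          /* echo back */
            }
        }
    }
    return 0;
}
```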