Confused about processes, threads, and coroutines? One article takes you through them all!

Date: 2021-07-28

Preface

Welcome to the operating system series, which continues to explain things with illustrations and plain language, so that beginners can understand and get started quickly.

This chapter introduces processes, threads, and coroutines. Many beginners do not have a clear grasp of these concepts, so everything is laid out for you here. Let's get into the text.


A little story

Xiao Ming (the operating system) founded a small Internet company. Because he planned to develop software A and software B at the same time, he hired two development teams: Xiao Wang's team and Xiao Li's team. But the company was very small and had only one room (the CPU), which could hold only one team at a time. So that neither project would be delayed, Xiao Ming (the operating system) decided that Xiao Wang's team would work in the morning and Xiao Li's team in the afternoon (this arrangement is called scheduling).

As team leaders, Xiao Li (a process) and Xiao Wang (a process) have a lot to worry about. They need to analyze and organize the software, do the architecture design, and finally break the work into tasks and assign them to each developer (a thread) on the team. When the teams swap rooms, the leaders also have to record the overall development progress so that work can resume smoothly next time. The developers have it much easier: each is responsible for only a small piece and only needs to keep track of that small piece.

Through this little story we can see that a process manages multiple threads, just as a team leader (process) manages multiple developers (threads).


Process

What is a process

When you open NetEase Cloud Music, a process is created. When you open QQ, another process is created. Every program running on your computer is a process. A process really is that simple and blunt.

Now consider a problem: a process reads a file from the hard disk, the file is very large, and reading it takes a long time. If the CPU just sat waiting for the hard disk to return data, CPU utilization would be very low.

It's like boiling water. Do you stand there waiting for it to boil? Obviously you can do something else in the meantime (play Cyberpunk 2077, say), and when the water boils, come back and pour it into a cup. Isn't that better?

The CPU works the same way. When it finds that a process is reading a file from the hard disk, it does not block and wait for the disk to return data; it goes off and executes other processes. When the disk returns the data, the CPU receives an interrupt signal and goes back to the previous process to continue running.


This way of executing multiple programs alternately is the preliminary idea behind how the CPU manages multiple processes.

Some people may ask whether this alternating execution is slow. Don't worry: the CPU executes and switches very quickly, and a process may run for only tens or hundreds of milliseconds at a time, which is below human perception. Many processes can take turns running within a single second, which gives us the illusion of parallelism. Strictly speaking, this is called concurrency.

Alternating execution of multiple processes on a single core is concurrency; multiple processes running on multiple cores at the same time is parallelism.


Process control structure

Before you can create anything, you need a design first. When you build a house, a car, or anything else, you need a design drawing (a structure), and then you build according to it. A process is no exception: it also has its own "design drawing", the process control block, referred to below as the PCB.

Structure information of the PCB

The PCB is the unique mark of a process's existence: every process must have a corresponding PCB, and when the process disappears, its PCB disappears with it. Its main contents are listed below (see the sketch after this list).

  • Process description information

    • The unique identifier of the process, similar to an ID number
    • The user identifier, i.e. the user the process belongs to, used mainly for sharing and protection
  • Process control and management information

    • The current state of the process (running, ready, blocked, etc.), used as the basis for processor allocation and scheduling
    • The process priority, which describes the process's precedence in claiming the processor; a high-priority process gets the processor first
  • Resource allocation list

    • Describes the state of the memory address space or virtual address space, the list of open files, and the input/output devices in use
  • CPU-related information

    • The register values in the CPU. When the process is switched out, the CPU state must be saved into the corresponding PCB, so that when the process runs again it can continue from the breakpoint.
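
To make these fields concrete, here is a minimal sketch in C of what a PCB might look like. The field names and sizes are illustrative assumptions, not any real kernel's layout (Linux's equivalent, task_struct, is far larger):

```c
#include <stdint.h>

/* A toy PCB, assuming simplified field types; real kernels keep
 * much more state here. */
enum proc_state { NEW, READY, RUNNING, BLOCKED, EXIT };

struct pcb {
    /* Process description information */
    int32_t pid;                  /* unique process identifier */
    int32_t uid;                  /* owning user, for sharing and protection */

    /* Process control and management information */
    enum proc_state state;        /* basis for allocation and scheduling */
    int priority;                 /* higher priority claims the CPU first */

    /* Resource allocation list */
    void *page_table;             /* memory / virtual address space state */
    int open_files[16];           /* open file descriptors */

    /* CPU-related information, saved on a context switch */
    uint64_t registers[32];       /* register values at the switch point */
    uint64_t program_counter;     /* where to resume (the "breakpoint") */

    struct pcb *next;             /* link pointer for the queues below */
};
```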

PCB queues

PCBs link processes in the same state together into lists, forming various queues:

  • All processes in the ready state are chained together into the ready queue
  • Processes waiting for a particular event are chained together into various blocking queues



Process states

By observation we find that process execution follows a run, pause, run pattern. It looks simple, but it involves transitions between process states.

Process three states

During its execution, a process has at least three basic states: the running state, the ready state, and the blocked state.

[Figure: three-state transition diagram]
The states in the figure mean:

  • Running: the process occupies the CPU at this moment and is executing
  • Ready: the process could run, but it is paused because another process is currently running
  • Blocked: the process has stopped to wait for an event (such as an I/O read); in this state it cannot run even if it is given the CPU

The transitions in the figure:

  1. The CPU schedules a ready-state process to execute and it enters the running state; when its time slice is used up, it returns to the ready state and waits to be scheduled again
  2. The CPU schedules a ready-state process to execute and it enters the running state; it issues an I/O request and enters the blocked state; when the I/O request completes, the CPU receives an interrupt signal and the process enters the ready state, waiting to be scheduled again

Process five states

On top of the three states, a refinement adds two more basic states: the creation (new) state and the end (exit) state.

[Figure: five-state transition diagram]

The states in the figure mean:

  • New: the process is being created
  • Ready: the process could run, but it is paused because another process is currently running
  • Running: the process occupies the CPU at this moment and is executing
  • Exit: the process is disappearing from the system
  • Blocked: the process has stopped to wait for an event (such as an I/O read); in this state it cannot run even if it is given the CPU

The transitions between states (a small sketch of the legal transitions follows this list):

  • Null => New: the initial state when a new process is created
  • New => Ready: when creation completes, the process enters the ready state
  • Ready => Running: the CPU selects a process from the ready queue to execute, and it enters the running state
  • Running => Exit: the process finishes running or hits an error, and enters the end state
  • Running => Ready: the time slice allocated to the process is used up, and it returns to the ready state
  • Running => Blocked: the process starts waiting for an event and enters the blocked state
  • Blocked => Ready: the awaited event completes, the CPU receives an interrupt signal, and the process enters the ready state
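
As a tiny sketch of the rules above, a C function can check whether a transition is one of the legal arrows in the five-state diagram (the enum and function name are illustrative; the Null => New step happens before a state value exists, so it is not represented):

```c
#include <stdbool.h>

enum proc_state { NEW, READY, RUNNING, BLOCKED, EXIT };

/* True if "from -> to" is an arrow in the five-state diagram. */
bool legal_transition(enum proc_state from, enum proc_state to)
{
    switch (from) {
    case NEW:     return to == READY;      /* creation completed */
    case READY:   return to == RUNNING;    /* picked by the scheduler */
    case RUNNING: return to == READY       /* time slice used up */
                      || to == BLOCKED     /* waiting for an event */
                      || to == EXIT;       /* finished or errored */
    case BLOCKED: return to == READY;      /* awaited event completed */
    default:      return false;            /* EXIT is terminal */
    }
}
```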

Process seven states

In fact, a process has one more state, the suspended state. A suspended process does not occupy memory: it is swapped out to hard disk space and swapped back into memory when it is needed again. The suspended state comes in two kinds:

  • Blocked-suspended: the process is in external storage (the hard disk) and is still waiting for an event to occur
  • Ready-suspended: the process is in external storage (the hard disk), but as soon as it is brought into memory it can run immediately

Combining these two suspended states with the five states above gives the seven-state model.
[Figure: seven-state transition diagram]

From the figure we can see that the creation, ready, running, and blocked states can all transfer into a suspended state. So what decides when a process is suspended, and when it moves from a suspended state back to a non-suspended one (ready or blocked)? There is no fixed rule: it depends on the current resource situation and performance requirements, and the process priority is used when performing the suspend and activate operations.


Context switching of processes

The CPU switching from one process to another is called a process context switch.

Before talking about process context switching, let's first talk about CPU context switching.

The CPU context refers to the CPU registers and the program counter:

  • CPU registers are small, extremely fast caches built into the CPU
  • The program counter holds the position of the instruction the CPU is executing, or of the next instruction to be executed

CPU context switching is easy to understand: save the CPU context of the previous task, load the CPU context of the next task, and finally jump to the new position indicated by the program counter to run the new task.

The "tasks" mentioned here mainly include processes, threads, and interrupts. So, depending on the task, CPU context switching can be divided into process context switching, thread context switching, and interrupt context switching.

How does a process context switch work

First, processes are managed and scheduled by the kernel, so process context switching happens in kernel mode. The context being switched includes user-space resources (virtual memory, the stack, global variables, etc.) and kernel-space resources (the kernel stack, registers, etc.).

During a context switch, the context of the previous process is saved into its PCB, and then the context from the next process's PCB is loaded into the CPU so that process can continue executing.

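Here is a conceptual sketch of that save/restore step in C. The structures and helpers are stand-ins invented for illustration; a real kernel does this in assembly, atomically:

```c
#include <string.h>

/* Illustrative stand-ins, not a real kernel's layout. */
struct cpu_context {
    unsigned long registers[32];    /* general-purpose register values */
    unsigned long program_counter;  /* where the process should resume */
};

struct pcb {
    struct cpu_context ctx;         /* CPU-related information in the PCB */
    void *page_table;               /* the process's virtual memory mapping */
};

static struct cpu_context cpu;      /* stand-in for the CPU's live state */

void context_switch(struct pcb *prev, struct pcb *next)
{
    memcpy(&prev->ctx, &cpu, sizeof cpu);  /* save old context into its PCB */
    /* A real switch would also load next->page_table here, changing the
     * address space; that step is what makes process switches costly. */
    memcpy(&cpu, &next->ctx, sizeof cpu);  /* load new context from its PCB */
    /* Execution then resumes at next->ctx.program_counter. */
}
```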

Scenarios that trigger a process context switch

  • To ensure that all processes are scheduled fairly, CPU time is divided into time slices that are allocated to processes in turn. When a process's time slice runs out, the CPU switches to another process that is waiting to run
  • When system resources (such as memory) are insufficient, a process cannot run until resources are available; at that point it is suspended, and the system schedules another process to run
  • When a process suspends itself voluntarily through a sleep function such as sleep, it is naturally rescheduled
  • When a higher-priority process becomes runnable, the current process is suspended so that the higher-priority process can run
  • When a hardware interrupt occurs, the process on the CPU is suspended by the interrupt and the kernel's interrupt service routine runs instead

Thread

What is a thread

In early operating systems, the process was the basic unit of independent execution. Later, computer scientists proposed a smaller basic unit that can run independently: the thread.

In modern operating systems, the process is the smallest unit of resource allocation, and the thread is the smallest unit of execution. A process can contain one or more threads, and each thread has its own set of registers and its own stack, which keeps each thread's control flow relatively independent.

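To see threads sharing their process's resources, here is a minimal sketch using POSIX threads (assuming a POSIX system; compile with -pthread). Two threads update one global variable belonging to their common process, with a mutex coordinating access, which previews the locking point in the disadvantages below:

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                        /* lives in the process, visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* coordinate the shared resource */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* two running units, one process */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);  /* 200000: no kernel needed to share data */
    return 0;
}
```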

The benefits of threads:

  • Multiple threads can exist simultaneously within one process
  • They give the process the ability to handle multiple tasks in parallel
  • Threads in the same process can share the process's resources (communication between threads of the same process is very simple and efficient)
  • They are lighter weight and more efficient than processes

The disadvantages of threads:

  • Because threads share the process's resources, there is resource contention, which must be coordinated through locking mechanisms
  • When one thread in a process crashes, all threads of that process crash with it (which is one reason some game designs are cautious about a multi-threaded mode)

Comparison of threads and processes

  • A process is the smallest unit of resource allocation (including memory, open files, and so on); a thread is the smallest unit of execution
  • A process owns a complete resource platform, while a thread holds only the essentials, such as registers and a stack
  • Threads also have the ready, blocked, and running basic states, and the same kinds of transitions between them (similar to processes)
  • Threads are faster to create and terminate than processes. Creating a process also requires setting up resource-management information, such as memory-management and file-management information; creating a thread does not involve this information but shares the process's (a thread manages very few resources of its own)
  • Switching between threads of the same process is faster than switching between processes, because such threads share the same virtual address space and page table; a process switch must also switch the page table, and switching page tables carries a relatively large overhead
  • Because threads of the same process share memory and file resources, data can be passed between threads without going through the kernel, which makes data interaction between threads more efficient

In short, threads beat processes in both time overhead and space overhead.


Context switching of threads


When a process has only one thread, the process can be considered equal to the thread. Thread context switching falls into two cases:

  1. The two threads belong to different processes: the switch proceeds exactly like a process context switch
  2. The two threads belong to the same process: since they share virtual memory, the virtual memory resources stay in place during the switch; only the thread's private data, registers, and other non-shared state need to be switched

Therefore, a thread context switch costs much less than a process context switch.


Thread model

Before discussing thread models, let's introduce three concepts:

  • Kernel thread: a thread implemented in kernel space and managed by the kernel
  • User thread: a thread implemented in user space and not managed by the kernel; it is managed in user mode through a thread library (user mode meaning it runs in user space)
  • Lightweight process: kernel support for user threads (a middle layer between user threads and kernel threads, and a higher abstraction over kernel threads)

Kernel thread

Because kernel threads are managed in kernel space, a kernel thread's structure, the thread control block (TCB), lives in kernel space and is visible only to the kernel.


Advantages of kernel threads:

  • Kernel threads are managed by the kernel: you don't have to worry about creating, destroying, or scheduling them; it is all handled automatically
  • Kernel threads can take advantage of multi-core CPUs and execute in true parallel (because the kernel manages and schedules them)
  • A blocked kernel thread does not affect other kernel threads (the kernel simply schedules another one)

Disadvantages of kernel threads:

  • Because the kernel manages them, most kernel-thread operations involve the kernel, i.e. a switch from user mode to kernel mode, which is expensive
  • Because kernel resources are limited, a very large number of kernel threads cannot be created

User thread

Because user threads are managed in user space through a user-mode thread library, a user thread's structure, the thread control block (TCB), also lives in the thread library. The operating system can see only the PCB of the whole process (the kernel can neither manage nor even perceive user threads). A small sketch follows the lists below.


Advantages of user threads:

  • Creating, destroying, and scheduling user threads does not pass through the kernel; everything happens in user mode, so it is particularly fast
  • They do not depend on the kernel, so they can be used on operating systems that do not support threads
  • A large number of user threads can be created without consuming kernel resources

Disadvantages of user threads:

  • The user must supply a thread library that implements thread creation, destruction, and scheduling
  • When one user thread blocks, all the other user threads in the process block too (the whole process blocks), because the kernel is unaware of user threads and cannot schedule the others
  • They cannot take advantage of multi-core CPUs, again because the kernel is unaware of user threads
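
As a sketch of how a user-space thread library can work, here is a minimal example with the POSIX <ucontext.h> API (obsolescent in POSIX.1-2008 but still available on glibc/Linux; this is the kind of mechanism coroutine libraries build on). The kernel sees only one process while we switch between two execution contexts ourselves:

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];         /* user-allocated stack for the user thread */

static void co_entry(void)
{
    printf("user thread: step 1\n");
    swapcontext(&co_ctx, &main_ctx);     /* yield to main: no kernel scheduling involved */
    printf("user thread: step 2\n");
}                                        /* returning goes to uc_link (main_ctx) */

int main(void)
{
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = co_stack;
    co_ctx.uc_stack.ss_size = sizeof(co_stack);
    co_ctx.uc_link = &main_ctx;
    makecontext(&co_ctx, co_entry, 0);

    swapcontext(&main_ctx, &co_ctx);     /* "schedule" the user thread */
    printf("main: resumed\n");
    swapcontext(&main_ctx, &co_ctx);     /* resume it where it yielded */
    return 0;
}
```

If co_entry performed a blocking system call instead of yielding, the whole process would block, which is exactly the disadvantage above.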

Lightweight process (LWP)

A lightweight process (LWP) can be understood as a higher-level abstraction over kernel threads. A process can have one or more LWPs. Each LWP maps one-to-one to a kernel thread, so every LWP is backed by a kernel thread (and a user thread associated with an LWP becomes a kernel-supported user thread).

In most systems, an LWP differs from an ordinary process in that it carries only the minimal execution context and the statistics the scheduler needs. Generally speaking, a process represents an instance of a program, while an LWP represents an execution thread of the program; since an execution thread does not need as much state information as a process, an LWP does not carry it.

One-to-one model (kernel-level threading model)

In the one-to-one (1:1) model, each thread corresponds to one LWP: the process only needs to create an LWP, and because every LWP is backed by a kernel thread, it is ultimately the kernel that manages the thread, and the thread can be dispatched to other processors (to put it simply, kernel threads are used directly).


It is worth mentioning that threads in Java (on the mainstream JVM) are implemented with this 1:1 model, so every Java thread is backed by a kernel thread; be prudent about how many threads you start.

Many-to-one model (user-level threading model)

In the many-to-one model, multiple user-level threads are implemented on top of a single LWP. Because the threads are managed in user mode through a thread library in user space, switching is very fast and involves no transition between user mode and kernel mode.


The advantages and disadvantages of the many-to-one (N:1) model are exactly those of user-level threads, described earlier, so they are not repeated here. It is worth mentioning that coroutines in Python are implemented with this model.

Many-to-many model (two-level threading model)

The many-to-many model is the product of combining the strong points of the other two: it absorbs the advantages of the previous two threading models and tries to avoid their disadvantages.

The many-to-many model differs from the many-to-one model in that the multiple user threads in a process can be bound to multiple different kernel threads, which resembles the one-to-one model. But it also differs from the one-to-one model: when a kernel thread gives up the CPU because its bound user thread performed a blocking operation, the other user threads bound to that kernel thread can be unbound and re-bound to other kernel threads to continue running.

So the many-to-many (M:N) model is neither implemented purely by a user-space thread library, like the many-to-one model, nor scheduled entirely by the operating system, like the one-to-one model. It is an intermediate system in which a runtime scheduler cooperates with the operating system's scheduler. Go uses the many-to-many model for its goroutines, which is one reason for its high concurrency; its threading model is quite similar to Java's ForkJoinPool.


Advantages of the many-to-many model:

  • Like the many-to-one model, switching between user threads stays lightweight
  • Because multiple kernel threads back the process, when one user thread blocks, the other user threads can still execute
  • With multiple kernel threads underneath, more complete scheduling and priority control can be implemented

Disadvantages of the many-to-many model:

  • It is complex to implement (because of its high complexity, operating system kernel developers generally do not adopt it, so it usually appears as a third-party library or a language runtime)

Scheduling

Scheduling principles

CPU utilization

  • A running program issues an I/O request and blocks, leaving the process waiting for data from the hard disk; such a process inevitably leaves the CPU idle. To improve CPU utilization, whenever a wait would idle the CPU, the scheduler should select another process from the ready queue to run. (In short, the scheduler should keep the CPU busy at all times.)

System throughput

  • A program may take a long time to finish a task; if it monopolizes the CPU, system throughput drops. To improve throughput, the scheduler should balance the number of long-job and short-job processes. (Throughput is the number of processes the CPU completes per unit time. Long jobs occupy CPU resources longer and reduce throughput; short jobs raise it.)

Turnaround time

  • The life of a process from start to finish actually contains two times: the time the process runs, and the time it waits. Their sum is called the turnaround time. The smaller a process's turnaround time, the better. If a process waits a very long time but runs only briefly, its turnaround time is long, and the scheduler should avoid this situation.

Waiting time

  • A process in the ready queue cannot wait too long; the shorter the wait, the sooner the process gets to run on the CPU. So the waiting time of processes in the ready queue is also a principle the scheduler must consider. (This waiting time is not the time spent in the blocked state, but the time spent sitting in the ready queue; the longer it is, the unhappier the user.)

Response time

  • For interactive applications such as the mouse and keyboard, we certainly want the response to be as fast as possible, or the user experience suffers. So for highly interactive applications, response time is also a principle the scheduler must consider. (Response time is the time from the user submitting a request to the system producing its first response; in interactive systems it is the main yardstick for a scheduling algorithm.)

In short: be fast!


Scheduling algorithms

Different algorithms suit different scenarios. Below are several common scheduling algorithms on a single core.

First come, first served (FCFS)

The first come, first served algorithm, FCFS for short: as the name suggests, whoever arrives first is executed by the CPU first, and later arrivals queue up and wait. It is a very simple algorithm: each time, the CPU schedules the process at the head of the ready queue and keeps running it until it exits or blocks, at which point that process goes to the back of the queue and the new head process is scheduled, and so on.


The FCFS algorithm looks fair, but when a long job runs first, all the short jobs behind it wait a very long time, so it is bad for short jobs and lowers system throughput.

FCFS favors long jobs; it suits CPU-bound systems and is not suitable for I/O-bound systems.
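
A toy illustration of that effect, assuming all jobs are ready at time 0: under FCFS, one long job that arrives first makes every short job behind it wait.

```c
#include <stdio.h>

int main(void)
{
    int burst[] = {100, 2, 3};    /* long job first, then two short jobs */
    int n = 3, clock = 0;
    for (int i = 0; i < n; i++) { /* FCFS: run each job to completion in order */
        printf("job %d waits %d, finishes at %d\n", i, clock, clock + burst[i]);
        clock += burst[i];
    }
    return 0;                     /* the 2-unit job waited 100 units */
}
```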

Shortest job first (SJF)

Again as the name suggests, shortest job first schedules the process with the shortest running time first, which helps improve system throughput. But it is bad for long jobs and can easily produce an extreme situation: a long job sits in the ready queue while short jobs keep arriving, so the long job keeps getting pushed back, its turnaround time grows, and it may not run for a very long time (SJF suits systems with I/O-bound jobs).


High response ratio next (HRRN)

Because neither "first come, first served" nor "shortest job first" balances short jobs against long jobs well, the high response ratio next algorithm exists mainly to strike that balance.

Each time scheduling happens, the "response ratio priority" of each process is calculated, and the process with the highest response ratio priority runs next.

Response ratio priority = (waiting time + required service time) / required service time

From this formula we can see:

If two processes have the same waiting time, the shorter the required service time, the higher the priority, so short-job processes are easily selected to run (equal waiting time: shorter running time => higher priority => short jobs that haven't waited long get picked).

If two processes have the same required service time, the longer the waiting time, the higher the priority, which takes care of long jobs: a process's response ratio grows as its waiting time grows, and once it has waited long enough, its ratio climbs high enough to win the chance to run (equal service time: longer waiting => higher priority => long jobs that have waited a long time get picked).
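
A quick sketch of the formula in C, printing both cases above (the values are made up for illustration):

```c
#include <stdio.h>

/* Response ratio = (waiting time + required service time) / required service time */
double response_ratio(double waiting, double service)
{
    return (waiting + service) / service;
}

int main(void)
{
    /* Equal waiting time: the shorter job wins. */
    printf("%.2f vs %.2f\n", response_ratio(10, 2), response_ratio(10, 20)); /* 6.00 vs 1.50 */
    /* Equal service time: the longer-waiting job wins. */
    printf("%.2f vs %.2f\n", response_ratio(50, 5), response_ratio(5, 5));   /* 11.00 vs 2.00 */
    return 0;
}
```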

Round robin (RR) algorithm

Round robin is the oldest, simplest, fairest, and most widely used algorithm. Every process is assigned the same time slice (quantum), the period during which it is allowed to run.


  • If the time slice runs out and the process is still running, the process is put back into the ready queue and another process is scheduled, and so on
  • If the process blocks or finishes before its time slice ends, another process is scheduled immediately
  • A process whose time slice has run out gets a new time slice the next time it is scheduled

Note that the length of the time slice matters: set it too short and CPU context switches become frequent; set it too long and short jobs may wait too long to respond. Setting the time slice to around 20 ms to 50 ms is usually a reasonable compromise.
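
Here is a toy round-robin sketch, assuming all jobs are ready at time 0 and a fixed quantum; each job runs at most one quantum per turn:

```c
#include <stdio.h>

int main(void)
{
    int remaining[] = {5, 3, 8};   /* CPU time each job still needs */
    int n = 3, quantum = 2, done = 0, clock = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;   /* job already finished */
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            clock += run;                      /* job i holds the CPU for one slice */
            remaining[i] -= run;
            if (remaining[i] == 0) {
                done++;
                printf("job %d finishes at time %d\n", i, clock);
            }
        }
    }
    return 0;
}
```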

Highest priority first (HPF) algorithm

The previous "round robin algorithm" treats all processes as equally important; everyone gets the same running time. But multi-user computer systems see it differently: they want scheduling to respect priority, that is, the scheduler should pick the highest-priority process from the ready queue to run. This is the highest priority first (HPF) algorithm.

Process priority can be divided into:

  • Static priority: the priority is fixed when the process is created and never changes during its lifetime
  • Dynamic priority: the priority is adjusted as the process runs; for example, as a process's running time grows, its priority is lowered, and as its waiting time in the ready queue grows, its priority is raised

There are two ways to handle a newly arrived high-priority process:

  • Non-preemptive: when a higher-priority process appears in the ready queue, finish running the current process first, then select the higher-priority process
  • Preemptive: when a higher-priority process appears in the ready queue, suspend the current process immediately and schedule the higher-priority process to run

A drawback remains, though: low-priority processes may never get to run.

Multilevel feedback queue algorithm

The multilevel feedback queue algorithm evolved from the "round robin algorithm" and the "highest priority first algorithm". As its name says, processes are divided into multiple queues by priority. Two ideas are involved:

  • "Multilevel" means there are multiple queues, ordered from high priority to low; the higher a queue's priority, the shorter its time slice
  • "Feedback" means that if a new process enters a higher-priority queue, the currently running process is stopped and the higher-priority queue runs instead


How it works:

  • The queues are given different priorities, from high to low, and the higher the priority, the shorter the time slice
  • A new process is placed at the tail of the first-level queue and scheduled first come, first served; if its time slice in the first-level queue runs out before it finishes, it is moved to the tail of the second-level queue, and so on down the levels
  • When a process from a lower-priority queue is running (the higher-priority queues being empty) and a new process enters a higher-priority queue, the running process is stopped immediately, placed back at the tail of its own queue, and the process from the higher-priority queue runs instead

Notice that short jobs tend to finish quickly in the first-level queue, while long jobs that cannot finish there move down to the next queue to keep executing. Their waiting time grows, but so does the running time they receive, so the algorithm takes care of both long and short jobs and gives good response time.
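
To make the workflow concrete, here is a toy two-level feedback queue in C, assuming all jobs arrive at time 0 (so the "new arrival preempts" rule never fires): level 0 has the shorter quantum, and unfinished jobs drop to level 1, which runs only when level 0 is empty.

```c
#include <stdio.h>

#define N 3

int main(void)
{
    int burst[N] = {2, 12, 4};     /* CPU time each job needs */
    int q0[N], q1[16];             /* level-0 and level-1 queues of job indices */
    int h0 = 0, t0 = 0, h1 = 0, t1 = 0, clock = 0;
    int quantum[2] = {3, 6};       /* higher priority => shorter time slice */

    for (int i = 0; i < N; i++) q0[t0++] = i;   /* new jobs enter level 0 */

    while (h0 < t0 || h1 < t1) {
        int level = (h0 < t0) ? 0 : 1;          /* level 1 runs only if level 0 is empty */
        int job = (level == 0) ? q0[h0++] : q1[h1++];
        int run = burst[job] < quantum[level] ? burst[job] : quantum[level];
        clock += run;
        burst[job] -= run;
        if (burst[job] > 0)
            q1[t1++] = job;                     /* unfinished: demote to (or stay at) level 1 */
        else
            printf("job %d finishes at time %d (level %d)\n", job, clock, level);
    }
    return 0;
}
```

Running it, the 2-unit job finishes first at level 0, while the 12-unit job is demoted and finishes last, matching the behavior described above.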


About me

I'm A Xing, a Java "program ape" who loves technology. In 2021, on my official account "Program Ape A Xing", I will regularly share quality original articles on operating systems, computer networks, Java, distributed systems, and databases; let's grow together on the road to something better!

Thank you very much for reading this far. Writing original content isn't easy, so if this article helped you, a "like", a share, or a comment are all support (don't just freeload)!

I hope you and I can both keep walking the road we want to walk. See you in the next article!