Python concurrent programming — Process


1. Process concept

As the name suggests, a process is a program in the course of being executed: an abstraction of a running program. The concept of a process originates from the operating system. It is not only a core concept of the operating system, but also one of the oldest and most important abstractions the operating system provides; everything else in the operating system revolves around it.

A process is a running activity of a computer program over some data set. It is the basic unit by which the system allocates and schedules resources, and the foundation of the operating system's structure. In early, process-oriented computer designs, the process was the basic execution entity of a program; in contemporary, thread-oriented designs, the process is a container for threads. A program is a description of instructions, data and their organization; a process is a running instance of a program.

Narrow definition: an instance of a computer program that is being executed.

Broad definition: a process is a running activity, with certain independent functions, of a program over a data set. It is the basic unit of dynamic execution in the operating system. In traditional operating systems, the process is both the basic unit of allocation and the basic unit of execution.

First, a process is an entity. Each process has its own address space, which generally includes a text region, a data region and a stack region. The text region stores the code executed by the processor; the data region stores variables and memory dynamically allocated during execution; the stack region stores return addresses and the local variables of active procedure calls.
Second, a process is an "executing program". A program by itself is an inanimate entity; only when the processor gives it life (when it is executed by the operating system) does it become an active entity, which we call a process.
The process is the most basic and important concept in the operating system. It was introduced, after the emergence of multiprogramming systems, to describe the dynamic situation inside the system and the behaviour of each program running in it. All multiprogramming operating systems are built on the concept of the process.

2. Process characteristics

Dynamic: the essence of a process is one execution of a program in a multiprogramming system; a process is created and dies dynamically.
Concurrency: any process can execute concurrently with other processes.
Independence: a process is a basic unit that can run independently, and also an independent unit of system resource allocation and scheduling.
Asynchrony: because processes constrain one another, execution is discontinuous; each process advances at its own independent and unpredictable speed.
Structure: a process is composed of a program, data and a process control block (PCB).
Several different processes can contain the same program: one program running over different data sets constitutes different processes and can produce different results; the program itself, however, cannot change during execution.

3. Programs and processes (the process is the smallest unit of resource allocation in a computer)

A program is an ordered collection of instructions and data; it has no meaning at run time by itself and is a static concept.
A process is one execution of a program on a processor; it is a dynamic concept.
A program can exist indefinitely as software data, while a process has a finite life cycle.
The program is permanent; the process is temporary.

#A running program is a process
#Data is isolated between processes
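The isolation claim above can be demonstrated with the standard-library multiprocessing module. The global variable `counter` is purely illustrative; the child modifies only its own copy, so the parent's value is unchanged:

```python
# A minimal sketch showing that data is isolated between processes.
from multiprocessing import Process

counter = 0  # global variable in the parent process

def worker():
    global counter
    counter += 100  # modifies the child's own copy of the variable only

if __name__ == "__main__":
    p = Process(target=worker)
    p.start()
    p.join()
    # The parent's counter is unchanged: each process has its own address space.
    print(counter)  # -> 0
```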

4. Scheduling of processes (done by the operating system)

Processes are scheduled by the operating system. Each process contains at least one thread; within the process, a thread is responsible for actually executing the program.

To run multiple processes alternately, the operating system must schedule them. Scheduling is not done arbitrarily but follows certain rules, hence process scheduling algorithms.

(1) Shortest job first algorithm

Shortest-job (process)-first scheduling (SJF/SPF) gives priority to short jobs or short processes. The algorithm can be used for both job scheduling and process scheduling. However, it is unfavourable to long jobs, it cannot guarantee that urgent jobs (processes) are handled in time, and the length of a job can only be estimated in advance.
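As a rough illustration of the arithmetic (not OS code), the following sketch sorts a set of hypothetical CPU burst times shortest-first and computes each job's waiting time; the function name and burst values are made up for the example:

```python
# Shortest-job-first: run jobs in ascending order of burst time.
def sjf_waiting_times(bursts):
    """Return (execution order, waiting time of each job in that order)."""
    order = sorted(bursts)        # shortest job is scheduled first
    waits, elapsed = [], 0
    for burst in order:
        waits.append(elapsed)     # a job waits for everything scheduled before it
        elapsed += burst
    return order, waits

jobs = [6, 8, 7, 3]               # hypothetical CPU bursts
order, waits = sjf_waiting_times(jobs)
print(order)                      # -> [3, 6, 7, 8]
print(sum(waits) / len(waits))    # average wait: (0+3+9+16)/4 = 7.0
```

Note how the long jobs (7 and 8) accumulate all the waiting, which is exactly why SJF is said to be unfavourable to long jobs.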

(2) First come, first served algorithm (FCFS)

First come, first served (FCFS) is the simplest scheduling algorithm and can be used for both job scheduling and process scheduling. FCFS favours long jobs (processes) over short ones; consequently it suits CPU-bound jobs and is unfavourable to I/O-bound jobs (processes).

(3) Time slice rotation (round robin) algorithm

The basic idea of the round robin (RR) method is to make the waiting time of each process in the ready queue proportional to its service time. CPU time is divided into fixed-size time slices of, for example, tens to hundreds of milliseconds. If a scheduled process uses up its time slice without completing its task, it releases the CPU and goes to the end of the ready queue to wait for the next round of scheduling; the scheduler then dispatches the process now at the head of the ready queue.
Obviously, the rotation method can only schedule and allocate resources that are preemptible: such resources can be taken away at any time and reassigned to other processes. The CPU is a preemptible resource, but resources such as printers are not. Since job scheduling allocates all system hardware resources other than the CPU, including non-preemptible ones, job scheduling does not use the rotation method.
In the rotation method, the choice of time slice length is very important, because it directly affects system overhead and response time. If the time slice is too short, the scheduler preempts the processor more often, greatly increasing the number of context switches and thus the system overhead. Conversely, if the time slice is too long (for example, long enough for the longest-running process in the ready queue to run to completion), round robin degenerates into first come, first served. The time slice length is therefore chosen according to the system's response-time requirements and the maximum number of processes allowed in the ready queue.
In the rotation method, a process joins the ready queue in one of three situations:
First, its time slice runs out but the process has not finished; it returns to the end of the ready queue and waits to continue at the next scheduling.
Second, its time slice is not used up, but the process blocks because of an I/O request or because of mutual exclusion and synchronization with other processes; when the block is released, it returns to the ready queue.
Third, a newly created process enters the ready queue.
If these cases are treated differently, with different priorities and time slices, the service quality and efficiency of the system can intuitively be improved further. For example, the ready queue can be split into several queues according to the type of arriving process and the reason it blocked, each queue ordered by the FCFS principle. Processes in different queues have different priorities, while processes in the same queue share the same priority. A process that has used up its time slice, been woken from sleep, or just been created then enters a different ready queue accordingly.
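The rotation mechanism described above can be sketched as a toy simulation (hypothetical burst times and quantum; not a real dispatcher): each process runs for at most one quantum, and if unfinished it rejoins the tail of the ready queue.

```python
# Toy round-robin simulation over hypothetical remaining-time values.
from collections import deque

def round_robin(bursts, quantum):
    """Return the sequence of (pid, time_run) slices the CPU executes."""
    queue = deque(enumerate(bursts))      # ready queue of (pid, remaining time)
    trace = []
    while queue:
        pid, remaining = queue.popleft()  # dispatch the head of the ready queue
        run = min(quantum, remaining)
        trace.append((pid, run))
        remaining -= run
        if remaining > 0:                 # quantum exhausted, task unfinished:
            queue.append((pid, remaining))  # back to the end of the queue
    return trace

print(round_robin([5, 3, 1], quantum=2))
# -> [(0, 2), (1, 2), (2, 1), (0, 2), (1, 1), (0, 1)]
```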

(4) Multilevel feedback queue algorithm

The scheduling algorithms described above all have limitations. For example, shortest-process-first favours short processes and neglects long ones; and if process lengths are not known in advance, shortest-process-first and preemptive algorithms based on process length cannot be used at all.
The multilevel feedback queue scheduling algorithm does not need to know the execution time of each process in advance, yet can still satisfy the needs of all kinds of processes, so it is generally regarded as a better process scheduling algorithm. In a system using it, scheduling proceeds as follows.
(1) Multiple ready queues are set up, each with a different priority. The first queue has the highest priority, the second the next highest, and so on downward. The time slice also differs per queue: the higher a queue's priority, the smaller the time slice given to its processes. For example, the second queue's time slice is twice that of the first, ..., and queue i+1's time slice is twice that of queue i.
(2) When a new process enters memory, it is placed at the end of the first queue and scheduled according to the FCFS principle. When its turn comes, if it can finish within the time slice, it completes and leaves the system; if it has not finished when the time slice ends, the scheduler moves it to the end of the second queue, where it again waits its turn under FCFS; if it still has not finished after a time slice in the second queue, it moves on to the third queue, and so on. When a long job (process) has dropped from the first queue down to the nth queue, it runs in the nth queue under round robin.
(3) The scheduler runs processes in the second queue only when the first queue is empty, and processes in queue i only when queues 1 to (i-1) are all empty. If a new process enters a higher-priority queue (any of queues 1 to (i-1)) while the processor is serving a process in queue i, the new process preempts the processor: the scheduler puts the running process back at the end of queue i and assigns the processor to the new, higher-priority process.
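A simplified sketch of rules (1) and (2) (all jobs assumed to arrive at time 0, so rule (3)'s mid-run preemption never triggers; queue count, base quantum and burst values are made up): queue i+1 has twice the quantum of queue i, and a job that exhausts its quantum drops one level.

```python
# Simplified multilevel feedback queue: demote jobs that exhaust their quantum.
from collections import deque

def mlfq(bursts, levels=3, base_quantum=1):
    """Return the order in which jobs (identified by index) complete."""
    queues = [deque() for _ in range(levels)]
    for pid, burst in enumerate(bursts):
        queues[0].append((pid, burst))        # new jobs enter the top queue
    finish_order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        pid, remaining = queues[level].popleft()
        quantum = base_quantum * (2 ** level)  # quantum doubles per level
        if remaining <= quantum:
            finish_order.append(pid)           # completes within its slice
        else:
            nxt = min(level + 1, levels - 1)   # bottom queue runs round robin
            queues[nxt].append((pid, remaining - quantum))
    return finish_order

print(mlfq([1, 4, 2]))  # -> [0, 2, 1]: the short job finishes first,
                        # the long job sinks to a lower queue
```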

5. Parallel, concurrent, synchronous, asynchronous, blocking and non-blocking processes

(1) Concurrency and parallelism

Parallelism: multiple programs are literally executing at the same instant, on multiple CPUs (or cores).

Concurrency: multiple programs appear to run at the same time; with limited resources, they take turns using them in order to improve efficiency.

Parallelism is a micro-level view: at one precise instant, different programs are executing, which requires multiple processors.
Concurrency is a macro-level view: over a period of time, things execute "at the same time"; for example, a server handling multiple sessions simultaneously.

(2) Synchronous and asynchronous

Synchronization: during execution, one program calls another and waits for that call to complete before continuing.

Asynchrony: during execution, a program calls another but does not wait for that task to finish; it continues executing from where it left off.

Synchronization: when the completion of one task depends on another, the dependent task can only finish after the task it depends on has finished. This is a reliable task sequence: either both succeed or both fail, and the states of the two tasks stay consistent.
Asynchrony: the caller does not wait for the dependent task to complete; it merely notifies the other task of the work to be done and proceeds immediately, finishing as soon as its own work is done. Whether the notified task actually completes in the end cannot be determined by the caller, so this is an unreliable task sequence.
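In multiprocessing terms, the distinction looks like this sketch (the 0.2-second sleep stands in for a slow dependent task): a synchronous caller blocks on join() right away, while an "asynchronous" caller keeps doing its own work and collects the child later.

```python
# Synchronous vs. asynchronous waiting on a child process.
import time
from multiprocessing import Process

def task():
    time.sleep(0.2)  # stands in for a slow dependent task

if __name__ == "__main__":
    # Synchronous: start, then wait; nothing else happens in between.
    p = Process(target=task)
    p.start()
    p.join()                      # blocks here until task() completes

    # Asynchronous: start, keep working, collect the child later.
    p = Process(target=task)
    p.start()
    own_work = sum(range(1000))   # parent continues while the child runs
    p.join()
    print(own_work)               # -> 499500
```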

(3) Blocking and non-blocking

Blocking: the caller waits for the result, and the CPU does no work for it in the meantime

Non-blocking: the caller does not wait; the CPU keeps working for it

(4) Synchronous/asynchronous combined with blocking/non-blocking

<1> Synchronous blocking form

The least efficient form. For example, while queuing you can only concentrate on queuing and do nothing else.

<2> Asynchronous blocking form

An asynchronous operation can still block, not while processing the message but while waiting for the message notification. For example, while queuing you can do other things, but you cannot leave the queue.

<3> Synchronous non blocking form

Actually inefficient. For example, while making a phone call you must keep looking up to check whether the queue has reached you. If making the call and watching your position in the queue are two operations of a program, the program has to switch back and forth between them, and the efficiency is predictably low.

<4> Asynchronous non blocking form

The most efficient form. For example, while queuing you suddenly need the WC, so you ask the person in front to hold your place; you are then not blocked in the waiting operation. This is the asynchronous + non-blocking style.

Many people confuse synchronization with blocking because synchronous operations are often expressed in blocking form. Likewise, many confuse asynchrony with non-blocking because asynchronous operations generally do not block at the actual I/O operation.

6. Process startup and destruction

(1) Start of process

A general-purpose system (one able to run many applications) needs the ability to create and destroy processes at run time. New processes are created in four main ways:

  • System initialization
  • A process starts child processes while running (e.g. nginx starting multiple worker processes, os.fork, subprocess.Popen, etc.)
  • A user's interactive request creates a new process (for example, the user double-clicks an application)
  • Initialization of a batch job (only in mainframe batch systems)

In every case, a new process is created by an existing process executing a system call that creates it. The process responsible for starting another is called the parent process; the process that is started is called the child process.

#Creating a process
1. In UNIX, the system call is fork. fork creates a copy exactly the same as the parent process: the two have the same memory image, the same environment strings and the same open files (in a shell interpreter process, executing a command creates a child process).
2. In Windows, the system call is CreateProcess. CreateProcess is responsible not only for creating the process but also for loading the correct program into the new process.

#On creating child processes: UNIX vs. Windows
1. The similarity: after a process is created, the parent and the child have different address spaces (multiprogramming requires memory isolation between processes at the physical level); a modification by either process within its own address space does not affect the other.
2. The difference: in UNIX, the child's initial address space is a copy of the parent's, which means the child and parent may share a read-only memory region; in Windows, the address spaces of parent and child differ from the very beginning.
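The fork behaviour described in point 2 can be observed directly (Unix only; Windows has no os.fork, which is exactly the difference being discussed): fork() returns 0 in the child and the child's PID in the parent, and the child starts with a copy of the parent's address space.

```python
# Unix-only sketch of fork(): one call, two returns.
import os
import sys

pid = os.fork()
if pid == 0:
    # Child: starts as a copy of the parent's address space at the fork point.
    print("child, pid =", os.getpid(), "parent =", os.getppid())
    sys.exit(0)          # do not fall through into the parent's code
else:
    os.waitpid(pid, 0)   # parent waits for (reaps) the child
    print("parent, created child with pid", pid)
```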

##To view processes, use the ps command on Linux and Task Manager on Windows. Foreground processes are responsible for interacting with users; background processes have nothing to do with users. Background processes that wake up only when needed are called daemons, e.g. for e-mail, web pages, news and printing.

(2) Destruction of processes

  • Normal exit (voluntary; for example, the user closes the window of an interactive program, or the program finishes and makes the system call for a normal exit: exit on Linux, ExitProcess on Windows)
  • Error exit (voluntary; e.g. the file the program was asked to process does not exist)
  • Fatal error (involuntary; executing an illegal instruction, such as referencing nonexistent memory or 1/0; such exceptions can be caught with try... except...)
  • Killed by another process (involuntary; e.g. kill -9, as when a parent process kills its child)
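The first and last of these exit paths can be sketched with multiprocessing (function names are illustrative): a normal exit yields exitcode 0, while terminate() (which sends SIGTERM on Unix, akin to kill) yields a non-zero code, negative on Unix.

```python
# Normal exit vs. being killed by another process (the parent here).
import time
from multiprocessing import Process

def quick():
    pass                 # returns immediately: a normal exit

def forever():
    while True:
        time.sleep(0.1)  # never finishes on its own

if __name__ == "__main__":
    p1 = Process(target=quick)
    p1.start()
    p1.join()
    print(p1.exitcode)   # -> 0: normal exit

    p2 = Process(target=forever)
    p2.start()
    p2.terminate()       # killed by another process
    p2.join()
    print(p2.exitcode)   # non-zero; -15 (-SIGTERM) on Unix
```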
#View the process ID of the current process
import os

print(os.getpid())   # view the process ID of the current process
print(os.getppid())  # view the process ID of the parent of the current process

#Parent and child processes:
-The parent process starts the child process
-The parent process is also responsible for reclaiming the resources of a finished child process

#Process ID -> ProcessId -> PID
-Two identical process IDs cannot exist on the same machine at the same time
-The process ID cannot be set by the program; it is assigned by the operating system
-When a program is run many times, it is assigned a process ID each time, and each one may be different

7. Three states of the process

A process moves among three states: ready, running and blocked.