An in-depth analysis of the Linux kernel's architecture and the dependencies among its subsystems

Time: 2020-10-17

There are two main reasons for the Linux kernel's success:

Its flexible architecture makes it easy for a large number of volunteer developers to join the development process;
Each subsystem, especially the ones that most need to evolve, is highly extensible.
These two qualities keep the Linux kernel constantly evolving and improving.

1、 The position of the Linux kernel in the overall computer system
[Figure: the position of the Linux kernel in the overall computer system]

The principle of the hierarchical structure:

Dependencies between subsystems only point downwards: a subsystem near the top of the diagram may depend on the subsystems below it, but a subsystem nearer the bottom must never depend on a higher one.

2、 The role of the kernel
Virtualization (abstraction): the kernel abstracts the computer hardware into a virtual machine for user processes. A running process does not need to know how the hardware works at all; it simply calls the virtual interfaces provided by the Linux kernel.
Multitasking: multiple tasks use the computer's hardware resources in parallel. The kernel's job is to arbitrate access to those resources, creating the illusion for each process that it has the whole system to itself.
PS: a process context switch replaces the program status word, the contents of the page-table base register, the task_struct instance pointed to by current, and the program counter (PC); it also switches the set of files opened by the process (reachable through task_struct->files) and the process's memory address space (reachable through task_struct->mm).
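To make that note concrete, here is a minimal, compilable sketch of the state that changes hands during a context switch. The structure layout, the field names, and the helper restore_cpu_state() are simplified stand-ins for illustration only, not the kernel's real switch_to() path.

```c
/* Simplified stand-ins for the kernel structures mentioned in the note. */
struct mm_struct;                    /* address-space description (see section 5) */
struct files_struct;                 /* table of files opened by the process      */

struct task_struct {
    unsigned long        psw;        /* saved program status word                 */
    unsigned long        pc;         /* saved program counter                     */
    struct mm_struct    *mm;         /* memory map; its page-directory address is
                                        loaded into the page-table base register  */
    struct files_struct *files;      /* the open-file table travels with the task */
};

static struct task_struct *current;  /* the task occupying the CPU right now      */

/* Placeholder for the architecture-specific assembly that restores registers. */
static void restore_cpu_state(unsigned long psw, unsigned long pc)
{
    (void)psw; (void)pc;
}

/* Conceptual switch to 'next': everything listed in the PS above is reached
 * through one pointer update plus the register restore.                       */
static void context_switch_sketch(struct task_struct *next)
{
    current = next;                          /* the new task becomes "current"  */
    restore_cpu_state(next->psw, next->pc);  /* PSW and PC of the incoming task */
    /* next->mm and next->files now describe the running task's memory and files. */
}
```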

3、 The overall architecture of the Linux kernel
[Figure: the overall architecture of the Linux kernel and the dependencies among its subsystems]

At the center is the process scheduler (sched): all the other subsystems depend on it, because they all need to block and resume processes. When a process must wait for a hardware operation to complete, the corresponding subsystem blocks it; when the operation finishes, the subsystem resumes it. Both blocking and resuming rely on the process scheduler.

Each dependency arrow in the figure above has a reason:

The process scheduler depends on the memory manager: when a process resumes execution, the memory manager must set up the memory it needs to run.
The IPC subsystem depends on the memory manager: shared memory is one inter-process communication mechanism, in which two running processes exchange information through the same region of shared memory.
The VFS depends on the network interface: to support the network file system (NFS).
The VFS depends on the memory manager: to support ramdisk devices.
The memory manager depends on the VFS: to support swapping, the pages of a process that is not currently running can be swapped out to the swap partition on disk, and the process enters a suspended state.

4、 The system's highly modular design facilitates division of labor and cooperation
Only a very small number of programmers need to work across multiple modules, and that happens only when one subsystem has to rely on another;
Hardware device drivers, logical file system modules, network device drivers, and network protocol modules are the most extensible parts of the kernel.

5、 Data structures in the system
Task list
The process scheduler maintains a task_struct data structure for each process; all of them are linked together to form the task list. The scheduler also maintains a current pointer that points to the process currently occupying the CPU.
Memory map
The memory manager stores each process's mapping from virtual addresses to physical addresses, and also records how to swap out a particular page or how to handle a page fault. This information lives in an mm_struct instance; each process has one, and the process's task_struct contains a pointer mm to that process's mm_struct.
Inside mm_struct there is a pointer pgd, which points to the process's page directory (that is, the first address of the page directory). When the process is scheduled, this pointer is converted to a physical address and written to the control register CR3 (the page-table base register on the x86 architecture).
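A toy version of that pgd-to-CR3 chain is sketched below. The types, the virt_to_phys_stub() conversion, and load_cr3_sketch() are placeholders for what the architecture-specific kernel code (for example __pa() and the mov to %cr3 on x86) actually does.

```c
#include <stdint.h>

typedef uint64_t pgd_t;            /* one page-directory entry, heavily simplified */

struct mm_struct {
    pgd_t *pgd;                    /* kernel-virtual address of the page directory */
};

/* Placeholder for the kernel's virtual-to-physical conversion. */
static uint64_t virt_to_phys_stub(const void *p)
{
    return (uint64_t)(uintptr_t)p;
}

/* What scheduling a process conceptually does with its mm_struct: the
 * *physical* address of its page directory is written to CR3, so the MMU
 * starts translating addresses through this process's page tables.        */
static void load_cr3_sketch(const struct mm_struct *mm)
{
    uint64_t cr3 = virt_to_phys_stub(mm->pgd);
    (void)cr3;  /* on real x86 hardware: asm volatile("mov %0, %%cr3" :: "r"(cr3)); */
}
```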
I-nodes
The VFS represents files on disk through inodes; an inode records a file's physical attributes. Each process has a files_struct structure that represents the files it has opened, and its task_struct contains a files pointer to it. Inodes make file sharing possible, in two ways: (1) two processes share the same open file and therefore point to the same inode, as happens between a parent and a child process; (2) two separate opens end up pointing at the same inode, for example through a hard link, or when two unrelated processes open the same file.
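The two sharing modes can be observed from user space with a short experiment. In the sketch below (error handling omitted, /etc/hostname chosen arbitrarily), a descriptor inherited across fork() shares one file offset, while an independent open() of the same path gets its own offset even though both refer to the same inode.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int shared = open("/etc/hostname", O_RDONLY);   /* case (1): inherited across fork        */
    int other  = open("/etc/hostname", O_RDONLY);   /* case (2): independent open, same inode */
    char c;

    if (fork() == 0) {              /* the child reads through the inherited descriptor */
        read(shared, &c, 1);
        _exit(0);
    }
    wait(NULL);

    /* The child's read advanced the offset the parent sees on 'shared',
     * but the independently opened descriptor is untouched.              */
    printf("shared fd offset:      %ld\n", (long)lseek(shared, 0, SEEK_CUR));  /* prints 1 */
    printf("independent fd offset: %ld\n", (long)lseek(other,  0, SEEK_CUR));  /* prints 0 */
    return 0;
}
```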
Data connection
The root of all kernel data structures is the task list maintained by the process scheduler. Each process's task_struct contains a pointer mm to its memory-mapping information, a pointer files to its open-file table, and a pointer to the network sockets the process has opened.

6、 Subsystem architecture
1. Process scheduler architecture
(1) Objectives
The process scheduler is the most important subsystem in the Linux kernel. It controls access to the CPU, not only for user processes but also for the other kernel subsystems.

(2) Module
[Figure: the modules of the process scheduler]

The scheduling policy module determines which process gets access to the CPU; the policy should let all processes share the CPU as fairly as possible.
The architecture-specific module presents a unified abstract interface that hides the hardware details of a particular CPU architecture. It interacts with the CPU to block and resume processes, which involves saving the registers and status information each process needs preserved and executing the assembly code that performs the actual block or resume.
The architecture-independent module consults the scheduling policy module to decide which process runs next, then invokes the architecture-specific code to resume that process. It also calls the memory manager's interface to make sure a blocked process's memory-mapping information is saved correctly.
The system call interface module exposes the resources the Linux kernel makes available to user processes. By defining a suitable and essentially stable set of interfaces (the POSIX standard), it decouples user applications from the kernel, so user processes are not affected by changes inside the kernel.
(3) Data representation
The scheduler maintains a data structure, the task list, whose elements are the task_struct instances of the active processes. This structure holds not only the information needed to block and resume a process but also additional accounting and state information, and it is accessible throughout the kernel layer.
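Because the task list is visible throughout the kernel layer, any kernel code can walk it. The sketch below is a minimal kernel-module illustration of that; the exact header paths and locking idiom vary between kernel versions, so treat the details as an assumption rather than a reference.

```c
#include <linux/module.h>
#include <linux/rcupdate.h>
#include <linux/sched/signal.h>   /* task_struct, for_each_process() on recent kernels */

static int __init tasklist_demo_init(void)
{
    struct task_struct *p;

    rcu_read_lock();              /* the task list is protected by RCU */
    for_each_process(p)
        pr_info("pid=%d comm=%s\n", p->pid, p->comm);
    rcu_read_unlock();

    return 0;
}

static void __exit tasklist_demo_exit(void)
{
}

module_init(tasklist_demo_init);
module_exit(tasklist_demo_exit);
MODULE_LICENSE("GPL");
```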

(4) Dependency, data flow, control flow
As mentioned earlier, the scheduler calls functions provided by the memory manager to set up appropriate physical memory for a process that is about to resume execution; the process scheduler therefore depends on the memory-management subsystem. The other kernel subsystems all rely on the process scheduler to block and resume processes while they wait for hardware requests to complete. These dependencies are expressed through function calls and through access to the shared task list. All kernel subsystems read or write the data structure representing the currently running process, which creates a bidirectional data flow throughout the system.

Beyond the data flow and control flow within the kernel layer, the OS services layer also provides an interface that lets user processes register timers; this gives the scheduler a flow of control back into user processes. Waking a sleeping process is not part of the normal control flow, because a user process cannot predict when it will be woken. Finally, the scheduler interacts with the CPU to block and resume processes, which creates data flow and control flow between them: the CPU interrupts the currently running process and lets the kernel schedule another one.
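The timer interface mentioned above can be exercised from user space with the POSIX setitimer() call: the process registers a timer and sleeps, and the kernel later "calls back" into it by delivering SIGALRM. A small example (the one-second interval and the output are chosen arbitrarily):

```c
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;                          /* async-signal-safe: just bump a counter */
}

int main(void)
{
    signal(SIGALRM, on_alarm);        /* register the handler for the timer signal */

    struct itimerval tv = {
        .it_value    = { .tv_sec = 1 },   /* first expiry after one second */
        .it_interval = { .tv_sec = 1 },   /* then once every second        */
    };
    setitimer(ITIMER_REAL, &tv, NULL);    /* ask the kernel to arm the timer */

    while (ticks < 3)
        pause();                      /* block until the kernel wakes us with SIGALRM */

    printf("received %d timer ticks\n", (int)ticks);
    return 0;
}
```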

2. Memory manager architecture
(1) Objectives
The memory-management module controls how processes access physical memory. The mapping between a process's virtual memory and the machine's physical memory is handled by the hardware memory management unit (MMU). Each process has its own independent virtual address space, so two processes may use the same virtual address while actually running in different regions of physical memory. The MMU also provides memory protection, so the physical memory of two processes cannot interfere with each other. The memory manager additionally supports swapping: temporarily unused memory pages are swapped out to a swap partition on disk, which lets a process's virtual address space be larger than physical memory. The size of the virtual address space is determined by the machine word length; on a 32-bit machine, for example, each process can address 2^32 bytes (4 GiB) of virtual memory.

(2) Module
[Figure: the modules of the memory manager]

The architecture-specific module provides a virtual interface for accessing physical memory.

The architecture-independent module handles each process's address mapping and virtual-memory swapping. When a page fault occurs, this module decides which memory page should be swapped out. Because the page-replacement algorithm hardly ever needs to change, there is no separate policy module.

The system call interface provides user processes with a restricted, well-defined interface (malloc and free; mmap and munmap): it lets processes allocate and free memory and perform memory-mapped file operations.
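As a quick illustration of the memory-mapped-file part of that interface, the sketch below maps an existing file into the process address space and reads it through ordinary pointers; the path /etc/hostname and the minimal error handling are just for demonstration.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* Ask the memory manager to map the file into our virtual address space. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, st.st_size, stdout);   /* reads are served through page faults */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```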

(3) Data representation
The memory manager stores each process's mapping from virtual memory to physical memory. This mapping information is kept in an mm_struct instance, and a pointer to that instance is stored in the process's task_struct. Besides the mapping itself, the structure also records how the memory manager obtains and stores each page: for example, pages of executable code can be backed by the executable image on disk, whereas dynamically allocated data must be backed by the system's swap space.

Finally, the memory-management module also stores access permissions and related bookkeeping information to protect the security of the system.

(4) Dependency, data flow and control flow
The memory manager controls physical memory and is notified by the hardware (via a page-fault interrupt) when a page fault occurs, so there is bidirectional data flow and control flow between the memory-management module and the memory-management hardware. Memory management also relies on the file system to support swapping and memory-mapped I/O; this means the memory manager calls procedure interfaces provided by the file system to store and retrieve memory pages on disk. Because file-system requests are very slow, the memory manager puts the process to sleep while it waits for a page to be swapped in, which requires calling the process scheduler's interface. Since each process's memory mapping is stored alongside the process scheduler's data structures, there is also bidirectional data flow and control flow between the memory manager and the process scheduler. A user process can set up a new process address space and must be notified of page faults, which requires control flow from the memory manager to the user process. Generally speaking there is no data flow from the user process to the memory manager, although a user process can obtain some information from it through the select system call.

3. Virtual file system architecture
(1) Objectives
The virtual file system provides a unified interface for accessing data stored on hardware devices, and it accommodates different file systems (ext2, ext4, NTFS, and so on). Almost every hardware device in the computer is represented through a generic device-driver interface. Logical file systems promote compatibility with other operating-system standards and allow developers to implement file systems with different policies. The VFS goes one step further by letting a system administrator mount any logical file system on any device. It encapsulates the details of both physical devices and logical file systems, and lets user processes access files through a single uniform interface.
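Mounting is itself just a system call into the VFS. The sketch below shows the mount(2) interface from user space; the device path, mount point, and file-system type are illustrative, and the call needs root privileges (CAP_SYS_ADMIN) to succeed.

```c
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Mount an ext4 file system read-only; the VFS dispatches this to the ext4 module. */
    if (mount("/dev/sdb1", "/mnt/data", "ext4", MS_RDONLY, NULL) != 0) {
        perror("mount");
        return 1;
    }
    printf("mounted /dev/sdb1 on /mnt/data\n");
    return 0;
}
```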

Besides the traditional file-system objectives, the VFS is also responsible for loading new executable files. This task is handled by the logical file system module, which is how Linux supports multiple executable formats.

(2) Module
[Figure: the modules of the virtual file system]

Device driver modules;
The device-independent interface provides the same view of all devices;
Logical file systems: one for each supported file system;
The system-independent interface presents resources in a way that is independent of the hardware and of the logical file systems; it exposes every resource as either a block-device node or a character-device node;
The system call interface gives user processes uniform access to the file system; the virtual file system hides all device- and file-system-specific details from them.
(3) Data representation
Every file is represented by an inode. Each inode records where the file lives on the hardware device, and it also stores function pointers into the logical file system module and device driver that perform the actual read and write operations. By storing function pointers in this way (much like virtual functions in object-oriented programming), specific logical file systems and device drivers can register themselves with the kernel without the kernel depending on any particular module.
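That function-pointer idea is visible in any small character driver: the driver fills in a file_operations table and registers it, and the VFS later dispatches read() through those pointers. The sketch below is illustrative only; the name "demo", the choice of major number, and the minimal error handling are assumptions, and details differ across kernel versions.

```c
#include <linux/fs.h>
#include <linux/module.h>

#define DEMO_MAJOR 240   /* a major number from the local/experimental range */

static ssize_t demo_read(struct file *f, char __user *buf,
                         size_t len, loff_t *off)
{
    static const char msg[] = "hello from the demo driver\n";
    /* VFS helper that copies from a kernel buffer to user space. */
    return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
}

/* The table of function pointers the VFS will call through. */
static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,
    .read  = demo_read,
};

static int __init demo_init(void)
{
    /* Register the table with the kernel under the chosen major number. */
    return register_chrdev(DEMO_MAJOR, "demo", &demo_fops);
}

static void __exit demo_exit(void)
{
    unregister_chrdev(DEMO_MAJOR, "demo");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```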

(4) Dependency, data flow and control flow
A special device driver is the ramdisk, which sets aside a region of main memory and treats it as a persistent storage device. This driver relies on the memory-management module to do its work, so there is a dependency between the VFS and the memory-management module (the dependency in the figure is drawn the other way round; here it is the VFS that depends on the memory-management module), along with data flow and control flow between them.

The logical file system layer also supports the network file system (NFS), which accesses files on another machine as if they were local. To achieve this, that logical file system does its work through the network subsystem, which introduces a dependency of the VFS on the network subsystem, together with the control flow and data flow between them.

As mentioned earlier, the memory manager uses the VFS for swapping and memory-mapped I/O. In addition, while the VFS waits for a hardware request to complete, it uses the process scheduler to block the process, and when the request completes it wakes the process up again through the scheduler. Finally, the system call interface lets user processes call into the VFS to access their data. Unlike the previous subsystems, the VFS provides no mechanism for user processes to register callbacks, so there is no control flow from the VFS back to user processes.

4. Network interface architecture
(1) Objectives
The network subsystem lets a Linux system connect to other systems over a network. It supports many hardware devices and many network protocols. The subsystem hides the implementation details of hardware and protocols and presents simple, easy-to-use interfaces to user processes and to the other subsystems, which therefore do not need to know anything about the underlying devices or protocols.

(2) Module
[Figure: the modules of the network subsystem]

Network device drivers;
The device-independent interface module gives all hardware devices a consistent access interface, so the higher-level subsystems need not know the hardware details;
The network protocol modules implement each network transmission protocol, such as TCP, UDP, IP, HTTP, and ARP;
The protocol-independent interface provides an interface that is independent of specific protocols and hardware devices, so the rest of the kernel subsystems can use the network without depending on a particular protocol or device;
The system call interface module defines the network programming API that user processes can call.
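That API is the familiar BSD socket interface. In the tiny client below, the process only deals with the generic socket calls; which protocol module and which network driver actually carry the bytes is hidden by the kernel. The address 127.0.0.1 and port 7 are arbitrary examples.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* protocol-independent entry point */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(7) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        const char msg[] = "ping\n";
        write(fd, msg, sizeof(msg) - 1);        /* same read/write calls as for files */
    } else {
        perror("connect");
    }

    close(fd);
    return 0;
}
```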
(3) Data representation
Every network object is represented as a socket. Sockets are associated with processes in the same way inodes are; a socket can be shared by multiple processes when two task_structs point to the same socket.

(4) Data flow, control flow and dependencies
When the network subsystem has to wait for a hardware request to complete, it blocks and later wakes the process through the process scheduler, which creates control flow and data flow between the network subsystem and the scheduler. In addition, the virtual file system implements the network file system (NFS) on top of the network subsystem, which creates data flow and control flow between the VFS and the network subsystem.

7、 Conclusion
1. The Linux kernel is one layer of the overall Linux system. The kernel consists of five main subsystems: the process scheduler, the memory manager, the virtual file system, the network interface, and inter-process communication. These subsystems interact with each other through function calls and shared data structures.

2. The Linux kernel's architecture contributed to its success. It lets a large number of volunteer developers cooperate effectively, and it makes each individual module easy to extend.

Extensibility 1: the Linux architecture makes these subsystems extensible through data abstraction: each concrete hardware device driver is implemented as a separate module that supports the unified interface exposed by the kernel. An individual developer can therefore add a new device driver to the Linux kernel with minimal interaction with other kernel developers.
Extensibility 2: the Linux kernel supports many different processor architectures. Within each subsystem, the architecture-dependent code is separated into its own modules. When a vendor introduces a new chip, its kernel team only needs to reimplement the machine-dependent code in the kernel to port the kernel to the new chip.