Computer Science Fundamentals for 2021 Autumn Recruitment Interviews – Operating Systems

Date: 2021-01-19

Operating System

Process

A process is a program in execution and the basic unit of resource allocation in an operating system. In general, a process consists of instructions, data, and a PCB (process control block).

A daemon is a special process that runs in the background, detached from any controlling terminal, and performs certain tasks periodically.

Zombie process

When a child process exits, its process descriptor is not released immediately; it is released only after the parent process retrieves the child's exit information via wait() or waitpid(). If the child exits and the parent never calls wait() or waitpid(), the child's process descriptor remains in the system, and the child becomes a zombie process.
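A minimal sketch (assuming a POSIX system) of the mechanism: until the parent calls waitpid(), the exited child lingers as a zombie.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {            /* child: exit immediately */
        _exit(42);
    }
    /* Until the parent calls waitpid(), the child stays a zombie. */
    sleep(1);                  /* child is now a zombie (visible in `ps`) */
    int status;
    waitpid(pid, &status, 0);  /* reap: kernel frees the child's descriptor */
    if (WIFEXITED(status))
        printf("child exited with %d\n", WEXITSTATUS(status));
    return 0;
}
```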

Orphan process

If a parent process exits while one or more of its child processes are still running, those children become orphan processes. Orphan processes are adopted by the init process (PID 1), which collects their exit status for them. Because orphans are adopted by init, they do no harm to the system.

Thread

Threads are separate execution paths within a process and are the basic unit of independent scheduling by the operating system. A process can have multiple threads that share the process's resources. For example, WeChat and a browser are two processes; within the browser process there are many threads, such as an HTTP request thread, an event-response thread, a rendering thread, and so on. Concurrent execution of threads is what lets the browser keep responding to other user events while a click on a new link triggers an HTTP request.

Two kinds of threads

User-level threads and kernel-level threads.

Process and thread

  • A process is a program in execution; a thread is an execution path within that process.
  • A process is the smallest unit of resource allocation; a thread is the smallest unit of program execution (the smallest unit of CPU scheduling).
  • A program has at least one process, and a process has at least one thread.
  • A process has its own independent address space, which the system allocates whenever a process starts, while threads share the data and address space of their process.
  • Communication between threads is more convenient, because threads in the same process share global variables, static variables and other data (see the sketch after this list); communication between processes requires explicit IPC.
  • Multi-process programs are more robust: if one thread in a multithreaded program crashes, the whole process usually dies with it, whereas the death of one process does not affect other processes, because each process has its own independent address space.
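To make the "threads share process data" point concrete, here is a minimal pthreads sketch (assuming a POSIX system) in which two threads increment the same global counter; the mutex shows that shared data still needs synchronization.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared by all threads in the process -- no IPC needed. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* shared data still needs mutual exclusion */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000: both threads saw the same variable */
    return 0;
}
```

Compile with `-pthread`; the same exchange between two processes would require one of the IPC mechanisms described later.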

Concurrency and parallelism

  • Concurrency means that multiple tasks are processed within a period of time, but at any single instant only one task is executing. A single-core processor can achieve concurrency. For example, with two processes A and B, A runs for a time slice and then switches to B, which runs for a time slice and switches back to A; because the switching is fast enough, macroscopically it looks as if multiple programs run at the same time within that period.
  • Parallelism means that multiple tasks are literally in progress at the same instant. This requires a multi-core processor: microscopically, multiple instructions execute simultaneously, with different programs running on different cores, i.e. multiple processes physically running at once.

Process status

  • In the five-state model, a process is in one of five states: created (new), ready, running, blocked, and terminated.
  • Running: the process is executing on the CPU. In a single-processor environment, at most one process is running at any given time.
  • Ready: the process is ready to run; it has obtained all required resources except the CPU and can run as soon as it gets the CPU.
  • Blocked: the process has stopped running to wait for an event, such as a resource becoming available or an I/O operation completing; it cannot run even if the CPU is idle.

Running → blocked: usually caused by waiting for a peripheral, for main memory or another resource to be allocated, or for manual intervention.

Blocked → ready: the awaited event has occurred, but the process can run only after it is allocated the processor.

Running → ready: the process gives up the processor not for its own reasons but for external ones, e.g. its time slice expires or a higher-priority process preempts the processor; it then becomes ready.

Ready state → running state: the system selects a process in the ready queue to occupy the processor according to a certain policy, and then it becomes running state.

Process scheduling algorithm

  • First come first served

A non-preemptive scheduling algorithm that schedules requests in order of arrival.

It favors long jobs but is bad for short ones: a short job must wait for the long jobs ahead of it to finish, and since long jobs run for a long time, short jobs end up waiting a long time. It is also bad for I/O-intensive processes, which must rejoin the queue after every I/O operation.

  • Shortest job first

A non-preemptive scheduling algorithm that schedules jobs in order of shortest estimated running time.

Long jobs may starve while waiting for short jobs to complete: if short jobs keep arriving, a long job may never be scheduled.

  • Shortest remaining time first

The preemptive version of shortest job first, scheduling by remaining running time. When a new job arrives, its total running time is compared with the remaining time of the current process; if the new job needs less time, the current process is suspended and the new job runs, otherwise the new job waits.

  • Round robin (time slice rotation)

All ready processes are queued in FCFS order, and each scheduling decision gives the CPU to the process at the head of the queue for one time slice. When the slice expires, a timer raises a clock interrupt; the scheduler stops the process, moves it to the tail of the ready queue, and allocates CPU time to the new head of the queue (a minimal simulation appears after this list).

    • The efficiency of time slice rotation algorithm is closely related to the size of time slice
      • Because process switching needs to save process information and load new process information, if the time slice is too small, process switching will be too frequent, and process switching will take too much time.
      • If the time slice is too long, responsiveness (real-time behavior) cannot be guaranteed.
  • Priority scheduling

    Each process is assigned a priority and scheduled according to the priority.

To prevent low-priority processes from never being scheduled, the priority of a waiting process can be raised as time passes (aging).
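As a rough illustration of the round-robin idea referenced above, the following minimal C simulation (with hypothetical burst times and quantum) hands each unfinished process one time slice per pass.

```c
#include <stdio.h>

/* Minimal round-robin simulation: each process has a remaining burst time;
 * the scheduler gives each ready process one quantum per pass, in FCFS order. */
int main(void) {
    int remaining[] = {5, 3, 8};              /* hypothetical CPU bursts */
    int n = 3, quantum = 2, done = 0, t = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;  /* already finished */
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            t += run;
            remaining[i] -= run;
            printf("t=%2d: P%d ran %d unit(s), %d left\n", t, i, run, remaining[i]);
            if (remaining[i] == 0) done++;
        }
    }
    return 0;
}
```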

Preemptive scheduling and non preemptive scheduling

  • Preemptive scheduling means the operating system may suspend a running process and have the scheduler allocate the CPU to another ready process.
  • Non-preemptive scheduling means that once the scheduler assigns the processor to a process, it runs until it completes or blocks waiting for an event; only then is the processor assigned to another process.

Monolithic kernel and microkernel

  • A monolithic kernel puts all operating system functionality into the kernel, including scheduling, the file system, networking, device drivers, and storage management, forming a tightly coupled whole. Its advantage is efficiency, but bugs are hard to localize and extensibility is poor: adding new functionality means recompiling the new code together with the existing kernel code.
  • A microkernel, by contrast, keeps only the most essential functions in the kernel, including IPC, address space management, and basic scheduling, all of which run in kernel mode. Other functionality runs in user space as modules invoked by the kernel. A microkernel is easy to maintain and extend, but may be less efficient because of frequent switches between kernel mode and user mode.

Time sharing system and real time system

  • A time-sharing system divides CPU time into short slices and allocates them to multiple jobs in turn. Its advantage is that it guarantees fast enough response times for the jobs of multiple users and effectively improves resource utilization.
  • Real time system (RTs) is a system that can process and respond to the external input information within a specified time (deadline). It has the advantages of being able to deal with and respond in time, high reliability and safety.
  • General-purpose computers usually adopt time sharing: multiple processes/users share the CPU, which implements multitasking, and scheduling is flexible, so a process can simply be given more time when it needs it. A real-time operating system is different: software and hardware must obey strict time limits, and a process that exceeds its deadline may be terminated outright. In such an operating system, every lock must be considered carefully.

Static link and dynamic link

  • Static linking means the compiler and linker integrate static libraries into the application at compile time, producing object files and an executable that can run on its own. A static library is generally a collection of external functions and variables.
  • Static libraries are convenient, but even if we only want one function from a library, everything in it gets linked in. A more modern approach is to use shared libraries, which avoids the massive duplication static libraries cause across files.
  • Dynamic linking can be performed when the program is first loaded, or even while it is running, and is carried out by the dynamic linker. The standard C library (libc.so), for example, is usually dynamically linked, so that all programs can share the same library instead of each packaging its own copy.

Compilation phase

  • Preprocessing stage: Processing preprocessing commands beginning with #;
  • Compile stage: translate into assembly file;
  • Assembly stage: translate assembly file into relocatable object file;
  • Linking phase: merge the relocatable object file with precompiled object files such as printf.o to obtain the final executable object file.

System calls and library functions

  • System call is a way for applications to request services from the system kernel. It can include hardware related services (for example, access to hard disk, etc.), or create new processes, schedule other processes, etc. System call is an important interface between program and operating system.
  • A library function is one of a collection of commonly used functions gathered into a file and called when writing applications. Libraries are provided by third parties, and library calls happen in user address space.
  • In terms of portability, the system calls of different operating systems are generally different, and the portability is poor; library functions will be relatively better. For example, in all versions of the ANSI C compiler, the C library functions are the same.
  • In terms of call overhead, system calls need to switch between user space and kernel environment, which is high overhead, while library function calls are low overhead.

Deadlock

A deadlock occurs when two or more threads each wait for the others to release resources (or finish) before they can continue; as a result, all of them are stuck waiting forever.
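As an illustration, here is a minimal pthreads sketch (assuming a POSIX system) that reliably deadlocks: two threads each grab one lock and then wait forever for the other's. Acquiring the locks in a consistent order would prevent it.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&a);
    sleep(1);                   /* give t2 time to grab b */
    printf("t1 waiting for b...\n");
    pthread_mutex_lock(&b);     /* blocks forever: t2 holds b, wants a */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

static void *t2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&b);
    sleep(1);
    printf("t2 waiting for a...\n");
    pthread_mutex_lock(&a);     /* blocks forever: t1 holds a, wants b */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);      /* never returns: the two threads deadlock */
    pthread_join(y, NULL);
    return 0;
}
```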

reason:

  • The system provides too few resources to satisfy the needs of the concurrent processes
  • Processes advance in an improper order: each occupies resources the others need while simultaneously requesting the resources the others hold
Necessary conditions for deadlock

It needs to have the following four conditions at the same time:

  • Mutual exclusion: a resource can be occupied by only one process at a time; it cannot be used by two or more processes simultaneously.
  • No preemption: before the holder of a resource finishes with it, other processes cannot forcibly take the resource away; only the holding process can release it.
  • Hold and wait: a process that already holds resources may request new ones.
  • Circular wait: two or more processes form a cycle in which each waits for a resource held by the next process in the cycle.
How to avoid thread deadlock?
  • There are four ways to deal with deadlock

    (1) Deadlock prevention: ensure that at least one of the necessary conditions for deadlock can never hold, so deadlock cannot occur

    (2) Deadlock detection: allow deadlocks to occur, but detect them promptly through a detection mechanism provided by the system, and take measures to clear them

    (3) Deadlock avoidance: during resource allocation, use some method to keep the system out of unsafe states, thereby avoiding deadlock

    (4) Deadlock recovery: the companion measure to deadlock detection; once a deadlock is detected in the system, the deadlocked processes must be extricated from it.

Common method: cancel or suspend some processes to recycle some resources, and then allocate these resources to the blocked processes.

  • Details of deadlock handling:

1. Deadlock prevention: break one or more of the four necessary conditions

× (1) Mutual exclusion: allow processes to access a resource simultaneously (some resources simply cannot be accessed concurrently, so this has little practical value)

(2) Hold and wait: require a process to request all the resources it needs at once, and block it until every request can be satisfied at the same time (resource usage is hard to predict, and utilization and concurrency are low)

(3) No preemption: for example, give processes priorities and let a high-priority process preempt resources (hard to implement and degrades system performance)

(4) Circular wait: prevent it by defining a linear order over resource types.

Resources are classified and numbered in advance and allocated in numbered order, so that processes cannot form a cycle while requesting and holding resources; every process must request resources strictly in increasing order of resource number (the classification and numbering are hard to get right and add system overhead, and resources not needed yet must still be requested early, lengthening the time a process holds them)

2. Deadlock avoidance:

Two deadlock avoidance algorithms are proposed

  • Process initiation denial: do not start a process if its resource demands could lead to deadlock.
  • Resource allocation denial: do not grant an additional resource request from a process if granting it could lead to deadlock (the banker's algorithm).

Banker’s algorithm:

0. First perform a safety check; only after it passes do the following steps proceed.

1. If Request ≤ Need, go to step 2; otherwise raise an error, since the process is requesting more resources than it declared it needs.

2. If Request ≤ Available, go to step 3; otherwise the resources are insufficient and process P must wait (is blocked);

3. The system tentatively allocates the resources to process P and updates the values of Available, Allocation and Need.

4. The system performs the safety check to see whether it would still be in a safe state after the allocation. If so, the resources are formally granted to P; otherwise the tentative allocation is undone, the system returns to its previous state, and P waits.

*Safe state: there exists some order of processes in which the system can satisfy each process's maximum resource demand in turn, so that every process can run to completion.
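Below is a sketch of the safety check at the heart of the banker's algorithm, with a small hypothetical Available/Allocation/Need example; it searches for an order in which every process can finish.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define P 3  /* number of processes (example sizes) */
#define R 2  /* number of resource types */

/* Safety check: can all processes finish in some order,
 * given Available, Allocation and Need? */
bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finish[P] = {false};
    memcpy(work, avail, sizeof(work));
    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                       /* P_i can run to completion ... */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j]; /* ... and then returns its resources */
                finish[i] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed) return false;      /* no process can finish: unsafe */
    }
    return true;
}

int main(void) {
    int avail[R] = {3, 2};                          /* hypothetical state */
    int alloc[P][R] = {{1, 0}, {2, 1}, {0, 1}};
    int need[P][R]  = {{2, 2}, {1, 1}, {3, 1}};
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}
```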

3. Deadlock detection

Deadlock detection allows the system to enter a deadlocked state, but the system maintains a resource allocation graph and periodically runs a deadlock detection algorithm to check whether a deadlock exists. Once a deadlock is detected, a recovery algorithm is applied.

The deadlock detection method is as follows:

  • In the resource allocation graph, find a process node that is neither blocked nor isolated; such a process can obtain the resources it needs and run, and when it finishes it releases all the resources it holds. In other words, the process node's request and allocation edges are removed.
  • Apply this simplification repeatedly; if all edges can be eliminated, no deadlock exists, otherwise there is a deadlock.

When a deadlock is detected, it needs to be resolved. At present, the operating system mainly adopts the following methods:

  • Kill all threads involved in the deadlock: crude, but indeed the most commonly used approach
  • Roll each deadlocked thread back to some checkpoint and restart it
  • Terminate the deadlocked processes one by one, following some minimum-cost principle
  • Keep preempting resources until the deadlock is broken

4. Deadlock recovery

So what if a deadlock does happen? Then the system must be extricated from that state, generally in one of the following ways:

  • Undo deadlock process
  • Deprives the deadlock process of resources until there is no deadlock
  • The ostrich algorithm: simply ignore the deadlock and pretend nothing happened. Hard as it may be to believe, this is what most operating systems choose to do.

How to synchronize processes

Critical section

A critical section serializes access by multiple threads to a shared resource or a piece of code; it is fast and well suited to controlling data access. Only one thread may access the shared resource at any time: if several threads try, the ones that arrive after a thread has entered are suspended, and they may compete to enter only after the thread inside leaves and the critical section is released.

Advantages: a simple way to ensure that only one thread can access data at a certain time.

Disadvantages: although critical-section synchronization is very fast, it can only synchronize threads within the current process; it cannot synchronize threads across multiple processes.

Mutex

A mutex uses a mutual-exclusion object: only the thread holding the mutex may access the shared resource. Because there is only one mutex, the shared resource can never be accessed by multiple threads at once. Mutexes can safely share a resource not only within one application but also across different applications.

Semaphore

A semaphore is designed for systems where a limited number of users may access a resource. It allows multiple threads to access the same resource concurrently but bounds the maximum number of simultaneous accessors. A mutex is a special case of a semaphore: a semaphore whose maximum count is 1 is a mutex.

PV operation:

  • P operation: decrement the semaphore; if the value is then negative, the calling process blocks, otherwise it continues;
  • V operation: increment the semaphore to release a resource; if the value is then less than or equal to 0, wake one process from the blocked queue.
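As a sketch of P/V in practice, here is how the POSIX counting-semaphore API maps onto them (sem_wait is P, sem_post is V), assuming a system with POSIX semaphores; the initial value 2 caps concurrent users of the "resource" at two.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

/* Semaphore initialized to 2: at most two threads use the resource at once. */
static sem_t sem;

static void *user(void *arg) {
    long id = (long)arg;
    sem_wait(&sem);                 /* P: acquire; blocks if the count is 0 */
    printf("thread %ld using the resource\n", id);
    sem_post(&sem);                 /* V: release; wakes a blocked waiter */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&sem, 0, 2);           /* 0 = shared between threads, initial value 2 */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, user, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&sem);
    return 0;
}
```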
Monitor

A monitor applies the object-oriented idea to centralize synchronization: the data structures representing shared resources and the operations on them, including the synchronization mechanism, are encapsulated together. All processes can access the critical resource only indirectly through the monitor, and only one process at a time is allowed to be active inside the monitor, which realizes mutual exclusion between processes.

A monitor defines condition variables representing the conditions on which processes block or are suspended. Calling wait() on a condition variable blocks the calling process and releases the monitor to another process; the signal() operation wakes a blocked process.

An important property of a monitor is that only one process can use it at a time. A process must not occupy the monitor while it is unable to proceed, otherwise no other process could ever use the monitor.
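A monitor can be approximated with a mutex plus condition variables; the sketch below (assuming pthreads) encapsulates a one-slot buffer so callers only use deposit()/fetch(), mirroring the wait()/signal() behavior described above.

```c
#include <pthread.h>
#include <stdio.h>

/* Monitor-style encapsulation: the shared data, its lock, and its
 * condition variables live together; callers only use deposit()/fetch(). */
static int slot, full = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t can_put = PTHREAD_COND_INITIALIZER;
static pthread_cond_t can_get = PTHREAD_COND_INITIALIZER;

void deposit(int v) {
    pthread_mutex_lock(&m);               /* only one thread inside the monitor */
    while (full)
        pthread_cond_wait(&can_put, &m);  /* wait() releases the monitor */
    slot = v;
    full = 1;
    pthread_cond_signal(&can_get);        /* signal() wakes a blocked fetcher */
    pthread_mutex_unlock(&m);
}

int fetch(void) {
    pthread_mutex_lock(&m);
    while (!full)
        pthread_cond_wait(&can_get, &m);
    full = 0;
    pthread_cond_signal(&can_put);
    int v = slot;
    pthread_mutex_unlock(&m);
    return v;
}

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) deposit(i);
    return NULL;
}

int main(void) {
    pthread_t p;
    pthread_create(&p, NULL, producer, NULL);
    for (int i = 0; i < 3; i++)
        printf("got %d\n", fetch());
    pthread_join(p, NULL);
    return 0;
}
```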

Methods of interprocess communication (IPC)

Pipe
  • A pipe is half-duplex (like a walkie-talkie): data can flow in only one direction at a time; two-way communication requires two pipes.
  • Pipes can be used only between parent-child, sibling, or otherwise related processes.
  • To the processes at its two ends, a pipe is a file, but not an ordinary one: it belongs to no file system and exists only in memory.
  • A pipe is essentially a kernel buffer accessed first-in first-out: the process at one end writes data into the buffer in order, and the process at the other end reads it in order. The buffer behaves like a circular queue; the read and write positions advance automatically and cannot be moved arbitrarily, and data can be read only once — once read, it is gone from the buffer. When the buffer is empty or full, rules make the corresponding reader or writer enter a wait queue, and when data is written into an empty buffer or read out of a full one, the waiting process is woken to continue reading or writing.
  • The pipe's main limitations follow from these characteristics: one-way data flow only, usable only between related processes, no name, and a bounded buffer.
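A minimal anonymous-pipe sketch (assuming a POSIX system): the parent creates the pipe, forks, and the related processes communicate one way through it.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    pipe(fd);
    if (fork() == 0) {              /* child: writes into the pipe */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                   /* parent: reads from the pipe */
    char buf[64];
    read(fd[0], buf, sizeof(buf));
    printf("parent read: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```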
Named pipes

This kind of pipe is also called a FIFO. It is also half-duplex, but it allows communication between unrelated processes.

A named pipe differs from an anonymous pipe in that it has an associated path name and exists in the file system as a named pipe file. Even processes unrelated to the pipe's creator can communicate through it, as long as they can access that path in the file system. Named pipes strictly follow first-in first-out and do not support random access to the data. The name lives in the file system, but the contents are stored in memory.
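Here is a sketch of the writer side of a named pipe, assuming POSIX and a hypothetical path /tmp/demo_fifo; an unrelated process can receive the data by opening the same path (e.g. `cat /tmp/demo_fifo` in a shell).

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";        /* hypothetical path */
    mkfifo(path, 0666);                         /* create the named pipe file */
    int fd = open(path, O_WRONLY);              /* blocks until a reader opens it */
    const char *msg = "hello over a FIFO\n";
    write(fd, msg, strlen(msg));
    close(fd);
    unlink(path);                               /* remove the FIFO's name */
    return 0;
}
```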

Message queuing
  • A message queue is a linked list of messages with a specific format, stored in memory; each message queue has a unique identifier.
  • Message queues overcome the drawbacks that signals carry little information, that pipes carry only unformatted byte streams, and that pipe buffers are limited in size.
  • A message queue lets one or more processes write and read messages, so one process can send a data block to another. Each data block has a type, and the receiving process can independently receive blocks of different types. The exchange is asynchronous; by sending messages we also avoid the synchronization and blocking problems of named pipes. However, each data block has a maximum size limit.
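A sketch using the System V message-queue API (msgget/msgsnd/msgrcv); for brevity one process both sends and receives, whereas in practice these would be two processes sharing a key.

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct message {
    long mtype;          /* message type, must be > 0 */
    char mtext[64];      /* message body */
};

int main(void) {
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0666);  /* create a queue */
    struct message out = {1, "hello via message queue"};
    msgsnd(qid, &out, strlen(out.mtext) + 1, 0);      /* enqueue the block */

    struct message in;
    msgrcv(qid, &in, sizeof(in.mtext), 1, 0);         /* receive a type-1 message */
    printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);                      /* remove the queue */
    return 0;
}
```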
Shared memory
  • Shared memory was designed to address the inefficiency of other communication mechanisms. It lets multiple processes read and write the same memory region directly and is the fastest form of IPC.
  • To let multiple processes exchange information, the kernel sets aside a special memory region, which processes needing access can map into their own private address spaces. A process then reads and writes this memory directly without copying data, which greatly improves efficiency.
  • Because several processes share one region of memory, some synchronization mechanism is needed to achieve synchronization and mutual exclusion between them.
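A System V shared-memory sketch (assuming a POSIX system with System V IPC): parent and child map the same segment and exchange data without copying; wait() stands in for real synchronization here.

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0666);
    char *mem = shmat(id, NULL, 0);        /* map the segment into this process */
    if (fork() == 0) {                     /* child writes directly, no copies */
        strcpy(mem, "written in shared memory");
        _exit(0);
    }
    wait(NULL);                            /* crude synchronization for the demo */
    printf("parent reads: %s\n", mem);
    shmdt(mem);                            /* unmap */
    shmctl(id, IPC_RMID, NULL);            /* destroy the segment */
    return 0;
}
```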
Semaphore

Semaphore is a counter that can be used to control the access of multiple processes to shared resources. It is a lock like mechanism, which prevents other processes from accessing the shared resource when the current process is accessing it.

Socket
  • A socket is also a communication mechanism; with it, two processes on different hosts can communicate over the network. It is commonly used for communication between a client and a server.
  • In fact, a socket is an abstraction layer between the application layer and the transport layer: it abstracts the complex operations of the TCP/IP transport layer into a few simple interfaces that the application layer calls to let processes communicate over the network.

What is the socket communication process?

  • Generally speaking, a socket is created at each end of the communication, and data is transmitted through the sockets. The server typically sits in an infinite loop, waiting for clients to connect.
  • For the client the flow is relatively simple: create a socket, connect to the server over TCP so that the socket is associated with a process on the remote host, then send data or read response data until the exchange is complete, close the connection, and end the TCP session.
  • For the server: first initialize the socket, creating a streaming socket and binding it to a local address and port; then notify TCP that it is ready to receive connections and call accept(), which blocks waiting for a client. When a client connects and sends a data request, the server receives and processes the request, then sends response data back, and the client reads it until the exchange is complete; finally the connection is closed and the interaction ends.
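A minimal sketch of the server side of this flow (assuming POSIX sockets and a hypothetical port 8080): initialize, bind, listen, block in accept(), then read the request and echo a response.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal TCP serve-one-request server on a hypothetical port 8080. */
int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);          /* streaming socket */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));  /* bind address + port */
    listen(srv, 5);                                     /* ready to accept */

    int conn = accept(srv, NULL, NULL);   /* block until a client connects */
    char buf[256];
    ssize_t n = read(conn, buf, sizeof(buf));           /* receive request */
    if (n > 0) write(conn, buf, n);                     /* send response back */
    close(conn);
    close(srv);
    return 0;
}
```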

Talking about socket communication process from the perspective of TCP connection

  • First, the socket interaction during the three-way handshake:
    1. The server calls socket(), bind() and listen() to initialize, then calls accept() and blocks, waiting;
    2. The client's socket calls connect(), sending a SYN to the server, and blocks;
    3. The server receives the SYN, completing the first handshake, and replies with SYN and ACK;
    4. When the client receives the server's reply, its connect() returns, and it sends an ACK back to the server;
    5. The server's socket receives this third-handshake ACK, accept() returns, and the connection is established.
  • Next, the two connected endpoints send and receive data with each other.
  • Finally, the socket interaction during the four-way teardown:
    1. One application process calls close() to actively shut down, sending a FIN;
    2. The other end receives the FIN, performs a passive close, and sends an ACK confirmation;
    3. Later, the passively closing application process also calls close() to close its socket, likewise sending a FIN;
    4. The end that receives this FIN replies with an ACK confirmation.

Disk scheduling algorithm

First come first served
  • Schedule according to the order of disk requests.
  • The advantages are fairness and simplicity. The disadvantage is also obvious, because there is no optimization for seek, so the average seek time may be longer.
Shortest seek time first
  • Priority is given to the track closest to the current head.
  • Although the average seek time is relatively low, it is not fair. If the newly arrived track request is always closer to a waiting track request, then the waiting track request will continue to wait, that is, starvation occurs. Generally speaking, track requests at both ends are more prone to starvation.
Elevator algorithm
  • Also called the SCAN algorithm: the read/write head always moves in one direction until there are no more requests in that direction, then reverses.
  • Because it takes the direction of motion into account, all disk requests are eventually satisfied, which solves the starvation problem of shortest seek time first.

virtual memory

  • Virtual memory expands physical memory into a larger logical memory, so that programs can have more usable memory.
  • Virtual memory uses partial loading: only some pages of a process or resource are loaded into memory, so more processes can be loaded, even processes larger than physical memory. Memory thus appears larger than it is; part of this "memory" actually resides on disk, which is why it is called virtual memory.

Paging system

  • The virtual address space is divided into fixed-size blocks called pages, and physical memory is divided into blocks of the same size called page frames.
  • When a process executes, the pages it needs are loaded from disk into page frames in memory; if the running process references a page that is not in memory, a page fault occurs and that page is loaded on demand.
  • An address consists of two parts: a page number (or frame number) and an offset. Paging is usually performed by hardware. Each page maps to a page frame, and the mapping is stored in a data structure called the page table: the page number is the index into the page table, and the frame number is its value. The operating system is responsible for maintaining this page table.
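A toy address-translation sketch with 4 KiB pages and a hypothetical four-entry page table, splitting a virtual address into page number and offset exactly as described.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* 4 KiB pages: the offset uses the low 12 bits */

int main(void) {
    /* Toy page table: index = page number, value = frame number. */
    uint32_t page_table[] = {5, 9, 1, 7};

    uint32_t vaddr = 2 * PAGE_SIZE + 123;          /* some virtual address */
    uint32_t page   = vaddr / PAGE_SIZE;           /* high bits: page number */
    uint32_t offset = vaddr % PAGE_SIZE;           /* low bits: unchanged */
    uint32_t paddr  = page_table[page] * PAGE_SIZE + offset;

    printf("vaddr %u -> page %u, offset %u -> paddr %u\n",
           vaddr, page, offset, paddr);
    return 0;
}
```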

Paging and segmentation

  • Paging mainly serves to implement virtual memory and thereby obtain a larger address space; segmentation mainly lets programs and data be divided into logically independent address spaces and aids sharing and protection.
  • Paging is transparent to the programmer, whereas segmentation requires the programmer to explicitly divide the program into segments.
  • A paged address space is one-dimensional; a segmented address space is two-dimensional.
  • Page size is fixed, while segment size can change dynamically.

Page replacement algorithm

During program execution, if a page to be accessed is not in memory, a page fault occurs and the page must be brought into memory. If there is no free space in memory at that point, the system must move some page from memory to the disk swap area to make room.

Theoretical algorithm: OPT (optimal)

The page selected for eviction is the one that will not be accessed for the longest time, which guarantees the lowest possible page fault rate. This is a theoretical algorithm, because it is impossible to know how long it will be before a page is accessed again.

FIFO

Evicts the page that entered memory earliest. Frequently accessed pages may be swapped out, leading to a higher page fault rate.

LRU

  • Although we cannot know which pages will be used in the future, we do know which pages were used in the past.
  • LRU swaps out the least recently used page. To implement it, a linked list of all pages in memory is maintained; whenever a page is accessed, it is moved to the head of the list. This guarantees that the page at the tail of the list is the least recently used one.
  • Because the list must be updated on every memory access, a strict LRU implementation is expensive.
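A sketch of LRU using timestamps instead of the linked list, with a hypothetical reference string and 3 frames; the evicted frame is always the one with the oldest last-use time.

```c
#include <stdio.h>

#define FRAMES 3

/* LRU via timestamps: evict the frame whose page was used longest ago.
 * (A real implementation would use the linked list described above.) */
int main(void) {
    int frame[FRAMES], last_used[FRAMES];
    int refs[] = {1, 2, 3, 1, 4, 2};   /* hypothetical reference string */
    int faults = 0, used = 0;

    for (int t = 0; t < 6; t++) {
        int p = refs[t], hit = -1;
        for (int i = 0; i < used; i++)
            if (frame[i] == p) hit = i;
        if (hit >= 0) {
            last_used[hit] = t;        /* hit: refresh recency */
        } else {
            faults++;
            int victim = 0;
            if (used < FRAMES) {
                victim = used++;       /* a free frame is available */
            } else {
                for (int i = 1; i < FRAMES; i++)  /* find oldest timestamp */
                    if (last_used[i] < last_used[victim]) victim = i;
            }
            frame[victim] = p;
            last_used[victim] = t;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}
```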

Clock algorithm

  • Pages are linked into a circular list, with a pointer (the clock hand) pointing at the oldest page. Each page carries a reference bit: on a page fault the algorithm sweeps around the ring, replacing the first page whose bit is 0, and clearing each bit it finds set to 1 back to 0 along the way.
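A sketch of the clock algorithm with 3 frames and a hypothetical reference string; the hand clears reference bits that are 1 and replaces the first page whose bit is 0.

```c
#include <stdio.h>

#define FRAMES 3

int page[FRAMES] = {-1, -1, -1};  /* -1 = empty frame */
int refbit[FRAMES];
int hand = 0;                     /* the "clock hand" */

/* Access page p; return 1 on page fault. */
int access_page(int p) {
    for (int i = 0; i < FRAMES; i++)
        if (page[i] == p) { refbit[i] = 1; return 0; }   /* hit: set R bit */
    for (;;) {                                           /* fault: find a victim */
        if (page[hand] == -1 || refbit[hand] == 0) {
            page[hand] = p;                              /* R == 0: replace here */
            refbit[hand] = 1;
            hand = (hand + 1) % FRAMES;
            return 1;
        }
        refbit[hand] = 0;                                /* R == 1: second chance */
        hand = (hand + 1) % FRAMES;
    }
}

int main(void) {
    int refs[] = {1, 2, 3, 1, 4, 2}, faults = 0;         /* hypothetical refs */
    for (int i = 0; i < 6; i++)
        faults += access_page(refs[i]);
    printf("page faults: %d\n", faults);
    return 0;
}
```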

Linux Filesystem

The Linux file system consists of files and directories forming a tree structure, where each leaf node represents a file or an empty directory. Each file basically consists of two parts:

  • inode: each file occupies one inode, which records the file's attributes and the numbers of the blocks holding its contents;
  • block: holds the file's contents; a large file occupies multiple blocks.

In addition, it also includes:

  • superblock: records overall information about the file system, including the total, used and free amounts of inodes and blocks, as well as the file system's format and related information;
  • block bitmap: a bitmap recording which blocks are in use.

To read a file, first find from its inode all the blocks holding the file's contents, then read those blocks out.

Hard link and soft link

  • A hard link creates a directory entry recording a file name and an inode number, where the inode is the source file's inode. Deleting any single entry leaves the file intact, as long as the reference count is not 0. Hard links have limitations: they cannot cross file systems and cannot link directories.
  • A soft (symbolic) link is actually a text file containing the location of another file; it can be thought of as a Windows shortcut. When the source file is deleted, the linked file can no longer be opened.
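A sketch contrasting the two with the POSIX link() and symlink() calls, assuming a file named source.txt exists in the current directory; after the original name is unlinked, the hard link still works while the symbolic link dangles.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Assumes a file named "source.txt" exists in the current directory. */
    link("source.txt", "hard.txt");      /* new directory entry, same inode */
    symlink("source.txt", "soft.txt");   /* new file whose content is the path */

    unlink("source.txt");                /* remove the original name */
    /* "hard.txt" still opens the data (link count was 2);
     * "soft.txt" is now a dangling link and fails to open. */
    FILE *f = fopen("soft.txt", "r");
    printf("soft link open %s\n", f ? "succeeded" : "failed (dangling)");
    if (f) fclose(f);
    return 0;
}
```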

References

Operating-system interview questions

This work is licensed under a CC license; reprints must credit the author and link to this article.