Common operating system interview questions

Time: 2019-12-24

Process vs thread

  • The biggest difference between a process and a thread is that a process has its own address space: the threads inside one process are invisible to other processes, so process A cannot directly read or write process B's memory by passing an address. Communication between processes requires inter-process communication (IPC). In contrast, threads of the same process can exchange information directly by passing addresses or using global variables.

  • A process is the basic unit of resource ownership and independent scheduling in the operating system, and one process can contain multiple threads. Usually one program running on the operating system corresponds to one process. Switching between threads of the same process does not cause a process switch, while switching from a thread in one process to a thread in another process does. Compared with a process switch, a thread switch costs much less, so combining threads with processes improves the system's overall efficiency.

Threads can be divided into two categories:

  • User-level threads: all thread-management work is done by the application program, and the kernel is unaware that threads exist. After the application starts, the operating system assigns it a process number along with memory and other resources. The application initially runs in a single thread, called the main thread; at any point during execution it can create new threads running in the same process by calling functions in a thread library. The advantage of user-level threads is that switching is very efficient, since it never enters kernel space; the drawback is limited concurrency, because if one thread blocks in the kernel the whole process blocks.

  • Kernel-level threads: all thread-management work is done by the kernel. The application contains no thread-management code and can only call the kernel thread interface. The kernel maintains the process and every thread inside it, and scheduling is done by the kernel on a per-thread basis. The benefit of kernel-level threads is that the kernel can assign different threads to different CPUs, achieving true parallel computation.

In practice, modern operating systems often use a combined model: thread creation is done entirely in user space, and the multiple user-level threads of an application are mapped onto some number of kernel-level threads, a compromise between the two approaches.

Context switch

  • On a single-core, single-threaded CPU, only one instruction can execute at any instant. A context switch is the mechanism by which the CPU is reassigned from one process to another. From the user's point of view the computer appears to run many processes in parallel, and this is precisely the result of the operating system switching contexts rapidly. During a switch, the operating system first saves the state of the current process (memory-space pointers, the currently executing instruction, and so on), then loads the state of the next process and resumes its execution.

The difference between system call and library function

  • A system call is the way a program requests a service from the system kernel. Services include hardware-related operations (for example, accessing the hard disk), creating new processes, scheduling other processes, and so on. The system call is the fundamental interface between a program and the operating system.

  • A library function collects commonly used functions into a file that applications call when they are written. Libraries are provided by third parties, and their code runs in the user address space.

  • In terms of portability: system calls differ between operating systems, so they port poorly; the ANSI C library functions are the same across all conforming compilers.

  • In terms of call overhead: a system call must switch between user space and the kernel, which is expensive; a library function call is an ordinary procedure call, which costs far less (see the sketch below).
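To make the contrast concrete, here is a minimal C sketch (my illustration, not from the original text): printf is a buffered libc library function, while write(2) traps into the kernel on every call.

```c
#include <stdio.h>      /* library function: printf */
#include <unistd.h>     /* system call wrapper: write */

int main(void) {
    /* Library function: runs in user space and buffers output;
       it only invokes the write() system call when flushing. */
    printf("hello from the C library\n");

    /* System call: every invocation switches into the kernel,
       so each call pays the user/kernel transition cost. */
    write(STDOUT_FILENO, "hello from a system call\n", 25);
    return 0;
}
```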

Daemon, zombie, and orphan processes

  • Daemon: a special process that runs in the background, detached from any controlling terminal, and periodically performs some task.

  • Zombie process: a process forks a child, the child exits, but the parent never calls wait/waitpid on it; the child's process descriptor then remains in the system, and such a process is called a zombie (see the sketch after this list).

  • Orphan process: the parent exits while one or more of its children are still running; these children are called orphans. (Orphan processes are adopted by the init process, which collects their exit status for them.)
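A minimal sketch of how a zombie arises (illustrative, POSIX assumed): the child exits while the parent delays its wait(), so for a few seconds the child shows up as a zombie (state Z in ps).

```c
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(0);   /* child exits at once; its descriptor lingers until reaped */
    sleep(5);      /* during these 5 seconds the child is a zombie */
    wait(NULL);    /* wait() reaps the child and removes the zombie entry */
    return 0;
}
```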

The difference between time-sharing system and real-time system

  • Time-sharing system: the system divides CPU time into short time slices and allocates them to multiple jobs in turn. Advantage: the response time for the jobs of many users can be kept short enough, and resource utilization is improved effectively.

  • Real-time system: the system can process and respond to external input within a specified time (the deadline). Advantage: it handles and responds to events promptly and predictably, with high reliability and safety.

  • General-purpose computers use time-sharing: multiple processes/users share the CPU, which is how multitasking is achieved. Scheduling among users/processes is not very precise; a process holding a lock, for instance, may simply be given more time. A real-time operating system is different: the hardware and software must obey strict deadlines, and a process that misses its deadline may be terminated outright. In such a system, every lock acquisition has to be considered carefully.

Semaphore vs mutex

  • When a user creates multiple threads/processes, having different threads/processes read and write the same data at the same time may cause read/write errors or data inconsistency. Access to the critical section must then be controlled by locking. With a semaphore, the initial value of the variable controls how many threads/processes may access a critical section simultaneously; additional threads/processes are blocked until someone releases the semaphore.

  • A mutex is equivalent to a semaphore that allows only one thread/process in at a time. In addition, where needed, people have also implemented read-write locks, which allow multiple readers to coexist, but at most one writer at any moment, and the writer cannot coexist with readers. A small sketch contrasting semaphore and mutex follows.
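A minimal POSIX sketch of the contrast (illustrative; the names slots and counter and the value 3 are arbitrary): the counting semaphore admits up to three threads at once, while the mutex admits exactly one.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t slots;                                       /* counting semaphore */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* mutex: one holder at most */
int counter = 0;

void *worker(void *arg) {
    sem_wait(&slots);            /* at most 3 threads may pass this point at once */
    pthread_mutex_lock(&lock);   /* strictly one thread updates the shared data */
    counter++;
    pthread_mutex_unlock(&lock);
    sem_post(&slots);            /* give the slot back */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, 3);      /* initial value 3: three concurrent entries allowed */
    for (int i = 0; i < 5; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
    printf("counter = %d\n", counter);   /* always 5, thanks to the mutex */
    return 0;
}
```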

Logical address vs physical address vs virtual memory

  • The so-called logical address is the address that a computer user (for example, a program developer) sees. When you create an integer array of length 100, the operating system returns a logically contiguous space: a pointer to the memory address of the array's first element. Since an integer occupies 4 bytes, the address of the second element is the starting address plus 4, and so on. In fact, the logical address is not necessarily where the element is really stored: the physical addresses of the array elements (their locations in the memory modules) may be non-contiguous, and the operating system makes the logical addresses appear contiguous through address mapping, which better matches people's intuition.

  • Another important concept is virtual memory. Reading and writing memory is several orders of magnitude faster than reading and writing disk, but memory is comparatively expensive and cannot be expanded on a large scale. The operating system therefore extends memory by moving less frequently used data out of RAM and storing it on the much cheaper disk (the swap area). It can also predict, by algorithm, which of the data currently on disk will soon be read or written, and bring that data back into memory in advance. The virtual-memory structures are far smaller than the disk, so even searching them is faster than searching the disk directly; the only case slower than a plain disk access is when the required data is in neither memory nor the virtual-memory structures and must finally be read from the hard disk. This is why memory and virtual memory should hold data that will be read and written repeatedly; otherwise the cache loses its purpose. A dedicated piece of hardware, the translation lookaside buffer (TLB), performs the fast translation from virtual addresses to physical addresses (a worked translation follows).
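The translation itself is simple arithmetic once the page size is fixed. A toy sketch (the 4 KiB page size, the address 0x12345 and the frame number 0x42 are all made-up illustration values; a real lookup goes through the TLB/page table):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                        /* assumed 4 KiB pages */

int main(void) {
    uint32_t vaddr  = 0x12345;                 /* hypothetical virtual address */
    uint32_t vpn    = vaddr / PAGE_SIZE;       /* virtual page number: the TLB/page-table key */
    uint32_t offset = vaddr % PAGE_SIZE;       /* offset within the page, copied unchanged */

    uint32_t pfn   = 0x42;                     /* pretend the TLB maps this VPN to frame 0x42 */
    uint32_t paddr = pfn * PAGE_SIZE + offset; /* physical address = frame base + offset */

    printf("vpn=0x%x offset=0x%x -> paddr=0x%x\n", vpn, offset, paddr);
    return 0;
}
```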

There are also two concepts related to memory / virtual memory:
1) Resident Set

  • When a process runs, the operating system does not load all of its data into memory at once; it loads only the part currently in use or expected to be used. The rest may live in virtual memory, the swap area, or the file system on disk. The part loaded into memory is the resident set.

2) Thrashing

  • Because the resident set holds the data a process is expected to use, ideally the data the process touches is gradually loaded into it. In practice this is often not the case: whenever a required memory page is not in the resident set, the operating system must fetch it from virtual memory or the hard disk; this event is called a page fault. When the operating system spends most of its time handling page faults rather than doing useful work, the system is thrashing.

File system

  • UNIX-style file systems use a tree structure to manage files. Each node holds pointers to the disk locations of its next-level nodes or files. A file node also stores the file's metadata, including modification time, access permissions, and so on.

  • User access rights are implemented through capability lists and access control lists. A capability list, from the user's point of view, records which files a user may operate on and with what permissions; an access control list, from the file's point of view, records which operations each user may perform on that file.

  • UNIX file permissions are read, write and execute, and users are divided into three classes: the file owner, the owner's group, and all other users. Permissions can be set for the three classes with a command such as chmod (a sketch follows).
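For instance, the octal mode 0754 means rwx for the owner, r-x for the group and r-- for everyone else; the shell command chmod 754 file sets it, and the equivalent system call looks like this (a sketch; the file name example.txt is hypothetical):

```c
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    /* 0754 = rwxr-xr--: owner read/write/execute, group read/execute,
       others read only. Assumes example.txt already exists. */
    if (chmod("example.txt", 0754) != 0)
        perror("chmod");
    return 0;
}
```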

What are the conditions of deadlock? And how to deal with deadlock?

  • Mutual exclusion: the resource cannot be shared and can be used by only one process at a time.

  • Hold and wait: processes that already have resources can request new resources again.

  • No preemption: resources already allocated cannot be forcibly taken away from the process holding them.

  • Circular wait: several processes in the system form a loop, in which each process is waiting for the resources occupied by the adjacent processes.

How to deal with Deadlock:

  • Ignore the problem. For example, the ostrich algorithm can be used when deadlocks are rare. Why "ostrich"? Ostriches are said to bury their heads in the sand at the sight of danger, as if danger not seen were no danger at all — a bit of self-deception.

  • Detect deadlock and recover.

  • Carefully allocate resources dynamically to avoid deadlock (for example, with the banker's algorithm).

  • Prevent deadlock by breaking one of the four necessary conditions above (the sketch below breaks circular wait).
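One classic way to break the circular-wait condition is to impose a global lock ordering. A minimal pthread sketch (illustrative; the two mutexes stand for any two resources):

```c
#include <pthread.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

/* Every thread acquires the locks in the same global order (a before b),
   so a cycle of threads each waiting on the other can never form. */
void use_both(void) {
    pthread_mutex_lock(&a);
    pthread_mutex_lock(&b);
    /* ... critical section needing both resources ... */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
}

int main(void) {
    use_both();
    return 0;
}
```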

The difference between dynamic link library and static link library

Static library

  • A static library is a collection of external functions and variables. Its file content is usually a set of variables and functions defined by programmers, and it is not as complex as a dynamic link library. At compile time, the compiler and linker merge it into the application, producing object files and a stand-alone executable. Such an executable is said to be statically built.


Dynamic library

  • Static libraries are convenient, but if we only want to use one function from a library, the whole library still gets linked in. A more modern approach is the shared library, which avoids the large amount of duplicated code that static libraries place in every file.

  • Dynamic linking can be performed when the program is first loaded (load-time linking), which is the standard practice on Linux, done by the dynamic linker ld-linux.so. The standard C library (libc.so), for example, is usually dynamically linked, so all programs can share a single copy of the library instead of each embedding its own.

  • Dynamic linking can also be performed while the program is running (run-time linking). On Linux this uses the dlopen() interface (with function pointers); it is commonly used in distributed software and high-performance servers. A shared library can likewise be shared among multiple processes (see the dlopen sketch below).

  • Linking lets us build programs from multiple object files, and it can happen at different stages (compile time, load time, run time). Understanding linking helps us avoid puzzling errors.
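A minimal run-time linking sketch (assumes a Linux system where the shared math library is named libm.so.6; compile with -ldl on older glibc):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Run-time linking: open the shared math library and look up cos() by name. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));   /* prints 1.000000 */

    dlclose(handle);
    return 0;
}
```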


Interprocess communication

  • Pipe: a pipe is a one-way, first-in-first-out, unstructured byte stream of fixed capacity that connects the standard output of one process to the standard input of another. The writing process writes data at one end of the pipe and the reading process reads at the other; once data has been read it is removed from the pipe, and no other reader can read it again. The pipe provides a simple flow-control mechanism: a process trying to read an empty pipe blocks until data is written, and a process trying to write to a full pipe blocks until another process removes data from it. (A minimal pipe example follows this list.)

  • Semaphore: semaphores are counters used to control access by multiple processes to a shared resource. They are often used as a locking mechanism that prevents a process from accessing a shared resource while another process is using it, and thus chiefly serve as a means of synchronization between processes and between threads of the same process.

  • Message queue: a message queue is a linked list of messages stored in the kernel and identified by a message-queue identifier. Message queues overcome the drawbacks that signals carry little information, that pipes carry only unformatted byte streams, and that buffer sizes are limited.

  • Signal: a signal is a relatively sophisticated communication mechanism used to notify the receiving process that some event has occurred.

  • Shared memory: shared memory maps a region of memory so that it can be accessed by other processes. The region is created by one process but can be accessed by many. Shared memory is the fastest IPC mechanism; it was designed specifically around the inefficiency of the other mechanisms, and it is often used together with other mechanisms (such as semaphores) to achieve both synchronization and communication between processes.

  • Socket: the socket is also an inter-process communication mechanism; unlike the others, it can be used between processes on different machines.
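A minimal pipe sketch (illustrative, POSIX assumed): the parent writes into one end, the child blocks on the other until data arrives.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                      /* fd[0]: read end, fd[1]: write end */

    if (fork() == 0) {             /* child: the reading process */
        char buf[32];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  /* blocks until data is written */
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        return 0;
    }

    close(fd[0]);                  /* parent: the writing process */
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(NULL);
    return 0;
}
```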

Interrupts and system calls

An interrupt occurs when, during program execution, some special event makes the CPU pause the current program and run code to handle that event; once the event has been dealt with, execution returns to the interrupted program. Interrupts are generally divided into three categories:

  • An interrupt caused by a hardware fault or abnormal condition in the computer is called an internal exception interrupt;

  • An interrupt caused by executing an interrupt-raising instruction in the program is called a soft interrupt (this is also the kind of interrupt involved in the system calls explained below);

  • An interrupt caused by a request from an external device is called an external interrupt. In short, an interrupt is how the machine deals with special events.

A concept closely tied to interrupts is the interrupt handler. When an interrupt occurs, the system must deal with it; interrupts are handled by specific functions inside the operating-system kernel, and those functions are what we call interrupt handlers.

Another closely related concept is interrupt priority. The interrupt priority indicates which interrupts the processor will still accept while it is already servicing one, and also reflects how urgently an interrupt needs handling. Each interrupt has a corresponding priority; while the processor is servicing an interrupt, only interrupts of higher priority are accepted and processed, and interrupts of lower priority than the one being serviced are ignored for the time being.

Typical interrupt priorities are as follows:

  • Machine error > clock > disk > network device > Terminal > software interrupt


Before discussing system calls, note the two levels at which a process can execute on the system: user level and kernel level, also known as user mode and kernel mode.

  • Programs generally execute in user mode, but when a program needs a service provided by the operating system, such as opening a device, creating a file, or reading and writing a file, it must issue a service request to the operating system; this is called a system call.

  • Linux provides a dedicated function library for accessing these operating-system services; it contains the interfaces the operating system exposes. When a process makes a system call, its running state changes from user mode to kernel mode, but the process itself is not doing the work at that moment: the kernel is performing the corresponding operations to fulfill the process's request.

  • The relationship between system calls and interrupts is this: when a process issues a system call request, a software interrupt is raised; the system then handles that software interrupt, during which the process is in kernel mode (a sketch follows).
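A small Linux sketch of the interface (illustrative): getpid() is the libc wrapper, while syscall() issues the same request number directly, trapping into the kernel.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* Both lines print the same PID: the wrapper and the raw system
       call reach the kernel through the same software-interrupt path. */
    printf("wrapper: %d\n", (int)getpid());
    printf("syscall: %d\n", (int)syscall(SYS_getpid));
    return 0;
}
```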

What is the difference between user mode and kernel mode?

  • A user-mode process can access its own instructions and data, but not kernel instructions and data (or the instructions and data of other processes);

  • A process in kernel mode can access both kernel and user addresses. Some machine instructions are privileged, and executing a privileged instruction in user mode causes an error. Within a system, the kernel is not a set of processes running in parallel with user processes; it runs on their behalf.

Three states of a process

  • Blocked state: waiting for some event to complete;

  • Ready state: waiting for the system to allocate a processor so it can run;

  • Running state: it holds a processor and is executing.


Running → blocked: usually caused by waiting for a peripheral, waiting for resource allocation such as main memory, or waiting for human intervention.
Blocked → ready: the awaited condition has been satisfied; the process can run once it is allocated a processor.
Running → ready: the running process leaves the processor not of its own accord but for external reasons, such as its time slice expiring or a higher-priority process preempting the processor.
Ready → running: the system selects a process from the ready queue according to some policy and gives it the processor, making it running.

Process scheduling

Scheduling type

  • High-level scheduling, also called job scheduling: decides which waiting jobs are brought into memory to run;

  • Low-level scheduling, also called process scheduling: decides which process in the ready queue gets the CPU;

  • Intermediate-level scheduling: introduced with virtual storage; swaps processes between memory and the external swap area.

Non preemptive scheduling and preemptive scheduling

  • Non-preemptive: once the dispatcher allocates the processor to a process, the process keeps running until it finishes or blocks on some event; only then is the processor given to another process.

  • Preemptive: the scheduling mode in which the operating system forcibly suspends the running process and the scheduler assigns the CPU to another ready process.

Design of scheduling strategy

  • Response time: the time from user input to the system's reaction.

  • Turnaround time: the time from task start to task completion.

CPU tasks can be divided into interactive tasks and batch tasks. The ultimate goal of scheduling is to use the CPU sensibly so that interactive tasks respond quickly enough that users feel no delay, while the turnaround time of batch tasks stays as short as possible to reduce users' waiting.

Scheduling algorithms

FIFO or first come, first served (FCFS)

  • The order of scheduling is the order in which tasks arrive at the ready queue.

  • Fair, simple (a FIFO queue), non-preemptive, not suited to interactive tasks.

  • Because it ignores task characteristics, the average waiting time can be long.

Shortest Job First (SJF)

  • The shortest job (smallest CPU burst length) is scheduled first.

  • SJF guarantees the minimum average waiting time (see the worked example below).
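A quick worked example (burst lengths chosen purely for illustration): four jobs arrive together with CPU bursts of 6, 8, 7 and 3 time units. Under FCFS in that order the waiting times are 0, 6, 14 and 21, averaging (0 + 6 + 14 + 21) / 4 = 10.25. Under SJF the jobs run in order 3, 6, 7, 8, the waiting times become 0, 3, 9 and 16, and the average drops to 28 / 4 = 7.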

Shortest Remaining Job First (SRJF)

  • The preemptive version of SJF; it performs even better than SJF.

  • For SJF/SRJF: how do we know the size of the next CPU burst? It is predicted from history by exponential averaging: τₙ₊₁ = α·tₙ + (1 − α)·τₙ.

Priority scheduling

  • Each task is associated with a priority, and the task with the highest priority is scheduled.

  • Note: a task with very low priority may stay ready forever and never run, a phenomenon called "starvation".

Round-Robin (RR)

  • Set a time slice and rotate through the tasks slice by slice (the round-robin algorithm).

  • Advantages: regular responses and short waiting times. Disadvantage: more context switches;

  • If the time slice is too small, context switches dominate, throughput drops, and turnaround time grows; if the time slice is too large, response time suffers and the algorithm degenerates into FCFS.

Multilevel queue scheduling

  • Establish multiple process queues according to certain rules

  • Different queues have fixed priority (high priority has preemptive power)

  • Different queues can give different time slices and adopt different scheduling methods

  • Problem 1: it cannot distinguish I/O-bound tasks from CPU-bound tasks;

  • Problem 2: a degree of "starvation" can still occur;

Multilevel feedback queue

  • On the basis of multi-level queues, tasks can be moved between queues to distinguish tasks more carefully.

  • Tasks can be moved between queues based on how much CPU time they have already "enjoyed", which prevents "starvation".

  • This is the most common scheduling algorithm; most operating systems, such as UNIX and Windows, use it or a variant of it.

Multi level feedback queue scheduling algorithm description:


  • When a process arrives and waits to be scheduled, it first enters Q1, the queue with the highest priority.

  • Processes in higher-priority queues are scheduled first; only when a higher-priority queue has no waiting process are processes in the next queue scheduled. For example, with queues Q1, Q2 and Q3, Q2 is scheduled only when no process is waiting in Q1, and Q3 only when both Q1 and Q2 are empty.

  • Processes within the same queue are scheduled round-robin. For example, if Q1's time slice is n, a job in Q1 that has not finished after n time slices moves down to Q2 to wait; if it still cannot finish within Q2's time slices, it moves to the next queue, and so on until it completes.

  • If a new job arrives while a process from a low-priority queue is running, then after the current time slice the CPU is immediately given to the newly arrived job (preemption).

A simple example
Suppose the system has three feedback queues Q1, Q2 and Q3, with time slices of 2, 4 and 8 respectively. Three jobs J1, J2 and J3 arrive at times 0, 1 and 3, and need 3, 2 and 1 time slices of CPU time respectively.

  • Time 0: J1 arrives and enters Q1, then runs for one time slice; its slice allotment is not yet used up when J2 arrives.

  • Time 1: J2 arrives. Since J1 still holds the processor within its time-slice allotment, J2 waits. After J1 runs one more slice, it has used up Q1's limit of 2 slices, so J1 is placed in Q2 to await scheduling and the processor is assigned to J2.

  • Time 2: J1 enters Q2 to wait for scheduling; J2 gets the CPU and starts running.

  • Time 3: J3 arrives. Because J2's time slice is not yet used up, J3 waits in Q1 for dispatch, and J1 waits in Q2.

  • Time 4: J2 finishes. J3 and J1 are both waiting, but J3's queue (Q1) has higher priority than J1's (Q2), so J3 is scheduled and J1 keeps waiting in Q2.

  • Time 5: J3 completes after one time slice.

  • Time 6: with Q1 empty, the job in Q2 (J1) is scheduled onto the processor. After one more time slice it completes, and the whole schedule ends.

Critical resources and critical areas

  • In the operating system, a process is the smallest unit that owns resources (a thread can access all the resources of its process, but owns none itself, or only the few it strictly needs). However, some resources can be used by only one process at a time; resources that can be occupied by only one process at a time are the so-called critical resources. Typical examples are a physical printer, or variables and data on disk or in memory that several processes share (if such shared data were not protected as a critical resource, data loss would be likely).

  • Access to a critical resource must be mutually exclusive. In other words, while the resource is occupied, any other process that requests it is blocked until the resource it requested is released. The code in a process that accesses a critical resource is called the critical section.

Semaphore

A semaphore is a pair (s, q), where s is an integer variable with a non-negative initial value and q is a queue that is initially empty; the integer s represents the number of units of some resource in the system:

  • when s ≥ 0, its value is the number of units of the resource currently available in the system;

  • when s < 0, its absolute value is the number of processes blocked waiting for that resource.

Apart from setting its initial value, the value of a semaphore may only be changed by the P and V operations; the operating system uses the semaphore's state to manage processes and resources.

P operation

The P operation is written P(s), where s is a semaphore. Executing it performs the following actions:

  • s.value = s.value - 1; /* take one unit of the resource; if none is free, one unit is "owed" */

If s.value ≥ 0 the process continues; otherwise (s.value < 0) the process blocks and is inserted into s.queue, the waiting queue of semaphore s.

  • The P operation can be understood as a counter that allocates a resource, or as a control instruction that makes the process wait.

V operation

The V operation is written V(s), where s is a semaphore. Executing it performs the following actions:

  • s.value = s.value + 1; /* return one unit of the resource; if units were "owed", this repays one debt */

If s.value > 0 the process simply continues; otherwise, the first process is removed from s.queue, the waiting queue of semaphore s, and made ready, and the original process then continues to execute.

The V operation can be understood as a counter that returns a resource, or as a control instruction that wakes a waiting process and makes it ready. A sketch of both operations follows.
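A sketch of P and V matching the (s, q) definition above, built on a pthread mutex and condition variable (the extra wakeups counter is an implementation detail of this construction, not part of the original definition; it hands each V wakeup to exactly one waiter):

```c
#include <pthread.h>

typedef struct {
    int value;           /* >= 0: free resources; < 0: |value| processes blocked */
    int wakeups;         /* wakeups granted by V but not yet consumed */
    pthread_mutex_t m;
    pthread_cond_t  q;   /* stands in for the waiting queue s.queue */
} Sem;

void sem_setup(Sem *s, int initial) {
    s->value = initial;
    s->wakeups = 0;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->q, NULL);
}

void P(Sem *s) {
    pthread_mutex_lock(&s->m);
    s->value--;                      /* take one resource, possibly "owing" one */
    if (s->value < 0) {
        while (s->wakeups == 0)      /* block until a V hands us a wakeup */
            pthread_cond_wait(&s->q, &s->m);
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->m);
}

void V(Sem *s) {
    pthread_mutex_lock(&s->m);
    s->value++;                      /* return one resource */
    if (s->value <= 0) {             /* someone was blocked: wake exactly one */
        s->wakeups++;
        pthread_cond_signal(&s->q);
    }
    pthread_mutex_unlock(&s->m);
}
```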

I/O multiplexing

I/O multiplexing means the kernel notifies a process as soon as it finds that one or more of the I/O conditions the process specified are ready (for example, readable). I/O multiplexing is applicable in the following situations:

  • When a client handles multiple descriptors at once (typically interactive input plus a network socket), I/O multiplexing must be used.

  • It is possible, though rare, for a client to handle multiple socket connections at the same time.

  • If a TCP server must handle both its listening socket and its connected sockets, I/O multiplexing is generally used.

  • If a server must handle both TCP and UDP, I/O multiplexing is usually used.

  • If a server must handle multiple services or multiple protocols, I/O multiplexing is generally used.

  • Compared with multi-process and multi-thread designs, the biggest advantage of I/O multiplexing is that the system does not have to create or maintain extra processes/threads, which greatly reduces overhead (a minimal select() sketch follows).
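A minimal select() sketch (illustrative; a real server would add its listening and connected sockets to the set alongside stdin):

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void) {
    fd_set readfds;
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);   /* watch a single descriptor: stdin */

    /* select() blocks until a watched descriptor becomes readable
       or the 5-second timeout expires. */
    int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
    if (ready > 0 && FD_ISSET(STDIN_FILENO, &readfds))
        printf("stdin is ready to read\n");
    else if (ready == 0)
        printf("timed out with nothing to read\n");
    return 0;
}
```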
