A thread is the smallest unit of task scheduling: a lightweight flow of execution that belongs to a process. Like a process, a thread has three states: running, ready, and blocked. I think of the thread as the real executor of work in a process, while the process provides the memory space, CPU time, program counter, and registers that its threads use.
Why threads exist
Processes cannot share memory space with one another, but some applications need shared memory. Multiple threads within the same process, however, do share the process's memory space. For the same reason, creating a thread is much faster than creating a process.
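A small sketch of the shared-memory point, using Python's `threading` module (the variable names here are my own): two threads update the same `counter` variable, because threads in one process see the same memory. A lock serializes the read-modify-write so no updates are lost.

```python
# Two threads incrementing one shared counter: threads in the same
# process share memory, so both see the same `counter` variable.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # serialize the read-modify-write on shared memory
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: both threads updated the same memory
```

Doing the same with two *processes* would leave each with its own private copy of `counter` unless explicit shared memory or IPC were set up.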
Another point that is often misunderstood: threads do not speed up the execution of a program. Multithreading cannot make the code itself run faster; it only makes fuller use of the CPU, which gives us the illusion of greater speed. For example, while one thread is blocked waiting for I/O, another thread can run instead of idling until the I/O completes.
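The I/O-overlap effect can be demonstrated with a sketch in which `time.sleep` stands in for a blocking I/O call (an assumption made just for this demo). Five "I/O operations" of 0.2 s each would take about 1 s sequentially, but overlapped in threads they finish in roughly 0.2 s of wall-clock time, with no thread computing any faster.

```python
# Threads do not make CPU work faster; they overlap waiting.
# time.sleep stands in for a blocking I/O call.
import threading
import time

def fake_io():
    time.sleep(0.2)   # pretend to wait on I/O

start = time.monotonic()
threads = [threading.Thread(target=fake_io) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# Sequentially this would take ~1.0 s; overlapped, ~0.2 s.
print(f"{elapsed:.2f}s")
```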
Here we should keep in mind that on a single CPU, only one process can run at any given moment, and within that process only one thread can run.
User threads and kernel threads
There are two kinds of threads: kernel threads and user threads.
When a kernel thread is created in a process, the creation traps into the kernel, and the thread gets its own share of CPU time, registers, and program counter. The kernel keeps a thread table, just as it keeps a process table, so the kernel is aware of each thread and participates directly in thread scheduling.
The state of a user thread is stored inside the process, which has a dedicated runtime system for scheduling its own threads. Each process can therefore implement its own scheduling algorithm, which makes programs more flexible. For example, when a thread is about to do something that may block, it notifies the runtime system, and the runtime system consults the thread table inside the process to decide which thread to run next.
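A minimal sketch of such an in-process "runtime system", using Python generators to play the role of user threads (the names `scheduler` and `job` are my own, not from the book). Each `yield` is the point where a thread voluntarily hands control back to the runtime system, which keeps a ready queue and schedules round-robin.

```python
# A user-space "runtime system": round-robin scheduling over generators.
# `yield` marks where a task voluntarily returns control to the scheduler.
from collections import deque

def scheduler(tasks):
    ready = deque(tasks)              # the ready queue: our "thread table"
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # run the task until it yields
            ready.append(task)        # still runnable: requeue it
        except StopIteration:
            pass                      # task finished; drop it
    return trace

def job(name, steps):
    for i in range(steps):
        yield f"{name}{i}"            # cooperative yield point

result = scheduler([job("A", 2), job("B", 2)])
print(result)  # ['A0', 'B0', 'A1', 'B1']
```

All of this runs in user space: no trap into the kernel occurs when control passes from one task to another.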
However, because the kernel does not participate in scheduling user threads, a thread must voluntarily give up the CPU, which gives user threads a great deal of power. If a thread does something that blocks without notifying the runtime system, the whole process stays blocked, even if other threads in the process are ready to run.
The above sounds very similar to the coroutines that are popular today. Indeed, I think a coroutine is essentially a user thread: it yields control cooperatively from its own stack inside the process, which handles concurrency effectively.
Clearly, both kernel threads and user threads have their own advantages and disadvantages.
Kernel threads are scheduled directly by the kernel and can run in parallel on a multiprocessor system. Because the kernel does the scheduling, a thread cannot hold the CPU indefinitely and starve the others. But every scheduling decision traps into the kernel, and that cost is high.
User threads are scheduled by the runtime system inside the process. All thread state lives in the process's own memory, so switching threads requires no trap into the kernel, no kernel context switch, and no cache flush; such scheduling is very fast. The scheduling algorithm is also implemented by the process itself, which makes it highly customizable. However, because user threads must yield voluntarily, one blocking thread can keep ready threads from ever running. And since every thread's state is kept in the process's memory, a large number of threads can consume quite a lot of it.
Since kernel threads and user threads each have their advantages, the two can be combined: multiplex many user threads onto a smaller number of kernel threads, so that the kernel threads carry and control the user threads.
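The multiplexing idea can be sketched with Python's `ThreadPoolExecutor`: its workers are kernel threads, and the queue of submitted tasks plays the role of the (many) user-level units of work mapped onto the (few) kernel threads. This is only an analogy for the M:N model, not the book's mechanism.

```python
# Multiplex many small tasks onto a few kernel threads.
from concurrent.futures import ThreadPoolExecutor
import threading

def task(i):
    # Return the result and the name of the kernel thread that ran it.
    return (i * i, threading.current_thread().name)

with ThreadPoolExecutor(max_workers=2) as pool:   # 2 kernel threads...
    results = list(pool.map(task, range(8)))      # ...carry 8 tasks

squares = [r[0] for r in results]
workers = {r[1] for r in results}
print(squares)        # [0, 1, 4, 9, 16, 25, 36, 49]
print(len(workers))   # at most 2 distinct kernel threads did all the work
```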
Scheduler activation mechanism
The goal of scheduler activations is to simulate the functionality of kernel threads while keeping the better performance and greater flexibility of threads implemented purely in user space.
The runtime system of the process assigns threads to processors. When the kernel learns that a thread has blocked, it notifies the runtime system by starting it up (an upcall), letting the runtime system decide how to schedule its own threads.
Suppose a hardware interrupt occurs while a user thread is running, and the CPU enters kernel mode. If the process is interested in the interrupt, for example because one of its threads is waiting on it, the state of the interrupted thread is saved on its stack and the runtime system then chooses which thread to schedule. If the process has no interest in the interrupt, the interrupted thread is simply resumed.
This article is a summary of my reading of Modern Operating Systems and my understanding of it, recorded so I can review it in the future. At the same time, I would like to share this knowledge with you. My own level is limited, so I would be grateful for any corrections.