I recently learned some scheduling strategies in my OS course and found them fascinating. Revisiting the scheduling problems from algorithms class, I noticed some common patterns.
- Problem: n tasks with processing times t1, t2, …, tn and deadlines d1, d2, …, dn, and only one machine
- Objective: minimize the maximum lateness of any single task (lateness is defined as how far a task's completion time runs past its deadline)
- Solution: process tasks in order of deadline, earliest first
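The earliest-deadline-first rule above can be sketched in a few lines; the job tuples and function name here are just illustrative:

```python
def min_max_lateness(jobs):
    """jobs: list of (processing_time, deadline).
    Returns (schedule, maximum lateness) under earliest-deadline-first."""
    order = sorted(jobs, key=lambda j: j[1])  # sort by deadline, earliest first
    t, max_late = 0, 0
    for p, d in order:
        t += p                          # finish time of this job
        max_late = max(max_late, t - d) # lateness = finish time - deadline
    return order, max_late

# Two jobs: (2 units, due at 2) and (3 units, due at 4).
# EDF runs them in that order; the second finishes at 5, so lateness is 1.
print(min_max_lateness([(2, 2), (3, 4)]))  # → ([(2, 2), (3, 4)], 1)
```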
- Problem: n tasks, each with a start time and an end time, possibly overlapping; only one machine
- Objective: complete as many tasks as possible
- Solution: sort by end time and, going front to back, add each task to the selected set if it does not conflict with what is already chosen
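A minimal sketch of this earliest-finish-time greedy (interval tuples and names are my own, not from any particular textbook API):

```python
def max_tasks(intervals):
    """intervals: list of (start, end).
    Greedily select compatible tasks by earliest finish time."""
    chosen = []
    last_end = float('-inf')
    for s, e in sorted(intervals, key=lambda iv: iv[1]):  # earliest end first
        if s >= last_end:        # compatible with everything chosen so far
            chosen.append((s, e))
            last_end = e
    return chosen

print(len(max_tasks([(1, 4), (3, 5), (0, 6), (5, 7), (8, 11), (12, 16)])))  # → 4
```

Sorting by end time works because finishing early leaves the single machine free for as many later tasks as possible.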
- Problem: n tasks, each with a start time and an end time, possibly overlapping; multiple processors available
- Objective: complete all tasks with the minimum number of processors
- Solution: go through the tasks by start time, front to back; assign each task to a free processor if one exists, otherwise create a new processor
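One common way to sketch this interval-partitioning greedy is with a min-heap of processor finish times (the heap is my implementation choice, not part of the problem statement):

```python
import heapq

def min_processors(intervals):
    """intervals: list of (start, end).
    Returns the minimum number of processors needed to run all tasks."""
    ends = []  # min-heap of finish times, one entry per busy processor
    for s, e in sorted(intervals):       # scan by start time
        if ends and ends[0] <= s:        # earliest-finishing processor is free
            heapq.heapreplace(ends, e)   # reuse it for this task
        else:                            # conflict: open a new processor
            heapq.heappush(ends, e)
    return len(ends)

# Four staggered tasks where at most three overlap at any moment.
print(min_processors([(0, 3), (1, 4), (2, 5), (3, 6)]))  # → 3
```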
In real CPU scheduling, the main goal is to optimize average response time, and the core idea is similar: shortest job first, or shortest remaining time (earliest finish time) first.
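To see why shortest-job-first helps the average, here is a toy calculation, assuming for simplicity that all jobs arrive at time 0 and run non-preemptively:

```python
def avg_completion_time(burst_times, shortest_first=True):
    """Average completion time when all jobs arrive at time 0.
    With shortest_first=True this is non-preemptive shortest-job-first."""
    order = sorted(burst_times, reverse=not shortest_first)
    t, total = 0, 0
    for b in order:
        t += b        # this job completes at time t
        total += t
    return total / len(burst_times)

# Bursts of 1, 2, 3: SJF averages (1+3+6)/3, longest-first (3+5+6)/3.
print(avg_completion_time([3, 1, 2]))                       # → 3.33…
print(avg_completion_time([3, 1, 2], shortest_first=False)) # → 4.66…
```

Running short jobs first means every long job delays fewer waiters, which is exactly why the average drops.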
Linux's actual implementation is a multi-level trade-off: priority queues plus round-robin. Within a queue of the same priority, short tasks get finished first before the scheduler turns to the long-running ones.
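As a rough illustration of the multilevel-feedback-queue idea (this is a textbook-style sketch with made-up quanta, not how Linux's scheduler is actually coded):

```python
from collections import deque

def mlfq(jobs, quanta=(1, 2, 4)):
    """Multilevel feedback queue sketch. jobs: dict {name: burst_time}.
    Each level has a larger time slice; a job that uses up its slice
    without finishing is demoted one level. Returns completion order."""
    levels = [deque() for _ in quanta]
    for name, burst in jobs.items():
        levels[0].append([name, burst])   # everyone starts at top priority
    done = []
    while any(levels):
        for i, queue in enumerate(levels):
            if queue:
                job = queue.popleft()
                job[1] -= min(quanta[i], job[1])  # run for one time slice
                if job[1] == 0:
                    done.append(job[0])
                else:  # slice exhausted: demote to a lower-priority level
                    levels[min(i + 1, len(levels) - 1)].append(job)
                break  # always resume from the highest non-empty level
    return done

# A 1-unit job finishes in its first slice; a 3-unit job gets demoted once.
print(mlfq({'short': 1, 'long': 3}))  # → ['short', 'long']
```

The key design choice is that the scheduler never has to know burst times in advance: behavior (using up the slice) reveals which jobs are long.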
Isn’t life the same? First sort out your own priorities and divide them into a few levels; then keep the tasks within each level balanced and clear: a bit of this now, a bit of that later… hhh
You may also want to keep your task count under control. After all, your CPU is expensive to burn, and your multiprocessing ability really isn’t that strong.