After reading this article, I’m no longer afraid of the interviewer asking me about thread pool

Time: 2021-7-28


1. Why do we need a thread pool

In practice, threads are expensive system resources; if they are not managed carefully, they can easily cause system problems. For this reason, most concurrency frameworks use thread pools to manage threads. The main advantages of managing threads with a thread pool are:

  • 1. Existing threads can be reused to execute new tasks, avoiding the cost of thread creation and destruction
  • 2. Because the creation and destruction overhead is eliminated, system response time improves
  • 3. The thread pool allows threads to be managed sensibly, adjusting the number of runnable threads to the capacity of the system

2. Working principle

Flow chart:


The process by which the thread pool executes a submitted task:

▪ 1. Suppose we set the core pool size to 30: whether or not there are user connections, the pool always maintains 30 threads. This is the number of core threads. It does not have to be 30 — set it according to your business needs and expected concurrency. When a task is submitted, the pool first checks whether every thread in the core pool is busy executing a task. If not, a new thread is created to execute the submitted task. If all core threads are busy, go to step 2;

▪ 2. If all 30 core threads are occupied, check whether the blocking queue is full. If not, put the submitted task into the blocking queue to wait for execution; otherwise, go to step 3;

▪ 3. Check whether every thread in the pool (up to the maximum pool size) is executing a task. If not, create a new thread to execute the task; otherwise, hand the task to the saturation strategy, also called the rejection policy, which we introduce in detail later.

Note: the number of core threads and the maximum pool size are two different concepts. The number of core threads is the number of ordinary threads the pool keeps alive, while the maximum pool size is the largest number of threads the pool may create. By analogy: in our village every household has a well, and half a well of water is enough to sustain daily life — that half corresponds to the core threads. The full capacity is the most water the well can hold; beyond that it overflows. That full capacity is like the maximum pool size. Hopefully this makes the distinction easier to grasp.
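The three-step decision flow above can be sketched as plain Java. This is an illustrative simplification only — the class and method names below are invented for this example and this is not the real ThreadPoolExecutor source:

```java
// Simplified sketch of the task-submission decision flow described above.
// Illustrative only; the real logic lives in ThreadPoolExecutor#execute.
public class SubmitFlowSketch {
    public static String decide(int runningThreads, int corePoolSize,
                                int queuedTasks, int queueCapacity,
                                int maximumPoolSize) {
        if (runningThreads < corePoolSize) {
            return "create core thread";      // step 1: core pool not yet full
        } else if (queuedTasks < queueCapacity) {
            return "enqueue task";            // step 2: blocking queue has room
        } else if (runningThreads < maximumPoolSize) {
            return "create non-core thread";  // step 3: grow up to the maximum
        } else {
            return "apply rejection policy";  // step 3: pool fully saturated
        }
    }
}
```

For example, with 30 core threads, a queue of capacity 100, and a maximum of 60 threads, the 30th busy thread sends new tasks to the queue, and only once the queue is full do extra threads get created.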

3. Classification of thread pools

1. newCachedThreadPool: creates a thread pool that creates new threads as needed but reuses previously constructed threads when they are available, creating new threads with the provided ThreadFactory when needed

Features:

(1) The number of threads in the pool is not fixed and can reach the maximum (Integer.MAX_VALUE = 2147483647)

(2) Threads in the pool can be reused and are reclaimed after a default idle timeout of 1 minute (60 seconds)

(3) When there are no available threads in the thread pool, a new thread will be created

2. newFixedThreadPool: creates a reusable thread pool with a fixed number of threads that run off a shared unbounded queue. At any point, at most nThreads threads are actively processing tasks. If additional tasks are submitted while all threads are active, they wait in the queue until a thread becomes available. If any thread terminates due to a failure during execution before shutdown, a new thread takes its place to execute subsequent tasks (if needed). Threads in the pool exist until the pool is explicitly shut down

Features:

(1) The pool holds a fixed number of threads, which gives good control over the degree of concurrency

(2) Threads can be reused and exist until the pool is explicitly shut down

(3) When more tasks are submitted than there are threads, the extra tasks wait in the queue

3. newSingleThreadExecutor: creates an executor that uses a single worker thread operating off an unbounded queue. (Note that if this single thread terminates due to a failure during execution before shutdown, a new thread takes its place to execute subsequent tasks if needed.) Tasks are guaranteed to execute sequentially, and no more than one task is active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1), the returned executor is guaranteed not to be reconfigurable to use additional threads

Features:

(1) At most one thread executes in the pool; subsequently submitted tasks wait in the queue for execution

4. newSingleThreadScheduledExecutor: creates a single-threaded executor that can schedule commands to run after a given delay or to execute periodically

Features:

(1) At most one thread executes in the pool; submitted tasks execute sequentially from the queue

(2) Tasks can be scheduled with a delay or executed periodically

5. newScheduledThreadPool: creates a thread pool that can schedule commands to run after a given delay or to execute periodically

Features:

(1) The specified number of threads is kept in the pool even when they are idle

(2) Tasks can be scheduled with a delay or executed periodically

6. newWorkStealingPool: creates a work-stealing thread pool with the given parallelism level, which determines the maximum number of threads executing simultaneously. If the parallelism argument is omitted, it defaults to the number of processors on the current system

If you look up the Executors class in your IDE, you can find all of the factory methods described above
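Since newWorkStealingPool gets no worked example later in this article, here is a minimal self-contained sketch of using it (the class name and the sumOfSquares helper are invented for this demo):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WorkStealingDemo {
    // Sums the squares of 1..n, computing each square as a separate task.
    public static int sumOfSquares(int n) throws Exception {
        // Parallelism defaults to the number of available processors
        ExecutorService pool = Executors.newWorkStealingPool();
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            final int k = i;
            futures.add(pool.submit(() -> k * k)); // Callable task
        }
        int sum = 0;
        for (Future<Integer> f : futures) {
            sum += f.get(); // wait for each result
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(4)); // 1 + 4 + 9 + 16 = 30
    }
}
```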


4. Specific implementation of the thread pool: ThreadPoolExecutor


Task utility class:

public class Task implements Runnable{
    @Override
    public void run() {
        try {
            //Sleep for 1 second
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        //Output thread name
        System.out.println(Thread.currentThread().getName()+"-------running");
    }
}

4.1 newCachedThreadPool

Source code implementation:

public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

Case:

public class CacheThreadPoolDemo {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newCachedThreadPool();
        for (int i = 0; i < 20; i++) {
            //Submit task
            executorService.execute(new Task());
        }
        //Start an orderly shutdown in which previously submitted tasks will be executed, but no new tasks will be accepted
        executorService.shutdown();
    }
}

Result output:

From start to finish, 20 different threads (pool-1-thread-1 to pool-1-thread-20) are created and used:

pool-1-thread-2-------running
pool-1-thread-6-------running
pool-1-thread-1-------running
pool-1-thread-3-------running
pool-1-thread-5-------running
pool-1-thread-4-------running
pool-1-thread-7-------running
pool-1-thread-11-------running
pool-1-thread-9-------running
pool-1-thread-10-------running
pool-1-thread-17-------running
pool-1-thread-15-------running
pool-1-thread-18-------running
pool-1-thread-16-------running
pool-1-thread-8-------running
pool-1-thread-20-------running
pool-1-thread-13-------running
pool-1-thread-19-------running
pool-1-thread-14-------running
pool-1-thread-12-------running

4.2 newFixedThreadPool

Source code implementation:

public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

Case:

public class FixedThreadPoolDemo {
    public static void main(String[] args) {
        //Create a thread pool that allows up to five threads to execute
        ExecutorService executorService = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 20; i++) {
            //Submit task
            executorService.execute(new Task());
        }
        //Start an orderly shutdown in which previously submitted tasks will be executed, but no new tasks will be accepted
        executorService.shutdown();
    }
}

Output results:

We can see the same five threads (pool-1-thread-1 to pool-1-thread-5) being reused; at most five threads execute at any one time:

pool-1-thread-4-------running
pool-1-thread-2-------running
pool-1-thread-1-------running
pool-1-thread-3-------running
pool-1-thread-5-------running
pool-1-thread-4-------running
pool-1-thread-5-------running
pool-1-thread-3-------running
pool-1-thread-2-------running
pool-1-thread-1-------running
pool-1-thread-4-------running
pool-1-thread-2-------running
pool-1-thread-1-------running
pool-1-thread-3-------running
pool-1-thread-5-------running
pool-1-thread-4-------running
pool-1-thread-5-------running
pool-1-thread-2-------running
pool-1-thread-1-------running
pool-1-thread-3-------running

4.3 newSingleThreadExecutor

Source code implementation:

public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

Case:

public class SingleThreadPoolDemo {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 20; i++) {
            //Submit task
            executorService.execute(new Task());
        }
        //Start an orderly shutdown in which previously submitted tasks will be executed, but no new tasks will be accepted
        executorService.shutdown();
    }
}

Result output:

Every line of output comes from the single thread pool-1-thread-1:

pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running
pool-1-thread-1-------running

5. Implementation of the scheduled thread pool: ScheduledThreadPoolExecutor

5.1 newScheduledThreadPool

Case:

public class ScheduledThreadPoolDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(3);
        System.out.println(System.currentTimeMillis());
        // Run the task once, three seconds after submission
        scheduledExecutorService.schedule(new Runnable() {
            @Override
            public void run() {
                System.out.println("executed after a three-second delay");
                System.out.println(System.currentTimeMillis());
            }
        }, 3, TimeUnit.SECONDS);
        scheduledExecutorService.shutdown();
    }
}

Output results:

1606744468814
executed after a three-second delay
1606744471815

5.2 newSingleThreadScheduledExecutor

Case:

public class SingleThreadScheduledDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduledExecutorService = Executors.newSingleThreadScheduledExecutor();
        // Print an incrementing counter every second, starting immediately
        scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
            int i = 1;
            @Override
            public void run() {
                System.out.println(i);
                i++;
            }
        }, 0, 1, TimeUnit.SECONDS);
        // Deliberately not shut down, so the periodic task keeps running
    }
}

Output results: the integers 1, 2, 3, … are printed one per second, indefinitely, because the pool is never shut down

6. Thread pool lifecycle

Generally speaking, a thread pool has only two stable states — RUNNING and TERMINATED; the states in the middle of the diagram are transition states

RUNNING: accepts newly submitted tasks and also processes tasks in the blocking queue

SHUTDOWN: closed state; no longer accepts newly submitted tasks, but continues to process the tasks already saved in the blocking queue

STOP: can neither accept new tasks nor process tasks in the queue, and interrupts the threads processing tasks

TIDYING: all tasks have terminated and the workerCount (number of active threads) is 0. After entering this state, the pool calls the terminated() method and moves to the TERMINATED state

TERMINATED: entered after the terminated() method completes; the default terminated() implementation does nothing
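The states above can be observed from code via isShutdown() and isTerminated(). Here is a small sketch walking a pool from RUNNING through SHUTDOWN to TERMINATED (the class and method names are invented for this demo):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LifecycleDemo {
    // Walks a pool from RUNNING through SHUTDOWN to TERMINATED.
    public static boolean shutdownAndAwait() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.execute(() -> {});              // RUNNING: accepts tasks
        pool.shutdown();                     // RUNNING -> SHUTDOWN
        boolean shut = pool.isShutdown();    // true as soon as shutdown() returns
        // Waits for queued tasks to finish; pool then goes TIDYING -> TERMINATED
        boolean done = pool.awaitTermination(5, TimeUnit.SECONDS);
        return shut && done && pool.isTerminated();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(shutdownAndAwait()); // true
    }
}
```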

7. Creation of a thread pool

7.1 ThreadPoolExecutor constructor

public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler)

7.2 parameter description

corePoolSize: size of the core thread pool

maximumPoolSize: the maximum number of threads that the thread pool can create

keepAliveTime: idle thread lifetime

unit: the time unit for keepAliveTime

workQueue: the blocking queue used to hold tasks waiting to be executed

threadFactory: the factory used to create new threads

handler: saturation policy (rejection policy)
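Putting the seven parameters together, here is a sketch of constructing a pool directly (the class name and the particular sizes are arbitrary choices for this example):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    // Builds a pool with all seven constructor parameters spelled out.
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60L, TimeUnit.SECONDS,                 // keepAliveTime + unit
                new ArrayBlockingQueue<>(10),          // workQueue (bounded)
                Executors.defaultThreadFactory(),      // threadFactory
                new ThreadPoolExecutor.AbortPolicy()); // handler (rejection policy)
    }
}
```

Constructing the pool this way, rather than through the Executors factory methods, makes all of the sizing and rejection decisions explicit.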

8. Blocking queues

ArrayBlockingQueue:

ArrayBlockingQueue is a blocking queue implemented on top of an array. It maintains a fixed-length array to buffer the data objects in the queue and is a commonly used blocking queue. Besides the fixed-length array, ArrayBlockingQueue stores two integer variables that mark the positions of the queue's head and tail within the array.

ArrayBlockingQueue shares a single lock object between producers putting data and consumers taking data, which means the two cannot truly run in parallel — a notable difference from LinkedBlockingQueue. In principle, ArrayBlockingQueue could adopt separated locks to allow fully parallel producer and consumer operations. Doug Lea probably did not do this because the read and write operations of ArrayBlockingQueue are already light enough that introducing an independent locking mechanism would only add code complexity without any performance benefit.

Another obvious difference between ArrayBlockingQueue and LinkedBlockingQueue is that the former creates and destroys no extra object instances when inserting or removing elements, while the latter allocates an additional Node object for each element. In systems that process large volumes of data concurrently over long periods, this makes a measurable difference to GC pressure. When creating an ArrayBlockingQueue we can also choose whether its internal lock is fair; it is non-fair by default.

LinkedBlockingQueue:

Similar to ArrayBlockingQueue, this blocking queue based on a linked list also maintains a data buffer (a queue formed by a linked list). When a producer puts an item into the queue, the queue takes the item from the producer, caches it internally, and the producer returns immediately. Only when the buffer reaches its maximum capacity (which LinkedBlockingQueue lets you specify via the constructor) is the producer blocked, until a consumer consumes an item from the queue and the producer thread is woken up. Processing on the consumer side works on the same principle in reverse.

LinkedBlockingQueue handles concurrent data efficiently because it uses independent locks for producers and consumers to control data synchronization, meaning that under high concurrency producers and consumers can operate on the queue's data in parallel, improving the overall concurrency of the queue.

DelayQueue:

An element in a DelayQueue can only be taken from the queue once its specified delay has expired. DelayQueue has no size limit, so the inserting operation (the producer) never blocks; only the retrieving operation (the consumer) can block.

Usage scenario:

DelayQueue is rarely used, but it is quite ingenious. A common example is using a DelayQueue to manage a queue of connections that have not responded within a timeout.
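To illustrate, here is a minimal DelayQueue sketch. DelayedTask is a made-up helper implementing the Delayed interface, and the delays are shortened for the demo; a real connection-timeout manager would put connection objects in the queue instead:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    // Minimal Delayed element: becomes available a fixed delay after creation.
    static class DelayedTask implements Delayed {
        final String name;
        final long readyAt;

        DelayedTask(String name, long delayMs) {
            this.name = name;
            this.readyAt = System.currentTimeMillis() + delayMs;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(readyAt - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static String firstReady() throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.put(new DelayedTask("slow", 300));
        queue.put(new DelayedTask("fast", 100));
        return queue.take().name; // blocks ~100 ms, then returns the soonest-ready element
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(firstReady()); // fast
    }
}
```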

PriorityBlockingQueue:

A priority-based blocking queue (the priority is determined by the Comparator passed to the constructor). Note, however, that PriorityBlockingQueue never blocks the producer; it blocks only the consumer, and only when there is no data to consume. So take special care when using it: producers must not produce data faster than consumers can consume it, or over time all available heap memory will be exhausted. PriorityBlockingQueue's internal lock for thread synchronization is a fair lock.

SynchronousQueue:

A waiting queue with no buffer, analogous to a direct transaction without a middleman. It is a bit like producers and consumers in a primitive society: the producer takes goods to market to sell directly to the final consumer, and consumers must go to the market in person to find the direct producer of the goods they want. If one side cannot find a suitable counterpart, then sorry, everybody waits at the market. Compared with a buffered BlockingQueue, there is no intermediate dealer (buffer zone). With a dealer, the producer wholesales goods to the dealer without worrying about whom the dealer eventually sells them to; since the dealer can hold stock, the dealer model generally achieves higher throughput than direct trading (goods can be sold in batches). On the other hand, introducing a dealer adds an extra hop between producer and consumer, which may worsen the response time for a single item.

A SynchronousQueue can be declared in two different ways, which produce different behavior: fair mode and non-fair mode. In fair mode, SynchronousQueue uses a fair lock together with a FIFO queue to block surplus producers and consumers, achieving an overall fairness strategy across the system.

In non-fair mode (the SynchronousQueue default), it uses a non-fair lock together with a LIFO stack to manage surplus producers and consumers. In this mode, if producers and consumers differ in processing speed, starvation is likely: some data may never be handled by a producer or consumer.
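The "no buffer" behavior is easy to demonstrate: put() blocks until a consumer arrives, and a non-blocking offer() with no waiting consumer fails outright. A small sketch (class and method names invented for the demo):

```java
import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {
    // put() blocks until a consumer is ready: a direct handoff with no buffer.
    public static String handoff() throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>(); // non-fair by default
        Thread producer = new Thread(() -> {
            try {
                queue.put("item"); // blocks until take() below accepts it
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String taken = queue.take(); // meets the producer directly
        producer.join();
        return taken;
    }

    public static void main(String[] args) throws InterruptedException {
        // offer() with no waiting consumer fails immediately: there is no buffer
        System.out.println(new SynchronousQueue<String>().offer("x")); // false
        System.out.println(handoff()); // item
    }
}
```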

Note:

Differences between ArrayBlockingQueue and LinkedBlockingQueue:

1. Lock implementation: in ArrayBlockingQueue the locks are not separated — production and consumption use the same lock; in LinkedBlockingQueue the locks are separated — putLock for production and takeLock for consumption.

2. Queue size initialization: ArrayBlockingQueue requires the queue size to be specified; LinkedBlockingQueue allows the size to be left unspecified, defaulting to Integer.MAX_VALUE.
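The sizing difference in point 2 can be shown directly (the class and method names are invented for this demo):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueDifferenceDemo {
    // ArrayBlockingQueue: capacity is mandatory; offer() fails once full.
    public static boolean thirdOfferSucceeds() {
        ArrayBlockingQueue<String> abq = new ArrayBlockingQueue<>(2); // non-fair lock by default
        abq.offer("a");
        abq.offer("b");
        return abq.offer("c"); // false: the bounded queue is full
    }

    // LinkedBlockingQueue: capacity optional, defaults to Integer.MAX_VALUE.
    public static int defaultLinkedCapacity() {
        return new LinkedBlockingQueue<String>().remainingCapacity();
    }

    public static void main(String[] args) {
        System.out.println(thirdOfferSucceeds());    // false
        System.out.println(defaultLinkedCapacity()); // 2147483647
    }
}
```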

9. Rejection policies

ThreadPoolExecutor.AbortPolicy (the default): discards the task and throws a RejectedExecutionException, so you know the task was rejected and can choose, according to your business logic, to retry or abandon the submission

ThreadPoolExecutor.DiscardPolicy: also discards the task, but throws no exception. This carries some risk: we do not find out at submission time that the task was discarded, which may cause data loss.

ThreadPoolExecutor.DiscardOldestPolicy: discards the task at the head of the queue — usually the one that has waited longest — and then retries submitting the current task (repeating this process as needed). It also carries some risk of data loss

ThreadPoolExecutor.CallerRunsPolicy: the task is executed by the calling thread itself
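To see the default AbortPolicy fire, a tiny pool (one worker, one queue slot) is enough — the third submission must be rejected. A sketch with invented names:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectDemo {
    public static boolean triggerAbort() {
        // 1 worker + 1 queue slot: the third task has nowhere to go
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        boolean rejected = false;
        try {
            for (int i = 0; i < 3; i++) {
                pool.execute(() -> {
                    try { Thread.sleep(200); } catch (InterruptedException ignored) {}
                });
            }
        } catch (RejectedExecutionException e) {
            rejected = true; // third task exceeds 1 worker + 1 queue slot
        } finally {
            pool.shutdownNow();
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(triggerAbort()); // true
    }
}
```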

10. The execute() and submit() methods

10.1 execute() method logic


  • If fewer threads are running than corePoolSize, a new thread is created to execute the new task;
  • If the number of running threads is equal to or greater than corePoolSize, the submitted task is stored in the blocking queue workQueue;
  • If the workQueue is full (and fewer than maximumPoolSize threads are running), a new thread is created to execute the task;
  • If the number of threads has reached maximumPoolSize, the saturation policy RejectedExecutionHandler is applied

10.2 submit() method

submit() is an extension of the base method Executor.execute(Runnable): it creates and returns a Future object that can be used to cancel execution and/or wait for completion.
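A minimal sketch of submit() returning a Future from a Callable (the class and method names are invented for this example):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static int square(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            // submit() accepts a Callable and returns a Future, unlike execute()
            Future<Integer> future = pool.submit(() -> n * n);
            return future.get(); // blocks until the result is available
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(square(6)); // 36
    }
}
```

The returned Future also offers cancel() and isDone(), which plain execute() cannot provide.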


11. Thread pool shutdown

  • A thread pool can be closed with the shutdown() and shutdownNow() methods
  • Principle: traverse all threads in the pool and interrupt them one by one
  • 1. shutdownNow first sets the pool state to STOP, then attempts to stop all threads — whether or not they are executing tasks — and returns the list of tasks that were still waiting to execute;
  • 2. shutdown merely sets the pool state to SHUTDOWN and then interrupts all threads that are not executing tasks
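The "returns the list of tasks waiting to be executed" part of shutdownNow() can be sketched as follows (names and timings invented for the demo; with one worker and three slow tasks, two tasks are still queued when the pool is stopped):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ShutdownDemo {
    public static int pendingAfterShutdownNow() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            });
        }
        Thread.sleep(100);                           // let the first task start running
        List<Runnable> pending = pool.shutdownNow(); // interrupts the running task
        return pending.size();                       // tasks that never left the queue
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(pendingAfterShutdownNow()); // 2
    }
}
```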

Three things to do after reading ❤️

If you found this article helpful, I'd like to ask you for three small favors:

  1. Like and share — your likes and comments are what drive me to keep writing.
  2. Follow my official account 『Rotten pig skin』, where I share original content from time to time.
  3. And look forward to the follow-up articles!
