Detailed explanation of six common thread pools in Java


  • Previously, we introduced the four rejection policies of thread pools and explained the meaning of each thread pool parameter. Today, let's talk about the common thread pools in Java, including the new ForkJoinPool added in JDK 7.
  • First, let's list the six thread pools in Java:
Thread pool name                 Description
FixedThreadPool                  The number of core threads equals the maximum number of threads
SingleThreadExecutor             A thread pool with a single thread
CachedThreadPool                 Zero core threads; the maximum number of threads is Integer.MAX_VALUE
ScheduledThreadPool              A scheduled thread pool with a specified number of core threads
SingleThreadScheduledExecutor    A scheduled thread pool with a single thread
ForkJoinPool                     A new thread pool added in JDK 7
  • To understand these thread pools, let's first familiarize ourselves with the relationships among the main classes. The class diagram of ThreadPoolExecutor and the main methods of Executors are shown below.

  • The class diagram above makes the discussion below easier to follow. We can see a core interface, ExecutorService, which is the base type implemented by our thread pools. Next, we will look at its implementation classes one by one.


  • FixedThreadPool's defining feature is that its core thread count equals its maximum thread count. We can see its implementation in Executors#newFixedThreadPool(int):
public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

We can see that the method actually creates a ThreadPoolExecutor, the core executor class of the thread pool framework. The nThreads parameter is passed in as both the core thread count and the maximum thread count, and the work queue is a linked-list-based blocking queue.

  • This kind of pool can be regarded as a thread pool with a fixed number of threads. Threads are created from zero only as tasks arrive after initialization, but once created they are not destroyed; they remain as resident threads. If you are unsure what the thread pool parameters mean, please refer to the previous article explaining the meaning of thread pool parameters.
  • The third and fourth parameters of this pool are effectively meaningless: they specify the keep-alive time of idle threads, but here all threads are resident and never destroyed. When all threads are busy, new tasks are added to the blocking queue, a linked-list-based queue whose default capacity is Integer.MAX_VALUE, making it effectively unbounded.
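To see the fixed pool in action, here is a minimal sketch. The class name `FixedThreadPoolDemo`, the pool size of 2, and the counter task are illustrative choices, not from the original article; the point is that a fixed set of resident threads drains an arbitrarily long queue of tasks.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedThreadPoolDemo {

    static int runTasks(int nThreads, int nTasks) throws InterruptedException {
        // nThreads resident threads handle all the tasks; surplus tasks
        // wait in the (effectively unbounded) LinkedBlockingQueue
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < nTasks; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();                            // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for queued tasks to drain
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed: " + runTasks(2, 10)); // prints "completed: 10"
    }
}
```

Even though ten tasks are submitted, only two threads ever exist; the other eight tasks simply wait in the queue.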


  • SingleThreadExecutor's feature is that both the core thread count and the maximum thread count are 1, so we can regard it as a single-thread pool. Its implementation is in Executors#newSingleThreadExecutor():
public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

    public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>(),
                                    threadFactory));
    }
  • In the code above, we find an overloaded variant that accepts a ThreadFactory. In development, a custom thread-creation factory can be passed in; if none is provided, the default thread factory is used.
  • We can see that the difference from FixedThreadPool is that both the core and maximum thread counts are set to 1; that is, no matter how many tasks are submitted, only one thread executes them.
  • If the thread is destroyed because of an exception during execution, the pool will create a new thread to run the subsequent tasks.
  • This pool is well suited to scenarios where all tasks must execute in the order in which they were submitted: it is single-threaded and serial.
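A minimal sketch of this serial behavior (the class name `SingleThreadExecutorDemo` and the list-building task are illustrative): since a single thread executes every task, results come out in exactly the submission order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleThreadExecutorDemo {

    static List<Integer> submitInOrder(int n) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Only the single pool thread ever touches this list, so no locking is needed
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            final int task = i;
            pool.execute(() -> order.add(task));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return order;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(submitInOrder(5)); // prints "[0, 1, 2, 3, 4]"
    }
}
```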


  • CachedThreadPool's feature is that its resident core thread count is 0; as the name suggests, all its threads are created temporarily, on demand. Its implementation is in Executors#newCachedThreadPool():
public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

    public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>(),
                                      threadFactory);
    }
  • From the code above, we can see that in CachedThreadPool the maximum thread count is Integer.MAX_VALUE, which means the number of threads can grow almost without limit.
  • Because the created threads are temporary, they will be destroyed when idle. The idle timeout here is 60 seconds: a thread that has had no task to execute for 60 seconds is destroyed.
  • Note that it uses a SynchronousQueue. Its capacity is 0, so it is only responsible for handing tasks off directly to threads, which is more efficient. Since the core thread count is 0, there would be no point in this queue storing tasks anyway.
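A minimal sketch of that hand-off behavior (the class name `CachedThreadPoolDemo` and the trick of parking tasks on a latch are illustrative): because the SynchronousQueue cannot hold a waiting task, every task submitted while all existing threads are busy forces the pool to create a new temporary thread.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class CachedThreadPoolDemo {

    static int threadsFor(int blockedTasks) throws InterruptedException {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        CountDownLatch release = new CountDownLatch(1);
        // Every task blocks on the latch, and the SynchronousQueue cannot queue
        // waiting tasks, so the pool must spawn one temporary thread per task
        for (int i = 0; i < blockedTasks; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        int created = pool.getPoolSize();
        release.countDown(); // unblock all tasks so the pool can wind down
        pool.shutdown();
        return created;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("threads created: " + threadsFor(5)); // prints "threads created: 5"
    }
}
```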


  • ScheduledThreadPool supports executing tasks after a delay or periodically. Its creation code, Executors#newScheduledThreadPool(int), is as follows:
public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
        return new ScheduledThreadPoolExecutor(corePoolSize);
    }

    public static ScheduledExecutorService newScheduledThreadPool(
            int corePoolSize, ThreadFactory threadFactory) {
        return new ScheduledThreadPoolExecutor(corePoolSize, threadFactory);
    }
  • We find that it calls the constructors of ScheduledThreadPoolExecutor. This class extends ThreadPoolExecutor and also implements the ScheduledExecutorService interface. We can see that all of its constructors call the parent ThreadPoolExecutor constructor:
public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }

    public ScheduledThreadPoolExecutor(int corePoolSize,
                                       ThreadFactory threadFactory) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue(), threadFactory);
    }

    public ScheduledThreadPoolExecutor(int corePoolSize,
                                       RejectedExecutionHandler handler) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue(), handler);
    }

    public ScheduledThreadPoolExecutor(int corePoolSize,
                                       ThreadFactory threadFactory,
                                       RejectedExecutionHandler handler) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue(), threadFactory, handler);
    }
  • From the code above, its creation is no different from the other thread pools, except that the task queue here is a DelayedWorkQueue; blocking queues will be the focus of our next article. Next, let's create a scheduled thread pool and take a look:
public static void main(String[] args) {
        ScheduledExecutorService service = Executors.newScheduledThreadPool(5);
        // 1. Execute once after a given delay
        service.schedule(() -> {
            System.out.println("schedule ==> yunqi code");
        }, 2, TimeUnit.SECONDS);

        // 2. Execute periodically at a fixed rate
        service.scheduleAtFixedRate(() -> {
            System.out.println("scheduleAtFixedRate ==> yunqi code");
        }, 2, 3, TimeUnit.SECONDS);

        // 3. Execute periodically with a fixed delay between runs
        service.scheduleWithFixedDelay(() -> {
            System.out.println("scheduleWithFixedDelay ==> yunqi code");
        }, 2, 3, TimeUnit.SECONDS);
    }
  • The code above simply creates a pool with newScheduledThreadPool and demonstrates its three core methods. Let's look at each of them in turn.

  • First, let's look at the first method, schedule. It has three parameters: the first is the task to run; the second, delay, is how long to wait before executing the task; and the third, unit, is the time unit of the delay. In the code above, the task is executed after a two-second delay.
public ScheduledFuture<?> schedule(Runnable command,
                                   long delay, TimeUnit unit);
  • The second method is scheduleAtFixedRate, shown below. It has four parameters: command is the task to execute, initialDelay is the delay before the first execution, period is the interval between the starts of successive executions, and the last parameter is the time unit. In the example code above, the task first runs after two seconds and then runs every three seconds.
public ScheduledFuture<?> scheduleAtFixedRate(Runnable command,
                                              long initialDelay,
                                              long period,
                                              TimeUnit unit);
  • The third method is scheduleWithFixedDelay, shown below. It is very similar to the method above: it also executes periodically, and its parameters have the same meanings. The main difference from scheduleAtFixedRate is the point from which the interval is measured.
public ScheduledFuture<?> scheduleWithFixedDelay(Runnable command,
                                                 long initialDelay,
                                                 long delay,
                                                 TimeUnit unit);
  • scheduleAtFixedRate measures the period from the time each task starts: when the period elapses, the next run is due, regardless of how long the task itself takes. scheduleWithFixedDelay starts timing from the point at which the task finishes, as shown below.
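The difference is easy to observe with a small sketch (the class name `RateVsDelayDemo`, the counters, and the 200 ms / 300 ms timings are illustrative). When each run takes longer than the configured interval, scheduleAtFixedRate fires back-to-back, while scheduleWithFixedDelay always waits the full delay after each run ends, so it fires noticeably less often.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RateVsDelayDemo {

    static final AtomicInteger rateRuns = new AtomicInteger();
    static final AtomicInteger delayRuns = new AtomicInteger();

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService service = Executors.newScheduledThreadPool(2);
        // Each run takes ~300 ms while the configured period/delay is only 200 ms.
        // scheduleAtFixedRate measures 200 ms from each run's START, so the next
        // run begins as soon as the previous one ends (about every 300 ms)
        service.scheduleAtFixedRate(() -> { rateRuns.incrementAndGet(); sleep(300); },
                0, 200, TimeUnit.MILLISECONDS);
        // scheduleWithFixedDelay measures 200 ms from each run's END,
        // so runs start about every 500 ms
        service.scheduleWithFixedDelay(() -> { delayRuns.incrementAndGet(); sleep(300); },
                0, 200, TimeUnit.MILLISECONDS);
        Thread.sleep(2000);
        service.shutdownNow();
        System.out.println("fixedRate=" + rateRuns.get() + ", fixedDelay=" + delayRuns.get());
    }
}
```

Over the two-second window, the fixed-rate task runs roughly every 300 ms and the fixed-delay task roughly every 500 ms.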


  • SingleThreadScheduledExecutor is very similar to ScheduledThreadPool; it is simply a special case of it with a single internal thread, obtained by setting ScheduledThreadPool's core thread count to 1, as shown in the source code:
public static ScheduledExecutorService newSingleThreadScheduledExecutor() {
        return new DelegatedScheduledExecutorService
            (new ScheduledThreadPoolExecutor(1));
    }
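A minimal usage sketch (the class name `SingleSchedulerDemo` and the elapsed-time task are illustrative). Note that schedule also accepts a Callable, so a delayed task can return a value through the ScheduledFuture:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SingleSchedulerDemo {

    static long runOnce() throws Exception {
        ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor();
        long start = System.nanoTime();
        // The Callable overload of schedule() lets the delayed task return a value:
        // here, the elapsed milliseconds at the moment it actually ran
        ScheduledFuture<Long> elapsed = service.schedule(
                () -> (System.nanoTime() - start) / 1_000_000L,
                200, TimeUnit.MILLISECONDS);
        long ms = elapsed.get(); // blocks until the delayed task has run
        service.shutdown();
        return ms;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("task ran after ~" + runOnce() + " ms");
    }
}
```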
  • Above, we introduced five common thread pools. We can compare them along three dimensions: core thread count, maximum thread count, and keep-alive time. This comparison helps commit these pools to memory.
               FixedThreadPool   SingleThreadExecutor  CachedThreadPool   ScheduledThreadPool  SingleThreadScheduledExecutor
corePoolSize   constructor arg   1                     0                  constructor arg      1
maxPoolSize    same as core      1                     Integer.MAX_VALUE  Integer.MAX_VALUE    Integer.MAX_VALUE
keepAliveTime  0                 0                     60                 0                    0


  • ForkJoinPool is a thread pool added in JDK 7. Its main feature is that it can make full use of multi-core CPUs: a task can be split into multiple subtasks that execute in parallel on different processors, and when the subtasks finish, their results are merged. This is the divide-and-conquer idea.
  • ForkJoinPool works just as its name says: step one, Fork, splits the task; step two, Join, merges the results. Let's take a look at its class diagram structure.

  • ForkJoinPool is likewise used by calling submit(ForkJoinTask task) or invoke(ForkJoinTask task) to execute a given task. The task type is ForkJoinTask, which represents a splittable, mergeable subtask. It is itself an abstract class and has two commonly used abstract subclasses, RecursiveAction and RecursiveTask: RecursiveTask represents a task with a return value, while RecursiveAction represents a task with no return value. Here is their class diagram:

  • Let's look at how to use the ForkJoinPool thread pool with a simple piece of code:
/**
 * @url:
 * @author: AnonyStar
 * @time: 2020/11/2 10:01
 */
public class ForkJoinApp1 {

    // Goal: print the numbers within 0-200, splitting the range into
    // segments, to test fork/join
    public static void main(String[] args) {
        // Create the thread pool
        ForkJoinPool joinPool = new ForkJoinPool();
        // Create the root task
        SubTask subTask = new SubTask(0, 200);
        // Submit the task
        joinPool.submit(subTask);
        // Block and wait so that all tasks can finish before the JVM exits
        try {
            joinPool.awaitTermination(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

class SubTask extends RecursiveAction {

    int startNum;
    int endNum;

    public SubTask(int startNum, int endNum) {
        this.startNum = startNum;
        this.endNum = endNum;
    }

    @Override
    protected void compute() {
        if (endNum - startNum < 10) {
            // If the range after splitting is smaller than 10,
            // stop splitting and print directly
            System.out.println(Thread.currentThread().getName()
                    + ": [startNum:" + startNum + ",endNum:" + endNum + "]");
        } else {
            // Take the middle value
            int middle = (startNum + endNum) / 2;
            // Recursively create two subtasks
            SubTask subTask = new SubTask(startNum, middle);
            SubTask subTask1 = new SubTask(middle, endNum);
            // fork() starts asynchronous execution of each subtask
            subTask.fork();
            subTask1.fork();
        }
    }
}

  • From the case above, we can see that many threads were created to do the work. The machine I tested on has 12 hardware threads, so 12 threads were actually created here, which shows that the processing capacity of every core is fully used.
  • In fact, the case above has a familiar flavor: it is the recursive thinking we have encountered before. We can visualize the case as follows to see it more intuitively.

  • The example above has no return value. Now let's look at a classic case with a return value. I believe you have heard of and are familiar with the Fibonacci sequence. Each term of this sequence equals the sum of the two preceding terms, for example: 0, 1, 1, 2, 3, 5, ..., f(n) = f(n-2) + f(n-1). That is, term 0 is 0, term 1 is 1, term 2 is 0 + 1 = 1, and so on. The straightforward solution uses recursion to compute the value of the nth term, as follows:
private int num(int num) {
        if (num <= 1) {
            return num;
        }
        num = num(num - 1) + num(num - 2);
        return num;
    }
  • As the simple code above shows, when n <= 1 we return n; when n > 1 we compute the value of the previous term, f(n-1), and the term before that, f(n-2), and add the two to get the result. This is a typical recursion problem, and it corresponds exactly to ForkJoin's working mode: the root node splits into subtasks, the subtasks split further, and finally the results are merged and aggregated.
  • We can implement the Fibonacci computation with ForkJoinPool, as shown below:
/**
 * @url:
 * @author: AnonyStar
 * @time: 2020/11/2 10:01
 */
public class ForkJoinApp3 {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ForkJoinPool pool = new ForkJoinPool();
        // The constructor argument is the index of the term to compute
        final ForkJoinTask<Integer> submit = pool.submit(new Fibonacci(20));
        // Get the result, i.e. the final result of the asynchronous task
        System.out.println(submit.get());
    }
}

class Fibonacci extends RecursiveTask<Integer> {

    int num;

    public Fibonacci(int num) {
        this.num = num;
    }

    @Override
    protected Integer compute() {
        if (num <= 1) return num;
        // Create the subtasks
        Fibonacci subTask1 = new Fibonacci(num - 1);
        Fibonacci subTask2 = new Fibonacci(num - 2);
        // Execute the subtasks asynchronously
        subTask1.fork();
        subTask2.fork();
        // Get the results of the two subtasks and return their sum
        return subTask1.join() + subTask2.join();
    }
}
  • ForkJoinPool can give full play to the advantages of multi-core processors and is especially suitable for recursive scenarios such as tree traversal and optimal-path search.
  • Having covered how to use ForkJoinPool, let's talk about its internal structure. Each of the thread pools above has a single queue shared by all threads. In ForkJoinPool, however, each thread has its own double-ended queue (deque). When a task in a thread is forked, the resulting subtasks are pushed onto that thread's own deque rather than a shared queue. This greatly reduces per-thread overhead: a thread can take tasks directly from its own queue without competing on a shared one, which effectively reduces contention and context switching between threads.
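The idiomatic way to exploit the local deque is to fork one half of a split and compute the other half directly, leaving the forked half available for idle workers to steal. A sketch under stated assumptions (the class name `WorkStealingDemo`, the 10,000-element threshold, and the summing workload are illustrative; `ForkJoinPool.getStealCount()` is a real method that returns an estimate of stolen tasks):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class WorkStealingDemo {

    // Sums the range [from, to) by splitting it in half; the forked half
    // lands on this worker's own deque, where idle workers can steal it
    static class SumTask extends RecursiveTask<Long> {
        final long from, to;

        SumTask(long from, long to) { this.from = from; this.to = to; }

        @Override
        protected Long compute() {
            if (to - from <= 10_000) {
                long s = 0;
                for (long i = from; i < to; i++) s += i;
                return s;
            }
            long mid = (from + to) / 2;
            SumTask left = new SumTask(from, mid);
            SumTask right = new SumTask(mid, to);
            left.fork();              // pushed onto this worker's deque (LIFO end)
            long r = right.compute(); // keep working locally in the meantime
            return r + left.join();   // left may have been stolen by another worker
        }
    }

    static long sum(long n) {
        ForkJoinPool pool = new ForkJoinPool();
        long result = pool.invoke(new SumTask(0, n));
        // The steal count is an estimate; on a multi-core machine it is usually > 0
        System.out.println("steals: " + pool.getStealCount());
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) {
        System.out.println("sum = " + sum(1_000_000L));
    }
}
```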

  • Consider the case where there are several threads, say t1, t2, t3, ..., and during some period t1's load is particularly heavy and it splits off dozens of subtasks, while thread t0 has nothing to do and its own deque is empty. To improve efficiency, t0 will try to help with t1's tasks; this is the meaning of "work-stealing".
  • Within the deque, thread t1 takes its own tasks last-in-first-out, i.e. LIFO (Last In, First Out), while a "stealing" thread t0 takes tasks from t1's deque first-in-first-out, i.e. FIFO (First In, First Out), from the opposite end. The figure illustrates well how two threads obtain tasks from the double-ended queue. As you can see, the work-stealing algorithm together with the double-ended queue balances the load across threads nicely.

This article is published by AnonyStar and may be reproduced, provided the original source is credited.
Follow the WeChat public account "yunqi code" for more quality articles.
More articles can be found on the author's blog: Yunqi code I-