Tomcat thread pool policy



Tomcat’s thread pool extends the JDK’s ThreadPoolExecutor, and its work queue is a custom TaskQueue, so its growth and rejection strategy differs from the JDK’s. It is worth understanding, or it is easy to get caught out.

Tomcat thread pool policy

  • Scenario 1: a request arrives and the number of threads Tomcat has started has not yet reached corePoolSize (called minSpareThreads in Tomcat). Tomcat starts a new thread to handle the request.

  • Scenario 2: a request arrives and the number of started threads has already reached corePoolSize. Tomcat tries to put the request into the queue (offer). If the offer succeeds, execute() returns. If it fails, Tomcat tries to add a worker thread: as long as the current thread count is below maxThreads, a new thread is created to process the request. Once maxThreads is reached, the task is put into the waiting queue again; if that also fails, a RejectedExecutionException is thrown.

It is worth noting that a LinkedBlockingQueue defaults to a capacity of Integer.MAX_VALUE, i.e. an unbounded queue. In that case, if the queue capacity is not configured, the queue is never full, new threads beyond the core size are never created, and configuring maxThreads is meaningless.
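The effect is easy to reproduce with the JDK classes alone. Below is a minimal sketch (class and method names are my own): twenty blocking tasks are submitted to a pool with a core size of 2, a maximum of 10, and a default-capacity (unbounded) LinkedBlockingQueue; the pool never grows past the core size.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    // Submit `tasks` tasks that all block on a latch, then report the pool size.
    static int poolSizeAfterSubmitting(int tasks) throws InterruptedException {
        // core 2, max 10, default-capacity (unbounded) LinkedBlockingQueue
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        // offer() always succeeds on an unbounded queue, so execute() never
        // reaches the "queue full -> add temporary thread" branch.
        int size = pool.getPoolSize();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("poolSize=" + poolSizeAfterSubmitting(20)); // poolSize=2, not 10
    }
}
```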

TaskQueue’s capacity is maxQueueSize, which also defaults to Integer.MAX_VALUE. However, when the pool size is smaller than maximumPoolSize, its offer() returns false. This rewrites the “queue is full” logic to some extent and fixes the problem that maxThreads is ignored when a LinkedBlockingQueue with the default capacity of Integer.MAX_VALUE is used. This way, the pool can keep growing threads up to maxThreads, and only after that are tasks put into the queue.

The offer operation of TaskQueue

    public boolean offer(Runnable o) {
        //we can't do any checks
        if (parent==null) return super.offer(o);
        //we are maxed out on threads, simply queue the object
        if (parent.getPoolSize() == parent.getMaximumPoolSize()) return super.offer(o);
        //we have idle threads, just add it to the queue
        if (parent.getSubmittedCount()<(parent.getPoolSize())) return super.offer(o);
        //if we have less threads than maximum force creation of a new thread
        if (parent.getPoolSize()<parent.getMaximumPoolSize()) return false;
        //if we reached here, we need to add it to the queue
        return super.offer(o);
    }


Tomcat’s StandardThreadExecutor wires the TaskQueue and the executor together in startInternal:

    /**
     * Start the component and implement the requirements
     * of {@link org.apache.catalina.util.LifecycleBase#startInternal()}.
     *
     * @exception LifecycleException if this component detects a fatal error
     *  that prevents this component from being used
     */
    @Override
    protected void startInternal() throws LifecycleException {

        taskqueue = new TaskQueue(maxQueueSize);
        TaskThreadFactory tf = new TaskThreadFactory(namePrefix,daemon,getThreadPriority());
        executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), maxIdleTime, TimeUnit.MILLISECONDS,taskqueue, tf);
        if (prestartminSpareThreads) {
            executor.prestartAllCoreThreads();
        }
        taskqueue.setParent(executor);

        setState(LifecycleState.STARTING);
    }


It is worth noting that Tomcat’s thread pool uses its own extended TaskQueue instead of the LinkedBlockingQueue used by the Executors factory methods. (It mainly overrides the offer logic.)

The maxQueueSize here defaults to:

    /**
     * The maximum number of elements that can queue up before we reject them
     */
    protected int maxQueueSize = Integer.MAX_VALUE;


The execute method of Tomcat’s ThreadPoolExecutor:

    /**
     * Executes the given command at some time in the future.  The command
     * may execute in a new thread, in a pooled thread, or in the calling
     * thread, at the discretion of the <tt>Executor</tt> implementation.
     * If no threads are available, it will be added to the work queue.
     * If the work queue is full, the system will wait for the specified
     * time and it throw a RejectedExecutionException if the queue is still
     * full after that.
     *
     * @param command the runnable task
     * @param timeout A timeout for the completion of the task
     * @param unit The timeout time unit
     * @throws RejectedExecutionException if this task cannot be
     * accepted for execution - the queue is full
     * @throws NullPointerException if command or unit is null
     */
    public void execute(Runnable command, long timeout, TimeUnit unit) {
        submittedCount.incrementAndGet();
        try {
            super.execute(command);
        } catch (RejectedExecutionException rx) {
            if (super.getQueue() instanceof TaskQueue) {
                final TaskQueue queue = (TaskQueue)super.getQueue();
                try {
                    if (!queue.force(command, timeout, unit)) {
                        submittedCount.decrementAndGet();
                        throw new RejectedExecutionException("Queue capacity is full.");
                    }
                } catch (InterruptedException x) {
                    submittedCount.decrementAndGet();
                    throw new RejectedExecutionException(x);
                }
            } else {
                submittedCount.decrementAndGet();
                throw rx;
            }
        }
    }

Note that the JDK thread pool’s default rejection behavior is overridden here: the RejectedExecutionException is caught. The normal JDK rule is to throw a RejectedExecutionException when core threads + temporary threads would exceed maxSize and the queue is full; here the exception is caught and the task is forced into the TaskQueue once more (with a timeout).

    public boolean force(Runnable o, long timeout, TimeUnit unit) throws InterruptedException {
        if ( parent==null || parent.isShutdown() ) throw new RejectedExecutionException("Executor not running, can't force a command into the queue");
        return super.offer(o,timeout,unit); //forces the item onto the queue, to be used if the task is rejected
    }

Note that force() calls super.offer(o, timeout, unit), i.e. LinkedBlockingQueue’s timed offer; only when the queue is full and that offer returns false is a RejectedExecutionException thrown. (This changes the rejection behavior of the JDK’s ThreadPoolExecutor: beyond maxThreads, a RejectedExecutionException is not thrown immediately; instead the task is offered to the queue again, and since TaskQueue is unbounded by default, the exception can hardly ever be thrown.)

JDK thread pool policy

  1. Each time a task is submitted, if the thread count has not yet reached coreSize, a new thread is created and bound to that task; existing idle threads are not reused for it. So after coreSize submissions the pool holds coreSize threads.

  2. Once the thread count reaches coreSize, new tasks are put into the work queue, and the pool’s threads pull work from the queue with take().

  3. If the queue is bounded and the pool’s threads cannot take tasks off fast enough, the queue may fill up and the insert fails. In that case the pool urgently creates a temporary thread as a remedy.

  4. A temporary thread waits for work with poll(keepAliveTime, timeUnit) rather than a blocking take(); if it gets no task within keepAliveTime, it is terminated.

  5. If core threads + temporary threads would exceed maxSize, no new thread can be created and the RejectedExecutionHandler runs instead. The default AbortPolicy throws a RejectedExecutionException; other built-in options silently discard the current task (DiscardPolicy), discard the oldest task in the queue (DiscardOldestPolicy), or run the task on the submitting thread (CallerRunsPolicy).
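The rules above can be condensed into a small decision function. This is a simplified model of my own (names are not from the JDK); it ignores shutdown states and the re-check logic of the real implementation:

```java
public class JdkPoolPolicy {
    enum Action { START_CORE_THREAD, ENQUEUE, START_TEMP_THREAD, REJECT }

    // What ThreadPoolExecutor.execute() decides for one submitted task.
    static Action decide(int poolSize, int coreSize, int maxSize, boolean queueAccepts) {
        if (poolSize < coreSize) return Action.START_CORE_THREAD; // rule 1
        if (queueAccepts)        return Action.ENQUEUE;           // rule 2
        if (poolSize < maxSize)  return Action.START_TEMP_THREAD; // rule 3
        return Action.REJECT;          // rule 5: RejectedExecutionHandler runs
    }

    public static void main(String[] args) {
        System.out.println(decide(0, 4, 8, true));  // START_CORE_THREAD
        System.out.println(decide(4, 4, 8, true));  // ENQUEUE
        System.out.println(decide(4, 4, 8, false)); // START_TEMP_THREAD: bounded queue full
        System.out.println(decide(8, 4, 8, false)); // REJECT
    }
}
```

With an unbounded queue, `queueAccepts` is always true, which is exactly why the START_TEMP_THREAD branch is never reached.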

Source code

    /**
     * Executes the given task sometime in the future.  The task
     * may execute in a new thread or in an existing pooled thread.
     *
     * If the task cannot be submitted for execution, either because this
     * executor has been shutdown or because its capacity has been reached,
     * the task is handled by the current {@code RejectedExecutionHandler}.
     *
     * @param command the task to execute
     * @throws RejectedExecutionException at discretion of
     *         {@code RejectedExecutionHandler}, if the task
     *         cannot be accepted for execution
     * @throws NullPointerException if {@code command} is null
     */
    public void execute(Runnable command) {
        if (command == null)
            throw new NullPointerException();
        /*
         * Proceed in 3 steps:
         *
         * 1. If fewer than corePoolSize threads are running, try to
         * start a new thread with the given command as its first
         * task.  The call to addWorker atomically checks runState and
         * workerCount, and so prevents false alarms that would add
         * threads when it shouldn't, by returning false.
         *
         * 2. If a task can be successfully queued, then we still need
         * to double-check whether we should have added a thread
         * (because existing ones died since last checking) or that
         * the pool shut down since entry into this method. So we
         * recheck state and if necessary roll back the enqueuing if
         * stopped, or start a new thread if there are none.
         *
         * 3. If we cannot queue task, then we try to add a new
         * thread.  If it fails, we know we are shut down or saturated
         * and so reject the task.
         */
        int c = ctl.get();
        if (workerCountOf(c) < corePoolSize) {
            if (addWorker(command, true))
                return;
            c = ctl.get();
        }
        if (isRunning(c) && workQueue.offer(command)) {
            int recheck = ctl.get();
            if (! isRunning(recheck) && remove(command))
                reject(command);
            else if (workerCountOf(recheck) == 0)
                addWorker(null, false);
        }
        else if (!addWorker(command, false))
            reject(command);
    }


There are two main differences between Tomcat’s thread pool and a JDK pool backed by an unbounded LinkedBlockingQueue:

  • The growth strategy of the JDK’s ThreadPoolExecutor with a bounded queue is: if the pool’s threads cannot take tasks off fast enough, the queue fills up, the insert fails, and the pool urgently creates a temporary thread. The TaskQueue used by Tomcat’s ThreadPoolExecutor is an unbounded LinkedBlockingQueue, but its overridden offer() method rewrites the “queue is full” rule so that the pool still follows the bounded-queue growth strategy of the JDK’s ThreadPoolExecutor.

  • In the JDK’s ThreadPoolExecutor, when core threads + temporary threads would exceed maxSize, no new thread is created and the RejectedExecutionHandler runs. Tomcat’s ThreadPoolExecutor rewrites this rule: it catches the RejectedExecutionException, offers the task to the queue again, and only throws when the queue is actually full. The default TaskQueue is unbounded.
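The first difference can be imitated in a few lines. The sketch below is my own class, not Tomcat’s TaskQueue (it omits the getSubmittedCount() check): overriding offer() to return false while the pool can still grow means the same twenty blocking tasks now drive the pool to its maximum of 10 threads before anything is queued.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of the TaskQueue idea: refuse offer() while the pool can still grow,
// so the executor creates threads up to maximumPoolSize before queueing.
public class GrowFirstQueue extends LinkedBlockingQueue<Runnable> {
    private transient ThreadPoolExecutor parent;

    void setParent(ThreadPoolExecutor tp) { this.parent = tp; }

    @Override
    public boolean offer(Runnable r) {
        if (parent == null) return super.offer(r);
        // Returning false makes ThreadPoolExecutor.execute() try addWorker().
        if (parent.getPoolSize() < parent.getMaximumPoolSize()) return false;
        return super.offer(r);
    }

    // Submit `tasks` blocking tasks, report the pool size reached.
    static int poolSizeAfterSubmitting(int tasks) throws InterruptedException {
        GrowFirstQueue q = new GrowFirstQueue();
        ThreadPoolExecutor pool =
                new ThreadPoolExecutor(2, 10, 60L, TimeUnit.SECONDS, q);
        q.setParent(pool);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        int size = pool.getPoolSize();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        // With a plain LinkedBlockingQueue this would print 2; with the
        // overridden offer() the pool grows to its maximum first.
        System.out.println("poolSize=" + poolSizeAfterSubmitting(20)); // poolSize=10
    }
}
```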

Question: since TaskQueue is unbounded, where does the Tomcat server limit the number of requests it accepts, and how does it protect itself? And what is the relationship between acceptCount and maxConnections?


