Logback appender sending logs to Kafka blocks Spring Boot application startup


Today I ran into a problem. I added an appender to logback to send logs to Kafka. If the connection to Kafka fails, or the metadata update fails, the Spring Boot application is blocked on startup, as shown in the following figure

The Kafka producer updates its metadata before sending messages (the metadata update mechanism is explained in more detail in another blog post).
If the metadata update fails, the producer blocks for max.block.ms and then tries to fetch the metadata again. While the producer is blocked, the Spring Boot main thread is blocked as well. The default value of max.block.ms is 60000 (60 seconds).

One workaround is to reduce max.block.ms, but there is a better solution.
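For completeness, this is roughly what capping the blocking time would look like. A minimal sketch, assuming the logback-kafka-appender library's `<producerConfig>` element; the topic name and broker address are placeholders:

```xml
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <topic>app-logs</topic>
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
    <!-- fail fast instead of blocking for the default 60s -->
    <producerConfig>max.block.ms=5000</producerConfig>
</appender>
```

Lowering max.block.ms only shortens the stall, though; the asynchronous appender below removes it from the main thread entirely.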


The appender for sending logs to Kafka is defined as follows:

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- Kafka connection configuration for the appender -->
</appender>

Wrap it with ch.qos.logback.classic.AsyncAppender, and let the async appender forward events to the Kafka appender:

<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="kafkaAppender" />
</appender>

When logging, reference the asynchronous appender:

<root level="INFO">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="async" />
</root>

Take a look at the AsyncAppender class. It extends AsyncAppenderBase, which defines a blocking queue and a separate worker thread:

    public void start() {
        if (isStarted())
            return;
        if (appenderCount == 0) {
            addError("No attached appenders found.");
            return;
        }
        if (queueSize < 1) {
            addError("Invalid queue size [" + queueSize + "]");
            return;
        }
        // When the async appender starts, it initializes a blocking queue
        // that temporarily holds the logging events
        blockingQueue = new ArrayBlockingQueue<E>(queueSize);

        if (discardingThreshold == UNDEFINED)
            discardingThreshold = queueSize / 5;
        addInfo("Setting discardingThreshold to " + discardingThreshold);
        // Worker is a single thread that forwards events to the child appenders
        worker.setDaemon(true);
        worker.setName("AsyncAppender-Worker-" + getName());
        // AsyncAppenderBase extends UnsynchronizedAppenderBase; super.start()
        // marks this appender instance as started, then the worker thread
        // begins delivering events to the child appenders
        super.start();
        worker.start();
    }
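The start sequence above boils down to one pattern: a bounded queue plus a daemon worker thread, so only the worker ever waits on the slow consumer. A minimal standalone sketch of that pattern (not logback code; the class and field names here are made up for illustration, and a list stands in for the slow Kafka send):

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;

public class AsyncSketch {
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(256);
    static final List<String> delivered = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(3);
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    String event = queue.take();   // blocks only the worker thread
                    delivered.add(event);          // stand-in for the slow Kafka send
                    done.countDown();
                } catch (InterruptedException ie) {
                    return;
                }
            }
        });
        worker.setDaemon(true);                    // mirrors AsyncAppenderBase.start()
        worker.setName("AsyncAppender-Worker-sketch");
        worker.start();

        // the "main application" just enqueues and moves on
        queue.put("event-1");
        queue.put("event-2");
        queue.put("event-3");
        done.await();
        System.out.println(delivered.size());
    }
}
```

Because the worker is a daemon thread, it also cannot keep the JVM alive on its own once the application exits.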

The appender receives the message and puts it into the blocking queue:

    // The appender calls append() for every logging event
    @Override
    protected void append(E eventObject) {
        // when the queue is nearly full, discardable events are dropped
        if (isQueueBelowDiscardingThreshold() && isDiscardable(eventObject)) {
            return;
        }
        preprocess(eventObject);
        put(eventObject);
    }

    private void put(E eventObject) {
        if (neverBlock) {
            blockingQueue.offer(eventObject);
        } else {
            // equivalent to blockingQueue.put(eventObject)
            putUninterruptibly(eventObject);
        }
    }
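The neverBlock branch matters when the queue is full: offer() drops the event immediately, while put() would make the logging thread wait for free space. A minimal sketch of that difference using a plain ArrayBlockingQueue (not logback code):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class OfferVsPut {
    // Returns whether a second event is accepted by an already-full queue via offer()
    static boolean offerToFullQueue() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        queue.put("first");               // capacity 1: the queue is now full
        // neverBlock=true path: offer() returns false immediately, the event is dropped
        return queue.offer("second");
        // neverBlock=false path: queue.put("second") would block here until the
        // worker drains the queue -- exactly the stall that holds up the caller
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(offerToFullQueue()); // prints false
    }
}
```

So neverBlock trades possible event loss for a guarantee that logging never stalls the application thread.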

Worker is an inner thread class responsible for delivering events to all of the child appenders (in the example above, to kafkaAppender, which is the one that connects to Kafka). This way, updating Kafka's metadata happens on the worker thread: it is separated from the main thread, so the main application is not blocked.

class Worker extends Thread {

        public void run() {
            AsyncAppenderBase<E> parent = AsyncAppenderBase.this;
            AppenderAttachableImpl<E> aai = parent.aai;

            // loop while the parent is started
            while (parent.isStarted()) {
                try {
                    E e = parent.blockingQueue.take();
                    // pass the event to all attached child appenders
                    aai.appendLoopOnAppenders(e);
                } catch (InterruptedException ie) {
                    break;
                }
            }

            addInfo("Worker thread will flush remaining events before exiting. ");
            // the appender was stopped: deliver the remaining queued events,
            // then close the child appenders
            for (E e : parent.blockingQueue) {
                aai.appendLoopOnAppenders(e);
                parent.blockingQueue.remove(e);
            }

            aai.detachAndStopAllAppenders();
        }
    }