Sending logback logs to Kafka blocks Spring Boot application startup

Date: 2021-06-11

Today I ran into a problem: I added a logback appender that sends logs to Kafka, and if the Kafka connection fails or the metadata update fails, the application's startup is blocked, as shown in the figure below.
(Figure: Spring Boot startup blocked while logback sends logs to Kafka)

The Kafka producer updates its metadata before sending messages; the metadata update mechanism is covered in more detail in this blog.
If the metadata update fails, the producer blocks for up to max.block.ms while it keeps trying to obtain metadata. Because this happens on the calling thread, the Spring Boot main application is blocked as well. The default value of max.block.ms is 60000 ms (one minute).
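To make the blocking concrete, here is a minimal sketch (not from the original setup) of a plain KafkaProducer whose send() call stalls the calling thread while waiting for metadata; the broker address, topic name, and timeout value are placeholders:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BlockingSendDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        //Hypothetical, unreachable broker address used only for illustration
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "unreachable-broker:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        //Lowering max.block.ms shortens the stall, but the calling thread still blocks
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            //send() blocks here for up to max.block.ms waiting for metadata,
            //which is exactly what stalls the application thread that is logging
            producer.send(new ProducerRecord<>("app-logs", "startup log line"));
        }
    }
}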

One workaround is to reduce the value of max.block.ms, but there is a better solution: introduce logback-kafka-appender.

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>

The appender for sending logs to Kafka is defined as follows:

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
<!--  Appender link Kafka configuration -- >
</appender>
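A minimal sketch of what that configuration might look like, modeled on the logback-kafka-appender README; the topic name, log pattern, broker address, and the lowered max.block.ms (the workaround mentioned earlier) are illustrative placeholders:

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <topic>app-logs</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
    <!-- optional: cap how long the producer may block while waiting for metadata -->
    <producerConfig>max.block.ms=2000</producerConfig>
</appender>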

Wrap the Kafka appender with ch.qos.logback.classic.AsyncAppender (queue tuning options are shown after the snippet):

<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="kafkaAppender" />
</appender>
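The AsyncAppender's queue behavior can also be tuned with the standard logback properties that appear in the source walkthrough below; the values here are illustrative, not from the original configuration:

<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
    <!-- capacity of the in-memory blocking queue (logback's default is 256) -->
    <queueSize>512</queueSize>
    <!-- 0 disables discarding; by default TRACE/DEBUG/INFO events are dropped
         once less than 20% of the queue capacity remains -->
    <discardingThreshold>0</discardingThreshold>
    <!-- if true, the logging thread never blocks even when the queue is full;
         excess events are dropped instead -->
    <neverBlock>true</neverBlock>
    <appender-ref ref="kafkaAppender" />
</appender>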

Then reference the asynchronous appender directly when logging:

<root level="INFO">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="async" />
</root>

Take a look at the AsyncAppender class. It extends AsyncAppenderBase, which defines a blocking queue and a separate Worker thread:

@Override
    public void start() {
        if (isStarted())
            return;
        if (appenderCount == 0) {
            addError("No attached appenders found.");
            return;
        }
        if (queueSize < 1) {
            addError("Invalid queue size [" + queueSize + "]");
            return;
        }
        //When the AsyncAppender starts, it initializes a blocking queue in which log events are buffered
        blockingQueue = new ArrayBlockingQueue<E>(queueSize);

        if (discardingThreshold == UNDEFINED)
            discardingThreshold = queueSize / 5;
        addInfo("Setting discardingThreshold to " + discardingThreshold);
        //The worker is a single thread that forwards log events to each attached child appender
        worker.setDaemon(true);
        worker.setName("AsyncAppender-Worker-" + getName());
        //AsyncAppenderBase extends UnsynchronizedAppenderBase; super.start() marks the appender instance as started
        super.start();
        //Start the worker thread that delivers events to the child appenders
        worker.start();
    }

The appender receives log events and puts them into the blocking queue:

    //append() is called for every log event handed to the appender
    @Override
    protected void append(E eventObject) {
        if (isQueueBelowDiscardingThreshold() && isDiscardable(eventObject)) {
            return;
        }
        preprocess(eventObject);
        put(eventObject);
    }

    private void put(E eventObject) {
        if (neverBlock) {
            blockingQueue.offer(eventObject);
        } else {
            //Equivalent to blockingQueue.put(eventObject); blocks until space is available, even if interrupted
            putUninterruptibly(eventObject);
        }
    }

Worker is an inner thread class responsible for delivering log events to the child appenders (in the example above, to the kafkaAppender, which in turn connects to Kafka). This way, the Kafka metadata update runs on the worker thread, separated from the main thread, so the main application is no longer blocked.

class Worker extends Thread {

        public void run() {
            AsyncAppenderBase<E> parent = AsyncAppenderBase.this;
            AppenderAttachableImpl<E> aai = parent.aai;

            // loop while the parent is started
            while (parent.isStarted()) {
                try {
                    E e = parent.blockingQueue.take();
                    //Traverse all attached child appenders and pass the event to each of them
                    aai.appendLoopOnAppenders(e);
                } catch (InterruptedException ie) {
                    break;
                }
            }

            addInfo("Worker thread will flush remaining events before exiting. ");
            //After the appender is stopped, flush any remaining events before stopping the child appenders
            for (E e : parent.blockingQueue) {
                aai.appendLoopOnAppenders(e);
                parent.blockingQueue.remove(e);
            }

            aai.detachAndStopAllAppenders();
        }
    }