Logback sending logs to Kafka blocks Spring Boot main application startup

Time: 2020-8-1

Today I ran into a problem: I added an appender to logback that sends logs to Kafka. If the Kafka connection fails or the metadata update fails, the Spring Boot main application is blocked from starting, as shown below:
[Figure: Spring Boot application startup blocked by the Logback Kafka appender]

The Kafka producer updates its metadata before sending messages; the metadata update mechanism is explained in more detail in another blog post, so I won't repeat it here.
If the metadata update fails, the Kafka producer blocks for up to max.block.ms and then tries to fetch the metadata again. While the producer is blocked, the Spring Boot main application is blocked as well. The default value of max.block.ms is 60000 ms (one minute).

One workaround is to reduce max.block.ms, but there is a better solution: use logback-kafka-appender together with logback's AsyncAppender. Add the following dependencies:

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>

The appender for sending logs to Kafka is defined as follows:

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- Kafka connection configuration for the appender -->
</appender>
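
For reference, here is a minimal sketch of what that configuration might look like; the topic name app-logs and the broker address localhost:9092 are placeholders, and the max.block.ms override is the optional workaround mentioned earlier:

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder>
        <pattern>%date %level [%thread] %logger{36} %msg%n</pattern>
    </encoder>
    <!-- placeholder topic and broker address -->
    <topic>app-logs</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
    <!-- optional: give up on a failed metadata update sooner than the default -->
    <producerConfig>max.block.ms=5000</producerConfig>
</appender>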

Then wrap the Kafka appender with ch.qos.logback.classic.AsyncAppender, which forwards events to it asynchronously:

<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="kafkaAppender" />
</appender>

In the root logger, reference the asynchronous appender rather than kafkaAppender directly:

<root level="INFO">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="async" />
</root>
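
The STDOUT appender referenced above is just an ordinary console appender; a minimal sketch of its definition:

<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>%date %level [%thread] %logger{36} %msg%n</pattern>
    </encoder>
</appender>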

Take a look at the AsyncAppender class: it extends AsyncAppenderBase, which defines a blocking queue and a separate worker thread.

    @Override
    public void start() {
        if (isStarted())
            return;
        if (appenderCount == 0) {
            addError("No attached appenders found.");
            return;
        }
        if (queueSize < 1) {
            addError("Invalid queue size [" + queueSize + "]");
            return;
        }
        //When the AsyncAppender starts, a blocking queue is created; logging events are buffered in it
        blockingQueue = new ArrayBlockingQueue<E>(queueSize);

        if (discardingThreshold == UNDEFINED)
            discardingThreshold = queueSize / 5;
        addInfo("Setting discardingThreshold to " + discardingThreshold);
        //Worker is a single thread that sends logs to child Appenders
        worker.setDaemon(true);
        worker.setName("AsyncAppender-Worker-" + getName());
        //AsyncAppenderBase extends UnsynchronizedAppenderBase; super.start() marks this appender instance as started
        super.start();
        //The worker thread starts and sends messages to the child Appenders
        worker.start();
    }
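
The queueSize and discardingThreshold fields initialized above map directly to configuration properties on the async appender; a sketch with illustrative values (512 and 0 are examples, not the defaults):

<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
    <!-- capacity of the ArrayBlockingQueue created in start(); the default is 256 -->
    <queueSize>512</queueSize>
    <!-- 0 means never discard events; if left unset it defaults to queueSize / 5 -->
    <discardingThreshold>0</discardingThreshold>
    <appender-ref ref="kafkaAppender" />
</appender>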

The appender receives the message and puts it into the blocking queue:

    //append() is called for each logging event handed to this appender
    @Override
    protected void append(E eventObject) {
        if (isQueueBelowDiscardingThreshold() && isDiscardable(eventObject)) {
            return;
        }
        preprocess(eventObject);
        put(eventObject);
    }

    private void put(E eventObject) {
        if (neverBlock) {
            blockingQueue.offer(eventObject);
        } else {
            //Calls blockingQueue.put(eventObject) in a loop, retrying if interrupted
            putUninterruptibly(eventObject);
        }
    }
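
The neverBlock flag checked in put() can also be configured. If the queue fills up, for example because Kafka is unreachable and kafkaAppender itself stalls, setting it to true makes the appender drop events via offer() instead of blocking the logging thread; a sketch:

<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
    <!-- drop events when the queue is full instead of blocking the caller -->
    <neverBlock>true</neverBlock>
    <appender-ref ref="kafkaAppender" />
</appender>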

Worker is an inner thread class responsible for delivering events to all attached child appenders (in the example above, to kafkaAppender, which handles the Kafka connection). This way, the thread that updates Kafka metadata is the worker thread rather than the main thread, so the main application is no longer blocked.

class Worker extends Thread {

        public void run() {
            AsyncAppenderBase<E> parent = AsyncAppenderBase.this;
            AppenderAttachableImpl<E> aai = parent.aai;

            // loop while the parent is started
            while (parent.isStarted()) {
                try {
                    E e = parent.blockingQueue.take();
                    //Deliver the event to every attached child appender
                    aai.appendLoopOnAppenders(e);
                } catch (InterruptedException ie) {
                    break;
                }
            }

            addInfo("Worker thread will flush remaining events before exiting. ");
            //Once the appender is stopped, flush any remaining events in the queue, then stop the child appenders
            for (E e : parent.blockingQueue) {
                aai.appendLoopOnAppenders(e);
                parent.blockingQueue.remove(e);
            }

            aai.detachAndStopAllAppenders();
        }
    }