Logback logging to Kafka blocks Spring Boot application startup


Today I ran into a problem. I added a logback appender that ships logs to Kafka from a Spring Boot application. If the connection to Kafka fails, or the producer's metadata update fails, startup of the main application is blocked, as shown in the figure below.

A Kafka producer updates its cluster metadata before sending messages (the metadata update mechanism is covered in more detail elsewhere). If the metadata update fails, the producer blocks and keeps retrying the fetch, and while it blocks, the Spring Boot main application is blocked as well. The producer blocks for at most the configured timeout (`max.block.ms`, 60000 ms by default).

One fix is to lower that timeout, but there is a better solution: introduce logback's AsyncAppender.
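If you do stay with a synchronous appender, the mitigation is to lower the producer-side blocking timeout. Below is a minimal sketch of the relevant producer properties, assuming `max.block.ms` is the setting in play; the broker address and the 5000 ms value are illustrative (with the danielwegener appender these keys would typically be supplied as `producerConfig` entries):

```java
import java.util.Properties;

public class ProducerTimeoutSketch {
    // Illustrative producer properties; the keys are standard kafka-clients names
    public static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative address
        // max.block.ms caps how long send() and metadata fetches may block
        // (60000 ms by default); lowering it shortens the startup stall
        props.setProperty("max.block.ms", "5000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("max.block.ms"));
    }
}
```

This only shortens the stall; the AsyncAppender approach below removes it from the main thread entirely.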


The appender for sending logs to Kafka is defined as follows:

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- Kafka connection settings for this appender go here -->
</appender>

Wrap the Kafka appender in a ch.qos.logback.classic.AsyncAppender:

<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="kafkaAppender" />
</appender>

Then reference the asynchronous appender when logging:

<root level="INFO">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="async" />
</root>

Take a look at the AsyncAppender class. It extends AsyncAppenderBase, which holds a blocking queue and a dedicated Worker thread:
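The queue-plus-worker handoff can be sketched independently of logback. The following is a simplified illustration of the pattern (not logback's actual code): the caller only enqueues, and a daemon worker thread delivers events to the slow sink.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

public class AsyncHandoffSketch {
    final BlockingQueue<String> queue = new ArrayBlockingQueue<>(256);
    final List<String> delivered = new CopyOnWriteArrayList<>();
    final Thread worker = new Thread(this::drain, "AsyncAppender-Worker-sketch");

    void start() {
        worker.setDaemon(true);
        worker.start();
    }

    // Caller side: enqueue and return immediately (as long as there is capacity)
    void append(String event) throws InterruptedException {
        queue.put(event);
    }

    // Worker side: take events off the queue and deliver them to the slow sink
    private void drain() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                delivered.add(queue.take());
            }
        } catch (InterruptedException ie) {
            // exit quietly, like logback's Worker does
        }
    }
}
```

Even if `drain()` stalled (say, on a Kafka metadata fetch), `append()` keeps returning immediately until the queue fills up.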

    public void start() {
        if (isStarted())
            return;
        if (appenderCount == 0) {
            addError("No attached appenders found.");
            return;
        }
        if (queueSize < 1) {
            addError("Invalid queue size [" + queueSize + "]");
            return;
        }
        // When the AsyncAppender starts, it initializes a blocking queue
        // in which log events are buffered
        blockingQueue = new ArrayBlockingQueue<E>(queueSize);

        if (discardingThreshold == UNDEFINED)
            discardingThreshold = queueSize / 5;
        addInfo("Setting discardingThreshold to " + discardingThreshold);
        // The worker is a single thread that forwards events to each child appender
        worker.setDaemon(true);
        worker.setName("AsyncAppender-Worker-" + getName());
        // AsyncAppenderBase extends UnsynchronizedAppenderBase; super.start()
        // marks this appender instance as started
        super.start();
        // then the worker thread starts forwarding events to the child appenders
        worker.start();
    }

The appender receives events and puts them into the blocking queue:

    // The appender calls append() for each incoming logging event
    protected void append(E eventObject) {
        if (isQueueBelowDiscardingThreshold() && isDiscardable(eventObject)) {
            return;
        }
        preprocess(eventObject);
        put(eventObject);
    }

    private void put(E eventObject) {
        if (neverBlock) {
            blockingQueue.offer(eventObject); // drop the event if the queue is full
        } else {
            // equivalent to blockingQueue.put(eventObject), retried on interrupt
            putUninterruptibly(eventObject);
        }
    }
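The difference between the two branches is easy to demonstrate with a full queue: `offer` returns immediately and drops the event, while `put` would block until space frees up. A standalone sketch using plain java.util.concurrent:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class NeverBlockSketch {
    // neverBlock=true behaviour: try to enqueue, drop the event if the queue is full
    static boolean tryEnqueue(BlockingQueue<String> queue, String event) {
        return queue.offer(event);
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        System.out.println(tryEnqueue(queue, "first"));  // prints "true"
        System.out.println(tryEnqueue(queue, "second")); // prints "false": queue full, event dropped
        // neverBlock=false would use queue.put(event) instead, which blocks
        // until another thread takes an element off the queue
    }
}
```

So with `neverBlock=true` a slow Kafka sink costs you log events rather than application threads; with the default, callers can still stall once the queue is full.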

Worker is an inner thread class responsible for forwarding events to the child appenders (in the example above, to kafkaAppender, which in turn connects to Kafka). The Kafka metadata update therefore runs on the worker thread, decoupled from the main thread, and the main application is no longer blocked.

class Worker extends Thread {

        public void run() {
            AsyncAppenderBase<E> parent = AsyncAppenderBase.this;
            AppenderAttachableImpl<E> aai = parent.aai;

            // loop while the parent is started
            while (parent.isStarted()) {
                try {
                    E e = parent.blockingQueue.take();
                    // pass the event on to every attached child appender
                    aai.appendLoopOnAppenders(e);
                } catch (InterruptedException ie) {
                    break;
                }
            }

            addInfo("Worker thread will flush remaining events before exiting. ");
            // once the appender is stopped, flush the remaining events,
            // then detach and stop the child appenders
            for (E e : parent.blockingQueue) {
                aai.appendLoopOnAppenders(e);
                parent.blockingQueue.remove(e);
            }

            aai.detachAndStopAllAppenders();
        }
    }
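The flush-on-exit loop at the end of run() can be exercised in isolation. Here is a small sketch of the same drain pattern (iterate, forward, remove), using a plain list in place of the child appenders:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DrainOnStopSketch {
    // Drain whatever is still queued, mirroring the worker's exit path:
    // forward each remaining event, then remove it from the queue
    static List<String> drainRemaining(BlockingQueue<String> queue) {
        List<String> flushed = new ArrayList<>();
        for (String e : queue) {
            flushed.add(e);       // stands in for aai.appendLoopOnAppenders(e)
            queue.remove(e);
        }
        return flushed;
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(4);
        queue.offer("a");
        queue.offer("b");
        System.out.println(drainRemaining(queue)); // prints "[a, b]"
        System.out.println(queue.isEmpty());       // prints "true"
    }
}
```

Removing elements through the queue (rather than the iterator) is safe here because ArrayBlockingQueue's iterator is weakly consistent; this is also how logback's own Worker drains the queue.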