Analysis of Kafka principle (2) – acquisition of producer metadata

Time: 2021-10-17

1. Overall process

(1) Apply custom message interceptors (rarely used in practice).
(2) Block and pull metadata. The first send to a topic requires pulling metadata, a lazy-loading idea. What is pulled is the cluster information: Cluster contains the topics, brokers, partitions, and so on.
(3) Serialize the key and value into byte[] arrays.
(4) Run the partitioner over the key and value to decide which partition the message goes to.
(5) Check the message size: it must not exceed the single-request size limit or the buffer size.
(6) Bind the message callback and the interceptor callback.
(7) Append the message to the accumulator.
(8) Wake the sender thread: if the append result reports that a batch is full or a new batch was created, there is a batch ready to send, so the sender thread is woken up.
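
To keep these steps concrete, here is a minimal sketch of driving this flow through the public producer API. The broker address, topic name and serializers are placeholder assumptions, and the kafka-clients dependency is required.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");                 // assumed broker address
            props.put("key.serializer", StringSerializer.class.getName());    // step (3): key serializer
            props.put("value.serializer", StringSerializer.class.getName());  // step (3): value serializer

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send() runs the interceptors, waits for metadata, picks a partition and
                // appends the record to the accumulator; the sender thread does the real I/O.
                producer.send(new ProducerRecord<>("demo-topic", "key", "value"),
                        (RecordMetadata metadata, Exception e) -> {           // step (6): callback
                            if (e != null) {
                                e.printStackTrace();
                            } else {
                                System.out.println("sent to partition " + metadata.partition());
                            }
                        });
            }
        }
    }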

2. Acquisition of metadata
When sending a message, the producer only knows the topic; on the first send the metadata for that topic is not yet known.
Let's first look at the producer's send method.
(1) First, get the metadata for the topic. The first step of the real doSend is waitOnMetadata, which blocks until the metadata is obtained:
ClusterAndWaitTime clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
(2) Record the topic to be sent in the metadata's topic map.
(3) Get the cluster information from the metadata first. If this topic is being sent for the first time, its metadata will not be there yet; on later sends it is already available and the method returns directly.
(4) Set the metadata's needUpdate flag to true and record the current metadata version for later comparison.
(5) Wake up the sender thread and block the current thread in awaitUpdate(final int lastVersion, final long maxWaitMs).
The await logic is fairly simple: a while loop that computes the remaining time from the configured timeout and then waits. The thread is either woken by the sender thread or wakes when the wait times out; it then checks whether the metadata version has advanced. A newer version means the metadata has been pulled; otherwise it keeps waiting.
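
As an illustration only (not the actual Kafka source), the wait side and the notify side of this versioned wait can be sketched roughly as follows; the class and method names are simplified stand-ins for the real awaitUpdate / update pair.

    // Simplified illustration of the versioned wait: the producer thread blocks until
    // `version` passes `lastVersion` or the time budget runs out; the sender side
    // bumps the version and wakes everybody up.
    public final class VersionedMetadata {
        private int version = 0;

        public synchronized int requestUpdate() {
            // corresponds to "needUpdate = true"; returns the current version for comparison
            return version;
        }

        public synchronized void awaitUpdate(int lastVersion, long maxWaitMs) throws InterruptedException {
            long begin = System.currentTimeMillis();
            long remaining = maxWaitMs;
            while (version <= lastVersion) {
                if (remaining <= 0) {
                    throw new RuntimeException("Failed to update metadata within " + maxWaitMs + " ms");
                }
                wait(remaining); // woken by update() below, or the wait times out
                remaining = maxWaitMs - (System.currentTimeMillis() - begin);
            }
        }

        public synchronized void update() {
            version += 1; // the crucial step: the version advances
            notifyAll();  // wake up producer threads blocked in awaitUpdate
        }

        public static void main(String[] args) throws InterruptedException {
            VersionedMetadata metadata = new VersionedMetadata();
            int lastVersion = metadata.requestUpdate();
            new Thread(() -> {                        // plays the role of the sender thread
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
                metadata.update();
            }).start();
            metadata.awaitUpdate(lastVersion, 5000);  // blocks until the version advances
            System.out.println("metadata updated");
        }
    }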
Note that throughout the whole process the producer manages one overall timeout, keeps recalculating the remaining time, and reports a timeout error as soon as the budget is exceeded.

While the producer's send is waiting, what is the sender thread doing, and how does it wake up the producer's send thread? Look at the sender thread:
(1) Sender is itself a thread; it is started together with the KafkaProducer, and its run method is a while loop.
(2) this.client.ready(...): check whether the connection to the broker has been established; if not, initiate the connection.

Check the connection: connectionStates.canConnect(node.idString(), now)
    Initiate the connection: initiateConnect(Node node, long now)
    Some key parameters:
        // Non-blocking connect
        socketChannel.configureBlocking(false);
        // Keep-alive enabled (TCP probes the idle connection automatically, by default every 2 hours)
        socket.setKeepAlive(true);
        // Disable the Nagle algorithm: do not coalesce small packets, to reduce latency
        socket.setTcpNoDelay(true);

Because establishing the connection is non-blocking, the connect is only initiated here and the code moves on; there is a later place that waits for the connection to complete.
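
For illustration, a minimal plain Java NIO sketch of initiating such a non-blocking connection; the address localhost:9092 is an assumption, and Kafka wraps these same calls inside its own network components.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class InitiateConnectSketch {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            SocketChannel channel = SocketChannel.open();

            channel.configureBlocking(false);      // non-blocking connect
            channel.socket().setKeepAlive(true);   // TCP keep-alive probing
            channel.socket().setTcpNoDelay(true);  // disable Nagle: don't coalesce small packets

            // connect() returns immediately; for a remote broker it is almost always still in progress
            boolean connectedNow = channel.connect(new InetSocketAddress("localhost", 9092));
            channel.register(selector, SelectionKey.OP_CONNECT);

            // No waiting here: completing the handshake (finishConnect) happens later,
            // inside the selector poll loop shown further below.
            System.out.println("connected immediately? " + connectedNow);
            channel.close();
            selector.close();
        }
    }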

(3) Since no connection has been established yet, many of the intermediate steps can be skipped. Look directly at the end of the sender's run:
this.client.poll(pollTimeout, now) -> metadataUpdater.maybeUpdate(now); this is where the request to pull metadata is built.
Generally only the metadata of the topics we send is pulled. A ClientRequest is encapsulated and the doSend method is called: the metadata request is placed into the inFlightRequests queue and set as the pending send on Kafka's own Selectable/KafkaChannel. A KafkaChannel sends only one request at a time, and this component is also used on the server side. Naturally, once the request is handed to the KafkaChannel, an underlying Java NIO channel must be used to actually send it next.

There is another important part of doSend: note that it registers interest in the corresponding OP_WRITE event on the connection:
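A hypothetical, heavily simplified sketch of the behaviour described here: the request is remembered as in flight, staged as the single pending send on a channel wrapper, and OP_WRITE interest is registered so the next poll actually writes it. The names ChannelWrapper and setSend are illustrative, not Kafka's real classes; a Pipe stands in for the socket so the example runs on its own.

    import java.nio.ByteBuffer;
    import java.nio.channels.Pipe;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class DoSendSketch {
        /** One pending outbound buffer per connection, like the single Send on a KafkaChannel. */
        static final class ChannelWrapper {
            private final SelectionKey key;
            private ByteBuffer pendingSend;

            ChannelWrapper(SelectionKey key) { this.key = key; }

            void setSend(ByteBuffer send) {
                if (pendingSend != null)
                    throw new IllegalStateException("a send is already in progress");
                pendingSend = send;
                // OP_WRITE interest makes the selector loop pick this channel up for writing
                key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
            }
        }

        private final Deque<ByteBuffer> inFlightRequests = new ArrayDeque<>();

        /** Sketch of doSend: queue the request as in flight, then stage it on the channel. */
        void doSend(ByteBuffer request, ChannelWrapper channel) {
            inFlightRequests.addFirst(request); // kept until its response comes back
            channel.setSend(request);           // the selector's write branch sends it later
        }

        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            Pipe pipe = Pipe.open();                  // stand-in for a broker socket
            pipe.sink().configureBlocking(false);
            SelectionKey key = pipe.sink().register(selector, 0);

            new DoSendSketch().doSend(ByteBuffer.wrap("metadata-request".getBytes()),
                    new ChannelWrapper(key));
            System.out.println("interested in OP_WRITE: "
                    + ((key.interestOps() & SelectionKey.OP_WRITE) != 0));
            selector.close();
        }
    }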

(4) this.client.poll(pollTimeout, now)
-> this.selector.poll(Utils.min(timeout, metadataTimeout, requestTimeoutMs));
-> pollSelectKeys
Kafka's own encapsulated Selector handles the various scenarios directly, distinguishing them by the events each SelectionKey is interested in. Let's look at the connection scenario first:
The connection is completed via finishConnect (because the connect initiated earlier was non-blocking and had not yet finished, we have to wait here for it to complete), and then, through the underlying TransportLayer component, the SelectionKey's OP_CONNECT interest is cancelled and OP_READ interest is added.
Because OP_WRITE interest was added in step (3), after the connection completes the loop also enters the write branch, sends the metadata Send that was just staged through the underlying channel, and records it in completedSends to mark the send as successful.
(5) Ideally, after a while the server returns a response, so the OP_READ branch is taken: the response data is read and put into the stagedReceives queue.
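
A stripped-down plain NIO sketch of these selector-loop branches: finish the connection, write the staged request when OP_WRITE fires, and read the response when OP_READ fires. It is illustrative only and leaves out Kafka's size-delimited receives, staged-receive queues and error handling.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class PollSelectionKeysSketch {
        void pollOnce(Selector selector, ByteBuffer pendingSend, ByteBuffer readBuffer) throws IOException {
            selector.select(1000);
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                SocketChannel channel = (SocketChannel) key.channel();

                if (key.isConnectable() && channel.finishConnect()) {
                    // connection established: drop OP_CONNECT, start listening for OP_READ
                    key.interestOps((key.interestOps() & ~SelectionKey.OP_CONNECT) | SelectionKey.OP_READ);
                }
                if (key.isValid() && key.isWritable() && pendingSend != null) {
                    channel.write(pendingSend);       // send the staged (metadata) request
                    if (!pendingSend.hasRemaining()) {
                        // fully written: count it as a completed send, stop watching OP_WRITE
                        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
                    }
                }
                if (key.isValid() && key.isReadable()) {
                    channel.read(readBuffer);         // stage the response bytes for later handling
                }
            }
        }
    }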
(6) Back in the core poll of the NetworkClient: maybeUpdate at the beginning encapsulated the metadata request, and poll is responsible for the actual sending and receiving.
Handling requests and responses in general will be covered later; here we only care about metadata, so for now just look at:
handleCompletedReceives -> handleResponse -> this.metadata.update(cluster, now).
Note that this is where the cluster information is updated; the most important effect is that the version is incremented by 1. Now we can go back to awaitUpdate in the producer's send path: once it sees the new version, it can actually go on to send messages. Obtaining metadata is itself a request/response round trip, and this path is the same one the actual messages will travel. It is an encapsulation and multi-layer abstraction over NIO in which the network component and the business component are separated and communicate through intermediate queues, which is well worth studying and learning from.
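
As a toy illustration of that separation (hypothetical names, not Kafka's API): the poll step only moves completed network results into a plain queue, and a separate handling step turns a metadata response into an update that bumps the version and wakes the blocked producer.

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class PollAndHandleSketch {
        /** Stand-in for the metadata object; update() bumps the version and notifies waiters. */
        static final class Metadata {
            private int version = 0;
            synchronized void update() { version++; notifyAll(); }
            synchronized int version() { return version; }
        }

        private final Queue<String> completedReceives = new ArrayDeque<>(); // filled by the network layer
        private final Metadata metadata = new Metadata();

        void poll() {
            // 1) network step: pretend the selector just read one metadata response off the wire
            completedReceives.add("METADATA_RESPONSE");
            // 2) business step: handle completed receives, cleanly separated from the I/O above
            for (String response; (response = completedReceives.poll()) != null; ) {
                if (response.equals("METADATA_RESPONSE")) {
                    metadata.update(); // version + 1, wakes the producer blocked in awaitUpdate
                }
            }
        }

        public static void main(String[] args) {
            PollAndHandleSketch client = new PollAndHandleSketch();
            client.poll();
            System.out.println("metadata version = " + client.metadata.version());
        }
    }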