Analysis of Kafka principle (5) – broker startup and request handling


1. Kafka broker startup
(1) A Kafka broker is a single node and hosts multiple partitions; each partition replica on a broker is either a leader or a follower. Startup begins in the Kafka entry class of the core package: KafkaServerStartable starts KafkaServer, which in turn starts a series of components. Almost every component runs on its own encapsulated thread, so the startup style is uniform.

(2) LogManager: the component that operates on log files on disk. On startup it is mainly responsible for checking the log directories and loading the log files. It also provides several functions:

A. Deleting old log segments once they exceed the time threshold or the size threshold.
B. Flushing logs to the physical disk. By default the operating system controls flushing, but a fixed-interval flush can also be configured.
C. Checkpoint recovery: the checkpoint file records the offset of the last flush, so the log can be recovered after an abnormal shutdown.
D. Partition directory deletion: when the broker receives a StopReplica request, it deletes the partition and its corresponding segments.
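Function A above can be sketched as a simple decision over an ordered list of segments. This is a minimal illustration, not Kafka's real API: the Segment class and deletableSegments method are hypothetical names, assuming segments are ordered oldest-first as Kafka's log segments are.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of LogManager-style retention: a segment is deletable when it is
// older than the time threshold, or while the log is still over the size
// threshold. Segment and deletableSegments are illustrative names only.
public class RetentionSketch {
    static class Segment {
        final long sizeBytes;
        final long lastModifiedMs;
        Segment(long sizeBytes, long lastModifiedMs) {
            this.sizeBytes = sizeBytes;
            this.lastModifiedMs = lastModifiedMs;
        }
    }

    // segments must be ordered oldest-first
    static List<Segment> deletableSegments(List<Segment> segments, long nowMs,
                                           long retentionMs, long retentionBytes) {
        List<Segment> toDelete = new ArrayList<>();
        long totalSize = segments.stream().mapToLong(s -> s.sizeBytes).sum();
        for (Segment s : segments) {
            boolean tooOld = nowMs - s.lastModifiedMs > retentionMs;
            boolean overSize = totalSize > retentionBytes;
            if (!tooOld && !overSize) break; // newer segments won't qualify either
            toDelete.add(s);
            totalSize -= s.sizeBytes;        // deleting frees this much space
        }
        return toDelete;
    }
}
```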

(3) SocketServer: the network component. It implements the Reactor pattern and is responsible for receiving and sending requests.

(4) ReplicaManager: the replica component, responsible for replica management, including writing data to replicas, replicas pulling data, and so on.

(5) KafkaController: responsible for coordinating with ZooKeeper, electing leaders and maintaining the ISR, handling broker changes, partition assignment, and so on.

(6) GroupCoordinator: responsible for consumer group management, including consumers joining, heartbeats, leaving, and so on.

(7) KafkaRequestHandlerPool is a thread pool, and KafkaApis is a helper class built in the strategy pattern that can handle the various request types.
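The strategy-pattern dispatch can be sketched as a map from API key to handler. The API key values follow Kafka's protocol (0 = Produce, 1 = Fetch), but the class name ApisSketch and the handler bodies are placeholders, not Kafka's real code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the KafkaApis idea: a single handle() entry point routes each
// request to a per-API-key handler, which is the strategy pattern the text
// describes. Requests and responses are modeled as Strings.
public class ApisSketch {
    private final Map<Short, Function<String, String>> handlers = new HashMap<>();

    public ApisSketch() {
        handlers.put((short) 0, req -> "handleProduce:" + req); // API key 0 = Produce
        handlers.put((short) 1, req -> "handleFetch:" + req);   // API key 1 = Fetch
    }

    public String handle(short apiKey, String request) {
        Function<String, String> h = handlers.get(apiKey);
        if (h == null) throw new IllegalArgumentException("unknown api key: " + apiKey);
        return h.apply(request);
    }
}
```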

2. Kafka broker sending and receiving requests


(1) The core of sending and receiving requests is the SocketServer component, which starts together with KafkaServer. Its core job is to create an Acceptor object and an array of Processors; as the names suggest, this is the Reactor network model.

(2) The Acceptor is a thread that owns multiple Processors (three by default).
The Acceptor registers the accept event on an NIO Selector and keeps polling for new accept events, waiting for connections to be established.

Each accepted connection is configured as: non-blocking; TCP_NODELAY enabled (disabling Nagle's algorithm to reduce latency); and keep-alive enabled.
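These three settings map directly onto java.nio calls. The helper name configureAccepted is illustrative, but the option-setting calls are the standard JDK API for exactly the settings listed above:

```java
import java.io.IOException;
import java.nio.channels.SocketChannel;

// Sketch of the per-connection setup described above, applied to each
// accepted channel. configureAccepted is an illustrative helper name.
public class AcceptConfigSketch {
    static void configureAccepted(SocketChannel channel) throws IOException {
        channel.configureBlocking(false);      // non-blocking I/O
        channel.socket().setTcpNoDelay(true);  // disable Nagle's algorithm: lower latency
        channel.socket().setKeepAlive(true);   // detect dead peers on idle connections
    }
}
```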

The Acceptor then hands the newly accepted channel to one of its Processors, cycling through them in round-robin fashion.
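The round-robin hand-off can be sketched as an index cycling over the Processors' connection queues. Connections are modeled as Strings here for simplicity, and the class name AcceptorSketch is illustrative:

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the Acceptor's hand-off: each new connection goes to the next
// Processor's connection queue, cycling through them in order.
public class AcceptorSketch {
    private final List<ConcurrentLinkedQueue<String>> processorQueues;
    private int current = 0;

    public AcceptorSketch(List<ConcurrentLinkedQueue<String>> processorQueues) {
        this.processorQueues = processorQueues;
    }

    // hand the newly accepted "channel" to the next Processor in the cycle
    public void accept(String connection) {
        processorQueues.get(current).add(connection);
        current = (current + 1) % processorQueues.size();
    }
}
```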
[figure: Acceptor and Processor model]

Looking at the figure above: each Processor has its own Selector and a connection queue. New connections created by the Acceptor are put into the connection queue of one Processor, whose Selector then continuously registers events for the connections in its own queue and processes them. Received requests, and the responses produced after processing, are placed in queues inside the RequestChannel. The run method of the core thread is as follows:
[figure: Processor run method]

Within the run loop:

A. configureNewConnections(): takes new connections off the queue and registers the read event for them.
B. processNewResponses(): as the name suggests, this processes responses after requests have been handled. This jumps ahead a little, because request handling itself is described below; here we deal with sending results back. It takes a response from the RequestChannel and returns it to the requester through sendResponse(). To send, it first obtains the specific channel, then attaches the response to the KafkaChannel's send field via the Selector's send() method and registers the OP_WRITE event; the network send that follows uses the same machinery as the producer client's NetworkClient.
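The key point in step B is that queuing a response does not write it immediately: it only adds OP_WRITE to the key's interest set, and the actual write happens in a later poll when the socket is writable. A minimal sketch, where markPendingSend is an illustrative name for what Kafka's Selector.send() does internally:

```java
import java.nio.channels.SelectionKey;

// Sketch of registering interest in the write event for a pending response.
// The real Kafka Selector also stashes the Send object on the KafkaChannel;
// here we show only the interest-set change.
public class SendSketch {
    static void markPendingSend(SelectionKey key) {
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
    }
}
```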

C. poll(): the Processor's poll calls the Kafka Selector's poll, which works on the same principle as sending in the producer. pollSelectionKeys() processes the connections with ready events, including connect, read, and write events. Intermediate results are stored in queues encapsulated by the Selector itself: for example, completedReceives holds fully received requests, completedSends holds fully sent responses, stagedReceives is a temporary queue, and so on. This was already analyzed for the producer.
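The select-then-dispatch shape of pollSelectionKeys can be sketched with plain java.nio. This is a simplification (it handles only the read event and assumes one read returns a whole message), and the names are illustrative, not Kafka's real Selector code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the poll / pollSelectionKeys idea: select the keys with
// ready events, read from readable channels, and collect each finished read
// into a completedReceives list, mirroring the queue names mentioned above.
public class PollSketch {
    static List<String> pollOnce(Selector selector, long timeoutMs) throws IOException {
        List<String> completedReceives = new ArrayList<>();
        if (selector.select(timeoutMs) == 0) return completedReceives;
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(64);
                ((ReadableByteChannel) key.channel()).read(buf);
                buf.flip();
                byte[] data = new byte[buf.remaining()];
                buf.get(data);
                completedReceives.add(new String(data));
            }
        }
        selector.selectedKeys().clear(); // ready keys must be cleared after handling
        return completedReceives;
    }
}
```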

D. processCompletedReceives(): loops over the Selector's completedReceives mentioned above, where the requests read by the read event are stored, and encapsulates each one into a Request, which is put into the SocketServer's RequestChannel, a queue from which other threads take requests.
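The RequestChannel's role as a hand-off queue can be sketched with a bounded blocking queue. Requests are modeled as Strings, and the class is illustrative; only the sendRequest/receiveRequest pairing echoes Kafka's design:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the RequestChannel idea: a bounded blocking queue decouples the
// network threads (which enqueue parsed requests) from the handler pool
// (which dequeues and processes them).
public class RequestChannelSketch {
    private final BlockingQueue<String> requestQueue;

    public RequestChannelSketch(int queueSize) {
        this.requestQueue = new ArrayBlockingQueue<>(queueSize);
    }

    // called by a Processor after a complete request is received
    public void sendRequest(String request) throws InterruptedException {
        requestQueue.put(request); // blocks when full, giving backpressure
    }

    // called by a request handler thread
    public String receiveRequest() throws InterruptedException {
        return requestQueue.take(); // blocks until a request is available
    }
}
```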

E. processCompletedSends(): handles the cleanup after each response has been sent, and the channel re-registers interest in the read event.

(4) On the right side of the figure above: when KafkaServer starts, it also starts the KafkaRequestHandlerPool, which holds the KafkaApis helper class capable of handling the various request types. The pool is a thread pool whose threads take requests from the RequestChannel and process them with KafkaApis. This is an example of the strategy pattern: the handle() method dispatches to a different method for each request type.
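A handler-pool worker can be sketched as a thread that loops taking requests from the shared channel and handing them to an "apis" step. The handling step here is a placeholder string transformation, not the real KafkaApis dispatch:

```java
import java.util.concurrent.BlockingQueue;

// Sketch of a KafkaRequestHandler-style worker: take the next request from
// the shared queue, process it, and put the response on a response queue.
public class HandlerWorkerSketch implements Runnable {
    private final BlockingQueue<String> requests;
    private final BlockingQueue<String> responses;

    public HandlerWorkerSketch(BlockingQueue<String> requests, BlockingQueue<String> responses) {
        this.requests = requests;
        this.responses = responses;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String req = requests.take();    // blocks until work arrives
                responses.put("handled:" + req); // stand-in for KafkaApis.handle()
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // exit cleanly on shutdown
        }
    }
}
```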

[figure]

Take the produce request type as an example. After calling replicaManager.appendMessages() to process the messages sent by the producer, a callback puts the response back into the RequestChannel. The core call is:
requestChannel.sendResponse(new RequestChannel.Response(request, new ResponseSend(request.connectionId, respHeader, respBody)))
The network component can then send the response, completing the whole cycle.