(2) Kafka’s cluster architecture principle

Time: 2021-5-3

The principles are very important. In an interview, no one will ask you how to call the API; what gets asked are the principles. And if you understand the principles, you can quickly locate the cause when a Kafka problem occurs in production, instead of staring at it blankly. We have to understand the principles; without them, we are just moving bricks.

Topic

Create a topic with three partitions, and the three partitions are stored on different servers. Note that a topic is a logical concept.


Partition & partition replicas

A Kafka topic can be divided into one or more partitions, and partitions are physical concepts. If the replication factor of a topic is set to 3, then each partition will have 3 identical replicas. As shown in the figure below, partitions 0, 1 and 2 of the topic each have three replicas, which are stored on brokers 0, 1 and 2.

[Figure: partitions 0, 1 and 2 of the topic, each replicated three times across brokers 0, 1 and 2]
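
As a minimal sketch of how such a topic could be created programmatically with Kafka's AdminClient (the topic name and broker address below are placeholders, not from the original article):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker0:9092"); // placeholder address
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3: each partition gets 3 replicas
            NewTopic topic = new NewTopic("my-topic", 3, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}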

Segmented log storage


Messages produced by producers are always appended to the end of the log file. To prevent the log file from growing too large and making it inefficient to locate data, Kafka adopts a segmentation and indexing mechanism.

It divides each partition into multiple segments, and each segment corresponds to two files: an index file and a log data file.
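
To make the lookup path concrete, here is a minimal sketch (not Kafka's real code) of how a segmented log narrows an offset down to one segment; the file names follow Kafka's convention of naming each segment after its base offset:

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class SegmentLookupSketch {
    public static void main(String[] args) {
        // Base offset -> log file name; Kafka keeps an .index/.log pair per segment.
        NavigableMap<Long, String> segments = new TreeMap<>();
        segments.put(0L, "00000000000000000000.log");
        segments.put(500L, "00000000000000000500.log");
        segments.put(1000L, "00000000000000001000.log");

        long target = 742;
        // Step 1: pick the segment whose base offset is the largest one <= target,
        // which mirrors the binary search over segment file names.
        Map.Entry<Long, String> segment = segments.floorEntry(target);
        System.out.println("offset " + target + " lives in " + segment.getValue());
        // Step 2 (not shown): binary-search the segment's sparse .index file for
        // the largest indexed offset <= target, then scan the .log file from the
        // recorded file position to the exact message.
    }
}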

Leader & Follower

Each replica also takes on a role: the replicas elect one of themselves as the leader, and the rest become followers. When the producer sends data, it sends it directly to the leader partition, and the follower partitions then go to the leader to synchronize the data. When the consumer consumes data, it also consumes from the leader. (In the figure below, the leader of topic-partition-0 is on broker 0; likewise, every other topic-partition-n has its own leader.)

[Figure: the leader of topic-partition-0 on broker 0, with followers on the other brokers]
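
A minimal producer sketch follows; the client resolves which broker hosts the leader of the target partition from cluster metadata and sends there (broker address and topic name are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker0:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait until in-sync followers have replicated the record
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The record goes straight to the leader; followers fetch it afterwards.
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}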

Consumer & Consumer group

A consumer group is composed of one or more consumer instances, which makes it easy to scale out and to tolerate failures. A partition cannot be consumed by more than one consumer in the same consumer group, but one consumer can consume data from multiple partitions.
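
A minimal consumer sketch under these rules might look as follows; every instance started with the same group.id joins the same group and is assigned a disjoint set of partitions (addresses and names are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerGroupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker0:9092"); // placeholder address
        props.put("group.id", "my-group"); // consumers sharing this id form one group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}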


Network design of Kafka


  1. The client sends a request to the Acceptor. The Acceptor does not do any processing of the client's request; it wraps the connection into a SocketChannel and hands it to the broker's processor threads (3 by default). The distribution is done by polling (round robin): the first connection goes to the first processor, the next to the second, then the third, and so on.
  2. The processor threads consume these SocketChannels: they read the data from each channel, assemble it into Request objects, and put them onto a shared request queue.
  3. A pool of request handler threads (8 by default) takes requests from the queue, parses and processes them, and puts the results onto response queues.
  4. The processor reads the response data from its response queue and returns it to the client.

So if we need to tune Kafka for more throughput, we can increase the number of processor threads and the number of handler threads in the thread pool to achieve the effect. The request and response queues actually serve as buffers, absorbing the case where the processors produce requests faster than the handler threads can process them in time.
So this is an enhanced Reactor network threading model; a minimal sketch of it follows.
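
The following is a toy sketch of that threading model, assuming simplified Request/Response types and plain BlockingQueues in place of Kafka's real RequestChannel; it only illustrates the acceptor -> processors -> shared request queue -> handler pool -> per-processor response queue flow:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ReactorModelSketch {
    record Request(int processorId, String payload) {}
    record Response(int processorId, String payload) {}

    static final int NUM_PROCESSORS = 3; // like num.network.threads (default 3)
    static final int NUM_HANDLERS = 8;   // like num.io.threads (default 8)

    // Shared request queue plus one response queue per processor thread.
    static final BlockingQueue<Request> requestQueue = new ArrayBlockingQueue<>(500);
    static final List<BlockingQueue<Response>> responseQueues = new ArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < NUM_PROCESSORS; i++) {
            responseQueues.add(new ArrayBlockingQueue<>(500));
        }
        // Handler pool: takes parsed requests, processes them, queues responses.
        for (int i = 0; i < NUM_HANDLERS; i++) {
            new Thread(() -> {
                try {
                    while (true) {
                        Request req = requestQueue.take();
                        responseQueues.get(req.processorId())
                                .put(new Response(req.processorId(), "echo:" + req.payload()));
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }, "handler-" + i).start();
        }
        // Processors: in real Kafka each owns SocketChannels and a selector and
        // writes responses back to its own clients; here they just print them.
        for (int p = 0; p < NUM_PROCESSORS; p++) {
            final int id = p;
            new Thread(() -> {
                try {
                    while (true) {
                        Response resp = responseQueues.get(id).take();
                        System.out.println("processor-" + id + " writes " + resp.payload());
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }, "processor-" + p).start();
        }
        // Acceptor side: connections are distributed round robin; for brevity we
        // skip the channel-reading step and enqueue ready-made requests directly.
        for (int i = 0; i < 6; i++) {
            requestQueue.put(new Request(i % NUM_PROCESSORS, "request-" + i));
        }
    }
}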

Kafka zero copy

Traditional IO:


//Read the file, then send it out over the socket
buffer = File.read(file)
Socket.send(socket, buffer)

1. First copy: read the disk file into the operating system's kernel buffer;
2. Second copy: copy the data from the kernel buffer into the application's buffer;
3. Third copy: copy the data from the application buffer into the socket send buffer (which belongs to the operating system kernel);
4. Fourth copy: copy the data from the socket buffer to the network card, and the network card transmits the data over the network.

In this traditional way, reading a disk file and sending it over the network takes four copies, which is very cumbersome. The actual IO reads and writes also require IO interrupts, and the CPU has to respond to each interrupt (bringing context switches). Even though DMA was later introduced to take over the interrupt requests from the CPU, several of the four copies are still "unnecessary copies".

Zero copy:


With zero copy, Kafka asks the kernel to copy the data directly from the disk file to the socket, without going through the application at all. Zero copy not only greatly improves the application's performance, but also reduces the context switching between kernel mode and user mode.
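
In Java this corresponds to FileChannel.transferTo, which on Linux maps to the sendfile system call and is the mechanism Kafka relies on; a minimal sketch (the file name, host and port are placeholders):

import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class ZeroCopyExample {
    public static void main(String[] args) throws Exception {
        try (FileChannel file = new FileInputStream("segment.log").getChannel();
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9092))) {
            long position = 0;
            long remaining = file.size();
            while (remaining > 0) {
                // The kernel moves the bytes from the page cache to the socket
                // directly; the data never enters a user-space buffer.
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}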

The role of ZooKeeper in a Kafka cluster

1. Broker registration

Brokers are distributed and independent of each other, but a registration system is needed to manage the brokers in the whole cluster, and ZooKeeper is used for this. On ZooKeeper there is a dedicated node for recording the broker server list: /brokers/ids

When each broker starts, it registers itself with ZooKeeper, that is, it creates its own node under /brokers/ids, such as /brokers/ids/[0…N].

Kafka uses a globally unique numeric ID to refer to each broker server. After creating its node, each broker records its own IP address and port information on that node. Note that the node a broker creates is an ephemeral node: once the broker goes down, the corresponding ephemeral node is automatically deleted.
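
A minimal sketch of this kind of ephemeral registration with the plain ZooKeeper client (real brokers write a richer JSON payload; the ID, host and port here are placeholders):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class BrokerRegistrationSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> {});
        byte[] info = "{\"host\":\"broker0\",\"port\":9092}".getBytes(); // simplified payload
        // EPHEMERAL: the node disappears automatically if the broker's session dies.
        zk.create("/brokers/ids/0", info, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }
}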

2. Topic registration

In Kafka, the correspondence between a topic's message partitions and brokers is also maintained by ZooKeeper and recorded by dedicated nodes, such as /brokers/topics

Each topic in Kafka is recorded in the form /brokers/topics/[topic], such as /brokers/topics/login and /brokers/topics/search. After a broker server starts, it registers its own broker ID on the corresponding topic nodes (/brokers/topics) and writes the number of partitions of that topic it holds, such as /brokers/topics/login/3 -> 2, which means that the broker with ID 3 provides 2 partitions for the topic "login" to store messages. Likewise, these partition nodes are also ephemeral nodes.
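
A minimal sketch that reads this registry, listing the topics registered under /brokers/topics (the ZooKeeper address is a placeholder):

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class TopicRegistrySketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> {});
        // Each child of /brokers/topics is one registered topic.
        List<String> topics = zk.getChildren("/brokers/topics", false);
        System.out.println("registered topics: " + topics); // e.g. [login, search]
    }
}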

3. Consumer registration

① Registering a node under the consumer group. When each consumer server starts, it creates its own consumer node under the designated ZooKeeper node, such as /consumers/[group_id]/ids/[consumer_id]. After the node is created, the consumer writes the topics it subscribes to into this ephemeral node.

② Registering a watch on changes of the consumers within the group. Every consumer needs to pay attention to changes among the other consumer servers in its consumer group, that is, it registers a watcher on the /consumers/[group_id]/ids node to monitor changes of its child nodes; once it finds that a consumer has been added or removed, it triggers consumer load balancing (rebalancing).
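
A minimal sketch of such a child watch (the group name and address are placeholders; a real client would re-register the watch and run a rebalance when it fires):

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class GroupMembershipWatchSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> {});
        // Watch the group's ids node; the watcher fires once when a child is
        // added or removed, i.e. when a consumer joins or leaves the group.
        List<String> members = zk.getChildren("/consumers/my-group/ids", event ->
                System.out.println("membership changed, rebalance here: " + event.getPath()));
        System.out.println("current consumers: " + members);
    }
}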

4. The relationship between partitions and consumers

Kafka stipulates that each message partition can only be consumed by one consumer within the same group, so the relationship between message partitions and consumers needs to be recorded in ZooKeeper. Once a consumer has obtained the right to consume a message partition, it needs to write its consumer ID to the ephemeral node of that message partition in ZooKeeper:

/consumers/[group_id]/owners/[topic]/[broker_id-partition_id]

Here, [broker_id-partition_id] is the identifier of a message partition, and the content of the node is the consumer ID of the consumer consuming that message partition.
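
A minimal sketch of claiming such an owner node (all identifiers are placeholders); because the create is ephemeral and fails if the node already exists, it naturally enforces the one-consumer-per-partition rule:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class PartitionOwnershipSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> {});
        try {
            // Node content is this consumer's ID; the node vanishes if we die.
            zk.create("/consumers/my-group/owners/my-topic/0-1", "consumer-1".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            System.out.println("claimed partition 0-1");
        } catch (KeeperException.NodeExistsException e) {
            System.out.println("partition already owned by another consumer in the group");
        }
    }
}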

5. Recording the message consumption progress (offset)

While consuming the message partitions assigned to it, a consumer needs to periodically record the consumption progress (offset) of each partition to ZooKeeper, so that after the consumer restarts, or another consumer takes over message consumption for that partition, it can continue consuming from the previous progress. The offset is recorded by a dedicated node in ZooKeeper, with the following node path:

/consumers/[group_id]/offsets/[topic]/[broker_id-partition_id]

The content of the node is the offset value.
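
A minimal sketch of this ZooKeeper-based offset commit (identifiers are placeholders; note that modern Kafka versions store offsets in the internal __consumer_offsets topic instead):

import org.apache.zookeeper.ZooKeeper;

public class OffsetCommitSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> {});
        // The node content is simply the offset value, stored as text; this
        // assumes the offset node has already been created.
        byte[] offset = "742".getBytes();
        zk.setData("/consumers/my-group/offsets/my-topic/0-0", offset, -1); // -1 skips the version check
    }
}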

6. Producer load balancing

Since the messages of the same topic are partitioned and distributed across multiple brokers, producers need to send messages to these distributed brokers in a reasonable way. Kafka supports both traditional layer-4 load balancing and ZooKeeper-based load balancing.

(1) Layer-4 load balancing: usually a producer corresponds to a single broker, and all the messages it produces are sent to that broker. The logic of this approach is simple: each producer does not need to establish additional TCP connections to other systems and only has to maintain a single TCP connection to its broker. However, it cannot achieve real load balancing, because in a real system the amount of messages each producer generates and the amount of messages each broker stores are not the same. If some producers produce far more messages than others, the total number of messages received by different brokers will differ greatly; at the same time, producers cannot sense the addition and removal of brokers in real time.

(2) ZooKeeper-based load balancing: every broker completes the registration process when it starts, and producers dynamically perceive changes to the broker server list through the changes of these nodes; in this way, a dynamic load balancing mechanism can be realized.

7. Consumer load balancing

Similar to producers, consumers in Kafka also need load balancing, so that multiple consumers can receive messages from their corresponding broker servers in a reasonable way.