Kafka (1): workflow, storage mechanism, partition strategy

Time: 2021-4-7

1、 Preface

Before we start, we should be clear that Kafka is a distributed streaming platform that is, at its core, a message queue. When we talk about message queues, we think of their three main functions: asynchronous processing, peak shaving, and decoupling. Kafka is mainly used for real-time processing of big data and is relatively simple to use. This article analyzes Kafka's workflow, storage mechanism, and partition strategy, and summarizes them from multiple perspectives.

However, it should be noted that Kafka is not the only option in 2020. Pulsar, a natively multi-tenant, cross-region-replicating, unified messaging platform, has already replaced Kafka in many enterprises. If you are interested in Apache Pulsar, you can follow me; I will summarize and dig into it in a later article.

2、 Kafka workflow

[Figure: Kafka workflow overview]

  1. Kafka sends messages to a topic. Each message consists of three attributes:

    • Offset: the offset of the message within its partition. It is a logical value that uniquely identifies a message in the partition and can simply be thought of as an ID;
    • Message size: the size of the message content;
    • Data: the message body itself.
  2. In the overall Kafka architecture, producers and consumers follow a publish-subscribe model: producers produce messages and consumers consume them; each performs its own duty and both are topic oriented. Note: a topic is a logical concept, while a partition is a physical concept. Each partition corresponds to a log file in which the data produced by producers is stored.
  3. The data produced by producers is continuously appended to the end of the log file, and each record has its own offset.
  4. Each consumer in a consumer group records in real time which offset it has consumed, so that after a failure and recovery it can resume from that offset, avoiding missed or duplicated consumption (a minimal producer/consumer sketch follows this list).
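
To make the workflow concrete, here is a minimal, hedged sketch using the Kafka Java client. The broker address `localhost:9092` and the group id `demo-group` are assumptions for illustration; the topic name `csdn` is borrowed from the storage example later in this article.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class WorkflowSketch {
    public static void main(String[] args) {
        // Producer: appends messages to the topic; the broker assigns each record an offset.
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("csdn", "key-1", "hello kafka"));
        }

        // Consumer: subscribes to the topic and tracks the offsets it has consumed.
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "demo-group"); // assumed consumer group name
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList("csdn"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                // Each record carries the partition, offset and message body described above.
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```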

3、 File storage mechanism

3.1 File storage structure and naming rules

When Kafka was designed, it took into account that the log file would grow very large as producers keep appending messages to its end, so Kafka adopts sharding and indexing. Specifically, each partition is divided into multiple segments, and each segment corresponds to three files: a .index file, a .log file, and a .timeindex file (not present in earlier versions). The .log and .index files sit in a folder whose name follows the rule topic name + partition number. For example, if the topic csdn has two partitions, the corresponding folders are csdn-0 and csdn-1.

If we open the csdn-0 folder, we will see the following files:

```
00000000000000000000.index
00000000000000000000.log
00000000000000150320.index
00000000000000150320.log
```


From the two .log files in this folder, we can conclude that this partition currently has two segments.

File naming rule: the first segment of a partition starts from 0, and each subsequent segment file is named after its base offset, i.e., the offset immediately following the last message of the previous segment file. The value is a 64-bit long, formatted as 20 digits and left-padded with zeros.
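
As a small illustration of the naming rule (the offset value 150320 is simply taken from the listing above), the 20-digit, zero-padded file names can be reproduced like this:

```java
public class SegmentNameSketch {
    public static void main(String[] args) {
        // Format a base offset as a 20-digit, zero-padded segment file name.
        long baseOffset = 150320L;
        System.out.println(String.format("%020d.log", baseOffset));   // 00000000000000150320.log
        System.out.println(String.format("%020d.index", baseOffset)); // 00000000000000150320.index
    }
}
```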

Note: the entries in the index file do not cover every offset, nor do they increase by 1 each time. This is because Kafka uses sparse index storage: an index entry is created only after a certain number of bytes of data has been written. This keeps the index file small enough to be mapped into memory and reduces disk I/O during queries, without adding much time to lookups.
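
Below is a minimal sketch of this sparse-indexing idea, not Kafka's actual implementation: an index entry (offset → physical position) is recorded only after roughly a configurable number of log bytes, mirroring Kafka's `index.interval.bytes` setting, has been appended since the previous entry. The in-memory TreeMap stands in for the real .index file.

```java
import java.util.TreeMap;

// Sketch of sparse index maintenance: only some offsets get an index entry.
public class SparseIndexSketch {
    private final TreeMap<Long, Long> offsetToPosition = new TreeMap<>(); // offset -> byte position in .log
    private final int indexIntervalBytes; // assumed to mirror the index.interval.bytes setting
    private long bytesSinceLastEntry = 0;

    public SparseIndexSketch(int indexIntervalBytes) {
        this.indexIntervalBytes = indexIntervalBytes;
    }

    // Called after each message is appended to the .log file.
    public void maybeAppendEntry(long offset, long physicalPosition, int messageSizeBytes) {
        bytesSinceLastEntry += messageSizeBytes;
        if (bytesSinceLastEntry >= indexIntervalBytes) {
            offsetToPosition.put(offset, physicalPosition); // sparse: not every offset is indexed
            bytesSinceLastEntry = 0;
        }
    }
}
```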

Here is an older Kafka storage mechanism diagram that does not yet include the .timeindex file:
[Figure: older Kafka segment storage layout (.index and .log files only)]

3.2 Relationship between the .index and .log files

Relationship between the index file and the log file: the .index file stores index entries, the .log file stores the actual data, and the metadata in the index file points to the physical position of the corresponding message in the data file.
[Figure: how .index entries map to messages in the .log file]

3.3 Using an offset to find a message

Because each segment file is named after its base offset (the offset that follows the last message of the previous segment), when you need to find the message with a given offset, you can binary-search the segment file names to find the segment it belongs to, then look up its physical position in that segment's index file and read the message from the .log file.

For example, let's take finding the message with offset 6; the lookup proceeds as follows:

  1. First, determine which segment file contains this offset (because segments are written sequentially, their file names are ordered and a binary search can be used). The first file is named 00000000000000000000 and the second is named 00000000000000150320, so the message with offset 6 must be in the first file;
  2. After locating the file, find the index entry [6, 9807] in 00000000000000000000.index, which says that the message with offset 6 sits at physical position 9807; then seek to position 9807 in 00000000000000000000.log and read the data (sketched in the code after this list).
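
Here is a minimal sketch of this two-step lookup, under the simplifying assumption that each segment's sparse index is held in an in-memory sorted map (as noted above, real Kafka maps the .index files into memory); the TreeMap floor lookups play the role of the binary searches described above.

```java
import java.util.AbstractMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch: find the segment and the starting scan position for a target offset.
public class OffsetLookupSketch {
    // segment base offset (the number in the file name) -> that segment's sparse index
    private final TreeMap<Long, TreeMap<Long, Long>> segments = new TreeMap<>();

    public void addSegment(long baseOffset, TreeMap<Long, Long> sparseIndex) {
        segments.put(baseOffset, sparseIndex);
    }

    // Returns (segment base offset, byte position in that segment's .log to start reading from).
    public Map.Entry<Long, Long> locate(long targetOffset) {
        // Step 1: pick the segment whose base offset is the largest one <= targetOffset.
        Map.Entry<Long, TreeMap<Long, Long>> segment = segments.floorEntry(targetOffset);
        if (segment == null) throw new IllegalArgumentException("offset below first segment");

        // Step 2: in that segment's sparse index, find the largest indexed offset <= targetOffset;
        // if there is none, scanning starts from the beginning of the .log file (position 0).
        Map.Entry<Long, Long> indexEntry = segment.getValue().floorEntry(targetOffset);
        long startPosition = (indexEntry == null) ? 0L : indexEntry.getValue();
        return new AbstractMap.SimpleEntry<>(segment.getKey(), startPosition);
    }

    public static void main(String[] args) {
        OffsetLookupSketch lookup = new OffsetLookupSketch();
        TreeMap<Long, Long> firstIndex = new TreeMap<>();
        firstIndex.put(6L, 9807L);                   // the [6, 9807] entry from the example above
        lookup.addSegment(0L, firstIndex);           // 00000000000000000000.*
        lookup.addSegment(150320L, new TreeMap<>()); // 00000000000000150320.*
        System.out.println(lookup.locate(6L));       // 0=9807 -> read segment 0's .log from 9807
    }
}
```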

4、 Partition strategy

4.1 Why partition

Before looking at the partitioning strategy, we need to understand why partitions are needed at all. This can be explained from two aspects:

  1. It makes the cluster easy to scale: each partition can be sized to fit the machine that hosts it, and a topic can be composed of multiple partitions, so the cluster as a whole can accommodate data of any size;
  2. It improves concurrency: after partitioning, reads and writes happen at the partition level (see the sketch after this list for creating a multi-partition topic).
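
For illustration, here is a hedged sketch that creates the two-partition topic used earlier with the Kafka Java AdminClient; the broker address and the replication factor of 1 are assumptions, not values from the article.

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Topic "csdn" with 2 partitions (-> folders csdn-0 and csdn-1) and replication factor 1.
            NewTopic topic = new NewTopic("csdn", 2, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```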

4.2 Partition strategy

First of all, you need to know that the data sent by a producer is encapsulated into a ProducerRecord object. Let's look at the constructors that ProducerRecord provides:

[Figure: ProducerRecord constructor overloads]
From these constructors, we can see that Kafka has three partitioning strategies:

  1. When a partition is specified, the specified value is used directly as the partition;
  2. When no partition is specified but a key is present, the partition is obtained from the hash of the key modulo the number of partitions of the topic;
  3. When neither a partition nor a key is given, a random integer is generated on the first call (and incremented on each subsequent call), and the partition is obtained by taking this value modulo the number of available partitions of the topic; this is commonly known as the round-robin algorithm (the constructor sketch after this list illustrates all three cases).
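
As a hedged illustration of the three cases, here are the corresponding ProducerRecord constructor calls, plus a simplified partition computation. Note the keyed case uses String.hashCode purely for illustration, whereas Kafka's own default partitioner hashes the serialized key with murmur2; the counter-based round robin is likewise a simplification of the behaviour described above.

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.clients.producer.ProducerRecord;

public class PartitionStrategySketch {
    private static final AtomicInteger counter = new AtomicInteger(0); // for the round-robin case

    public static void main(String[] args) {
        // 1. Partition specified explicitly: partition 0 is used as-is.
        ProducerRecord<String, String> r1 = new ProducerRecord<>("csdn", 0, "key-1", "value-1");

        // 2. No partition, but a key: the partition is derived from the key's hash.
        ProducerRecord<String, String> r2 = new ProducerRecord<>("csdn", "key-1", "value-2");

        // 3. Neither partition nor key: round robin over the available partitions.
        ProducerRecord<String, String> r3 = new ProducerRecord<>("csdn", "value-3");

        int numPartitions = 2; // the csdn topic in this article has two partitions
        System.out.println("case 2 -> partition " + keyedPartition("key-1", numPartitions));
        System.out.println("case 3 -> partition " + roundRobinPartition(numPartitions));
    }

    // Simplified keyed partitioning (illustration only; Kafka uses murmur2 on the serialized key).
    static int keyedPartition(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    // Simplified round robin: an incrementing counter modulo the number of partitions.
    static int roundRobinPartition(int numPartitions) {
        return (counter.getAndIncrement() & 0x7fffffff) % numPartitions;
    }
}
```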