Kafka operation of spring boot series

Time: 2020-1-14

Kafka introduction

Apache Kafka® is a distributed streaming platform. It has three key capabilities:

Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
Store streams of records in a fault-tolerant, durable way.
Process streams of records as they occur.

Kafka is commonly used in two broad categories of applications:

Building real-time streaming data pipelines that reliably move data between systems or applications
Building real-time streaming applications that transform or react to streams of data

Kafka concept

(1) What is streaming?

My understanding of so-called stream processing is pipeline (assembly-line) processing. For example, in an electronics factory each worker on the line is responsible for one step: an item is processed as soon as it arrives, and the worker waits when nothing comes.

(2) How are partitions and replication related to brokers?

Partitions split a topic's data, and replication keeps backup copies of each partition. Both are supported even on a single broker, although replicas only add fault tolerance when they are spread across multiple brokers.

(3) How does a consumer set and store the offset of a partition? What consumption modes are there? How can you tell whether a message has been consumed? If the offset is moved back, will the messages be re-consumed immediately?

Use KafkaConsumer to set the partition and offset. Offsets are committed either automatically or manually (ack mode); these are the two consumption modes. To re-consume from an earlier offset, seek back to it (or start a new consumer group with the offset reset to the beginning), and the records will be consumed again immediately.
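
One way to re-read a topic from the beginning, as described above, is to start a fresh consumer group with an "earliest" offset reset. A minimal application.properties sketch (the group name is a hypothetical example):

```properties
# Spring Boot Kafka consumer settings; the group id is illustrative
spring.kafka.consumer.group-id=my-new-group
# A brand-new group has no committed offsets, so start from the earliest record
spring.kafka.consumer.auto-offset-reset=earliest
```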

(4) What AckMode settings are available?

RECORD: commit the offset when the listener returns after processing the record.
BATCH: commit the offset when all the records returned by poll() have been processed.
TIME: commit the offset when all the records returned by poll() have been processed, as long as the ackTime since the last commit has been exceeded.
COUNT: commit the offset when all the records returned by poll() have been processed, as long as ackCount records have been received since the last commit.
COUNT_TIME: similar to TIME and COUNT, but the commit is performed if either condition is true.
MANUAL: the message listener is responsible for calling acknowledge(); after that, the same semantics as BATCH apply.
MANUAL_IMMEDIATE: the offset is committed immediately when the listener calls Acknowledgment.acknowledge().
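
In Spring Boot, the ack mode above can be selected with a single configuration property, for example:

```properties
# Container AckMode: record, batch, time, count, count_time, manual, or manual_immediate
spring.kafka.listener.ack-mode=manual_immediate
```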

Spring boot uses Kafka

(1) Injecting a NewTopic bean automatically creates the topic on the broker

@Bean
public NewTopic topic() {
    // name, number of partitions, replication factor
    return new NewTopic("topic1", 2, (short) 1);
}

(2) When KafkaTemplate sends to a topic that does not exist, the topic is created automatically (if the broker allows it); an auto-created topic gets the broker defaults, typically a single partition (partition 0) with replication factor 1.
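
Auto-creation and the defaults applied to auto-created topics are controlled on the broker side in server.properties:

```properties
# Broker-side settings that govern auto-created topics
auto.create.topics.enable=true
num.partitions=1
default.replication.factor=1
```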

(3) Sending messages with KafkaTemplate

@Autowired
private KafkaTemplate<String, String> kafkaTemplate;

@RequestMapping("sendMsgWithTopic")
public String sendMsgWithTopic(@RequestParam String topic, @RequestParam int partition, @RequestParam String key,
                               @RequestParam String value) {
    ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, partition, key, value);
    return "success";
}

(4) Sending messages asynchronously

public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<String, String> record = createRecord(data);
    ListenableFuture<SendResult<String, String>> future = template.send(record);
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                handleSuccess(data);
            }
            @Override
            public void onFailure(Throwable ex) {
                handleFailure(data, record, ex);
            }
    });
}

(5) Sending messages synchronously

public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<String, String> record = createRecord(data);
    try {
        template.send(record).get(10, TimeUnit.SECONDS);
        handleSuccess(data);
    } catch (ExecutionException e) {
        handleFailure(data, record, e.getCause());
    } catch (TimeoutException | InterruptedException e) {
        handleFailure(data, record, e);
    }
}
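
The asynchronous and synchronous patterns above are two views of the same future: register a callback, or block with a timeout. A minimal pure-JDK sketch of the pattern, with no broker involved (the class and method names are illustrative, not part of any Kafka API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class SendPattern {

    // Simulates a send and returns the result, mirroring send(record).get(10, TimeUnit.SECONDS)
    public static String syncSend() {
        // Simulated "send": completes on another thread, standing in for KafkaTemplate.send()
        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> "ok");

        // Asynchronous style: register a callback instead of blocking
        future.whenComplete((result, ex) -> {
            if (ex == null) {
                System.out.println("async success: " + result);
            } else {
                System.out.println("async failure: " + ex.getMessage());
            }
        });

        // Synchronous style: block with a timeout
        try {
            return future.get(10, TimeUnit.SECONDS);
        } catch (Exception e) {
            return "error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println("sync result: " + syncSend());
    }
}
```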

(6) Transactions

(1) Use together with Spring transaction support (@Transactional, TransactionTemplate, and so on)
(2) Use the template to execute a local transaction
    boolean result = template.executeInTransaction(t -> {
        t.sendDefault("thing1", "thing2");
        t.sendDefault("cat", "hat");
        return true;
    });
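
Transactions require a transactional producer; in Spring Boot this is typically enabled by setting a transaction id prefix (the prefix value here is illustrative):

```properties
# Setting this makes KafkaTemplate transactional
spring.kafka.producer.transaction-id-prefix=tx-
```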

(7) Consumer

(1) Basic usage
 @KafkaListener(id = "myListener", topics = "myTopic",
    autoStartup = "${listen.auto.start:true}", concurrency = "${listen.concurrency:3}")
 public void listen(String data) {
    ...
 }
(2) Configuring multiple topics and partitions. Within a single @TopicPartition, partitions and partitionOffsets must not list the same partition.
 @KafkaListener(id = "thing2", topicPartitions =
    { @TopicPartition(topic = "topic1", partitions = { "0", "1" }),
      @TopicPartition(topic = "topic2", partitions = "0",
         partitionOffsets = @PartitionOffset(partition = "1", initialOffset = "100"))
    })
 public void listen(ConsumerRecord<?, ?> record) {
    ...
 }
(3) Manual acknowledgment (ack) mode
 @KafkaListener(id = "cat", topics = "myTopic",
      containerFactory = "kafkaManualAckListenerContainerFactory")
 public void listen(String data, Acknowledgment ack) {
    ...
    ack.acknowledge();
 }
(4) Getting the message headers
 @KafkaListener(id = "qux", topicPattern = "myTopic1")
 public void listen(@Payload String foo,
        @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) Integer key,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts
    ) {
    ...
 }
(5) Batch processing
 @KafkaListener(id = "list", topics = "myTopic", containerFactory = "batchFactory")
 public void listen(List<String> list,
    @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) List<Integer> keys,
    @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
    @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
    @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
    ...
 }
(6) Using @Valid to validate the payload
 @KafkaListener(id="validated", topics = "annotated35", errorHandler = "validationErrorHandler",
   containerFactory = "kafkaJsonListenerContainerFactory")
 public void validatedListener(@Payload @Valid ValidatedClass val) {
    ...
 }
 @Bean
 public KafkaListenerErrorHandler validationErrorHandler() {
    return (m, e) -> {
        ...
    };
 }
(7) Dispatching to different methods by payload type with @KafkaHandler
 @KafkaListener(id = "multi", topics = "myTopic")
 static class MultiListenerBean {
    @KafkaHandler
    public void listen(String cat) {
        ...
    }
    @KafkaHandler
    public void listen(Integer hat) {
        ...
    }
    @KafkaHandler
    public void delete(@Payload(required = false) KafkaNull nul, @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) int key) {
        ...
    }
 }

Pitfalls of using Kafka with Spring Boot

(1) You need to set the listener host address in the broker's server.properties, or the Java client will not receive messages.
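
For example, in server.properties (the host and port are illustrative; advertised.listeners must be an address the client can actually reach):

```properties
# Address the broker binds to, and the address it advertises to clients
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.1.100:9092
```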

(2) If different services are configured with the same groupId, only one listener receives each message, because consumers in the same group share the topic's partitions.
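
To have every service receive all messages, give each service its own group id, for example in each service's application.properties (the group names are illustrative):

```properties
# Service A; service B would use a different value, e.g. service-b-group
spring.kafka.consumer.group-id=service-a-group
```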

Kafka tool

Download address: http://www.kafkatool.com/down

Please leave a message if you have any questions!