Reactive Spring Practice: Reactive Kafka Interaction

Time: 2021-10-16

This article describes how to deploy a Kafka cluster in KRaft mode and how to implement reactive Kafka interaction in Spring.

KRaft

We know that Kafka uses ZooKeeper to store metadata such as broker and consumer group information, and relies on ZooKeeper to elect the controller among the brokers.
Although ZooKeeper simplifies Kafka's internal work, it also makes Kafka deployment, operation, and maintenance more complex.

Starting with Kafka 2.8.0, ZooKeeper can be removed and replaced by Kafka's internal quorum controller, officially called "Kafka Raft metadata mode", i.e. KRaft mode. From then on, users can deploy a Kafka cluster without ZooKeeper, which makes Kafka simpler and more lightweight.
With KRaft mode, users only need to maintain the Kafka cluster itself.

Note: because this feature involves major changes, the KRaft mode shipped with Kafka 2.8 is a preview version and is not recommended for production use. A production-ready KRaft version should arrive in a subsequent Kafka release.

The following describes how to deploy a Kafka cluster using KRaft.
Here, three Kafka nodes are deployed on three machines, using Kafka version 2.8.0.

1. Generate a cluster ID and the configuration files.
(1) Use kafka-storage.sh to generate a cluster ID.

$ ./bin/kafka-storage.sh random-uuid
dPqzXBF9R62RFACGSg5c-Q

(2) Format the storage directories using the cluster ID.

$ ./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties
Formatting /tmp/kraft-combined-logs

Note: generate the cluster ID only once and use it when formatting on every machine; that is, all nodes in the cluster must use the same cluster ID.

2. Modify the configuration file
The generated configuration applies to a single Kafka node. To deploy a Kafka cluster, you need to modify the configuration file.

(1) Modify config/kraft/server.properties (this configuration is used to start Kafka later)

process.roles=broker,controller 
node.id=1
listeners=PLAINTEXT://172.17.0.2:9092,CONTROLLER://172.17.0.2:9093
advertised.listeners=PLAINTEXT://172.17.0.2:9092
controller.quorum.voters=1@172.17.0.2:9093,2@172.17.0.3:9093,3@172.17.0.4:9093

process.roles specifies the roles of the node and can take the following values:

  • broker: the node acts only as a broker
  • controller: the node acts as a controller node of the Raft quorum
  • broker,controller: the node takes on both roles above

node.id must be different for every node in the cluster.
controller.quorum.voters must list all controller nodes in the cluster, in the format <nodeId>@<IP>:<port>.

(2) The configuration generated by the kafka-storage.sh script stores Kafka data under /tmp/kraft-combined-logs/ by default.
The node.id in /tmp/kraft-combined-logs/meta.properties must also be kept consistent with the node.id in server.properties:

node.id=1

3. Start Kafka
Start the Kafka node using the kafka-server-start.sh script

$ ./bin/kafka-server-start.sh ./config/kraft/server.properties

Let's test the Kafka cluster.
1. Create a topic

$ ./bin/kafka-topics.sh --create --partitions 3 --replication-factor 3 --bootstrap-server 172.17.0.2:9092,172.17.0.3:9092,172.17.0.4:9092 --topic topic1 

2. Produce messages

$ ./bin/kafka-console-producer.sh --broker-list 172.17.0.2:9092,172.17.0.3:9092,172.17.0.4:9092 --topic topic1

3. Consume messages

$ ./bin/kafka-console-consumer.sh --bootstrap-server 172.17.0.2:9092,172.17.0.3:9092,172.17.0.4:9092 --topic topic1 --from-beginning

These commands are used in the same way as in earlier versions of Kafka.

KRaft's functionality is not yet complete; this is just a simple deployment example.
Kafka documentation: https://github.com/apache/kaf…

Both Spring Kafka and Spring Cloud Stream can be used to implement reactive Kafka interaction in Spring.
Let's look at how each framework is used.

Spring-Kafka

1. Add the dependency
Add the Spring Kafka dependency:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.5.8.RELEASE</version>
</dependency>

2. Prepare the configuration file as follows:

spring.kafka.producer.bootstrap-servers=172.17.0.2:9092,172.17.0.3:9092,172.17.0.4:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.LongSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer

spring.kafka.consumer.bootstrap-servers=172.17.0.2:9092,172.17.0.3:9092,172.17.0.4:9092
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.LongDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.group-id=warehouse-consumers
spring.kafka.consumer.properties.spring.json.trusted.packages=*

These are the producer and consumer configurations respectively; they are straightforward.

3. Send messages
In Spring Kafka, you can use ReactiveKafkaProducerTemplate to send messages.
First, we need to create a ReactiveKafkaProducerTemplate instance. (At present, Spring Boot automatically creates a KafkaTemplate instance, but not a ReactiveKafkaProducerTemplate instance.)

@Configuration
public class KafkaConfig {
    @Autowired
    private KafkaProperties properties;

    @Bean
    public ReactiveKafkaProducerTemplate<Long, Warehouse> reactiveKafkaProducerTemplate() {
        SenderOptions<Long, Warehouse> options = SenderOptions.create(properties.getProducer().buildProperties());
        return new ReactiveKafkaProducerTemplate<>(options);
    }
}

The KafkaProperties instance is created automatically by Spring Boot and reads the corresponding settings from the configuration file above.
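
The examples below use a Warehouse entity. The article does not show its definition, so the following is only an assumed minimal sketch, with fields inferred from the getters and setters used later:

// Assumed minimal Warehouse entity (not shown in the article); fields are
// inferred from the getters/setters used in the following examples.
public class Warehouse {
    private Long id;
    private String name;
    private String label;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getLabel() { return label; }
    public void setLabel(String label) { this.label = label; }

    @Override
    public String toString() {
        return "Warehouse{id=" + id + ", name=" + name + ", label=" + label + "}";
    }
}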

Next, you can send a message using the ReactiveKafkaProducerTemplate:

    @Autowired
    private ReactiveKafkaProducerTemplate template;

    public static final String WAREHOUSE_TOPIC = "warehouse";
    public Mono<Boolean> add(Warehouse warehouse) {
        Mono<SenderResult<Void>> resultMono = template.send(WAREHOUSE_TOPIC, warehouse.getId(), warehouse);
        return resultMono.flatMap(rs -> {
            if(rs.exception() != null) {
                logger.error("send kafka error", rs.exception());
                return Mono.just(false);
            }
            return Mono.just(true);
        });
    }

The ReactiveKafkaProducerTemplate#send method returns a Mono (the core type in Spring Reactor). The Mono carries a SenderResult, whose recordMetadata and exception hold the record's metadata (offset, timestamp, and other information) and any exception thrown by the send operation.
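
For example, the metadata can be read from the SenderResult like this (an illustrative sketch assuming the same template and topic as above; the method name addAndLogOffset is made up for this example):

    // Illustrative sketch (method name addAndLogOffset is made up): read the
    // record metadata carried by the SenderResult after a successful send.
    public Mono<Long> addAndLogOffset(Warehouse warehouse) {
        return template.send(WAREHOUSE_TOPIC, warehouse.getId(), warehouse)
                .map(result -> {
                    RecordMetadata metadata = result.recordMetadata();
                    logger.info("partition={}, offset={}, timestamp={}",
                            metadata.partition(), metadata.offset(), metadata.timestamp());
                    return metadata.offset();
                });
    }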

4. Consume messages
Spring Kafka uses ReactiveKafkaConsumerTemplate to consume messages.

@Service
public class WarehouseConsumer {
    @Autowired
    private KafkaProperties properties;

    @PostConstruct
    public void consumer() {
        ReceiverOptions<Long, Warehouse> options = ReceiverOptions.create(properties.getConsumer().buildProperties());
        options = options.subscription(Collections.singleton(WarehouseService.WAREHOUSE_TOPIC));
        new ReactiveKafkaConsumerTemplate<Long, Warehouse>(options)
                .receiveAutoAck()
                .subscribe(record -> {
                    logger.info("Warehouse Record:" + record);
                });
    }
}

This is different from the message listeners previously implemented with the @KafkaListener annotation, but it is also very simple. There are two steps:
(1) The ReceiverOptions#subscription method associates the ReceiverOptions with a Kafka topic.
(2) Create a ReactiveKafkaConsumerTemplate and register the callback passed to subscribe to consume messages.
Tip: the receiveAutoAck method automatically commits the consumer group offsets.
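
If you prefer to acknowledge offsets manually instead, the receive method exposes each record's offset; a minimal sketch, assuming the same ReceiverOptions as above:

        // Minimal sketch of manual acknowledgement, assuming the same options
        // as above: receive() emits ReceiverRecord, whose receiverOffset can be
        // acknowledged explicitly after the record has been processed.
        new ReactiveKafkaConsumerTemplate<Long, Warehouse>(options)
                .receive()
                .subscribe(record -> {
                    logger.info("Warehouse Record:" + record);
                    record.receiverOffset().acknowledge();
                });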

Spring-Cloud-Stream

Spring Cloud Stream is a framework provided by Spring for building message-driven microservices.
It provides a flexible, unified programming model for different messaging middleware and shields the differences between the underlying message components. It currently supports RabbitMQ, Kafka, RocketMQ, and other messaging components.

Here is a simple example of implementing reactive Kafka interaction with Spring Cloud Stream, without going deep into Spring Cloud Stream itself.

1. Add the spring-cloud-starter-stream-kafka dependency

    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-stream-kafka</artifactId>
    </dependency>

2. Add configuration

spring.cloud.stream.kafka.binder.brokers=172.17.0.2:9092,172.17.0.3:9092,172.17.0.4:9092
spring.cloud.stream.bindings.warehouse2-out-0.contentType=application/json
spring.cloud.stream.bindings.warehouse2-out-0.destination=warehouse2
#Message format
spring.cloud.stream.bindings.warehouse3-in-0.contentType=application/json
#Message destination, which can be understood as the Kafka topic
spring.cloud.stream.bindings.warehouse3-in-0.destination=warehouse2
#Consumer group, which can be understood as the Kafka consumer group
spring.cloud.stream.bindings.warehouse3-in-0.group=warehouse2-consumers
#Names of the methods to bind
spring.cloud.function.definition=warehouse2;warehouse3

Since Spring Cloud Stream 3.1, @EnableBinding, @Output, and the other StreamApi annotations are deprecated in favor of a more concise functional programming model.
From that version on, users no longer need these annotations. As long as the methods to bind are specified in the configuration file, Spring Cloud Stream binds them to the underlying messaging component; the application can call these methods directly to send messages, and Spring Cloud Stream calls them to consume incoming messages.

The binding properties for input and output functions follow this naming format:
Output (sending messages): <functionName>-out-<index>
Input (consuming messages): <functionName>-in-<index>
For a typical function with a single input and output, the index is always 0, so it only matters for functions with multiple inputs or outputs.
Spring Cloud Stream supports functions with multiple inputs (function parameters) and outputs (function return values), as sketched below.
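
For illustration only (this example is not from the article): a hypothetical function named merge with two inputs would be bound as merge-in-0, merge-in-1, and merge-out-0, and, like any other bound method, it would also need to be listed in spring.cloud.function.definition:

    // Hypothetical example (not from the article): a function with two inputs.
    // Its bindings would be merge-in-0, merge-in-1 and merge-out-0.
    @Bean
    public Function<Tuple2<Flux<String>, Flux<String>>, Flux<String>> merge() {
        // Interleave the two input streams into a single output stream
        return tuple -> Flux.merge(tuple.getT1(), tuple.getT2());
    }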

The spring.cloud.function.definition configuration specifies the names of the methods to bind. Without it, Spring Cloud Stream automatically tries to bind any bean whose return type is Supplier/Function/Consumer, but setting it explicitly avoids binding confusion.

3. Send messages
You can write a method that returns a Supplier to send messages periodically:

    @PollableBean
    public Supplier<Flux<Warehouse>> warehouse2() {
        Warehouse warehouse = new Warehouse();
        warehouse.setId(333L);
        warehouse.setName("the best warehouse in the world");
        warehouse.setLabel("primary warehouse");

        logger.info("Supplier Add : {}", warehouse);
        return () -> Flux.just(warehouse);
    }

After this method is defined, Spring Cloud Stream calls it once per second to generate a Warehouse instance and send it to Kafka.
(Here the method name warehouse2 has already been configured in spring.cloud.function.definition.)

In most applications, messages are not sent on a schedule; instead, a business operation, such as a REST call, triggers the send.
In that case, you can use the StreamBridge interface:

    @Autowired
    private StreamBridge streamBridge;

    public boolean add2(Warehouse warehouse) {
        return streamBridge.send("warehouse2-out-0", warehouse);
    }

I have not yet found a way to make the StreamBridge interaction itself reactive.
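
As a stopgap (an assumption on my part, not something the article covers), the synchronous StreamBridge.send call can be wrapped in a Mono and moved onto a boundedElastic scheduler so it fits into a reactive pipeline:

    // Possible workaround (assumption, not from the article): wrap the
    // synchronous StreamBridge.send call so it can join a reactive chain.
    public Mono<Boolean> add3(Warehouse warehouse) {
        return Mono.fromCallable(() -> streamBridge.send("warehouse2-out-0", warehouse))
                .subscribeOn(Schedulers.boundedElastic());
    }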

4. Consume messages
To consume messages, the application only needs to define a method whose return type is Function/Consumer, as follows:

    @Bean
    public Function<Flux<Warehouse>, Mono<Void>> warehouse3() {
        Logger logger = LoggerFactory.getLogger("WarehouseFunction");
        return flux -> flux.doOnNext(data -> {
            logger.info("Warehouse Data: {}", data);
        }).then();
    }

Note: the method name, the <functionName>-out-<index> / <functionName>-in-<index> binding names, and the spring.cloud.function.definition configuration must be consistent with each other, otherwise errors occur.

Spring Cloud Stream documentation: https://docs.spring.io/spring…

Complete code for this article: https://gitee.com/binecy/bin-…

If you found this article helpful, please follow my WeChat official account; this series of articles is continuously updated. Your attention is what keeps me going!