Reactive Spring practice — reactive Kafka interaction


This article shows how to deploy a Kafka cluster in KRaft mode and how to implement reactive Kafka interaction in Spring.


Historically, Kafka used ZooKeeper to store metadata such as broker and consumer-group information, and relied on ZooKeeper for controller election among brokers.
Although using ZooKeeper simplifies Kafka's internal design, it also makes Kafka deployment and operations more complex.

Starting with version 2.8.0, Kafka can run without ZooKeeper, replacing it with an internal quorum controller, officially called "Kafka Raft metadata mode", i.e. KRaft mode. Users can now deploy a Kafka cluster without ZooKeeper, which makes Kafka simpler and more lightweight.
In KRaft mode, users only need to maintain the Kafka cluster itself.

Note: because this is a major change, the KRaft mode shipped in Kafka 2.8 is a preview and is not recommended for production use. A production-ready KRaft mode is expected in a subsequent Kafka release.

The following describes how to deploy a Kafka cluster in KRaft mode.
Here, three Kafka nodes are deployed on three machines; the Kafka version used is 2.8.0.

1. Generate a cluster ID and configuration files.
(1) Generate a cluster ID with the script below:

$ ./bin/ random-uuid

(2) Generate the configuration files using the cluster ID:

$ ./bin/ format -t <uuid> -c ./config/kraft/
Formatting /tmp/kraft-combined-logs

Note: generate the cluster ID only once, and use the same ID to generate the configuration files on all machines; that is, all nodes in the cluster must use the same cluster ID.

2. Modify the configuration file
The configuration file generated by the script works only for a single Kafka node. To deploy a Kafka cluster, you need to modify it.

(1) Modify the file under config/kraft/ (this configuration is used to start Kafka later):

[email protected]:9093,[email protected]:9093,[email protected]:9093

process.roles specifies the node's role, with the following values:

  • broker: the node acts only as a broker
  • controller: the node acts as a controller node of the Raft quorum
  • broker,controller: the node takes on both roles

node.id must be different for each node in the cluster.
controller.quorum.voters must list all controller nodes in the cluster, in the format <nodeId>@<IP>:<port>.

The configuration generated by the script stores Kafka data in /tmp/kraft-combined-logs/ by default.
If you store data elsewhere, keep the data-directory setting in the configuration file consistent with the actual path.
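Putting the settings above together, a single node's KRaft configuration might look like the following sketch. The host addresses, ports, and node id here are illustrative assumptions, not values from the original article:

```properties
# Role of this node: broker, controller, or broker,controller
process.roles=broker,controller
# Unique id of this node within the cluster
node.id=1
# All controller nodes in the cluster, as <nodeId>@<IP>:<port>
controller.quorum.voters=1@192.168.0.1:9093,2@192.168.0.2:9093,3@192.168.0.3:9093
# Where Kafka stores its data (the script's default shown above)
log.dirs=/tmp/kraft-combined-logs
```

Each of the three machines would use the same controller.quorum.voters list but a different node.id.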

3. Start Kafka
Start each Kafka node with the startup script:

$ ./bin/ ./config/kraft/

Now let's test the Kafka cluster.
1. Create a topic

$ ./bin/ --create --partitions 3 --replication-factor 3 --bootstrap-server,, --topic topic1 

2. Produce messages

$ ./bin/ --broker-list,, --topic topic1

3. Consume messages

$ ./bin/ --bootstrap-server,, --topic topic1 --from-beginning

These commands are used the same way as in earlier Kafka versions.

KRaft mode is not feature-complete yet; this is only a simple deployment example.
Kafka documentation:…

In Spring, reactive Kafka interaction can be implemented with either Spring Kafka or Spring Cloud Stream.
Let's look at how to use each of these two frameworks.


1. Add the dependency
Add the Spring Kafka dependency:


2. Prepare the configuration file as follows:



These are the producer and consumer configurations, respectively; they are quite simple.
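The configuration content itself did not survive above. As a reference, a minimal sketch using standard Spring Boot Kafka properties might look like this; the server address, group id, and serializer choices are assumptions for illustration, not the article's original values:

```properties
spring.kafka.bootstrap-servers=localhost:9092

# Producer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.LongSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer

# Consumer
spring.kafka.consumer.group-id=warehouse-group
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.LongDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
```

These keys are what `KafkaProperties#getProducer()` / `getConsumer()` (used in the code below) read from.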

3. Send messages
In Spring Kafka, you can use ReactiveKafkaProducerTemplate to send messages.
First, we need to create a ReactiveKafkaProducerTemplate instance. (At present, Spring Boot automatically creates a KafkaTemplate instance, but not a ReactiveKafkaProducerTemplate instance.)

@Configuration
public class KafkaConfig {
    @Autowired
    private KafkaProperties properties;

    // Spring Boot does not auto-create this bean, so declare it ourselves
    @Bean
    public ReactiveKafkaProducerTemplate<Long, Warehouse> reactiveKafkaProducerTemplate() {
        SenderOptions<Long, Warehouse> options =
                SenderOptions.create(properties.getProducer().buildProperties());
        return new ReactiveKafkaProducerTemplate<>(options);
    }
}
The KafkaProperties instance is created automatically by Spring Boot and reads the corresponding settings from the configuration file above.

Next, you can send messages with the ReactiveKafkaProducerTemplate:

    @Autowired
    private ReactiveKafkaProducerTemplate<Long, Warehouse> template;

    public static final String WAREHOUSE_TOPIC = "warehouse";

    public Mono<Boolean> add(Warehouse warehouse) {
        Mono<SenderResult<Void>> resultMono =
                template.send(WAREHOUSE_TOPIC, warehouse.getId(), warehouse);
        return resultMono.flatMap(rs -> {
            if (rs.exception() != null) {
                logger.error("send kafka error", rs.exception());
                return Mono.just(false);
            }
            return Mono.just(true);
        });
    }

The ReactiveKafkaProducerTemplate#send method returns a Mono (a core type in Project Reactor) carrying a SenderResult. The recordMetadata and exception fields of SenderResult hold the record's metadata (offset, timestamp, and other information) and any exception raised by the send operation.

4. Consume messages
Spring Kafka uses ReactiveKafkaConsumerTemplate to consume messages.

@Service
public class WarehouseConsumer {
    private static final Logger logger = LoggerFactory.getLogger(WarehouseConsumer.class);

    @Autowired
    private KafkaProperties properties;

    @PostConstruct
    public void consumer() {
        ReceiverOptions<Long, Warehouse> options =
                ReceiverOptions.create(properties.getConsumer().buildProperties());
        options = options.subscription(Collections.singleton(WarehouseService.WAREHOUSE_TOPIC));
        new ReactiveKafkaConsumerTemplate<>(options)
                .receiveAutoAck()
                .subscribe(record -> logger.info("Warehouse Record: {}", record));
    }
}
This differs from the message listener previously implemented with the @KafkaListener annotation, but it is also very simple and takes two steps:
(1) The ReceiverOptions#subscription method associates the ReceiverOptions with a Kafka topic.
(2) Create a ReactiveKafkaConsumerTemplate and register the subscribe callback to consume messages.
Tip: the receiveAutoAck method automatically commits the consumer group's offsets.


Spring Cloud Stream is a framework provided by Spring for building message-driven microservices.
It provides a flexible, unified programming model over different message middleware products, hiding the differences between the underlying message components. It currently supports RabbitMQ, Kafka, RocketMQ, and other messaging systems.

Below is a simple example of reactive Kafka interaction with Spring Cloud Stream; it is not an in-depth introduction to Spring Cloud Stream itself.

1. Add the spring-cloud-starter-stream-kafka dependency


2. Add configuration:
# message format
# message destination (can be understood as the Kafka topic)
# consumer group (can be understood as the Kafka consumer group)
# method names to bind; warehouse3
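The property keys themselves were lost above; under standard Spring Cloud Stream conventions, the configuration those comments describe would look roughly like the sketch below. The binding name, topic name, and group name are assumptions based on the function names used later in this article:

```properties
# message format
spring.cloud.stream.bindings.warehouse3-in-0.content-type=application/json
# message destination, i.e. the Kafka topic
spring.cloud.stream.bindings.warehouse3-in-0.destination=warehouse
# consumer group, i.e. the Kafka consumer group
spring.cloud.stream.bindings.warehouse3-in-0.group=warehouse-group
# method names to bind
spring.cloud.stream.function.definition=warehouse2;warehouse3
```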

Since Spring Cloud Stream 3.1, @EnableBinding, @Output, and the other Stream API annotations are deprecated in favor of a more concise functional programming model.
From that version on, users no longer need these annotations: as long as the methods to bind are specified in the configuration file, Spring Cloud Stream binds them to the underlying message component. Users can call these methods directly to send messages, and Spring Cloud Stream calls them to consume incoming messages.

The input and output function bindings are named in the following format:
Output (send messages): <functionName> + -out- + <index>
Input (consume messages): <functionName> + -in- + <index>
For a typical single-input/single-output function, index is always 0; it matters only for functions with multiple inputs or outputs.
Spring Cloud Stream supports functions with multiple inputs (function arguments) and multiple outputs (function return values).
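As a quick illustration of this naming scheme, the following hypothetical helper (not part of the Spring API) builds binding names the same way:

```java
// Hypothetical helper illustrating Spring Cloud Stream's binding-name
// convention; this class is for illustration only, not a Spring API.
public class BindingNames {
    // Output binding (send messages): <functionName> + "-out-" + <index>
    public static String output(String functionName, int index) {
        return functionName + "-out-" + index;
    }

    // Input binding (consume messages): <functionName> + "-in-" + <index>
    public static String input(String functionName, int index) {
        return functionName + "-in-" + index;
    }
}
```

For example, output("warehouse2", 0) yields "warehouse2-out-0", the binding name passed to StreamBridge later in this article.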

The configuration specifies the method names to bind. Without it, Spring Cloud Stream automatically tries to bind methods whose return type is Supplier/Function/Consumer; specifying the names explicitly avoids binding ambiguity.

3. Send messages
You can write a method returning a Supplier to send messages on a schedule:

    @Bean
    public Supplier<Flux<Warehouse>> warehouse2() {
        Warehouse warehouse = new Warehouse();
        warehouse.setName("the best warehouse in the world");
        warehouse.setLabel("primary warehouse");
        logger.info("Supplier Add : {}", warehouse);
        return () -> Flux.just(warehouse);
    }

Once this method is defined, Spring Cloud Stream calls it once per second, generating a Warehouse instance and sending it to Kafka.
(The method name must be listed in the binding configuration shown above.)

In general, applications do not send messages on a fixed schedule; instead, a business event, such as a REST request, triggers the send operation.
In that case, you can use the StreamBridge interface:

    @Autowired
    private StreamBridge streamBridge;

    public boolean add2(Warehouse warehouse) {
        return streamBridge.send("warehouse2-out-0", warehouse);
    }

I have not yet found how to make StreamBridge interact in a reactive way.

4. Consume messages
To consume messages, the application only needs to define a method whose return type is Function or Consumer, as follows:

    @Bean
    public Function<Flux<Warehouse>, Mono<Void>> warehouse3() {
        Logger logger = LoggerFactory.getLogger("WarehouseFunction");
        return flux -> flux
                .doOnNext(data -> logger.info("Warehouse Data: {}", data))
                .then();
    }
Note: the method name must match the <functionName> part of the <functionName>-out-<index> / <functionName>-in-<index> entries in the configuration, otherwise errors occur.

Spring Cloud Stream documentation:…

Complete code for this article:…

If you found this article helpful, please follow my WeChat official account; this series is continuously updated. Your attention is what keeps me going!