Spring Boot integrates Kafka


Kafka is a high-throughput distributed publish-subscribe messaging system with the following characteristics: it provides message persistence through an O(1) disk data structure, which maintains stable performance over long periods even with terabytes of stored messages. High throughput: even on very ordinary hardware, Kafka can handle millions of messages per second. It supports partitioning messages across Kafka servers and consumer clusters, and it supports parallel data loading into Hadoop.

I have given code examples for the basic setup and configuration of Spring Boot in previous articles. If you are not familiar with it yet, you can read my earlier article "Still configuring too much with Spring MVC? Try Spring Boot". So how can the popular Spring Boot and Kafka be combined? Talk is cheap, show me the code!

Install Kafka

Because Kafka requires ZooKeeper, you need to install ZooKeeper first and then Kafka. Below are the installation steps for a Mac and the points you need to pay attention to; the Windows setup is almost the same except for the file locations.

brew install kafka 

Yes, it is that simple: a single command on the Mac does it. The installation may take a while, depending on network conditions. The installation output may contain an error message such as "Error: Could not link: /usr/local/share/doc/homebrew", which does not matter and can be ignored. When we see the following, the installation has succeeded.

 ==> Summary
 🍺  /usr/local/Cellar/kafka/1.1.0: 157 files, 47.8MB

After installation, the configuration files are located as follows; modify the port numbers and other settings according to your own needs.

Location of the installed ZooKeeper and Kafka: /usr/local/Cellar/

Configuration files: /usr/local/etc/kafka/server.properties and /usr/local/etc/kafka/zookeeper.properties

Start zookeeper

  ./bin/zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties & 

Start Kafka

 ./bin/kafka-server-start /usr/local/etc/kafka/server.properties & 

Create a topic for Kafka. The topic name here is test; you can use any name you want, as long as it matches the topic configured in the code later.

 ./bin/kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test 

Code example: pom.xml
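The original dependency list did not survive extraction. Here is a minimal sketch of the dependencies this example needs, assuming the web starter, spring-kafka, and Gson (the version numbers are illustrative, not taken from the original):

```xml
<dependencies>
    <!-- Web starter for the REST test endpoint -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Spring for Apache Kafka: provides KafkaTemplate and @KafkaListener -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <!-- Gson, used to serialize messages to JSON -->
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.2</version>
    </dependency>
</dependencies>
```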








application.yml:

server:
  servlet:
    context-path: /
  port: 8080
spring:
  kafka:
    producer:
      #We can use the default configuration for most producer settings. Here are some important properties
      #Upper bound, in bytes, on the size of each batch of records sent to a partition
      batch-size: 16
      #A value greater than 0 makes the client resend any record whose send fails.
      #Note that retries can reorder records: if two records are sent to the same
      #partition, the first fails and is retried while the second succeeds, the
      #second record may appear on the partition before the first.
      retries: 0
      #Total memory, in bytes, the producer can use to buffer records waiting to be
      #sent. If records are produced faster than they can be delivered to the broker,
      #the producer will block or throw an exception. This is not a hard limit on the
      #producer's memory use, since some memory is also used for compression (if
      #enabled) and for maintaining in-flight requests.
      buffer-memory: 33554432
      #Serializer for keys
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      #Serializer for values
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      #What to do when there is no initial offset in Kafka, or the current offset no
      #longer exists on the server. The default is latest; options are [latest, earliest, none]
      auto-offset-reset: latest
      #Whether to enable automatic offset commits
      enable-auto-commit: true
      #Interval, in milliseconds, between automatic commits
      auto-commit-interval: 100
      #Deserializer for keys
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      #Deserializer for values
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      #Matches the group configured in /usr/local/etc/kafka/consumer.properties
      group-id: test-consumer-group
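As a quick sanity check on the sizes above: buffer-memory is specified in bytes, so the 33554432-byte buffer works out to exactly 32 MiB.

```java
public class BufferMath {
    // Converts a byte count to mebibytes (1 MiB = 1024 * 1024 bytes)
    static long toMiB(long bytes) {
        return bytes / (1024L * 1024L);
    }

    public static void main(String[] args) {
        // buffer-memory value from application.yml
        System.out.println(toMiB(33554432L)); // prints 32
    }
}
```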

Producer (the message producer)

@Component
public class Producer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static final Gson gson = new GsonBuilder().create();

    //Send a Message, serialized as JSON, to the "test" topic
    public void send() {
        Message message = new Message();
        message.setSendTime(new Date());
        kafkaTemplate.send("test", gson.toJson(message));
    }
}

public class Message {

    private String id;

    private String msg;

    private Date sendTime;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getMsg() {
        return msg;
    }

    public void setMsg(String msg) {
        this.msg = msg;
    }

    public Date getSendTime() {
        return sendTime;
    }

    public void setSendTime(Date sendTime) {
        this.sendTime = sendTime;
    }
}
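Gson serializes this bean into a small JSON object. As a dependency-free illustration of the payload shape the producer sends (the real code simply calls gson.toJson(message); the date format here only approximates Gson's locale-dependent default), a hand-rolled sketch:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class MessageJsonSketch {
    // Builds roughly the JSON string Gson would produce for a Message
    static String toJson(String id, String msg, Date sendTime) {
        String time = new SimpleDateFormat("MMM d, yyyy h:mm:ss a").format(sendTime);
        return "{\"id\":\"" + id + "\",\"msg\":\"" + msg + "\",\"sendTime\":\"" + time + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(toJson("1", "hello", new Date()));
    }
}
```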

Consumer (the message consumer)

@Component
public class Consumer {

    @KafkaListener(topics = {"test"})
    public void listen(ConsumerRecord<?, ?> record) {

        Optional<?> kafkaMessage = Optional.ofNullable(record.value());

        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            //Print the received message to the console
            System.out.println("receive message: " + message);
        }
    }
}



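The Optional.ofNullable guard in the listener simply skips records whose value is null. That logic can be exercised without a broker; a minimal stdlib-only sketch:

```java
import java.util.Optional;

public class OptionalGuard {
    // Returns a message for a non-null record value, or a marker when it was null
    static String handle(Object value) {
        Optional<?> kafkaMessage = Optional.ofNullable(value);
        if (kafkaMessage.isPresent()) {
            return "received: " + kafkaMessage.get();
        }
        return "skipped null record";
    }

    public static void main(String[] args) {
        System.out.println(handle("hello")); // received: hello
        System.out.println(handle(null));    // skipped null record
    }
}
```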
Test interface case

Here we use an HTTP interface to test whether our message is received by the consumer.

@RestController
@RequestMapping("/kafka")
public class SendController {

    @Autowired
    private Producer producer;

    @RequestMapping(value = "/send")
    public String send() {
        producer.send();
        return "{\"code\":0}";
    }
}

After the Spring Boot application starts, access http://localhost:8080/kafka/send in the browser, and we can see the output in the IDE console. At this point our integration is basically complete. The full code is available at https://github.com/xiaour/Spr… You are also welcome to star ⭐ my project.
