How to use Kafka in .NET Core

Time: 2021-01-12

Install

CentOS install Kafka

Kafka : http://kafka.apache.org/downloads

ZooKeeper : https://zookeeper.apache.org/releases.html

Download and unzip

#Download and unzip
$ wget https://archive.apache.org/dist/kafka/2.1.1/kafka_2.12-2.1.1.tgz
$ tar -zxvf kafka_2.12-2.1.1.tgz
$ mv kafka_2.12-2.1.1 /data/kafka

#Download zookeeper and unzip it
$ wget https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin.tar.gz
$ tar -zxvf apache-zookeeper-3.5.8-bin.tar.gz
$ mv apache-zookeeper-3.5.8-bin /data/zookeeper

Start ZooKeeper

#Copy the configuration template
$ cd /data/zookeeper/conf
$ cp zoo_sample.cfg zoo.cfg

#See if the configuration needs to be changed
$ vim zoo.cfg

#Commands (run from /data/zookeeper)
$ ./bin/zkServer.sh start    # start
$ ./bin/zkServer.sh status   # status
$ ./bin/zkServer.sh stop     # stop
$ ./bin/zkServer.sh restart  # restart

#Test with the client
$ ./bin/zkCli.sh -server localhost:2181
$ quit

Start Kafka

#Backup configuration
$ cd /data/kafka
$ cp config/server.properties config/server.properties_copy

#Modify configuration
$ vim /data/kafka/config/server.properties

#In cluster configuration, the ID of each broker must be different
# broker.id=0

#Listener address (internal network)
# listeners=PLAINTEXT://ip:9092

#IP and port for external services
# advertised.listeners=PLAINTEXT://106.75.84.97:9092

#Default number of partitions per topic (num.partitions). The default is 1; choose a value suited to your server configuration (UCloud UKafka uses 3)
# num.partitions=3

#Zookeeper configuration
# zookeeper.connect=localhost:2181

#Start Kafka with this configuration
$ ./bin/kafka-server-start.sh config/server.properties &

#Status view
$ ps -ef | grep kafka
$ jps
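Once the broker is running, you can sanity-check it with the CLI tools bundled with Kafka. A sketch (it assumes Kafka lives in /data/kafka as above and ZooKeeper is on localhost:2181; the topic name helloTopic is just an example):

```shell
# Create a test topic with 3 partitions (run from /data/kafka)
$ ./bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 3 --topic helloTopic

# Send messages from stdin (Ctrl-C to quit)
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic helloTopic

# In another terminal, read the topic from the beginning
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic helloTopic --from-beginning
```

If the consumer terminal echoes what you typed into the producer, the broker is working.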

Install Kafka with Docker


docker pull wurstmeister/zookeeper
docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper

docker pull wurstmeister/kafka
docker run -d --name kafka --publish 9092:9092 --link zookeeper --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 --env KAFKA_ADVERTISED_HOST_NAME=192.168.1.111 --env KAFKA_ADVERTISED_PORT=9092 wurstmeister/kafka
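The two docker run commands above can also be expressed as a single docker-compose.yml. This is a sketch of an equivalent setup; the address 192.168.1.111 is a placeholder for your host IP, as in the command above:

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.111
      KAFKA_ADVERTISED_PORT: 9092
    depends_on:
      - zookeeper
```

Start both containers with `docker-compose up -d`.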

Introduction

  • Broker: a message-middleware processing node. One Kafka node is one broker; multiple brokers form a Kafka cluster.
  • Topic: a category of messages. For example, page-view logs and click logs can each be published as a topic, and a Kafka cluster can serve many topics at the same time.
  • Partition: a physical grouping of a topic. A topic can be split into multiple partitions, each of which is an ordered queue.
  • Segment: on disk, a partition is made up of multiple segment files.
  • Offset: each partition holds a sequence of ordered, immutable messages that are continuously appended to it. Every message in a partition has a sequential id called its offset, which uniquely identifies the message within that partition.

The relationship between the number of partitions and the number of consumers

  • If there are more consumers than partitions, the extra consumers are wasted: within a consumer group, Kafka assigns at most one consumer per partition, so the number of consumers should not exceed the number of partitions.
  • If there are fewer consumers than partitions, one consumer will read from multiple partitions. Allocate consumers and partitions carefully, or the load across partitions will be uneven; ideally the partition count is an integral multiple of the consumer count. This makes the partition count important: with 24 partitions, for example, it is easy to pick a consumer count that divides evenly.
  • A consumer reading from multiple partitions cannot rely on a global message order. Kafka only guarantees ordering within a single partition; across partitions, the order depends on the order in which the consumer happens to read.
  • Adding or removing consumers, brokers, or partitions triggers a rebalance, after which the partitions assigned to each consumer may change.
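As a concrete example, if a consumer group has 3 consumers, giving the topic 6 partitions lets each consumer own exactly 2. With the Kafka CLI this could look like the following sketch (helloTopic is a placeholder; note that the partition count can be increased but never decreased):

```shell
$ /data/kafka/bin/kafka-topics.sh --alter --zookeeper localhost:2181 \
    --topic helloTopic --partitions 6
```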

Installing the component in a .NET Core project


Install-Package Confluent.Kafka

Open source address: https://github.com/confluentinc/confluent-kafka-dotnet

Add an IKafkaService service interface

public interface IKafkaService
{
  /// <summary>
  /// Sends a message to the specified topic
  /// </summary>
  /// <typeparam name="TMessage"></typeparam>
  /// <param name="topicName"></param>
  /// <param name="message"></param>
  /// <returns></returns>
  Task PublishAsync<TMessage>(string topicName, TMessage message) where TMessage : class;

  /// <summary>
  /// Subscribes to messages from the specified topics
  /// </summary>
  /// <typeparam name="TMessage"></typeparam>
  /// <param name="topics"></param>
  /// <param name="messageFunc"></param>
  /// <param name="cancellationToken"></param>
  /// <returns></returns>
  Task SubscribeAsync<TMessage>(IEnumerable<string> topics, Action<TMessage> messageFunc, CancellationToken cancellationToken) where TMessage : class;
}

Implement IKafkaService

public class KafkaService : IKafkaService
{
  public async Task PublishAsync<TMessage>(string topicName, TMessage message) where TMessage : class
  {
    var config = new ProducerConfig
    {
      BootstrapServers = "127.0.0.1:9092"
    };
    using var producer = new ProducerBuilder<string, string>(config).Build();
    await producer.ProduceAsync(topicName, new Message<string, string>
    {
      Key = Guid.NewGuid().ToString(),
      Value = message.SerializeToJson()
    });
  }

  public async Task SubscribeAsync<TMessage>(IEnumerable<string> topics, Action<TMessage> messageFunc, CancellationToken cancellationToken) where TMessage : class
  {
    var config = new ConsumerConfig
    {
      BootstrapServers = "127.0.0.1:9092",
      GroupId = "crow-consumer",
      EnableAutoCommit = false,
      StatisticsIntervalMs = 5000,
      SessionTimeoutMs = 6000,
      AutoOffsetReset = AutoOffsetReset.Earliest,
      EnablePartitionEof = true
    };
    //const int commitPeriod = 5;
    using var consumer = new ConsumerBuilder<Ignore, string>(config)
               .SetErrorHandler((_, e) =>
               {
                 Console.WriteLine($"Error: {e.Reason}");
               })
               .SetStatisticsHandler((_, json) =>
               {
                 Console.WriteLine($" - {DateTime.Now:yyyy-MM-dd HH:mm:ss} > message listening..");
               })
               .SetPartitionsAssignedHandler((c, partitions) =>
               {
                 string partitionsStr = string.Join(", ", partitions);
                 Console.WriteLine($" - Assigned Kafka partitions: {partitionsStr}");
               })
               .SetPartitionsRevokedHandler((c, partitions) =>
               {
                 string partitionsStr = string.Join(", ", partitions);
                 Console.WriteLine($" - Revoked Kafka partitions: {partitionsStr}");
               })
               .Build();
    consumer.Subscribe(topics);
    try
    {
      while (true)
      {
        try
        {
          var consumeResult = consumer.Consume(cancellationToken);
          Console.WriteLine($"Consumed message '{consumeResult.Message?.Value}' at: '{consumeResult?.TopicPartitionOffset}'.");
          if (consumeResult.IsPartitionEOF)
          {
            Console.WriteLine($" - {DateTime.Now:yyyy-MM-dd HH:mm:ss} reached end of topic {consumeResult.Topic}, partition {consumeResult.Partition}, offset {consumeResult.Offset}.");
            continue;
          }
          TMessage messageResult = null;
          try
          {
            messageResult = JsonConvert.DeserializeObject<TMessage>(consumeResult.Message.Value);
          }
          catch (Exception ex)
          {
            var errorMessage = $" - {DateTime.Now:yyyy-MM-dd HH:mm:ss} [message deserialization failed, value: {consumeResult.Message.Value}]: {ex.StackTrace}";
            Console.WriteLine(errorMessage);
            messageResult = null;
          }
          if (messageResult != null/* && consumeResult.Offset % commitPeriod == 0*/)
          {
            messageFunc(messageResult);
            try
            {
              consumer.Commit(consumeResult);
            }
            catch (KafkaException e)
            {
              Console.WriteLine(e.Message);
            }
          }
        }
        catch (ConsumeException e)
        {
          Console.WriteLine($"Consume error: {e.Error.Reason}");
        }
      }
    }
    catch (OperationCanceledException)
    {
      Console.WriteLine("Closing consumer.");
      consumer.Close();
    }
    await Task.CompletedTask;
  }
}
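Note that PublishAsync calls message.SerializeToJson(), which is not part of Confluent.Kafka. It is assumed here to be a small JSON extension method; a minimal sketch using Newtonsoft.Json (which the consumer side already uses via JsonConvert):

```csharp
using Newtonsoft.Json;

public static class JsonExtensions
{
    // Serializes any reference type to a JSON string for use as the message value.
    public static string SerializeToJson<T>(this T obj) where T : class
        => JsonConvert.SerializeObject(obj);
}
```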

Inject IKafkaService and it can be called directly wherever it is needed.


public class MessageService : IMessageService, ITransientDependency
{
  private readonly IKafkaService _kafkaService;
  public MessageService(IKafkaService kafkaService)
  {
    _kafkaService = kafkaService;
  }

  public async Task RequestTraceAdded(XxxEventData eventData)
  {
    await _kafkaService.PublishAsync(eventData.TopicName, eventData);
  }
}
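For the constructor injection above to resolve, KafkaService must be registered with the DI container. A minimal sketch assuming the standard Microsoft.Extensions.DependencyInjection setup (the method shown follows the usual ASP.NET Core conventions and is not from the original project):

```csharp
// In Startup.ConfigureServices
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IKafkaService, KafkaService>();
    // ...other registrations
}
```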

The above is equivalent to a producer. After we publish to the message queue, we need a consumer to consume it, so we can use a console project that receives the messages and handles the business logic.

var cts = new CancellationTokenSource();
Console.CancelKeyPress += (_, e) =>
{
  e.Cancel = true;
  cts.Cancel();
};

await kafkaService.SubscribeAsync<XxxEventData>(topics, async (eventData) =>
{
  // Your logic

  Console.WriteLine($" - {eventData.EventTime:yyyy-MM-dd HH:mm:ss} [{eventData.TopicName}] -> processed");
}, cts.Token);

The subscription method is already defined on the IKafkaService interface, so after injection you can use it directly.

Examples of producers and consumers

Producer


static async Task Main(string[] args)
{
  if (args.Length != 2)
  {
    Console.WriteLine("Usage: .. brokerList topicName");
    // 127.0.0.1:9092 helloTopic
    return;
  }

  var brokerList = args.First();
  var topicName = args.Last();

  var config = new ProducerConfig { BootstrapServers = brokerList };

  using var producer = new ProducerBuilder<string, string>(config).Build();

  Console.WriteLine("\n-----------------------------------------------------------------------");
  Console.WriteLine($"Producer {producer.Name} producing on topic {topicName}.");
  Console.WriteLine("-----------------------------------------------------------------------");
  Console.WriteLine("To create a kafka message with UTF-8 encoded key and value:");
  Console.WriteLine("> key value<Enter>");
  Console.WriteLine("To create a kafka message with a null key and UTF-8 encoded value:");
  Console.WriteLine("> value<enter>");
  Console.WriteLine("Ctrl-C to quit.\n");

  var cancelled = false;

  Console.CancelKeyPress += (_, e) =>
  {
    e.Cancel = true;
    cancelled = true;
  };

  while (!cancelled)
  {
    Console.Write("> ");

    var text = string.Empty;

    try
    {
      text = Console.ReadLine();
    }
    catch (IOException)
    {
      break;
    }

    if (string.IsNullOrWhiteSpace(text))
    {
      break;
    }

    var key = string.Empty;
    var val = text;

    var index = text.IndexOf(" ");
    if (index != -1)
    {
      key = text.Substring(0, index);
      val = text.Substring(index + 1);
    }

    try
    {
      var deliveryResult = await producer.ProduceAsync(topicName, new Message<string, string>
      {
        Key = key,
        Value = val
      });

      Console.WriteLine($"delivered to: {deliveryResult.TopicPartitionOffset}");
    }
    catch (ProduceException<string, string> e)
    {
      Console.WriteLine($"failed to deliver message: {e.Message} [{e.Error.Code}]");
    }
  }
}

Consumer


static void Main(string[] args)
{
  if (args.Length != 2)
  {
    Console.WriteLine("Usage: .. brokerList topicName");
    // 127.0.0.1:9092 helloTopic
    return;
  }

  var brokerList = args.First();
  var topicName = args.Last();

  Console.WriteLine($"Started consumer, Ctrl-C to stop consuming");

  var cts = new CancellationTokenSource();
  Console.CancelKeyPress += (_, e) =>
  {
    e.Cancel = true;
    cts.Cancel();
  };

  var config = new ConsumerConfig
  {
    BootstrapServers = brokerList,
    GroupId = "consumer",
    EnableAutoCommit = false,
    StatisticsIntervalMs = 5000,
    SessionTimeoutMs = 6000,
    AutoOffsetReset = AutoOffsetReset.Earliest,
    EnablePartitionEof = true
  };

  const int commitPeriod = 5;

  using var consumer = new ConsumerBuilder<Ignore, string>(config)
             .SetErrorHandler((_, e) =>
             {
               Console.WriteLine($"Error: {e.Reason}");
             })
             .SetStatisticsHandler((_, json) =>
             {
               Console.WriteLine($" - {DateTime.Now:yyyy-MM-dd HH:mm:ss} > monitoring..");
               //Console.WriteLine($"Statistics: {json}");
             })
             .SetPartitionsAssignedHandler((c, partitions) =>
             {
               Console.WriteLine($"Assigned partitions: [{string.Join(", ", partitions)}]");
             })
             .SetPartitionsRevokedHandler((c, partitions) =>
             {
               Console.WriteLine($"Revoking assignment: [{string.Join(", ", partitions)}]");
             })
             .Build();
  consumer.Subscribe(topicName);

  try
  {
    while (true)
    {
      try
      {
        var consumeResult = consumer.Consume(cts.Token);

        if (consumeResult.IsPartitionEOF)
        {
          Console.WriteLine($"Reached end of topic {consumeResult.Topic}, partition {consumeResult.Partition}, offset {consumeResult.Offset}.");

          continue;
        }

        Console.WriteLine($"Received message at {consumeResult.TopicPartitionOffset}: {consumeResult.Message.Value}");

        if (consumeResult.Offset % commitPeriod == 0)
        {
          try
          {
            consumer.Commit(consumeResult);
          }
          catch (KafkaException e)
          {
            Console.WriteLine($"Commit error: {e.Error.Reason}");
          }
        }
      }
      catch (ConsumeException e)
      {
        Console.WriteLine($"Consume error: {e.Error.Reason}");
      }
    }
  }
  catch (OperationCanceledException)
  {
    Console.WriteLine("Closing consumer.");
    consumer.Close();
  }
}

This concludes the walkthrough of the methods and steps for using Kafka under .NET Core.