Kafka in Micronaut Microservices

Time:2019-12-6

Today we will build microservices that communicate with each other asynchronously through Apache Kafka topics. We use the Micronaut framework, which provides a dedicated library for Kafka integration. Let's briefly introduce the architecture of the sample system. It consists of four microservices: order-service, trip-service, driver-service, and passenger-service. The implementation of these applications is very simple: they all use in-memory storage and connect to the same Kafka instance.

The main goal of our system is to arrange a trip for customers. The order-service application also acts as a gateway: it receives requests from customers, saves the history, and sends events to the orders topic. All other microservices listen on the orders topic and process the orders sent by order-service. In addition, each microservice has its own dedicated topic to which it sends events containing change information; such events are received by some of the other microservices. The architecture is shown in the figure below.

(figure: architecture of the sample system)

Before reading this article, it is worth getting familiar with the Micronaut framework. You can read my previous article, which describes the process of building microservice communication with a REST API: A Quick Guide to Microservices with the Micronaut Framework.

1. Running Kafka

To run Apache Kafka on the local machine, we can use its Docker image. The latest images are shared by https://hub.docker.com/u/wurstmeister. Before starting the Kafka container, we have to start the ZooKeeper server used by Kafka. If you run Docker on Windows, the default address of its virtual machine is 192.168.99.100, and that address must also be set in the Kafka container's environment.

The ZooKeeper and Kafka containers are started on the same network. ZooKeeper runs in Docker under the service name zookeeper and exposes port 2181. The Kafka container needs that address in the KAFKA_ZOOKEEPER_CONNECT environment variable.

$ docker network create kafka
$ docker run -d --name zookeeper --network kafka -p 2181:2181 wurstmeister/zookeeper
$ docker run -d --name kafka -p 9092:9092 --network kafka --env KAFKA_ADVERTISED_HOST_NAME=192.168.99.100 --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 wurstmeister/kafka

2. Adding the Micronaut Kafka dependency

A Micronaut application built with Kafka can be started with or without an embedded HTTP server. To enable Micronaut Kafka, add the micronaut-kafka library to the dependencies. If you also want to expose an HTTP API, add micronaut-http-server-netty:

<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>micronaut-kafka</artifactId>
</dependency>
<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-http-server-netty</artifactId>
</dependency>
3. Building the order microservice

order-service is the only application that starts an embedded HTTP server and exposes a REST API. That is why we can use Micronaut's built-in health checks for Kafka. To do so, we should first add the micronaut-management dependency:

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-management</artifactId>
</dependency>

For convenience, we enable all management endpoints and disable their HTTP authentication with the following configuration in application.yml:

endpoints:
  all:
    enabled: true
    sensitive: false

Now, the health check is available at http://localhost:8080/health. Our sample application also exposes a simple REST API for adding a new order and listing all previously created orders. Here is the Micronaut controller implementation that exposes these endpoints:

@Controller("orders")
public class OrderController {

    @Inject
    OrderInMemoryRepository repository;
    @Inject
    OrderClient client;

    @Post
    public Order add(@Body Order order) {
        order = repository.add(order);
        client.send(order);
        return order;
    }

    @Get
    public Set<Order> findAll() {
        return repository.findAll();
    }

}

Each microservice uses an in-memory repository implementation. Below is the repository implementation in order-service:

@Singleton
public class OrderInMemoryRepository {

    private Set<Order> orders = new HashSet<>();

    public Order add(Order order) {
        order.setId((long) (orders.size() + 1));
        orders.add(order);
        return order;
    }

    public void update(Order order) {
        orders.remove(order);
        orders.add(order);
    }

    public Optional<Order> findByTripIdAndType(Long tripId, OrderType type) {
        return orders.stream().filter(order -> order.getTripId().equals(tripId) && order.getType() == type).findAny();
    }

    public Optional<Order> findNewestByUserIdAndType(Long userId, OrderType type) {
        return orders.stream().filter(order -> order.getUserId().equals(userId) && order.getType() == type)
                .max(Comparator.comparing(Order::getId));
    }

    public Set<Order> findAll() {
        return orders;
    }

}
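Note that update() works by remove-then-add: HashSet.remove only finds the stale copy if Order's equals and hashCode are based on the id alone. The article does not show Order's equals/hashCode implementation, so here is a minimal sketch, using a hypothetical Entity class as a stand-in for Order, of why id-based equality makes this pattern work:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Stand-in for Order: equality is based on id only, so
// HashSet.remove(updated) finds and evicts the stale copy.
class Entity {
    final long id;
    String status;

    Entity(long id, String status) { this.id = id; this.status = status; }

    @Override public boolean equals(Object o) {
        return o instanceof Entity && ((Entity) o).id == id;
    }

    @Override public int hashCode() { return Objects.hash(id); }
}

public class UpdateDemo {
    public static void main(String[] args) {
        Set<Entity> store = new HashSet<>();
        store.add(new Entity(1L, "NEW"));

        // update(): remove-then-add swaps the stored instance,
        // because both instances share the same id-based hash/equals
        Entity updated = new Entity(1L, "COMPLETED");
        store.remove(updated);
        store.add(updated);

        System.out.println(store.size());                    // 1
        System.out.println(store.iterator().next().status);  // COMPLETED
    }
}
```

If equals/hashCode instead used all fields, remove would not match the old copy and the set would keep both versions.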

The in-memory repository stores Order object instances. The Order object is also sent to the orders Kafka topic. Here is the implementation of the Order class:

public class Order {

    private Long id;
    private LocalDateTime createdAt;
    private OrderType type;
    private Long userId;
    private Long tripId;
    private float currentLocationX;
    private float currentLocationY;
    private OrderStatus status;

    // ... GETTERS AND SETTERS
}

4. Asynchronous communication with Kafka

Now, let's consider a use case implemented by the example system: adding a new trip.

We create a new order of type OrderType.NEW_TRIP. After that, (1) order-service creates the order and sends it to the orders topic. The order is received by three microservices: driver-service, passenger-service, and trip-service.
(2) All these applications process the new order. passenger-service checks whether there is enough money in the passenger's account; if not, it cancels the trip, otherwise it does nothing. driver-service looks for the nearest available driver, and (3) trip-service creates and stores a new trip. Both driver-service and trip-service send events containing information about the changes to their own topics (drivers, trips).

Each event can be accessed by the other microservices; for example, (4) trip-service listens for events from driver-service in order to assign a new driver to the trip.

The following figure illustrates the communication between our microservices when adding a new trip.

(figure: new-trip communication flow)

Now, let's move on to the implementation details.

4.1. Sending orders

First, we need to create a Kafka client responsible for sending messages to a topic. We create an interface named OrderClient, annotate it with @KafkaClient, and declare one or more methods for sending messages. Each method should set the target topic name via the @Topic annotation. For method parameters we can use three annotations: @KafkaKey, @Body, or @Header. @KafkaKey is used for message partitioning. In the client implementation below we only use the @Body annotation:

@KafkaClient
public interface OrderClient {

    @Topic("orders")
    void send(@Body Order order);

}
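An aside on @KafkaKey: Kafka sends records with equal keys to the same partition, which preserves per-key ordering. The sketch below illustrates the idea with a simple hash-modulo scheme; note this is illustrative only, since Kafka's DefaultPartitioner actually applies a murmur2 hash to the serialized key bytes:

```java
// Illustrative only: shows why an equal key always maps to the same
// partition. Kafka's real DefaultPartitioner uses murmur2 over the
// serialized key bytes, not String.hashCode().
public class PartitionSketch {

    static int partitionFor(String key, int numPartitions) {
        // floorMod avoids a negative index for negative hash codes
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-42", 3);
        int p2 = partitionFor("order-42", 3);
        System.out.println(p1 == p2);  // true: same key, same partition
    }
}
```

This per-key ordering guarantee is the reason to use @KafkaKey when events for the same entity must be consumed in order.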

4.2. Receiving orders

Once the client sends an order, it is received by all the other microservices listening on the orders topic. Below is the listener implementation in driver-service. The listener class OrderListener should be annotated with @KafkaListener. We can declare groupId as an annotation parameter to prevent multiple instances of a single application from receiving the same message. Then we declare a method for handling incoming messages. As with the client, the target topic name should be set with the @Topic annotation, and since we are listening for Order objects, we should use the @Body annotation, the same as in the corresponding client method.

@KafkaListener(groupId = "driver")
public class OrderListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderListener.class);

    private DriverService service;

    public OrderListener(DriverService service) {
        this.service = service;
    }

    @Topic("orders")
    public void receive(@Body Order order) {
        LOGGER.info("Received: {}", order);
        switch (order.getType()) {
            case NEW_TRIP -> service.processNewTripOrder(order);
        }
    }

}

4.3. Send to other topics

Now, let’s take a lookDriver serviceMediumprocessNewTripOrderMethod.DriverServiceInject two differentKafka Client
bean: OrderClientandDriverClient。 When processing a new order, it will try to find the driver closest to the passenger who sent the order. After finding him, change the driver’s status toUNAVAILABLE, and will haveDriverObject’s events are sent todriverstopic。

@Singleton
public class DriverService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverService.class);

    private DriverClient client;
    private OrderClient orderClient;
    private DriverInMemoryRepository repository;

    public DriverService(DriverClient client, OrderClient orderClient, DriverInMemoryRepository repository) {
        this.client = client;
        this.orderClient = orderClient;
        this.repository = repository;
    }

    public void processNewTripOrder(Order order) {
        LOGGER.info("Processing: {}", order);
        Optional<Driver> driver = repository.findNearestDriver(order.getCurrentLocationX(), order.getCurrentLocationY());
        driver.ifPresent(driverLocal -> {
            driverLocal.setStatus(DriverStatus.UNAVAILABLE);
            repository.updateDriver(driverLocal);
            client.send(driverLocal, String.valueOf(order.getId()));
            LOGGER.info("Message sent: {}", driverLocal);
        });
    }

    // ...
}
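The repository's findNearestDriver method is not shown in the article. A plausible sketch, assuming simple Euclidean distance over in-memory (x, y) coordinates and a hypothetical Driver record, might look like this:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch: the article does not show the repository
// internals, so this assumes plain Euclidean distance over
// in-memory coordinates.
public class NearestDriverSketch {

    record Driver(long id, float x, float y) {}

    static Optional<Driver> findNearest(List<Driver> drivers, float x, float y) {
        // pick the driver minimizing straight-line distance to (x, y)
        return drivers.stream()
                .min(Comparator.comparingDouble(
                        d -> Math.hypot(d.x() - x, d.y() - y)));
    }

    public static void main(String[] args) {
        List<Driver> drivers = List.of(
                new Driver(1L, 0f, 0f),
                new Driver(2L, 5f, 5f));
        System.out.println(findNearest(drivers, 4f, 4f).get().id());  // 2
    }
}
```

A real implementation would also filter out drivers whose status is not AVAILABLE before taking the minimum.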

Here is the Kafka client implementation in driver-service that sends messages to the drivers topic. Because we need to associate a Driver with an Order, we use the orderId parameter annotated with @Header. There is no need to include it in the Driver class; it is mapped to the matching parameter on the listener side.

@KafkaClient
public interface DriverClient {

    @Topic("drivers")
    void send(@Body Driver driver, @Header("Order-Id") String orderId);

}

4.4. Communication between services

The events sent by driver-service are received by DriverListener, declared with @KafkaListener in trip-service. It listens for incoming messages on the drivers topic. The parameters of the receiving method mirror those of the client's sending method, as shown below:

@KafkaListener(groupId = "trip")
public class DriverListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverListener.class);

    private TripService service;

    public DriverListener(TripService service) {
        this.service = service;
    }

    @Topic("drivers")
    public void receive(@Body Driver driver, @Header("Order-Id") String orderId) {
        LOGGER.info("Received: driver->{}, header->{}", driver, orderId);
        service.processNewDriver(driver, orderId);
    }

}

The last step is to find the Trip by orderId and associate the driverId with it, which completes the whole process.

@Singleton
public class TripService {

    private static final Logger LOGGER = LoggerFactory.getLogger(TripService.class);

    private TripInMemoryRepository repository;
    private TripClient client;

    public TripService(TripInMemoryRepository repository, TripClient client) {
        this.repository = repository;
        this.client = client;
    }


    public void processNewDriver(Driver driver, String orderId) {
        LOGGER.info("Processing: {}", driver);
        Optional<Trip> trip = repository.findByOrderId(Long.valueOf(orderId));
        trip.ifPresent(tripLocal -> {
            tripLocal.setDriverId(driver.getId());
            repository.update(tripLocal);
        });
    }

    // ... OTHER METHODS

}

5. Distributed tracing

With Micronaut Kafka we can easily enable distributed tracing. First, we need to enable and configure Micronaut tracing by adding some dependencies:

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-tracing</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave-instrumentation-http</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.opentracing.brave</groupId>
    <artifactId>brave-opentracing</artifactId>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-kafka-client</artifactId>
    <version>0.0.16</version>
    <scope>runtime</scope>
</dependency>
We also need to configure, among other things, the address of the Zipkin collector in the application.yml configuration file:

tracing:
  zipkin:
    enabled: true
    http:
      url: http://192.168.99.100:9411
    sampler:
      probability: 1
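The sampler.probability setting controls what fraction of traces is reported; 1 means every request is traced. The following tiny sketch illustrates the decision a probability sampler makes (an illustration of the concept, not Brave's actual implementation):

```java
import java.util.Random;

// Sketch of a probability sampler: with probability 1.0 every trace
// is reported; lower values report only that fraction of traces.
public class SamplerSketch {

    static boolean shouldSample(Random rnd, double probability) {
        // nextDouble() is in [0, 1), so probability 1.0 always samples
        return rnd.nextDouble() < probability;
    }

    public static void main(String[] args) {
        System.out.println(shouldSample(new Random(), 1.0));  // true
    }
}
```

In production you would typically lower the probability to reduce tracing overhead.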

Before starting the applications, we have to run the Zipkin container:

$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin

6. Summary

In this article, you have learned the process of building a microservice architecture with asynchronous communication based on Apache Kafka. I have shown you the most important features of the Micronaut Kafka library, which lets you easily declare producers and consumers of Kafka topics and enable health checks and distributed tracing for your microservices. I have described the implementation of a simple scenario for our system: adding a new trip at a customer's request. For the full implementation of this sample system, please check the source code on GitHub.

Original link: https://piotrminkowski.wordpress.com/2019/08/06/kafka-in-microservices-with-micronaut/

Author: Piotr Mińkowski

Translator: Li Dong