ServiceComb Microservice Framework Source Code Analysis, Part 1: The Use of Vert.x

Time:2022-11-24

1. How Vert.x is used

ServiceComb is built on top of Vert.x. When a consumer sends a request to a provider, the request is ultimately sent on a Vert.x event loop thread.

The last built-in handler on the consumer side, TransportClientHandler, is responsible for preprocessing before sending, sending the request, preprocessing after the response is received, and processing the response result. The sending itself relies on Vert.x.
Since Vert.x is asynchronous and event-driven, understanding its sending process first requires understanding a few concepts:
AsyncResult, Future, Promise, and Context. The relationship between them is described below.

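Since the original diagram is not reproduced here, the relationship can be sketched with simplified JDK-only stand-ins for these types (MiniFuture and MiniPromise are illustrative names, not the real io.vertx.core API): a Promise is the write side that the executor completes, a Future is the read side on which the caller registers a handler, and an AsyncResult carries the outcome.

```java
import java.util.function.Consumer;

// Simplified stand-ins, NOT the real Vert.x types:
// AsyncResult carries either a result or a failure cause.
interface AsyncResult<T> {
  T result();

  Throwable cause();

  default boolean succeeded() { return cause() == null; }
}

// The "Future" is the read side: the caller registers a completion handler.
class MiniFuture<T> {
  private Consumer<AsyncResult<T>> handler;
  private AsyncResult<T> outcome;

  void onComplete(Consumer<AsyncResult<T>> h) {
    handler = h;
    if (outcome != null) h.accept(outcome); // already completed
  }

  void deliver(AsyncResult<T> ar) {
    outcome = ar;
    if (handler != null) handler.accept(ar);
  }
}

// The "Promise" is the write side: the executor completes it, which
// fires the handler registered on the associated future.
class MiniPromise<T> {
  private final MiniFuture<T> future = new MiniFuture<>();

  MiniFuture<T> future() { return future; }

  void complete(T value) {
    future.deliver(new AsyncResult<T>() {
      public T result() { return value; }

      public Throwable cause() { return null; }
    });
  }
}
```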

1.1 ServiceComb server Vertx deployment

//1. SCBEngine.java initializes transport
  private void doRun() throws Exception {
...
    transportManager.init(this);
...
}

//2. TransportManager.java find transport
  public void init(SCBEngine scbEngine) throws Exception {
    for (Transport transport : transportMap.values()) {
      if (transport.init()) {
...
      }
    }
  }

//3.VertxRestTransport.java deploys vertx

  @Override
  public boolean init() throws Exception {
    restClient = RestTransportClientManager.INSTANCE.getRestClient();
...
//Two important parameters are passed to the Vert.x framework and finally delivered to the Verticle instance
    SimpleJsonObject json = new SimpleJsonObject();
    json.put(ENDPOINT_KEY, getEndpoint());
    json.put(RestTransportClient.class.getName(), restClient);
    options.setConfig(json);
...
    return VertxUtils.blockDeploy(transportVertx, TransportConfig.getRestServerVerticle(), options);
  }

//TransportConfig.getRestServerVerticle() is RestServerVerticle.java
  @Override
  public void init(Vertx vertx, Context context) {
    super.init(vertx, context);
// Take out the endpoint information; deployment continues in the start method
    this.endpoint = (Endpoint) context.config().getValue(AbstractTransport.ENDPOINT_KEY);
    this.endpointObject = (URIEndpointObject) endpoint.getAddress();
  }

Because Vert.x event loop threads are used, each verticle instance is bound to a single event loop thread, forming something similar to the Actor pattern. This model avoids resource contention and thread-safety issues.
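The "one verticle instance, one event loop thread" binding can be illustrated with a plain JDK single-threaded executor (an analogy, not Vert.x's actual scheduler): every task submitted to it runs on the same thread, so state touched only from those tasks needs no locking.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleLoopDemo {
  public static Set<String> runTasks() {
    // One thread plays the role of the verticle's event loop.
    ExecutorService loop = Executors.newSingleThreadExecutor();
    Set<String> threadNames = new HashSet<>();
    for (int i = 0; i < 100; i++) {
      // No synchronization on threadNames is needed: all writes
      // happen on the single loop thread, like verticle state.
      loop.execute(() -> threadNames.add(Thread.currentThread().getName()));
    }
    loop.shutdown();
    try {
      loop.awaitTermination(5, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return threadNames;
  }
}
```

Every one of the 100 tasks observes the same thread name, which is exactly the guarantee a standard verticle gets.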

1.2 ServiceComb client verticle deployment

// 1. CseApplicationListener.java
  @Override
  public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
...
    HttpClients.load();
...
  }

//2.HttpClients.java
  private static ClientPoolManager<HttpClientWithContext> createClientPoolManager(HttpClientOptionsSPI option) {
    Vertx vertx = getOrCreateVertx(option);
    ClientPoolManager<HttpClientWithContext> clientPoolManager = new ClientPoolManager<>(vertx,
        new HttpClientPoolFactory(HttpClientOptionsSPI.createHttpClientOptions(option)));

    DeploymentOptions deployOptions = VertxUtils.createClientDeployOptions(clientPoolManager,
        option.getInstanceCount())
        .setWorker(option.isWorker())
        .setWorkerPoolName(option.getWorkerPoolName())
        .setWorkerPoolSize(option.getWorkerPoolSize());
    try {
      VertxUtils.blockDeploy(vertx, ClientVerticle.class, deployOptions);
      return clientPoolManager;
    } catch (InterruptedException e) {
      throw new IllegalStateException(e);
    }
  }

2. Future and Promise

Asynchronous programming raises the question of how to obtain an asynchronous result. Any programming problem is a computation problem, and computation has three elements: the business logic (what to compute), the executor (who computes), and the output (the result). For asynchronous computation, Java wraps result retrieval in a Future object. The Future is bound to an executor; after the executor runs the business logic, it notifies the Future whether the computation succeeded or failed and what the output is. We can therefore regard Future as the encapsulation of business logic, executor, and result output.

With asynchronous tasks, we often conflate the user thread and the thread that executes the task. In fact, asynchrony requires two or more executors. One executor is the entry point, often called the main thread. Another executor is the asynchronous task's thread, created by the main thread; it executes the business logic asynchronously and fills the result into the Future, from which the main thread then obtains the result of the asynchronous task.

The following code is taken from the reference link at the end of the article

[Figure: example Future code from the reference link]

From a development perspective, this code is arranged in order from top to bottom. The runtime view, however, is different: the code has two participants. Participant 1 is main, and participant 2 is the executor created inside main. Executor 2 is responsible for executing the business logic and communicates with main through the Future.
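The referenced code is shown as an image in the original post; a minimal JDK equivalent of the pattern just described (the executor name is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
  public static int compute() {
    // Participant 2: a worker thread created from main.
    ExecutorService executor2 = Executors.newSingleThreadExecutor();
    // main hands the business logic to executor2 and gets a Future back.
    Future<Integer> future = executor2.submit(() -> 21 * 2);
    try {
      // Participant 1 (main) blocks here until executor2 fills in the result.
      return future.get();
    } catch (Exception e) {
      throw new IllegalStateException(e);
    } finally {
      executor2.shutdown();
    }
  }
}
```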

Future implementation ideas
[Figure: Future implementation ideas]

That covers the JDK's Future class. Netty's Future is slightly different (see the reference links for details), but the details don't matter here: Netty's Future supports setting callbacks, and to avoid callback hell it introduces a Promise-like object. A Promise is similar to a Future, acting as the intermediary between main and the asynchronous executor; it exists to handle nested callbacks more gracefully.

Going further, CompletableFuture, added in Java 8, provides a more advanced callback mechanism that makes up for the shortcomings of Java's built-in Future.
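A short sketch of CompletableFuture's callback style: a single object serves as both the write side (complete) and the read side (thenApply/join), mirroring the Promise/Future split.

```java
import java.util.concurrent.CompletableFuture;

public class CompletableDemo {
  public static String greet() {
    // The CompletableFuture acts as both Promise (complete) and Future (join).
    CompletableFuture<String> promise = new CompletableFuture<>();
    // Register a callback instead of blocking on get().
    CompletableFuture<String> decorated = promise.thenApply(name -> "hello " + name);
    // Another thread would normally call complete(); done inline for brevity.
    promise.complete("vertx");
    return decorated.join();
  }
}
```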

Summary: the asynchronous-task Future pattern has several implementations: the Future that ships with the JDK, Netty's own Future (with its added Promise), and, since Java 8, CompletableFuture. All of them attack the same problem of making asynchronous programming easier to use, each from a different angle; essentially nothing new has been added. That is to say, much of Java's evolution from 1.0 to 8 is not new technology but polishing existing things to make them easier to use, which is why there are dozens of ways to write a thread task, enough to make anyone dizzy. What looks like a lot of knowledge is really the same old rice fried over and over: rice is still rice, and no noodles or new dishes have been added.


Reference links:
  • Future mode and Promise mode
  • Chapter 4 Future and Promise

3. Verticle

Verticles come in two flavors: standard verticles and worker verticles.

  • Standard verticles
    1. Executed on an event loop thread.
    2. This means all the code in a verticle instance is guaranteed to always execute on the same event loop (as long as you don't create your own threads and call it!).
  • Worker verticles
    1. Executed on the worker pool.
    2. A worker verticle instance is never executed concurrently by more than one thread, but it can be executed by different threads at different times.

3.1 Verticle instances and workers

`instances` is the number of verticle instances to deploy; the key code is as follows:

// vertx source code DeploymentManager.java

    int nbInstances = options.getInstances();
    Set<Verticle> verticles = Collections.newSetFromMap(new IdentityHashMap<>());
    for (int i = 0; i < nbInstances; i++) {
      Verticle verticle;
      try {
        verticle = verticleSupplier.call();
      } catch (Exception e) {
        return Future.failedFuture(e);
      }
      if (verticle == null) {
        return Future.failedFuture("Supplied verticle is null");
      }
      verticles.add(verticle);
    }

The difference between a standard verticle and a worker verticle can be seen in the key code below:
the former runs on an event loop thread (a Netty network thread), while the latter runs on a worker thread (a thread created by Vert.x itself).

// vertx source code DeploymentManager.java

    for (Verticle verticle: verticles) {
      CloseFuture closeFuture = new CloseFuture(log);
      WorkerPool workerPool = poolName != null ? vertx.createSharedWorkerPool(poolName, options.getWorkerPoolSize(), options.getMaxWorkerExecuteTime(), options.getMaxWorkerExecuteTimeUnit()) : null;
      ContextImpl context = (options.isWorker() ? vertx.createWorkerContext(deployment, closeFuture, workerPool, tccl) :
        vertx.createEventLoopContext(deployment, closeFuture, workerPool, tccl));
      VerticleHolder holder = new VerticleHolder(verticle, context, workerPool, closeFuture);
      deployment.addVerticle(holder);
      context.runOnContext(v -> {
        try {
          verticle.init(vertx, context);
          Promise<Void> startPromise = context.promise();
          Future<Void> startFuture = startPromise.future();
          verticle.start(startPromise);
          startFuture.onComplete(ar -> {
            if (ar.succeeded()) {
              if (parent != null) {
                if (parent.addChild(deployment)) {
                  deployment.child = true;
                } else {
                  // Orphan
                  deployment.doUndeploy(vertx.getOrCreateContext()).onComplete(ar2 -> promise.fail("Verticle deployment failed.Could not be added as child of parent verticle"));
                  return;
                }
              }
              deployments.put(deploymentID, deployment);
              if (deployCount.incrementAndGet() == verticles.length) {
                promise.complete(deployment);
              }
            } else if (failureReported.compareAndSet(false, true)) {
              deployment.rollback(callingContext, promise, context, holder, ar.cause());
            }
          });
        } catch (Throwable t) {
          if (failureReported.compareAndSet(false, true))
            deployment.rollback(callingContext, promise, context, holder, t);
        }
      });
    }

3.2 ServiceComb RestServerVerticle Deployment

1. Verticle instance count configuration

TransportConfig.getThreadCount() prefers servicecomb.rest.server.verticle-count from the configuration file.

// If not configured, the default value is used
count = Runtime.getRuntime().availableProcessors() > 8 ? 8 : Runtime.getRuntime().availableProcessors();

2. workerPoolSize configuration, default 20

options.setWorkerPoolSize(VertxOptions.DEFAULT_WORKER_POOL_SIZE);

3. Event loop count configuration

It is preferentially taken from the configuration servicecomb.transport.eventloop.size; the default is twice the number of CPU cores.
DEFAULT_EVENT_LOOP_POOL_SIZE = 2 * CpuCoreSensor.availableProcessors();

4. internalBlockingPoolSize, default 20
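Putting these knobs together, a hypothetical microservice.yaml fragment (the keys come from the text above; the values are only examples):

```yaml
servicecomb:
  rest:
    server:
      verticle-count: 8    # RestServerVerticle instances
  transport:
    eventloop:
      size: 16             # event loop threads, default 2 * CPU cores
```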

The following shows the business-1-1-0 microservice configured with verticle-count: 8 under concurrent requests; you can see that all 8 event loop threads are engaged.
[Figure: thread view showing all 8 event loop threads active]

3.3 vertx thread creation

Event loop threads are created as VertxThread instances; VertxThread is a subclass of Netty's FastThreadLocalThread.

//VertxImpl constructor

    eventLoopThreadFactory = createThreadFactory(maxEventLoopExecTime, maxEventLoopExecTimeUnit, "vert.x-eventloop-thread-", false);
    eventLoopGroup = transport.eventLoopGroup(Transport.IO_EVENT_LOOP_GROUP, options.getEventLoopPoolSize(), eventLoopThreadFactory, NETTY_IO_RATIO);
    ThreadFactory acceptorEventLoopThreadFactory = createThreadFactory(options.getMaxEventLoopExecuteTime(), options.getMaxEventLoopExecuteTimeUnit(), "vert.x-acceptor-thread-", false);
    // The acceptor event loop thread needs to be from a different pool otherwise can get lags in accepted connections
    // under a lot of load
    acceptorEventLoopGroup = transport.eventLoopGroup(Transport.ACCEPTOR_EVENT_LOOP_GROUP, 1, acceptorEventLoopThreadFactory, 100);
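The factories above hand out threads with a fixed name prefix; a minimal JDK sketch of such a factory (the real createThreadFactory also wraps threads as VertxThread with an execute-time limit, which is omitted here):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
  private final String prefix;
  private final AtomicInteger counter = new AtomicInteger();

  public NamedThreadFactory(String prefix) {
    this.prefix = prefix;
  }

  @Override
  public Thread newThread(Runnable task) {
    // Produces e.g. "vert.x-eventloop-thread-0", "vert.x-eventloop-thread-1", ...
    return new Thread(task, prefix + counter.getAndIncrement());
  }
}
```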

Open questions, to be continued

1. When the number of verticle instances is greater than the number of event loops and business processing is slow, how many verticle instances hang off one event loop? Under high concurrency, what happens when all instances are busy?
2. How does Vert.x distribute requests to different verticle instances, and how does the acceptorEventLoopGroup hand accepted connections over to the eventLoopGroup?