Java asynchronous non-blocking design pattern (principles)

Time: 2021-09-09


There are two articles in this series, which popularize the asynchronous non-blocking mode in Java. This principles part explains the asynchronous non-blocking model and the basic features of its core design pattern, the “Promise”. The applications part will show richer application scenarios, introduce Promise variants such as exception handling and scheduling strategies, and compare the Promise with existing tools.

Limited by the author's experience and by space, this series focuses on popular-science-style explanation, emphasizing principles, API design, and application practice; it does not go deep into the specifics of concurrency optimization.

1. Overview

Asynchronous non-blocking [A] is a high-performance thread model widely used in IO-intensive systems. In this model, after initiating a time-consuming request the system does not need to wait for the response and can perform other operations in the meantime; when the response arrives, the system is notified and performs the follow-up processing. Because unnecessary waiting is eliminated, this model makes full use of resources such as CPU and threads and improves resource utilization.

However, the asynchronous non-blocking mode improves performance at the cost of more complex code. The request and the response may be handled on different threads, and extra code is needed to hand the response result over between them. The Promise design pattern reduces this complexity by encapsulating implementation details such as data transfer, ordering control, and thread safety, thereby offering a concise API.

This article first introduces the asynchronous non-blocking mode and analyzes the difference between the blocking and non-blocking modes from the perspective of the thread model. It then introduces the application scenarios and workflow of the Promise design pattern. Finally, it provides a simple Java implementation that meets the basic functional requirements and is thread-safe.

Before diving into the technical details, let's look at what the asynchronous non-blocking model is. Figure 1-1 shows a scenario in which two little figures communicate with each other:

  1. The two little figures represent the two threads that communicate with each other, such as the client and server of a database; they can be deployed on different machines.
  2. The figures throw apples to each other, representing the messages they want to deliver. Depending on the business scenario, these messages may be called requests, responses, packets, documents, records, and so on.
  3. A channel must be established between the figures before messages can be delivered. Depending on the scenario, it may be called a channel, a connection, and so on.

Suppose the figure on the left initiates a request and the figure on the right processes it and sends a response: the left figure first throws an apple (the request), which the right figure catches; after processing, the right figure throws an apple back (the response), which the left figure catches. We examine the behavior of the left figure while it waits for the response: depending on whether it can handle other work during the wait, we classify the interaction as either “synchronous blocking” or “asynchronous non-blocking”.

Figure 1-1 Communication between two little figures

First, let's look at the flow of synchronous blocking communication, as shown in Figure 1-2a.

  1. Delivery. The figure on the left delivers the request and waits to receive the response.
  2. Waiting. While waiting for the response, the left figure rests. Whether there are other requests to deliver or other work to handle, it turns a blind eye and never interrupts its rest.
  3. Response. After the response is received, the figure wakes up from its rest and processes it.

Figure 1-2a Synchronous blocking communication

Next, let's look at the flow of asynchronous non-blocking communication, as shown in Figure 1-2b.

  1. Buffering. The figure on the left delivers the request and waits to receive the response. Unlike in the synchronous blocking mode, the figure does not have to catch the apple (the response) by hand; instead it puts a plate called the “buffer” on the ground, so that if the figure happens to be away, arriving apples can be stored on the plate and processed later.
  2. Leaving temporarily. Thanks to the plate (buffer), the figure can leave right after delivering the request to handle other work, or deliver the next request; if requests need to be delivered over different channels, the figure can put down several plates, one per channel.
  3. Response. After the figure has left, as soon as a plate receives a response a “megaphone” sounds, issuing a “channelRead” notification and calling the figure back to handle the response. If multiple responses or multiple channels are involved, the channelRead notification also needs to carry parameters indicating which response was received on which channel.

The megaphone here is implemented with NIO or AIO. In short, NIO keeps polling every plate and sends a notification as soon as it sees an apple, while AIO triggers the notification directly when an apple arrives, without polling. Readers of this series do not need to know more implementation details; it is enough to know that the asynchronous non-blocking mode depends on the “megaphone”, which waits for responses in place of the figure and thus frees the figure to handle other work.

Figure 1-2b Asynchronous non-blocking communication

According to the above analysis, the synchronous blocking mode has the following serious shortcomings:

  1. Synchronous blocking mode is very inefficient. The figure rests most of the time, waking up only briefly to deliver requests and process responses; in asynchronous non-blocking mode, the figure never rests, continuously delivering requests, processing responses, or handling other work.
  2. Synchronous blocking mode causes delays.

We consider the following two cases, as shown in Figure 1-3.

  • Channel multiplexing, that is, the figure on the left sends multiple messages in a row over one channel. In synchronous blocking mode, only one request (apple 1) can be delivered per round (request + response), while the subsequent requests (apples 2–4) have to wait in line; the figure on the right needs many rounds to receive all the messages it expects. Moreover, while waiting for a response, the figure on the left has no chance to process other messages it has received, so data processing is delayed. One can only sigh that the figure on the left is just too lazy!
  • Thread reuse, that is, one thread (figure) sends messages over multiple channels (apples 1–3, each to a different channel). The figure on the left can only do one thing at a time, either working or resting; after delivering apple 1, it lies down to wait for the response, ignoring figures 2 and 3 on the right, who are still waiting for apples 2 and 3.

Figure 1-3a Channel multiplexing

Figure 1-3b Thread reuse

In this chapter we have taken a first look at the synchronous blocking and asynchronous non-blocking modes in comic form and analyzed the differences between them. Next, we start from Java threads and analyze the two modes more formally and practically.

2. Asynchronous non-blocking model

2.1 Java thread states

In a Java program, the thread is the unit of scheduled execution. A thread obtains CPU time to execute code and do meaningful work. While working, it is sometimes suspended because it is waiting to acquire a lock, waiting for network IO, or for other reasons; this is commonly called “synchronous” or “blocking”. If multiple tasks can proceed at the same time without constraining or waiting for each other, the situation is called “asynchronous” or “non-blocking”.
Limited by memory, the number of system threads, and context-switching overhead, a Java program cannot create threads without bound; we can only create a limited number of threads and try to improve their utilization, that is, increase their working time and reduce their blocking time. The asynchronous non-blocking model is an effective way to reduce blocking and improve thread utilization. Of course, this model cannot eliminate all blocking. Let's first look at the states of Java threads, and at which kinds of blocking are necessary and which can be avoided.

The Java thread states include:

  • RUNNABLE: the thread is doing meaningful work
    As shown in Figure 2-1a, a thread performing pure in-memory operations is in the RUNNABLE state
    Depending on whether it currently holds the CPU, RUNNABLE is divided into two sub-states: ready and running
  • BLOCKED/WAITING/TIMED_WAITING: the thread is blocked
    As shown in Figures 2-1b, 2-1c and 2-1d, the thread is in one of these states depending on the reason for blocking
    BLOCKED: waiting to acquire a monitor lock (synchronized)
    WAITING/TIMED_WAITING: waiting to acquire a Lock (java.util.concurrent); the difference between the two states is whether a timeout is set

Figure 2-1 Java thread states

In addition, if a Java thread is performing network IO, the thread state is RUNNABLE, but blocking actually occurs. Taking socket programming as an example, as shown in Figure 2-2, InputStream.read() blocks until data is received, yet the thread state remains RUNNABLE.

Figure 2-2 Network IO

To sum up, the Java thread states are RUNNABLE, BLOCKED, WAITING and TIMED_WAITING. The RUNNABLE state covers both in-memory computation (non-blocking) and network IO (blocking in practice), while the other states are all blocked.
Based on the reason for blocking, this article groups Java thread states into the following three categories (a short code sketch follows the list):

  1. RUNNABLE: the Java thread state is RUNNABLE and the thread is doing useful in-memory computation without blocking
  2. IO: the Java thread state is RUNNABLE, but the thread is performing network IO and is effectively blocked
  3. BLOCKED: the Java thread state is BLOCKED / WAITING / TIMED_WAITING; under the control of a concurrency tool, the thread is blocked waiting to acquire a lock
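
To make this classification concrete, here is a minimal, self-contained sketch (class and variable names are illustrative): one thread parks in InputStream.read() and still reports RUNNABLE, while another thread waiting for a synchronized monitor reports BLOCKED.

import java.net.ServerSocket;
import java.net.Socket;

public class ThreadStateDemo {
    public static void main(String[] args) throws Exception {
        // A server that accepts the TCP connection into its backlog but never sends data,
        // so the client-side read() below blocks waiting for network IO.
        ServerSocket server = new ServerSocket(0);

        Thread ioThread = new Thread(() -> {
            try (Socket socket = new Socket("localhost", server.getLocalPort())) {
                socket.getInputStream().read(); // network IO: blocks, yet the state stays RUNNABLE
            } catch (Exception ignored) {
            }
        });

        Object monitor = new Object();
        Thread blockedThread = new Thread(() -> {
            synchronized (monitor) { } // waits for the monitor held by main: state BLOCKED
        });

        synchronized (monitor) {
            ioThread.start();
            blockedThread.start();
            Thread.sleep(500); // give both threads time to reach their waiting points

            System.out.println("IO thread:      " + ioThread.getState());      // RUNNABLE
            System.out.println("blocked thread: " + blockedThread.getState()); // BLOCKED
        }

        server.close();
        System.exit(0);
    }
}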

To improve thread utilization, we need to increase the time a thread spends in the RUNNABLE state and reduce the time it spends in the IO and BLOCKED states. The BLOCKED state is generally unavoidable, because threads need to communicate and to control concurrency in critical sections; with an appropriate thread model, however, the time spent in the IO state can be reduced, and that model is the asynchronous non-blocking model.

2.2 Thread models: blocking vs. non-blocking

The asynchronous non-blocking model reduces IO blocking time and improves thread utilization. Next, take database access as an example to analyze the thread models of the synchronous and asynchronous APIs. As shown in Figure 2-3, the process involves three functions:

  1. writeSync() or writeAsync(): access the database and send the request
  2. process(result): process the server response (represented by result)
  3. doOtherThings(): any other operation that is logically independent of the server response

Synchronous API. As shown in Figure 2-3a, the caller first sends the request and then waits on the network connection for the server's response data. The call blocks until a response is received; during that time the caller thread cannot perform other operations, even ones that do not depend on the server response. The actual execution sequence is:

  1. writeSync()
  2. process(result)
  3. doOtherThings() // the current thread cannot perform other operations until the result is received

Asynchronous API. As shown in Figure 2-3b, the caller sends the request and registers a callback, and the call returns immediately. The caller can then perform any other operation. Later, when the underlying network connection receives the response data, the registered callback is triggered. The actual execution sequence is:

  1. writeAsync()
  2. doOtherThings() // other operations can be performed without waiting for the response
  3. process(result)

Figure 2-3 Synchronous API vs. asynchronous API

In the process above, the function doOtherThings() does not depend on the server response and could in principle run concurrently with the database access. With the synchronous API, however, the caller is forced to wait for the server response before executing doOtherThings(); that is, during the database access the thread is blocked in the IO state, unable to do other useful work, and its utilization is very low. The asynchronous API has no such limitation and is more compact and efficient.
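
The difference in call shape can be sketched as follows; the names (SyncClient, AsyncClient, writeSync, writeAsync, Request, Result) follow this article's examples and are illustrative, not a real driver's API.

import java.util.function.Consumer;

// Illustrative request/response types.
class Request {}
class Result {}

// The two API shapes compared in Figure 2-3.
interface SyncClient {
    Result writeSync(Request request);                            // blocks until the response arrives
}

interface AsyncClient {
    void writeAsync(Request request, Consumer<Result> callback);  // returns immediately, callback fires later
}

class Caller {
    void useSync(SyncClient client) {
        Result result = client.writeSync(new Request()); // thread sits in the IO state here
        process(result);
        doOtherThings();                                  // delayed until the response is back
    }

    void useAsync(AsyncClient client) {
        client.writeAsync(new Request(), result -> process(result)); // runs later, on a receive thread
        doOtherThings();                                  // runs immediately, no waiting
    }

    void process(Result result) { /* handle the server response */ }
    void doOtherThings() { /* work that does not depend on the response */ }
}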

In IO-intensive systems, using the asynchronous non-blocking model appropriately can raise database access throughput. Consider a scenario in which multiple database requests need to be executed and the requests are independent of each other. Figure 2-4 shows how the thread state changes over time with the synchronous API and with the asynchronous API.
The thread alternates between the RUNNABLE and IO states. In the RUNNABLE state it performs memory computation, such as submitting requests and processing responses; in the IO state it waits for response data on the network connection. In a real system memory computation is very fast and the time spent in the RUNNABLE state is mostly negligible, while network transmission takes much longer (tens to hundreds of milliseconds), so the time spent in the IO state dominates.

a. Synchronous API: the caller thread can submit only one request at a time and cannot submit the next until the current one returns. Thread utilization is very low; most of the time is spent in the IO state.

b. Asynchronous API: the caller thread can submit several requests in a row even though none of the earlier requests has received a response yet. The caller registers callbacks, which are stored in memory; when response data arrives on the network connection, a receiving thread is notified, fetches the registered callback from memory and triggers it. Under this model requests can be submitted and answered continuously, eliminating the time spent waiting in the IO state (a short sketch follows the figure below).

Figure 2-4 Thread timeline: database access
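
Case b can be sketched with the illustrative AsyncClient interface from the sketch above: several independent requests are put in flight back to back, and the caller waits only once for the whole batch (a CountDownLatch is just one simple way to wait for completion).

import java.util.List;
import java.util.concurrent.CountDownLatch;

class PipelinedCaller {
    /** Submit all requests back-to-back, then wait once for all responses. */
    void submitAll(AsyncClient client, List<Request> requests) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(requests.size());
        for (Request request : requests) {
            client.writeAsync(request, result -> {   // every request is in flight concurrently
                process(result);
                done.countDown();
            });
        }
        done.await();                                // single wait instead of one wait per request
    }

    void process(Result result) { /* handle one response */ }
}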

The asynchronous non-blocking mode is widely used in IO-intensive systems. Common middleware such as HTTP clients [D], Redis [E], MongoDB [F], Elasticsearch [G] and InfluxDB [H] all provide asynchronous APIs; readers can find sample code for these APIs in the references. A few notes on the asynchronous APIs of this middleware:

  1. Common Redis clients are Jedis and Lettuce [E]. Lettuce provides an asynchronous API, while Jedis provides only a synchronous API; for a comparison of the two, see article [I].
  2. The send() method of the Kafka producer [J] also has an asynchronous form, but the API is not purely asynchronous [K]: when the underlying buffer is full or the server (broker) metadata cannot be obtained, send() blocks. Personally, I consider this a serious design defect. Kafka is often used in low-latency log collection: the system writes logs to the Kafka brokers over the network precisely to reduce blocking in its own threads and improve throughput, and other processes later consume and persist those logs. Imagine a real-time communication system in which a single thread must handle tens of thousands to hundreds of thousands of messages per second with response times of a few to tens of milliseconds, calling send() frequently to report logs. If each call can block for even one second (in practice it can be tens of seconds), the accumulated delay severely degrades both throughput and latency (sketched below).
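
The following sketch uses the real org.apache.kafka.clients.producer API to at least bound this blocking window via the producer's max.block.ms setting; the class name, topic and configuration values are illustrative.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LogReporter {
    private final KafkaProducer<String, String> producer;

    public LogReporter(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Bound how long send() may block when the buffer is full or metadata is
        // unavailable; without this, a low-latency caller can stall badly.
        props.put("max.block.ms", "10");
        this.producer = new KafkaProducer<>(props);
    }

    public void report(String logLine) {
        // send() is asynchronous in form (the callback fires later), but it can
        // still block inside this call if the internal buffer is full.
        producer.send(new ProducerRecord<>("app-logs", logLine), (metadata, exception) -> {
            if (exception != null) {
                // record the failure without blocking the caller
            }
        });
    }
}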

Finally, there are many ways to implement an asynchronous API, including thread pools, select (for example Netty 4.x [L]), epoll, and so on. What they have in common is that the caller never has to block on a network connection waiting for data; instead, a limited number of threads reside inside the API, and when data arrives one of them is notified and triggers the callback. This model is also aptly called the “reactive” model. Due to space limitations, this article focuses on asynchronous API design and does not cover the implementation principles of asynchronous APIs.

3. Promise design pattern

3.1 API forms: synchronous, asynchronous listener, asynchronous Promise

The previous chapter introduced the asynchronous non-blocking mode and the functional form of asynchronous APIs. Asynchronous APIs have the following characteristics:

  1. Register the callback when submitting the request;
  2. After submitting the request, the function returns immediately without waiting for a response;
  3. After the response is received, the registered callback is triggered; depending on the underlying implementation, a limited number of threads receive the response data and the callbacks are executed on those threads.

While keeping the asynchronous behaviour, the form of the asynchronous API can be refined further. Figure 2-3b in the previous chapter shows the listener version of the asynchronous API; its defining feature is that exactly one callback must be registered when the request is submitted. In the following scenarios the listener API therefore falls short of the functional requirements, and the caller has to do extra work:

  1. Several parts of the program care about the response data, i.e. multiple callbacks need to be registered; the listener API supports registering only one.
  2. An asynchronous call needs to be converted into a synchronous one. For example, some frameworks (such as Spring) require a synchronous return, or we want the main thread to block until the operation completes so that it can then exit and end the process; the listener API is purely asynchronous, so the caller has to write the asynchronous-to-synchronous conversion code again and again.

To handle these scenarios, we can use the Promise design pattern to restructure the asynchronous API so that it supports multiple callbacks and synchronous calls. The function forms of the synchronous API, the asynchronous listener API and the asynchronous Promise API are compared below, as shown in Figure 3-1:

  • a. Synchronous: call the writeSync() method, which blocks; when the response is received, the call returns the response data.
  • b. Asynchronous listener: call the writeAsync() method and register a listener; the call returns immediately. When the response is received, the registered listener is triggered on another thread.
  • c. Asynchronous Promise: call writeAsync(); no listener needs to be registered in the call itself, and it immediately returns a Promise object. The caller can call the asynchronous promise.await(listener) to register any number of listeners, which are triggered in order once the response is received; alternatively, the caller can call the synchronous promise.await(), which blocks until the response is received.

Figure 3-1 API forms: synchronous, asynchronous listener, asynchronous Promise

To sum up, the Promise API provides greater flexibility while preserving the asynchronous behaviour: the caller freely chooses whether the call blocks and may register any number of callbacks. A short usage sketch follows.
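
The following fragment sketches what calling such an API looks like; client, writeAsync(), Request and Result are illustrative names following this article's examples, and Promise is the class implemented in section 3.2.

/* submit the request; the call returns immediately with a Promise */
Promise<Result> promise = client.writeAsync(new Request());

/* register any number of callbacks; each fires once the response arrives */
promise.await(result -> process1(result));
promise.await(result -> process2(result));

/* or block synchronously when the calling code really needs the value now */
Result result = promise.await();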

3.2 Promise features and implementation

The previous section showed usage examples of the Promise API. Its core is a Promise object that supports registering listeners and obtaining the response result synchronously; this section defines the Promise's behaviour in more detail. Note that this section does not prescribe a specific Promise implementation (e.g. the JDK's CompletableFuture or Netty's DefaultPromise), but only presents the common, necessary features; without them, a Promise cannot complete the asynchronous transfer of response data.

3.2.1 Functional features

  • Basic Promise methods

The Promise's basic function is to transfer response data. The following methods need to be supported, as shown in Table 3-1:

Table 3-1 Basic Promise methods

  • void await(listener): registers a listener that is triggered once the response result is notified
  • void signalAll(result): notifies the response result and triggers every registered listener
  • T await(): blocks until the response result is notified, then returns it

Take the database access API from the previous chapter as an example to demonstrate the Promise workflow, as shown in Figure 3-2:

  • a. The caller calls the writeAsync() API to submit the database access request and obtains a Promise object, then calls promise.await(listener) to register a listener for the response data. The Promise object can also be passed to other parts of the program, so that other code interested in the response data can register more listeners.
  • b. Inside writeAsync(), a Promise object is created and associated with the request, identified here by a requestId.
  • c. writeAsync() keeps a limited number of resident threads underneath for sending requests and receiving responses. Taking Netty as an example, after response data arrives from the network, one of these threads is notified and runs the channelRead() function; that function looks up the response data and the corresponding Promise object and calls promise.signalAll() to notify it. Note that this is pseudo-code and differs slightly from the actual callback signature in Netty. A sketch of these internals follows the figures below.

Figure 3-2a Submitting the database access request

Figure 3-2b Creating the Promise object

Figure 3-2c Notifying the Promise object
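
The workflow of Figure 3-2 can be sketched as follows, assuming a Netty-like client underneath. The class names (DatabaseClient, Request, Response), the requestId correlation map and the onResponse() hook are illustrative; Promise is the class implemented below.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

class Request {}
class Response {}

class DatabaseClient {

    private final AtomicLong requestIdGenerator = new AtomicLong();
    private final Map<Long, Promise<Response>> pending = new ConcurrentHashMap<>();

    /** Step b: create a Promise, associate it with the request via a requestId, send, and return it. */
    public Promise<Response> writeAsync(Request request) {
        long requestId = requestIdGenerator.incrementAndGet();
        Promise<Response> promise = new Promise<>();
        pending.put(requestId, promise);
        sendOverNetwork(requestId, request);   // non-blocking network write, e.g. channel.writeAndFlush(...)
        return promise;                        // step a: the caller registers listeners on this object
    }

    /** Step c: called by a resident receive thread when response data arrives (cf. channelRead()). */
    void onResponse(long requestId, Response response) {
        Promise<Response> promise = pending.remove(requestId);
        if (promise != null) {
            promise.signalAll(response);       // triggers every registered listener
        }
    }

    private void sendOverNetwork(long requestId, Request request) {
        // the actual network write is omitted in this sketch
    }
}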

  • Promise timing

The Promise's methods must guarantee the following ordering rules. Ordering is described here as “A is visible to B”: if operation A (registering a listener) produces a lasting effect (the listener is permanently recorded), then a later operation B (notifying the result) must take that effect into account and perform the corresponding processing (trigger the previously recorded listener).

  1. await(listener) is visible to signalAll(result): after several listeners have been registered, every one of them must be triggered when the result is notified; none may be omitted.
  2. signalAll(result) is visible to await(listener): after the result has been notified, registering a listener triggers it immediately.
  3. The first signalAll(result) is visible to subsequent signalAll(result) calls: once the result has been notified, it is uniquely determined and never changes; later notifications are ignored without side effects. Request timeouts are a typical application of this rule: a scheduled task is created when the request is submitted; if the response data arrives within the timeout, the Promise is notified of normal completion, otherwise the scheduled task fires and the Promise is notified of exceptional completion. Whichever event happens first, only the first notification is accepted, so the request result is uniquely determined.

In addition, an await(listener) should be visible to subsequent await(listener) calls, so that listeners are triggered strictly in registration order.
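
The timeout application described in point 3 can be sketched as follows, using the Promise class implemented later in this section; the scheduler and the Promise<Object> result type (holding either the response or an exception) are illustrative choices, not part of the pattern itself.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class TimeoutExample {

    static final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /**
     * Returns a promise that is signalled with either the real response or a TimeoutException,
     * whichever comes first; the losing signalAll() call is simply ignored by the promise.
     */
    static Promise<Object> withTimeout(Promise<Object> responsePromise, long timeoutMillis) {
        Promise<Object> guarded = new Promise<>();

        // timeout path: if nothing has been signalled yet, signal an exceptional result
        scheduler.schedule(
                () -> guarded.signalAll(new TimeoutException("request timed out")),
                timeoutMillis, TimeUnit.MILLISECONDS);

        // normal path: forward the real response to the same promise
        responsePromise.await(result -> guarded.signalAll(result));
        return guarded;
    }
}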

  • Non-thread-safe implementation of Promise

If thread safety is not considered, the following code listing implements the basic features of the Promise; a thread-safe implementation is given in the next section. The listing implements, in order, await(listener): void, signalAll(result): void and await(): T. A few points to note:

  1. The field listeners stores the listeners registered via await(listener). Its type is LinkedList, so that any number of listeners can be stored and their trigger order preserved.
  2. The field isSignaled records whether the result has been notified. If isSignaled == true, a listener passed to await(listener) is triggered immediately, and further signalAll(result) calls are ignored. We use the isSignaled flag rather than checking result != null, because in some cases null itself is valid response data: for example, a Promise<Exception> can represent the result of a database write, where null means success and an Exception object (or subclass) carries the failure reason.
  3. signalAll(T result) calls listeners.clear() at the end to free memory: once the listeners have been triggered, they no longer need to be kept.
import java.util.LinkedList;
import java.util.List;
import java.util.function.Consumer;

public class Promise<T> {

    private boolean isSignaled = false;
    private T result;

    private final List<Consumer<T>> listeners = new LinkedList<>();

    public void await(Consumer<T> listener) {
        if (isSignaled) {
            listener.accept(result);
            return;
        }

        listeners.add(listener);
    }

    public void signalAll(T result) {
        if (isSignaled) {
            return;
        }

        this.result = result;
        isSignaled = true;
        for (Consumer<T> listener : listeners) {
            listener.accept(result);
        }
        listeners.clear();
    }

    public T await() {
        // block appropriately until signalAll() is called; see the thread-safe implementation in section 3.2.2
        return result;
    }
}

3.2.2 Thread-safety features

The previous section (3.2.1) explained the Promise's functionality and provided a non-thread-safe implementation. This section shows how to implement a thread-safe Promise using concurrency tools, as shown below. A few points to note:

  1. Thread safety. Every field is accessed by multiple threads, so it belongs to a critical section and must be protected with an appropriate tool such as synchronized or Lock. The simplest approach is to put all the code inside the critical section: acquire the lock when entering a method and release it when leaving. Note: when returning early with return, do not forget to release the lock.
  2. Trigger listeners outside the critical section, to shorten the time spent inside it and reduce the potential risk of deadlock.
  3. Synchronous await(). Any synchronization/waiting tool can be used, such as CountDownLatch or Condition; a Condition is used here. Note that, per Java semantics, the lock associated with a Condition must be held when operating on it.
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;

public class Promise<T> {

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition resultCondition = lock.newCondition();

    private boolean isSignaled = false;
    private T result;

    private final List<Consumer<T>> listeners = new LinkedList<>();

    public void await(Consumer<T> listener) {
        lock.lock();
        if (isSignaled) {
            lock.unlock(); // don't forget to release the lock
            listener.accept(result); // trigger the listener outside the critical section
            return;
        }

        listeners.add(listener);
        lock.unlock();
    }

    public void signalAll(T result) {
        lock.lock();
        if (isSignaled) {
            lock.unlock(); // don't forget to release the lock
            return;
        }

        this.result = result;
        isSignaled = true;

        // copy this.listeners so the listeners can be triggered outside the lock
        List<Consumer<T>> listeners = new ArrayList<>(this.listeners);
        this.listeners.clear();
        lock.unlock();

        for (Consumer<T> listener : listeners) {
            listener.accept(result); // trigger the listener outside the critical section
        }

        /* the lock associated with the condition must be held when signalling it */
        lock.lock();
        resultCondition.signalAll();
        lock.unlock();
    }

    public T await() {
        lock.lock();
        if (isSignaled) {
            lock.unlock(); // don't forget to release the lock
            return result;
        }

        while (!isSignaled) {
            resultCondition.awaitUninterruptibly();
        }
        lock.unlock();

        return result;
    }
}

The above implementation is for demonstration only and leaves plenty of room for improvement. For production-grade implementations, readers can refer to the JDK's CompletableFuture and Netty's DefaultPromise. Possible improvements include:

  1. Setting the response data with CAS. The fields isSignaled and result can be combined into a single data object set via CAS, further reducing blocking time; a small sketch follows this list.
  2. Ordering of listener triggering. In the code above, promise.signalAll() triggers the listeners in turn; if another thread calls the asynchronous await(listener) during this period, that thread also triggers its listener itself, because the response data has already been notified. Two threads then trigger listeners at the same time, so the trigger order is not strictly guaranteed. As an improvement, similar to Netty's DefaultPromise, signalAll() can loop and keep draining the listener list until it is empty, preventing new listeners from being triggered out of order in the meantime; listeners registered during this period are simply appended to the list rather than triggered immediately.
  3. Removal of listeners. Until the response data is notified, the Promise holds references to the listeners, so the listener objects cannot be garbage-collected. A remove(listener) method can be added, or the Promise can hold only weak references to listeners.
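
The first improvement can be sketched as follows: the isSignaled flag and the result are folded into one immutable holder set exactly once via compareAndSet, so deciding which signalAll() "wins" requires no lock. Listener management is omitted, and the class and method names are illustrative.

import java.util.concurrent.atomic.AtomicReference;

class CasResult<T> {

    /** Immutable holder; a non-null holder means the result has been signalled. */
    private static final class Holder<U> {
        final U result;
        Holder(U result) { this.result = result; }
    }

    private final AtomicReference<Holder<T>> holder = new AtomicReference<>();

    /** Returns true only for the first caller; later calls are ignored, matching the timing rules above. */
    boolean trySignal(T result) {
        return holder.compareAndSet(null, new Holder<>(result));
    }

    boolean isSignaled() {
        return holder.get() != null;
    }

    T result() {
        Holder<T> h = holder.get();
        return h == null ? null : h.result;
    }
}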

3.2.3 Features to avoid

The previous sections showed the Promise's features and implementation principles. A pure Promise is a tool for asynchronously transferring response data: it should implement only the necessary data-transfer features and should not be mixed with logic such as request submission or data processing. Next, let's look at which features a Promise implementation should avoid, so as not to limit the decisions the caller can make.

1. Blocking inside asynchronous methods, for example in writeAsync() or in the listener-registering await(listener). This applies not only to the Promise but to any asynchronous API. Asynchronous APIs are often used in latency-sensitive scenarios such as real-time communication precisely to reduce thread blocking and avoid delaying subsequent operations; once blocking occurs, the system's response time and throughput suffer severely.

Take the continuous submission of database requests as an example. As shown in Figure 3-3a, the caller calls the asynchronous API, submits three write requests in a row, and registers a callback on each returned Promise.

Let's examine the impact on the caller if writeAsync() or await(listener) blocks, as shown in Figure 3-3b. Submitting a request is a pure memory operation, during which the thread is in the RUNNABLE state; if writeAsync() or await(listener) blocks, the thread enters the BLOCKED state, pausing its work and unable to perform subsequent operations. When blocking occurs, the caller has to wait a while after each submission, which lowers the rate at which requests are submitted, delays the server's responses to them, and thus reduces the system's throughput and increases its latency. In particular, if the system uses multiplexing, i.e. one thread handles multiple network connections or multiple requests, thread blocking severely slows down the processing of subsequent requests and leads to faults that are hard to troubleshoot.

Common causes of blocking include:

  • Thread.sleep()
  • Submitting a task to a queue with BlockingQueue.put() or taking one with take(); these should be replaced with the non-blocking offer() and poll(), as sketched after the figures below
  • Submitting a task to a thread pool with ExecutorService.submit(), when the rejection policy is CallerRunsPolicy and the task itself is time-consuming
  • Calling functions that block, including InputStream.read(), the synchronous promise.await(), and KafkaProducer.send(); note that although KafkaProducer.send() is asynchronous in form, it still blocks when the underlying buffer is full or the server (broker) metadata cannot be obtained

Figure 3-3a Continuous request submission

Figure 3-3b Request processing timeline
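
For the queue hand-off mentioned in the list above, the non-blocking alternative can be sketched as follows; the queue capacity and the drop policy are illustrative.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class TaskSubmitter {

    private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(10_000);

    void submitBlocking(Runnable task) throws InterruptedException {
        queue.put(task);                       // blocks the caller when the queue is full
    }

    void submitNonBlocking(Runnable task) {
        boolean accepted = queue.offer(task);  // returns immediately, never blocks
        if (!accepted) {
            // queue full: drop, count, or degrade instead of stalling the caller
        }
    }
}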

2. Binding a thread pool (ExecutorService) for executing requests. As shown in Figure 3-4, the thread pool is one possible model for an asynchronous API, but not the only implementation.

  • Thread-pool model. In order not to block the caller, the API has a built-in thread pool for submitting requests and processing responses; the caller can submit several requests to the thread pool in succession without waiting for responses. After the caller submits a request, one thread in the pool is occupied exclusively, waiting for the response and then processing it, and it cannot handle other requests until it is done; afterwards the thread becomes idle again and can handle subsequent requests.
  • Reactive model. Similarly, the API has built-in send and receive threads for submitting requests and processing responses, and the caller does not wait synchronously. After the caller submits a request, the sending thread writes it to the network; once it has been sent, the thread immediately becomes idle and can send subsequent requests. When response data arrives, the receiving thread is notified to process it; after processing, it immediately becomes idle and can process subsequent response data. In this process no thread is monopolized by a single request, i.e. a thread can work on requests at any time without waiting for an earlier request to be answered.

To sum up, only by not binding a thread pool does the Promise remain compatible with other models (such as the reactive model).

Figure 3-4 Thread timeline: thread pool vs. select

3. Defining how the request is submitted when the Promise object is constructed. Such a constructor can only describe how a single request is handled and cannot support batching of requests.

Taking database access as an example, modern databases generally support batch reads and writes: at the cost of a slight increase in the latency of a single access, throughput improves significantly, and once throughput improves, average latency drops as well. The following code fragment shows a batch-request API: the data object BulkRequest carries multiple ordinary requests, enabling batch submission.

/*Submit a single request*/
client.submit(new Request(1));
client.submit(new Request(2));
client.submit(new Request(3));

/*Submit batch request*/
client.submit(new BulkRequest(
        new Request(1),
        new Request(2),
        new Request(3)
));

To take full advantage of batch requests, the caller needs to coordinate across multiple requests: after a request is generated it is first cached; after a short wait, the cached requests are taken out, assembled into a batch request and submitted (a sketch of such a batching submitter follows the fragment below). Therefore, as the following code fragment shows, specifying how to submit a single request when constructing a Promise is pointless: that code (client.submit(new Request(…))) is never executed; what should actually be executed is the batch submission (client.submit(new BulkRequest(…))).

/*Promise: submit a single request*/
new Promise<>(() -> client.submit(new Request(1)));
new Promise<>(() -> client.submit(new Request(2)));
new Promise<>(() -> client.submit(new Request(3)));
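
By contrast, a batching submitter can be sketched as follows. Request, BulkRequest and the client interface are illustrative stand-ins for the types in the fragments above (BulkRequest here takes a List rather than varargs, and the 10 ms flush interval is arbitrary).

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class BatchingSubmitter {

    // Illustrative stand-ins for the request and client types in the fragments above.
    static class Request {}
    static class BulkRequest {
        final List<Request> requests;
        BulkRequest(List<Request> requests) { this.requests = requests; }
    }
    interface BulkClient {
        void submit(BulkRequest bulk);
    }

    private final List<Request> buffer = new ArrayList<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final BulkClient client;

    BatchingSubmitter(BulkClient client) {
        this.client = client;
        // flush whatever has accumulated every 10 ms
        scheduler.scheduleAtFixedRate(this::flush, 10, 10, TimeUnit.MILLISECONDS);
    }

    /** Called for every generated request: cache it instead of submitting immediately. */
    synchronized void submit(Request request) {
        buffer.add(request);
    }

    /** Assemble everything cached so far into one batch request and submit it. */
    private synchronized void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        client.submit(new BulkRequest(new ArrayList<>(buffer)));
        buffer.clear();
    }
}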

4. Defining how the response data is processed when the Promise object is constructed, without allowing further callbacks to be registered later. In the following code snippet, the processing step process(result) is registered when the Promise is constructed; but other code may also care about the response data and needs to register the callbacks process1(result) and process2(result). If the Promise accepts only a single callback at construction time, the other interested parties cannot register theirs, i.e. the Promise API degenerates back into the listener API.

/*Defines how response data is processed*/
Promise<String> promise = new Promise<>(result -> process(result));

/* other code also cares about the response data */
promise.await(result -> process1(result));
promise.await(result -> process2(result));

To sum up, a Promise should be a pure data object whose responsibility is to store callback functions and response data, while enforcing the ordering rules so that no callback is missed and the trigger order is preserved. Beyond that, a Promise should not be coupled to any particular implementation strategy, nor should it mix in the logic of submitting requests or processing responses.

4. Summary

This article explained the asynchronous non-blocking design pattern and compared the synchronous API, the asynchronous listener API, and the asynchronous Promise API. Compared with the other two, the Promise API offers unmatched flexibility: callers can freely decide whether to return synchronously or asynchronously, and can register multiple callbacks for the response data. Finally, the article walked through an implementation of the Promise's basic functionality together with a first cut at thread safety.

This series consists of two articles, and this is the first one, on principles. In the next article, on applications, we will see the rich application scenarios of the Promise design pattern, combine or compare it with existing tools, further transform and wrap the Promise API, and add features such as exception handling and scheduling strategies.

References

[A] Asynchronous non-blocking IO
https://en.wikipedia.org/wiki…

[B] Promise
https://en.wikipedia.org/wiki…

[C] Java thread states
https://segmentfault.com/a/11…

[D] HTTP asynchronous API example: Apache HttpAsyncClient
https://hc.apache.org/httpcom…

[E] Redis asynchronous API example: Lettuce
https://github.com/lettuce-io…

[F] MongoDB asynchronous API example: async MongoClient
https://mongodb.github.io/mon…

[G] Elasticsearch asynchronous API example: RestHighLevelClient
https://www.elastic.co/guide/…

[H] InfluxDB asynchronous API example: influxdb-java
https://github.com/influxdata…

[I] Jedis vs. Lettuce
https://redislabs.com/blog/je…

[J] Kafka
http://cloudurable.com/blog/k…

[K] KafkaProducer.send() blocking
https://stackoverflow.com/que…

[L] Netty
https://netty.io/wiki/user-gu…