IM-cloud Distributed Middleware Analysis (IV) – Logic Node Implementation

Time:2019-8-24

github:http://github.com/brewlin/im-…

  • Im-cloud: distributed push middleware built on Swoole native coroutines
  • Installation and deployment of the im-cloud distributed middleware
  • IM-cloud vs. GOIM distributed middleware concurrency stress-test comparison
  • Analysis of im-cloud distributed middleware (1) – Communication Protocol
  • Analysis of im-cloud distributed middleware (2) – Implementation of the cloud node
  • Analysis of im-cloud distributed middleware (3) – Implementation of the job node
  • IM-cloud Distributed Middleware Analysis (IV) - Logic Node Implementation

1. Overview

The logic node acts as the producer and client-facing business node. It exposes a push REST API, and multiple logic nodes can be deployed behind Nginx for load balancing.


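For illustration, a producer can call this REST interface like any other HTTP endpoint. The following is only a sketch: the host, port, and path are assumptions, while the field names (operation, mids, msg) come from the mids() handler shown later in this article.

<?php
// Hypothetical client call to a logic node's push REST API.
// Host, port, and path are assumptions; field names match the handler below.
$payload = [
    "operation" => 4,                          // business op code
    "mids"      => [1001, 1002],               // target user/member ids
    "msg"       => json_encode(["body" => "hello"]),
];

$ch = curl_init("http://127.0.0.1:9500/push/mids");
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query($payload),
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);
curl_close($ch);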

By default, 10 message queue connection pools are started; in the task process, a coroutine is created for each delivered task to produce messages asynchronously (see the pool sketch below).
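The pool implementation itself is not shown in this article. As a rough sketch, and only under the assumption that the pool is built on Swoole's coroutine Channel (the class name, pool size, and factory callback are illustrative), it could look like this:

<?php
use Swoole\Coroutine\Channel;

// Minimal sketch of a coroutine-safe connection pool (names and size are assumptions).
class QueuePool
{
    private $channel;

    public function __construct(callable $factory, int $size = 10)
    {
        $this->channel = new Channel($size);
        for ($i = 0; $i < $size; $i++) {
            // Pre-fill the pool with connections created by the caller-supplied factory
            $this->channel->push($factory());
        }
    }

    public function get()
    {
        // Blocks the current coroutine until a connection becomes available
        return $this->channel->pop();
    }

    public function put($conn): void
    {
        // Return the connection to the pool for reuse
        $this->channel->push($conn);
    }
}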

Asynchronous task

The worker directly calls the component's task interface to deliver work to the task process. Delivery is a non-blocking operation that returns immediately, which greatly improves the worker's ability to handle concurrent requests. The only caveat is that if the combined consumption capacity of the task processes falls below the worker's delivery rate, the worker's throughput will also suffer, so a trade-off has to be made.

  • The actual message queue production is executed in the task process
  • The task process runs in coroutine mode; by default, one coroutine is created for each delivered task
use Task\Task;
use App\Task\LogicPush;

/**
 * @var LogicPush
 */
// Non-blocking delivery: hands the pushMids task to the task process and returns immediately
Task::deliver(LogicPush::class, "pushMids", [(int)$arg["op"], $arg["mids"], $arg["msg"]]);
  • The related asynchronous task classes live under the App\Task namespace, as sketched below
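A task class under App\Task might be laid out roughly as follows. This is only a sketch: any base class or annotations are omitted, and it merely shows how the method name and argument array passed to Task::deliver() map onto a class method.

<?php
namespace App\Task;

// Sketch of an asynchronous task executed inside the task process.
class LogicPush
{
    /**
     * Invoked by Task::deliver(LogicPush::class, "pushMids", [$op, $mids, $msg]).
     */
    public function pushMids(int $op, array $mids, string $msg)
    {
        // The actual implementation creates a coroutine, groups the target
        // connections by server, and pushes the message into the queue;
        // see the Co::create() snippet in the next section.
    }
}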

Related optimizations

Container reuse

A single request creates multiple objects over its lifecycle, up to around ten. Under heavy concurrency the GC becomes the first thing to choke and adds waiting time, so there is room to optimize the repeated creation of new objects.

During initialization the project scans the relevant code, collects the annotated classes, and instantiates them into a container. Later uses fetch the same instance from the container instead of repeatedly creating new objects, which saves a great deal of space and time. The snippet below creates a coroutine to execute the task; the objects it needs are retrieved from the container.

Co::create(function () use ($op, $mids, $msg) {
    // Resolve, via RedisDao, which cloud server each mid is connected to
    $servers = \container()->get(RedisDao::class)->getKeysByMids($mids);
    $keys = [];
    foreach ($servers as $key => $server) {
        // Group connection keys by the server they belong to
        $keys[$server][] = $key;
    }
    foreach ($keys as $server => $key) {
        // Drop the message into the queue and let the job node deal with it
        \container()->get(QueueDao::class)->pushMsg($op, $server, $key, $msg);
    }
}, true);
// The second parameter, true, means Context::waitGroup() is used to wait for the task to complete

As shown in the code above, the component provides several ways to obtain objects from the container:

  • container()->get(class)
  • bean(class)
  • Both resolve the same instance from the container; a minimal sketch follows
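Conceptually, both helpers return a shared singleton. The following is only a sketch of that behavior, not the project's actual container code:

<?php
// Minimal sketch of the container behavior described above:
// instances are created once and reused on every later request.
class Container
{
    private $instances = [];

    public function get(string $class)
    {
        if (!isset($this->instances[$class])) {
            // Instantiate on first request, then reuse
            $this->instances[$class] = new $class();
        }
        return $this->instances[$class];
    }
}

function container(): Container
{
    static $container = null;
    if ($container === null) {
        $container = new Container();
    }
    return $container;
}

// bean() is simply shorthand for container()->get()
function bean(string $class)
{
    return container()->get($class);
}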

Improving concurrency performance

Even with the main time-consuming work moved into the task process, the worker process still spends a small amount of time waiting. The current approach is to read the request data as soon as it arrives, end the current connection with an immediate reply, and then continue executing the task, rather than waiting for task delivery to finish before closing the connection. Per-request latency may not change much, but the concurrency capacity improves greatly. For example:

/**
 * @return \Core\Http\Response\Response|static
 */
public function mids()
{
    // End the connection immediately; the rest of the handler keeps running
    Context::get()->getResponse()->end();
    $post = Context::get()->getRequest()->input();
    if (empty($post["operation"]) || empty($post["mids"]) || empty($post["msg"])) {
        return $this->error("missing parameters");
    }
    $arg = [
        "op"   => $post["operation"],
        "mids" => is_array($post["mids"]) ? $post["mids"] : [$post["mids"]],
        "msg"  => $post["msg"]
    ];
    Log::debug("push mids post data:" . json_encode($arg));
    /**
     * @var LogicPush
     */
    Task::deliver(LogicPush::class, "pushMids", [(int)$arg["op"], $arg["mids"], $arg["msg"]]);
}
  • As shown above, Context::get()->getResponse()->end() fetches the response object from the coroutine context and terminates the current connection directly; the handler then continues executing the task and releases memory afterwards. A sketch of such a coroutine context follows.
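The coroutine context itself is not detailed in this article. A common pattern, offered here only as an assumption about how it might work rather than im-cloud's actual implementation, is to key the request/response pair by the current coroutine id:

<?php
use Swoole\Coroutine;

// Sketch of a per-coroutine context keyed by coroutine id
// (an assumed pattern, not necessarily this framework's implementation).
class Context
{
    private static $contexts = [];

    public static function set($request, $response): void
    {
        // Bind the request/response pair to the current coroutine
        self::$contexts[Coroutine::getCid()] = [
            'request'  => $request,
            'response' => $response,
        ];
    }

    public static function get(): self
    {
        // Returns an accessor bound to the current coroutine
        return new self();
    }

    public function getRequest()
    {
        return self::$contexts[Coroutine::getCid()]['request'];
    }

    public function getResponse()
    {
        return self::$contexts[Coroutine::getCid()]['response'];
    }
}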
