Principles and comparison of common registration centers

Time: 2021-10-19

The registration centers commonly used at present are Eureka, ZooKeeper, Consul and Nacos. I recently studied the overall architecture and implementation of these four registries, and for Nacos in particular I looked at how service registration and subscription are implemented at the source-code level. At the end, the differences between the four registries are compared.

1. Eureka

[Figure: Eureka architecture]

The Eureka Client in the upper left corner is a service provider: it registers and renews its own information with the Eureka Server and obtains information about other services from the Eureka Server registry. Four operations are involved:

  • Register: the client registers its own metadata with the server for service discovery;
  • Renew: the client sends heartbeats to the server to keep its service instance metadata in the registry valid. If the server receives no heartbeat from the client within a certain period, it takes the instance offline by default and removes the instance information from the registry;
  • Cancel: when the client shuts down, it actively deregisters its instance metadata from the server, and the instance data is removed from the server's registry;
  • Get Registry: the client fetches the registry information from the server for service discovery, so that it can make remote calls to other services.

Eureka Server (service registry): provides service registration and discovery. Each Eureka Client registers its own information with the Eureka Server and can also obtain information about other services from it, so as to discover and call them.

Eureka Client (service consumer): obtains the information of other services registered on the Eureka Server, so that it can find the services it needs and make remote calls based on that information.

Replicate (synchronous replication): registry information is replicated between Eureka Servers, so that the service instance data held by different registries in the Eureka Server cluster stays consistent. Because replication between cluster members happens over HTTP and the network is unreliable, there are inevitably moments when the registries of different Eureka Servers are out of sync, so Eureka does not satisfy the C (consistency) in CAP.

Make Remote Call: remote calls between service clients.
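To make the four operations concrete, here is a minimal sketch of an Eureka client built with Spring Cloud Netflix; the application name, port and Eureka server address are assumptions for illustration, and the lease properties correspond to the Renew behaviour described above.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

// Minimal Eureka client sketch (assumes spring-cloud-starter-netflix-eureka-client is on the classpath).
// Register, Renew, Cancel and Get Registry are all handled by the starter automatically.
@SpringBootApplication
@EnableEurekaClient
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}

// application.yml (illustrative values):
//   spring.application.name: order-service                        # the name other clients discover
//   eureka.client.service-url.defaultZone: http://localhost:8761/eureka/
//   eureka.instance.lease-renewal-interval-in-seconds: 30         # Renew: heartbeat interval
//   eureka.instance.lease-expiration-duration-in-seconds: 90      # evicted after missed heartbeats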

2. ZooKeeper

2.1 ZooKeeper overall architecture

[Figure: ZooKeeper cluster roles]

  • Leader: the core of a ZooKeeper cluster. It is the sole scheduler and processor of transaction requests (write operations), which guarantees the ordering of cluster transactions, and it coordinates the servers in the cluster. Write requests such as create, setData and delete must all be forwarded to the Leader, which assigns them a sequence number and executes them; this process is called a transaction.
  • Follower: handles non-transaction (read) requests from clients, forwards transaction requests to the Leader, and takes part in voting during Leader elections.
  • Observer: a role introduced for ZooKeeper clusters with heavy read traffic. An Observer tracks the latest state changes of the cluster and synchronizes them; it can handle non-transaction requests on its own and forwards transaction requests to the Leader. It does not take part in any kind of voting and only serves requests, so it is typically used to raise the cluster's non-transaction throughput and absorb more concurrent requests without affecting its transaction-processing capacity.

2.2 ZooKeeper storage structure

The figure below shows the tree structure ZooKeeper uses to represent its file system in memory. ZooKeeper nodes are called znodes. Each znode is identified by a name and addressed by a path whose components are separated by "/". Under the root there are two logical namespaces, config and workers: the config namespace is used for centralized configuration management and the workers namespace is used for naming.

Each znode under the config namespace can store up to 1 MB of data. This is similar to a UNIX file system, except that a parent znode can store data as well. The main purpose of this structure is to store synchronized data and to describe the znode's metadata. The structure is called the ZooKeeper data model, and every node in the ZooKeeper namespace is identified by a path.
[Figure: ZooKeeper data model]

A znode has characteristics of both a file and a directory: like a file it maintains data, data length, meta information, ACLs and timestamps, and like a directory it can also serve as part of a path that identifies other nodes:

  • Version number – every znode has a version number, which increases whenever the data associated with the znode changes. The version number matters when multiple ZooKeeper clients try to operate on the same znode.
  • Access control list (ACL) – an ACL is essentially an authentication mechanism for accessing a znode; it governs all read and write operations on the znode.
  • Timestamp – timestamps record when a znode was created and last modified, usually in milliseconds. ZooKeeper identifies each change to a znode by its transaction id (zxid). The zxid is unique and carries the ordering of each transaction in time, so it is easy to tell how much time elapsed between one request and another.
  • Data length – the total amount of data stored in the znode; at most 1 MB can be stored.

ZooKeeper also has the concept of ephemeral nodes: these znodes exist only as long as the session that created them is active, and they are deleted when the session ends.
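To make the data model concrete, the following sketch uses the official ZooKeeper Java client to create a persistent znode with data and an ephemeral znode, and then reads back the metadata listed above; the connection string, paths and payload are assumptions for illustration.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeDemo {
    public static void main(String[] args) throws Exception {
        // Connect to a ZooKeeper server (address and session timeout are illustrative).
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 15000, event -> { });

        // A persistent znode that, like a directory, can have children and, like a file,
        // can also hold data (up to 1 MB).
        if (zk.exists("/config", false) == null) {
            zk.create("/config", "db.url=jdbc:...".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
        if (zk.exists("/workers", false) == null) {
            zk.create("/workers", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
        // An ephemeral znode: it disappears automatically when this session ends.
        zk.create("/workers/worker-1", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Read back the metadata described above: version, modification time, data length.
        Stat stat = new Stat();
        zk.getData("/config", false, stat);
        System.out.printf("version=%d mtime=%d dataLength=%d%n",
                stat.getVersion(), stat.getMtime(), stat.getDataLength());
        zk.close();
    }
}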

2.3 ZooKeeper watch mechanism

ZooKeeper supports the concept of watches. A client can set a watch on a znode; when the znode changes, the watch fires and is then removed. When a watch fires, the client receives a packet saying that the znode has changed. If the connection between the client and one of the ZooKeeper servers is broken, the client receives a local notification. New in 3.6.0: clients can also set permanent, recursive watches on a znode; these are not removed when they fire, and they fire for changes to the registered znode as well as, recursively, to all of its child znodes.
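Both watch styles can be sketched with the Java client as follows; the one-shot watch is set through getData, while addWatch with PERSISTENT_RECURSIVE is the 3.6.0 feature mentioned above (the server address and path are assumptions for illustration).

import org.apache.zookeeper.AddWatchMode;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class WatchDemo {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 15000, e -> { });

        // Classic one-shot watch: fires once on the next change of /config, is then removed,
        // and must be re-registered by the client.
        zk.getData("/config", event ->
                System.out.println("znode changed: " + event.getPath()), new Stat());

        // 3.6.0+ persistent recursive watch: not removed when triggered, and also fires
        // for changes to all child znodes under /config.
        zk.addWatch("/config", event ->
                System.out.println("subtree changed: " + event.getPath()),
                AddWatchMode.PERSISTENT_RECURSIVE);

        Thread.sleep(60_000);   // keep the session alive long enough to receive notifications
        zk.close();
    }
}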

2.4 ZooKeeper election process

[Figure: ZooKeeper node states]

A ZooKeeper cluster needs at least three nodes to work. A ZooKeeper node is generally in one of four states:

  • Looking: the node is in the middle of an election and needs to go through the election process.
  • Leading: the node has been elected and currently acts as the Leader.
  • Following: a Leader has been elected and the current node's role is Follower.
  • Observing: the current node's role is Observer, which means it does not take part in elections and only accepts the election result; it never becomes the Leader, but it serves requests just like a Follower.

The process of selecting leaders is shown in the following figure:
[Figure: ZooKeeper leader election process]

During cluster initialization, when the first server ZK1 starts, it cannot conduct and complete a Leader election on its own. When the second server ZK2 starts, the two machines can communicate with each other and each tries to find a Leader, so they enter the Leader election process. The election proceeds as follows:

(1) Each server casts one vote. Since this is the initial state, ZK1 and ZK2 both vote for themselves as the Leader server. Each vote contains the id of the proposed server and its zxid (transaction id), written as (id, zxid). At this point ZK1's vote is (1, 0) and ZK2's vote is (2, 0); each then sends its vote to the other machines in the cluster.

(2) Receive votes from the other servers. When a server in the cluster receives a vote, it first checks that the vote is valid, for example that it belongs to the current voting round and that it came from a server in the LOOKING state.

(3) Process the votes. For each vote, a server compares the received vote against its own, using the following rules:

  • Compare the zxid first: the server with the larger zxid is preferred as the Leader.
  • If the zxids are equal, compare server ids: the server with the larger id becomes the Leader.

For ZK1, its own vote is (1, 0) and the vote received from ZK2 is (2, 0). It first compares the zxids, which are both 0, then compares the ids; ZK2's id is larger, so ZK2 wins. ZK1 therefore updates its vote to (2, 0) and resends it to ZK2.

(4) Count the votes. After each round of voting, a server tallies the voting information to check whether more than half of the machines have accepted the same vote. For ZK1 and ZK2, both machines in the cluster have accepted the vote (2, 0), so ZK2 is considered elected as the Leader.

(5) Change server state. Once the Leader is determined, each server updates its own state: Followers change to FOLLOWING and the Leader changes to LEADING. When a new node ZK3 is started later and finds that a Leader already exists, no election takes place; it simply changes its state from LOOKING to FOLLOWING.
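The comparison rule from step (3) can be written down in a few lines of Java. This is only an illustrative sketch of the rule described above, not ZooKeeper's actual FastLeaderElection code, and the Vote type is a hypothetical stand-in.

public class LeaderVoteRule {
    // Hypothetical value object for illustration: (proposed server id, last zxid).
    record Vote(long serverId, long zxid) { }

    // Rule from step (3): the larger zxid wins; if the zxids are equal, the larger server id wins.
    static Vote preferred(Vote mine, Vote received) {
        if (received.zxid() != mine.zxid()) {
            return received.zxid() > mine.zxid() ? received : mine;
        }
        return received.serverId() > mine.serverId() ? received : mine;
    }

    public static void main(String[] args) {
        // ZK1 votes (1,0) and receives (2,0) from ZK2: equal zxid, ZK2 has the larger id,
        // so ZK1 adopts (2,0), exactly as in the walkthrough above.
        System.out.println(preferred(new Vote(1, 0), new Vote(2, 0)));
    }
}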

3. Consul

3.1 Consul overall architecture

[Figure: Consul architecture]

Consul supports multiple data centers. In the figure above there are two data centers, connected over the Internet through WAN gossip. To keep communication efficient, only server nodes take part in cross-data-center communication; this is how Consul supports WAN-based synchronization between multiple data centers.

Within a single data center, Consul nodes are divided into clients and servers (all nodes are also called agents).

  • Server node: takes part in the consensus quorum, stores cluster state (log storage), handles queries, and maintains relationships with surrounding (LAN/WAN) nodes.
  • Client node (agent): performs health checks for the microservices registered to Consul through it, converts client registration requests and queries into RPC requests to the servers, and maintains relationships with surrounding (LAN/WAN) nodes.

Clients and servers communicate with each other via RPC. In addition, LAN gossip runs between servers and clients, so that surviving nodes learn quickly about topology changes in the LAN; for example, after a server node goes down, clients remove that server from their list of available servers. All server nodes form a cluster: they run the Raft protocol and elect a leader through consensus quorum. All business data is written to the cluster through the leader and persisted; once more than half of the nodes have stored the data, the server cluster returns an ACK, which guarantees strong data consistency (of course, a very large number of servers also lowers write throughput). All followers keep up with the leader so that they hold up-to-date copies of the data. Consul nodes in a cluster maintain membership through the gossip protocol, for example which nodes are still in the cluster and whether they are clients or servers.

The gossip protocol within a single data center uses both TCP and UDP, on port 8301; the cross-data-center gossip protocol also uses TCP and UDP, on port 8302. Read and write requests against the cluster can be sent directly to a server or be forwarded to a server by a client via RPC; in either case the request ultimately reaches the leader node.
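As a small illustration of talking to a local client agent, the sketch below registers a service and its health check through Consul's HTTP API (PUT /v1/agent/service/register on the default HTTP port 8500); the service name, address, port and health-check URL are assumptions for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulRegisterDemo {
    public static void main(String[] args) throws Exception {
        // Service definition sent to the local agent; the agent will run the HTTP health check.
        String payload = "{"
                + "\"Name\": \"order-service\","
                + "\"ID\": \"order-service-1\","
                + "\"Address\": \"10.0.0.5\","
                + "\"Port\": 8080,"
                + "\"Check\": {\"HTTP\": \"http://10.0.0.5:8080/health\", \"Interval\": \"10s\"}"
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:8500/v1/agent/service/register"))
                .PUT(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("register status: " + response.statusCode());   // 200 on success
    }
}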

4. Nacos

4.1 Nacos overall architecture

[Figure: Nacos architecture]

During service registration, the client registers the local service by polling the configured cluster addresses of the registry. On the registry side, i.e. the Nacos server, instance information is kept in a map, and services configured as persistent are also saved to a database. To keep the local service instance list up to date, Nacos differs from the other registries in that it uses pull and push at the same time.

4.2 Nacos election

A Nacos cluster is similar to a ZooKeeper cluster in that it has Leader and Follower roles. The role names alone indicate that the cluster has an election mechanism; without elections, the roles would more likely be called master/slave.

Election algorithm:

The Nacos cluster is implemented with the Raft algorithm, which is relatively simple compared with ZooKeeper's election algorithm. The core of the election implementation is in RaftCore, including data processing and data synchronization.

In Raft, a node is in one of three roles:

  • Leader: responsible for receiving client requests
  • Candidate: a role used while campaigning to become the Leader (election state)
  • Follower: responsible for responding to requests from the Leader or from Candidates

All nodes start in the Follower state. If a node receives no heartbeat from the Leader for a period of time (either there is no Leader yet, or the Leader has gone down), the Follower becomes a Candidate and then starts an election. Before the election it increments its term, which plays the same role as the epoch in ZooKeeper.

The Candidate votes for itself and sends vote requests to the other nodes. Depending on the replies, several things can happen:

  • If it receives votes from more than half of the nodes, it becomes the Leader.
  • If it is told that another node has already become the Leader, it switches back to Follower.
  • If it does not receive a majority of votes within a certain time, it starts a new election. Constraint: within any single term, a node can cast at most one vote.

In the first case, after winning the election the Leader sends heartbeat messages to all nodes to prevent other nodes from triggering a new election.

In the second case, suppose there are three nodes A, B and C. A and B start an election at the same time, and A's vote request reaches C first, so C votes for A. When B's request reaches C, the constraint above is no longer satisfied, so C will not vote for B, and A and B obviously will not vote for each other. After A wins it sends heartbeat messages to B and C; B sees that A's term is not lower than its own, realizes a Leader already exists, and converts to Follower.

In the third case, no node obtains a majority of votes, which may be a split vote. For example, with four nodes A/B/C/D, nodes C and D become Candidates at the same time, but node A votes for D and node B votes for C, producing a tie. Everyone then waits until the timeout expires and a new election is started. A split vote prolongs the time the system is unavailable, so Raft introduces randomized election timeouts to make split votes as unlikely as possible.
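The idea behind randomized election timeouts can be sketched in a few lines; the bounds below are purely illustrative and are not the values Nacos actually uses.

import java.util.concurrent.ThreadLocalRandom;

public class ElectionTimeout {
    // Each follower waits a different random interval before becoming a Candidate,
    // so simultaneous candidacies (and therefore split votes) become unlikely.
    static long nextTimeoutMillis() {
        return ThreadLocalRandom.current().nextLong(150, 300);
    }

    public static void main(String[] args) {
        for (int node = 0; node < 4; node++) {
            System.out.printf("node %d waits %d ms before starting an election%n",
                    node, nextTimeoutMillis());
        }
    }
}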

4.3 Nacos service registration source code

The Nacos source code can be downloaded from https://github.com/alibaba/nacos; the version examined here is the then-latest 2.0.0-bugfix (Mar 30th, 2021).
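Before walking through the source, here is a hedged sketch of how an application can trigger registration directly through the Nacos client API (the server address, service name, IP and port are placeholders); Spring Cloud ultimately drives the same registerInstance entry point shown below.

import com.alibaba.nacos.api.naming.NamingFactory;
import com.alibaba.nacos.api.naming.NamingService;

public class NacosRegisterDemo {
    public static void main(String[] args) throws Exception {
        // Connect to a Nacos server (address is a placeholder).
        NamingService naming = NamingFactory.createNamingService("127.0.0.1:8848");
        // Register an instance (ephemeral by default); this goes through the
        // registerInstance(...) overloads examined below.
        naming.registerInstance("order-service", "10.0.0.5", 8080);
    }
}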

When registration is needed, Spring Cloud injects the Nacos registration instance and invokes registerInstance:

@Override
    public void registerInstance(String serviceName, String groupName, Instance instance) throws NacosException {
        NamingUtils.checkInstanceIsLegal(instance);
        String groupedServiceName = NamingUtils.getGroupedName(serviceName, groupName);
        //Add heartbeat information
        if (instance.isEphemeral()) {
            BeatInfo beatInfo = beatReactor.buildBeatInfo(groupedServiceName, instance);
            beatReactor.addBeatInfo(groupedServiceName, beatInfo);
        }
        //Call the service proxy class to register
        serverProxy.registerService(groupedServiceName, groupName, instance);
    }

The registerService method is then called to do the actual registration: it builds the request parameters and issues the request.

public void registerService(String serviceName, String groupName, Instance instance) throws NacosException {

        NAMING_LOGGER.info("[REGISTER-SERVICE] {} registering service {} with instance: {}", namespaceId, serviceName,
                instance);

        final Map<String, String> params = new HashMap<String, String>(16);
        params.put(CommonParams.NAMESPACE_ID, namespaceId);
        params.put(CommonParams.SERVICE_NAME, serviceName);
        params.put(CommonParams.GROUP_NAME, groupName);
        params.put(CommonParams.CLUSTER_NAME, instance.getClusterName());
        params.put("ip", instance.getIp());
        params.put("port", String.valueOf(instance.getPort()));
        params.put("weight", String.valueOf(instance.getWeight()));
        params.put("enable", String.valueOf(instance.isEnabled()));
        params.put("healthy", String.valueOf(instance.isHealthy()));
        params.put("ephemeral", String.valueOf(instance.isEphemeral()));
        params.put("metadata", JacksonUtils.toJson(instance.getMetadata()));

        reqApi(UtilAndComs.nacosUrlInstance, params, HttpMethod.POST);

    }

Entering the reqApi method, we can see that during registration the client polls the configured registry addresses:

public String reqApi(String api, Map<String, String> params, Map<String, String> body, List<String> servers,
            String method) throws NacosException {

        params.put(CommonParams.NAMESPACE_ID, getNamespaceId());

        if (CollectionUtils.isEmpty(servers) && StringUtils.isBlank(nacosDomain)) {
            throw new NacosException(NacosException.INVALID_PARAM, "no server available");
        }

        NacosException exception = new NacosException();
        // Only a single Nacos domain (server) is configured
        if (StringUtils.isNotBlank(nacosDomain)) {
            for (int i = 0; i < maxRetry; i++) {
                try {
                    return callServer(api, params, body, nacosDomain, method);
                } catch (NacosException e) {
                    exception = e;
                    if (NAMING_LOGGER.isDebugEnabled()) {
                        NAMING_LOGGER.debug("request {} failed.", nacosDomain, e);
                    }
                }
            }
        } else {
            Random random = new Random(System.currentTimeMillis());
            int index = random.nextInt(servers.size());

            for (int i = 0; i < servers.size(); i++) {
                String server = servers.get(index);
                try {
                    return callServer(api, params, body, server, method);
                } catch (NacosException e) {
                    exception = e;
                    if (NAMING_LOGGER.isDebugEnabled()) {
                        NAMING_LOGGER.debug("request {} failed.", server, e);
                    }
                }
                // Poll: move on to the next server address
                index = (index + 1) % servers.size();
            }
        }

Finally, the call is made through callServer(api, params, server, method):

public String callServer(String api, Map<String, String> params, Map<String, String> body, String curServer,
            String method) throws NacosException {
        long start = System.currentTimeMillis();
        long end = 0;
        injectSecurityInfo(params);
        Header header = builderHeader();

        String url;
        // Build the request URL for the HTTP request
        if (curServer.startsWith(UtilAndComs.HTTPS) || curServer.startsWith(UtilAndComs.HTTP)) {
            url = curServer + api;
        } else {
            if (!IPUtil.containsPort(curServer)) {
                curServer = curServer + IPUtil.IP_PORT_SPLITER + serverPort;
            }
            url = NamingHttpClientManager.getInstance().getPrefix() + curServer + api;
        }
    }

Processing on the Nacos server side:

The server provides an InstanceController class, which exposes the APIs related to service registration:

@CanDistro
    @PostMapping
    @Secured(parser = NamingResourceParser.class, action = ActionTypes.WRITE)
    public String register(HttpServletRequest request) throws Exception {

        final String namespaceId = WebUtils
                .optional(request, CommonParams.NAMESPACE_ID, Constants.DEFAULT_NAMESPACE_ID);
        final String serviceName = WebUtils.required(request, CommonParams.SERVICE_NAME);
        NamingUtils.checkServiceNameFormat(serviceName);
        //Resolve the instance from the request
        final Instance instance = parseInstance(request);

        serviceManager.registerInstance(namespaceId, serviceName, instance);
        return "ok";
    }

Then call ServiceManager to register the service.

public void registerInstance(String namespaceId, String serviceName, Instance instance) throws NacosException {
        // Create an empty service (this is what appears in the Nacos console service list);
        // it initializes an entry in serviceMap, a ConcurrentHashMap
        createEmptyService(namespaceId, serviceName, instance.isEphemeral());
        // Get the Service object from serviceMap by namespaceId and serviceName
        Service service = getService(namespaceId, serviceName);

        if (service == null) {
            throw new NacosException(NacosException.INVALID_PARAM,
                    "service not found, namespace: " + namespaceId + ", service: " + serviceName);
        }
        // Call addInstance to add the service instance
        addInstance(namespaceId, serviceName, instance.isEphemeral(), instance);
    }

When the empty service is created:

public void createServiceIfAbsent(String namespaceId, String serviceName, boolean local, Cluster cluster)
            throws NacosException {
        // Get the Service object from serviceMap
        Service service = getService(namespaceId, serviceName);
        // If it does not exist yet, initialize it
        if (service == null) {
            Loggers.SRV_LOG.info("creating empty service {}:{}", namespaceId, serviceName);
            service = new Service();
            service.setName(serviceName);
            service.setNamespaceId(namespaceId);
            service.setGroupName(NamingUtils.getGroupName(serviceName));
            // now validate the service. if failed, exception will be thrown
            service.setLastModifiedMillis(System.currentTimeMillis());
            service.recalculateChecksum();
            if (cluster != null) {
                cluster.setService(service);
                service.getClusterMap().put(cluster.getName(), cluster);
            }
            service.validate();

            putServiceAndInit(service);
            if (!local) {
                addOrReplaceService(service);
            }
        }
    }

The getService method reads from a map used for storage:

private final Map<String, Map<String, Service>> serviceMap = new ConcurrentHashMap<>();

Nacos maintains services per namespace; each namespace contains groups, and only within a group do services exist, identified by serviceName, which in turn determines the service instances. On the first registration the service goes through initialization, and after initialization putServiceAndInit is called (a simplified sketch of this map layering follows).
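The two-level structure can be illustrated with a simplified stand-in; the Service type is replaced by Object, and the "@@" separator follows the grouped-name convention of NamingUtils.getGroupedName, so treat the details as illustrative rather than the exact Nacos implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ServiceMapSketch {
    // namespaceId -> (grouped service name -> service)
    static final Map<String, Map<String, Object>> serviceMap = new ConcurrentHashMap<>();

    static void put(String namespaceId, String groupName, String serviceName, Object service) {
        String groupedName = groupName + "@@" + serviceName;   // mirrors NamingUtils.getGroupedName
        serviceMap.computeIfAbsent(namespaceId, ns -> new ConcurrentHashMap<>())
                  .put(groupedName, service);
    }

    static Object get(String namespaceId, String groupName, String serviceName) {
        Map<String, Object> group = serviceMap.get(namespaceId);
        return group == null ? null : group.get(groupName + "@@" + serviceName);
    }

    public static void main(String[] args) {
        put("public", "DEFAULT_GROUP", "order-service", new Object());
        System.out.println(get("public", "DEFAULT_GROUP", "order-service") != null);   // true
    }
}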

private void putServiceAndInit(Service service) throws NacosException {
        // Save the service into the serviceMap collection
        putService(service);
        service = getService(service.getNamespaceId(), service.getName());
        // Set up the heartbeat (health check) mechanism
        service.init();
        // Register consistency listeners. ephemeral marks whether the instance is temporary
        // (the default, true) or persistent; ephemeral instances are synchronized with the
        // Distro protocol, persistent ones with the Raft protocol
        consistencyService
                .listen(KeyBuilder.buildInstanceListKey(service.getNamespaceId(), service.getName(), true), service);
        consistencyService
                .listen(KeyBuilder.buildInstanceListKey(service.getNamespaceId(), service.getName(), false), service);
        Loggers.SRV_LOG.info("[NEW-SERVICE] {}", service.toJson());
    }

After the service is obtained, the service instance is added to the collection and the data is synchronized according to the consistency protocol. addInstance is then called:

public void addInstance(String namespaceId, String serviceName, boolean ephemeral, Instance... ips)
            throws NacosException {
        // Build the key
        String key = KeyBuilder.buildInstanceListKey(namespaceId, serviceName, ephemeral);
        // Get the service created above
        Service service = getService(namespaceId, serviceName);

        synchronized (service) {
            List<Instance> instanceList = addIpAddresses(service, ephemeral, ips);

            Instances instances = new Instances();
            instances.setInstanceList(instanceList);
            // Hand the instance list to the consistency service registered as a listener in the previous step
            consistencyService.put(key, instances);
        }
    }

4.4 Nacos service subscription source code

Node subscriptions have different implementations in different registries, which are generally divided into pull and push.

Push means that when a subscribed node changes, the update is actively pushed to the subscriber. ZooKeeper is an example of push: the client and server maintain a TCP long connection, the client registers watchers, and when data changes the server pushes the update over the long connection. This long-connection model consumes a lot of server resources, so when there are many watchers and updates are frequent, ZooKeeper's performance drops sharply and the cluster may even go down.

Pull means that the subscriber actively and periodically fetches the server's node information and compares it with its local copy, updating whatever has changed. Consul also has a watch mechanism, but unlike ZooKeeper it is implemented with HTTP long polling: if the requested URL does not contain a wait parameter, the Consul server returns immediately; otherwise it holds the request and returns as soon as the service changes within the specified wait time. The performance of this approach can be good, but its real-time behaviour is limited.

In Nacos, these two ideas are combined to provide both pull and active push.
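From the consumer's point of view, both sides are reachable through the Nacos client API; a hedged usage sketch follows (the server address and service name are placeholders for illustration).

import com.alibaba.nacos.api.naming.NamingFactory;
import com.alibaba.nacos.api.naming.NamingService;
import com.alibaba.nacos.api.naming.listener.NamingEvent;

public class NacosSubscribeDemo {
    public static void main(String[] args) throws Exception {
        NamingService naming = NamingFactory.createNamingService("127.0.0.1:8848");

        // Pull: an immediate snapshot of the current instance list.
        System.out.println(naming.getAllInstances("order-service"));

        // Push: the listener is called back when the server detects a change
        // (delivered over UDP in the version walked through below).
        naming.subscribe("order-service", event -> {
            if (event instanceof NamingEvent) {
                System.out.println("instances changed: " + ((NamingEvent) event).getInstances());
            }
        });
        Thread.sleep(60_000);   // keep the demo alive long enough to receive callbacks
    }
}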

For the pull part, the ServiceInfo is obtained from HostReactor as follows:

public ServiceInfo getServiceInfo(final String serviceName, final String clusters) {

        NAMING_LOGGER.debug("failover-mode: " + failoverReactor.isFailoverSwitch());
        // Build the key: service name + cluster name (empty by default)
        String key = ServiceInfo.getKey(serviceName, clusters);
        if (failoverReactor.isFailoverSwitch()) {
            return failoverReactor.getService(key);
        }
        // Look up the provider list by key in serviceInfoMap, the client's local cache of service addresses
        ServiceInfo serviceObj = getServiceInfo0(serviceName, clusters);
        // Null means there is no local cache yet
        if (null == serviceObj) {
            serviceObj = new ServiceInfo(serviceName, clusters);
            // Not found: create a new one, put it into serviceInfoMap and updatingMap,
            // run updateServiceNow, then remove it from updatingMap
            serviceInfoMap.put(serviceObj.getKey(), serviceObj);

            updatingMap.put(serviceName, new Object());
            //Load the service address information from the Nacos server immediately
            updateServiceNow(serviceName, clusters);
            updatingMap.remove(serviceName);

        } else if (updatingMap.containsKey(serviceName)) {
            // If the serviceObj found in serviceInfoMap is still in updatingMap, wait up to UPDATE_HOLD_INTERVAL
            if (UPDATE_HOLD_INTERVAL > 0) {
                // hold a moment waiting for update finish
                synchronized (serviceObj) {
                    try {
                        serviceObj.wait(UPDATE_HOLD_INTERVAL);
                    } catch (InterruptedException e) {
                        NAMING_LOGGER
                                .error("[getServiceInfo] serviceName:" + serviceName + ", clusters:" + clusters, e);
                    }
                }
            }
        }
        // Start periodic scheduling that refreshes the service addresses every 10s;
        // if a local cache exists, start the task via scheduleUpdateIfAbsent and then return the ServiceInfo from serviceInfoMap
        scheduleUpdateIfAbsent(serviceName, clusters);
        return serviceInfoMap.get(serviceObj.getKey());
    }

Nacos push: Nacos records subscribers in its PushService.

The PushService class implements ApplicationListener<ServiceChangeEvent>, so it listens for service state change events, traverses all clients and broadcasts the message to them over UDP:

public void onApplicationEvent(ServiceChangeEvent event) {
        Service service = event.getService();// Get service
        String serviceName = service.getName();// service name
        String namespaceId = service.getNamespaceId();// Namespace
        //Perform tasks
        Future future = GlobalExecutor.scheduleUdpSender(() -> {
            try {
                Loggers.PUSH.info(serviceName + " is changed, add it to push queue.");
                ConcurrentMap<String, PushClient> clients = clientMap
                        .get(UtilsAndCommons.assembleFullServiceName(namespaceId, serviceName));
                if (MapUtils.isEmpty(clients)) {
                    return;
                }
                Map<String, Object> cache = new HashMap<>(16);
                long lastRefTime = System.nanoTime();
                for (PushClient client : clients.values()) {
                    if (client.zombie()) {
                        Loggers.PUSH.debug("client is zombie: " + client.toString());
                        clients.remove(client.toString());
                        Loggers.PUSH.debug("client is zombie: " + client.toString());
                        continue;
                    }
                    Receiver.AckEntry ackEntry;
                    Loggers.PUSH.debug("push serviceName: {} to client: {}", serviceName, client.toString());
                    String key = getPushCacheKey(serviceName, client.getIp(), client.getAgent());
                    byte[] compressData = null;
                    Map<String, Object> data = null;
                    if (switchDomain.getDefaultPushCacheMillis() >= 20000 && cache.containsKey(key)) {
                        org.javatuples.Pair pair = (org.javatuples.Pair) cache.get(key);
                        compressData = (byte[]) (pair.getValue0());
                        data = (Map<String, Object>) pair.getValue1();
                        Loggers.PUSH.debug("[PUSH-CACHE] cache hit: {}:{}", serviceName, client.getAddrStr());
                    }
                    if (compressData != null) {
                        ackEntry = prepareAckEntry(client, compressData, data, lastRefTime);
                    } else {
                        ackEntry = prepareAckEntry(client, prepareHostsData(client), lastRefTime);
                        if (ackEntry != null) {
                            cache.put(key, new org.javatuples.Pair<>(ackEntry.origin.getData(), ackEntry.data));
                        }
                    }
                    Loggers.PUSH.info("serviceName: {} changed, schedule push for: {}, agent: {}, key: {}",
                            client.getServiceName(), client.getAddrStr(), client.getAgent(),
                            (ackEntry == null ? null : ackEntry.key));
                    //Perform UDP push
                    udpPush(ackEntry);
                }
            } catch (Exception e) {
                Loggers.PUSH.error("[NACOS-PUSH] failed to push serviceName: {} to client, error: {}", serviceName, e);

            } finally {
                futureMap.remove(UtilsAndCommons.assembleFullServiceName(namespaceId, serviceName));
            }

        }, 1000, TimeUnit.MILLISECONDS);

        futureMap.put(UtilsAndCommons.assembleFullServiceName(namespaceId, serviceName), future);

    }

At this point the service consumer needs to have set up a UDP listener, otherwise the server cannot push data to it. This listener is initialized in the constructor of HostReactor.

Compared with ZooKeeper's TCP long connections, Nacos's push model saves a lot of resources; even a large number of node updates will not create much of a performance bottleneck for Nacos. In Nacos, when a client receives a UDP message it returns an ACK; if the Nacos server does not receive the ACK within a certain time, it resends the message, and after a certain number of retries it stops resending. Although UDP cannot guarantee that the update actually reaches the subscriber, Nacos also has periodic polling as a fallback, so there is no need to worry that the data will never be updated.

Through these two mechanisms, Nacos both keeps the data real-time and ensures that no update is missed.

5. Comparison of the four registries

The four registries each have their own characteristics; their differences can be compared clearly in the following table:
[Table: feature comparison of Eureka, ZooKeeper, Consul and Nacos]

By Hz


The extension in swift is somewhat similar to the category in OC Extension can beenumeration、structural morphology、class、agreementAdd new features□ you can add methods, calculation attributes, subscripts, (convenient) initializers, nested types, protocols, etc What extensions can’t do:□ original functions cannot be overwritten□ you cannot add storage attributes or add attribute observers to existing attributes□ cannot add parent […]