Designing MongoDB deployment and switchover schemes for active/standby (dual data center) architectures

Time: 2022-01-04

1. General

To cope with extreme situations such as host crashes, network failures, and data center outages, many high-availability systems are deployed in an active/standby architecture (two data centers), an active/active architecture (two data centers), or even a multi-active architecture (three or more data centers). MongoDB lends itself naturally to deployment across two or more data centers, but when a data center goes down, it can become impossible to elect a primary node. This article focuses on MongoDB deployment and switchover schemes under active/standby and active/active architectures. The discussion below uses the active/standby architecture as the example (active/active works the same way).

2. Network deployment diagram of the active/standby architecture

In the active/standby deployment scheme, all user requests are routed to the primary data center; the standby data center receives no user requests. To keep the diagram simple, CDN, DNS, WAF, and similar components are omitted, and the focus is on the deployment of the application services and the nodes inside the MongoDB cluster, as shown in the figure below:

As the figure shows, the load balancers and application services follow the active/standby pattern, while the MongoDB cluster is a single unified cluster (nodes on both sides serve requests); its nodes have no active/standby distinction and are simply spread across the two data centers.

Deployed this way, the MongoDB cluster achieves higher resource utilization: unlike the upper-layer application services in the standby data center, its resources do not sit idle.

3. Pain points of the MongoDB active/standby architecture

In an active/standby environment, MongoDB's high-availability guidelines recommend an odd number of nodes per replica set (for example, three nodes: one primary and two secondaries). One data center then hosts two nodes and the other hosts one. If the data center hosting two nodes goes down, only a single node survives in the other data center. Because MongoDB's election protocol is a Raft-style consensus protocol, no primary can be elected (more than 1/2 of the original members must be alive), and the MongoDB service becomes unavailable. The schematic diagram is as follows:

[Figure: a three-node replica set; the data center hosting two nodes is down, and the single surviving node cannot win an election]
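To make the majority rule concrete, here is a minimal sketch of such a three-node replica set; the host names, port, and the replica set name rs0 are illustrative assumptions, not taken from the original deployment:

    # Hypothetical setup: initiate a 3-member replica set "rs0";
    # a1/a2 sit in the primary data center, b1 in the standby one.
    mongo --host a1.example.com:27017 --eval '
      rs.initiate({
        _id: "rs0",
        members: [
          { _id: 0, host: "a1.example.com:27017" },
          { _id: 1, host: "a2.example.com:27017" },
          { _id: 2, host: "b1.example.com:27017" }
        ]
      })
    '
    # An election needs a strict majority: floor(3/2) + 1 = 2 votes.
    # If the primary data center (a1 and a2) goes down, b1 alone has
    # 1 < 2 votes, so no primary can be elected and writes stop.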

4. MongoDB active/standby deployment scheme

To address the problem described in Chapter 3, we adjust the deployment scheme: a standby node is provisioned in the standby data center but is not started during normal operation. It is started only when a disaster hits the primary data center. The schematic diagram is as follows:

[Figure: the adjusted deployment, with an additional standby mongod provisioned, but not running, in the standby data center]
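In practice, "provisioned but not started" can mean staging a mongod configuration file and data directory on the standby host ahead of time, so the process only needs to be launched during a switchover. A minimal sketch, assuming the paths below and the replica set name rs0 (both hypothetical):

    # Hypothetical config staged on the standby host in advance:
    cat > /data/mongodb/mongod.conf <<'EOF'
    storage:
      dbPath: /data/mongodb/db
    net:
      bindIp: 0.0.0.0
      port: 27017
    replication:
      replSetName: rs0        # must match the existing replica set
    EOF
    mkdir -p /data/mongodb/db
    # Note: mongod is deliberately NOT started here; it is launched
    # only during a disaster switchover (see chapter 5).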

5. MongoDB's active/standby switchover scheme

With the deployment scheme in place, let's turn to the switchover scheme. When a disaster hits the primary data center, two problems must be solved:

1. How to start the previously provisioned standby node.

2. How to make the newly started standby node join the replica set; otherwise it cannot take part in the primary election.

Starting the standby node

Prepare a startup script on the standby node in advance, then use operations software (such as SaltStack) to send the start command and bring the standby node up.
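A minimal sketch of this step via SaltStack; the minion ID standby-mongo-1 and the script path are hypothetical:

    # Hypothetical startup script staged at /data/mongodb/start.sh:
    #   mongod --config /data/mongodb/mongod.conf \
    #          --fork --logpath /data/mongodb/mongod.log
    # From the Salt master, start the standby node remotely:
    salt 'standby-mongo-1' cmd.run '/data/mongodb/start.sh'
    # Verify that the process came up:
    salt 'standby-mongo-1' cmd.run 'pgrep -a mongod'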

Joining the standby node to the replica set

We know that adding a new node to a replica set normally requires running the rs.add command on the primary node. When a disaster occurs, however, there is no primary, so this method is unavailable. We therefore take a different approach: let the standby node "replace" one of the nodes from the original primary data center. "Replace" here means making the other members of the replica set believe that the standby node is that original node. The technical scheme is as follows:

1. When members first join the replica set, register them by domain name rather than IP, for example rs.add("sharda1.mongodb.net:27017"), and configure the mapping between sharda1.mongodb.net and the node's IP in the /etc/hosts file of every server in the MongoDB cluster (see the sketch after this list).

2. When a disaster occurs, change the sharda1.mongodb.net entry in /etc/hosts on every server in the MongoDB cluster to the IP of the standby node, then start the standby node. The other members of the replica set can then quickly connect to the new node.
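A sketch of both steps; the IP addresses below are illustrative assumptions:

    # -- Normal operation: register the member by domain name --
    # /etc/hosts on every server maps the name to the original
    # node's IP, e.g.:
    #   10.0.1.11  sharda1.mongodb.net
    # On the primary, add the member by name, not by IP:
    mongo --eval 'rs.add("sharda1.mongodb.net:27017")'

    # -- Disaster switchover: repoint the name at the standby node --
    # On every surviving server, rewrite the hosts entry (10.0.2.50
    # is the illustrative IP of the standby node):
    sed -i 's/^.*sharda1\.mongodb\.net$/10.0.2.50  sharda1.mongodb.net/' /etc/hosts
    # Then start the standby node. The surviving members resolve
    # sharda1.mongodb.net to the standby's IP, reconnect, restore a
    # majority, and a primary can be elected.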

Why configure the domain-name-to-IP mapping in the hosts file rather than on a DNS server? Mainly because changes to the hosts file take effect faster, allowing the primary election to complete sooner.

Summary

1. The highlight of this scheme is that a new node can join the cluster through a simple /etc/hosts change, so the primary election completes quickly.

2. The scheme is generic and applies to many distributed systems, such as ZooKeeper.

Of course, the implementation also needs to consider switching back in the other direction, and, after a switchover, monitoring for abnormal situations such as the original nodes of the former primary data center starting up again.

If anything in the above scheme is wrong, corrections are welcome.