Replica sets + sharded cluster based on MongoDB 4.0.0 in Docker containers

Date: 2020-09-29

Goal

- Use three physical machines for the database cluster
- Downtime of any single machine must not affect online business
- No data may be lost

Plan

The cluster is a replica sets + sharded cluster, which provides high availability, failover, distributed storage, and so on.

(Figure: architecture of the replica sets + sharded cluster)

As shown in the figure above, the cluster configuration is as follows:

- Three physical machines, each carrying a complete copy of the sharded cluster setup, so each can run independently
- Config servers: three config servers ensure metadata integrity
- Mongos processes: three routing processes provide load balancing and better client access performance
- shard11, shard12 and shard13 form a replica set that serves as shard1
- shard21, shard22 and shard23 form a replica set that serves as shard2

Building a MongoDB sharded cluster requires three roles: shard server, config server, and route (mongos) process.

  • Shard server

A shard server stores an actual fragment of the data. Each shard can be a single mongod instance, or a replica set made up of a group of mongod instances. To get automatic failover within each shard, MongoDB officially recommends that every shard be a replica set.

  • Config server

To store a collection across multiple shards, you must specify a shard key for the collection; the shard key decides which chunk each record belongs to. The config servers store the following information:

- the configuration of each shard node
- the shard key range of each chunk
- the distribution of chunks across the shards
- the sharding configuration of every database and collection in the cluster
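The chunk lookup that this metadata enables can be sketched as a toy model in plain JavaScript (the chunk ranges and the `findChunk` helper are illustrative, not MongoDB internals):

```javascript
// Toy model: chunks partition the shard-key space into half-open ranges,
// and each chunk is owned by one shard. Real config metadata is richer.
const chunks = [
  { min: -Infinity, max: 1000, shard: "rs-file-server-shard1-server" },
  { min: 1000, max: Infinity, shard: "rs-file-server-shard2-server" },
];

// Given a shard-key value, find the chunk (and thus the shard) that owns it.
function findChunk(shardKeyValue) {
  return chunks.find((c) => shardKeyValue >= c.min && shardKeyValue < c.max);
}

console.log(findChunk(42).shard);   // key 42 falls in the first range -> shard1
console.log(findChunk(5000).shard); // key 5000 falls in the second range -> shard2
```

When a chunk grows too large, MongoDB splits its range and may migrate chunks between shards; the lookup principle stays the same.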

  • Routing (mongos) process

The mongos process is a front-end router through which clients access the cluster. It first asks the config servers which shard holds the record to be queried or saved, then connects to that shard to perform the operation, and finally returns the result to the client. The client simply sends ordinary mongod queries or updates to the routing process; it does not need to know which shard a record is stored on.
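This routing flow can be sketched in plain JavaScript (a toy model; `configLookup` and the in-memory shard maps are illustrative stand-ins, not the real mongos implementation):

```javascript
// Toy shards: each is just an in-memory _id -> document map here.
const shard1 = new Map([[1, { _id: 1, name: "alice" }]]);
const shard2 = new Map([[2000, { _id: 2000, name: "bob" }]]);

// Toy config metadata: which shard owns which _id range.
function configLookup(id) {
  return id < 1000 ? shard1 : shard2;
}

// A mongos-like router: resolve the owning shard, then run the query there.
function routedFind(id) {
  const shard = configLookup(id); // step 1: consult the config metadata
  return shard.get(id) || null;   // step 2: execute on the owning shard
}

console.log(routedFind(1));    // found on shard1
console.log(routedFind(2000)); // found on shard2
```

The client only ever talks to `routedFind`; which shard answered is invisible to it, just as with mongos.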

Implementation

At present I am building this environment on my own computer, i.e. a single physical machine.

First, plan the port assignments:

Port plan (host port → container port):

- mongos1/mongos2/mongos3: 10011/10012/10013 → 27017
- config-server1/2/3: 10021/10022/10023 → 27019
- shard11/shard12/shard13: 10031/10032/10033 → 27018
- shard21/shard22/shard23: 10041/10042/10043 → 27018

File directory

First, create the directory structure as follows:

```
mengfaniaodeMBP:third_software mengfanxiao$ tree mongodb/
mongodb/
├── node1
│   ├── config-server1
│   │   ├── backup
│   │   ├── config
│   │   │   └── config.conf
│   │   └── db
│   ├── mongos1
│   │   ├── backup
│   │   ├── config
│   │   │   └── config.conf
│   │   └── db
│   ├── shard11
│   │   ├── backup
│   │   ├── config
│   │   │   └── config.conf
│   │   └── db
│   └── shard21
│       ├── backup
│       ├── config
│       │   └── config.conf
│       └── db
├── node2
│   ├── config-server2
│   │   ├── backup
│   │   ├── config
│   │   │   └── config.conf
│   │   └── db
│   ├── mongos2
│   │   ├── backup
│   │   ├── config
│   │   │   └── config.conf
│   │   └── db
│   ├── shard12
│   │   ├── backup
│   │   ├── config
│   │   │   └── config.conf
│   │   └── db
│   └── shard22
│       ├── backup
│       ├── config
│       │   └── config.conf
│       └── db
└── node3
    ├── config
    ├── config-server3
    │   ├── backup
    │   ├── config
    │   │   └── config.conf
    │   └── db
    ├── db
    ├── mongos3
    │   ├── backup
    │   ├── config
    │   │   └── config.conf
    │   └── db
    ├── shard13
    │   ├── backup
    │   ├── config
    │   │   └── config.conf
    │   └── db
    └── shard23
        ├── backup
        ├── config
        │   └── config.conf
        └── db
```

If there are three physical machines, copy node1, node2 and node3 to the corresponding machines.
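Each service directory mounts a `config/config.conf` whose contents the post never shows. A minimal sketch of what it might contain is below (an assumption, not the author's file; the role-specific options `--configsvr`/`--shardsvr`/`--replSet` are already passed on the docker command line, and the mongos containers would need the `storage` section removed, since mongos keeps no data files):

```yaml
# Minimal shared mongod settings (assumed); role flags come from docker run.
storage:
  dbPath: /data/db   # matches the -v $PWD/db:/data/db mount
net:
  bindIp: 0.0.0.0    # redundant with --bind_ip_all, kept for clarity
```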

Configuration services

Configuration service 1

```shell
cd node1/config-server1
docker run --restart=always --privileged=true -p 10021:27019 \
  -v $PWD/config:/etc/mongod -v $PWD/db:/data/db \
  -d --name pro-file-server-config1 mongo:4.0.0 \
  -f /etc/mongod/config.conf --configsvr \
  --replSet "rs-file-server-config-server" --bind_ip_all
```

Configuration service 2

```shell
cd node2/config-server2
docker run --restart=always --privileged=true -p 10022:27019 \
  -v $PWD/config:/etc/mongod -v $PWD/db:/data/db \
  -d --name pro-file-server-config2 mongo:4.0.0 \
  -f /etc/mongod/config.conf --configsvr \
  --replSet "rs-file-server-config-server" --bind_ip_all
```

Configuration service 3

```shell
cd node3/config-server3
docker run --restart=always --privileged=true -p 10023:27019 \
  -v $PWD/config:/etc/mongod -v $PWD/db:/data/db \
  -d --name pro-file-server-config3 mongo:4.0.0 \
  -f /etc/mongod/config.conf --configsvr \
  --replSet "rs-file-server-config-server" --bind_ip_all
```

Link the three config servers into a replica set

  • Connect using mongodb client

mongo 192.168.50.100:10021

The client here is a separate MongoDB installed locally on my Mac. The following steps install it with Homebrew on macOS; skip them if you already have a client.

a. Add the MongoDB Homebrew tap

brew tap mongodb/brew

b. Install mongodb Community Edition

brew install mongodb-community

c. Start, stop

```shell
brew services start mongodb-community
brew services stop mongodb-community
```

  • Initialize the replica set

```javascript
rs.initiate({
  _id: "rs-file-server-config-server",
  configsvr: true,
  members: [
    { _id: 0, host: "192.168.50.100:10021" },
    { _id: 1, host: "192.168.50.100:10022" },
    { _id: 2, host: "192.168.50.100:10023" }
  ]
});
```

Note: use the server's real IP here, not 127.0.0.1.

  • View configuration results

rs.status()

Shard service cluster 1

Shard11

```shell
cd node1/shard11
docker run --restart=always --privileged=true -p 10031:27018 \
  -v $PWD/config:/etc/mongod -v $PWD/backup:/data/backup -v $PWD/db:/data/db \
  -d --name pro-file-server-shard11 mongo:4.0.0 \
  -f /etc/mongod/config.conf --shardsvr \
  --replSet "rs-file-server-shard1-server" --bind_ip_all
```

Shard12

```shell
cd node2/shard12
docker run --restart=always --privileged=true -p 10032:27018 \
  -v $PWD/config:/etc/mongod -v $PWD/backup:/data/backup -v $PWD/db:/data/db \
  -d --name pro-file-server-shard12 mongo:4.0.0 \
  -f /etc/mongod/config.conf --shardsvr \
  --replSet "rs-file-server-shard1-server" --bind_ip_all
```

Shard13

```shell
cd node3/shard13
docker run --restart=always --privileged=true -p 10033:27018 \
  -v $PWD/config:/etc/mongod -v $PWD/backup:/data/backup -v $PWD/db:/data/db \
  -d --name pro-file-server-shard13 mongo:4.0.0 \
  -f /etc/mongod/config.conf --shardsvr \
  --replSet "rs-file-server-shard1-server" --bind_ip_all
```

(Note: shard13 lives under node3, per the directory tree above.)

Associate shard11, shard12 and shard13 as shard service cluster 1

  • Connect to shard11 with the mongodb client

mongo 127.0.0.1:10031

  • Configure

```javascript
rs.initiate({
  _id: "rs-file-server-shard1-server",
  members: [
    { _id: 0, host: "192.168.50.100:10031" },
    { _id: 1, host: "192.168.50.100:10032" },
    { _id: 2, host: "192.168.50.100:10033" }
  ]
});
```

Shard service cluster 2

Shard21

```shell
cd node1/shard21
docker run --restart=always --privileged=true -p 10041:27018 \
  -v $PWD/config:/etc/mongod -v $PWD/backup:/data/backup -v $PWD/db:/data/db \
  -d --name pro-file-server-shard21 mongo:4.0.0 \
  -f /etc/mongod/config.conf --shardsvr \
  --replSet "rs-file-server-shard2-server" --bind_ip_all
```

Shard22

```shell
cd node2/shard22
docker run --restart=always --privileged=true -p 10042:27018 \
  -v $PWD/config:/etc/mongod -v $PWD/backup:/data/backup -v $PWD/db:/data/db \
  -d --name pro-file-server-shard22 mongo:4.0.0 \
  -f /etc/mongod/config.conf --shardsvr \
  --replSet "rs-file-server-shard2-server" --bind_ip_all
```

Shard23

```shell
cd node3/shard23
docker run --restart=always --privileged=true -p 10043:27018 \
  -v $PWD/config:/etc/mongod -v $PWD/backup:/data/backup -v $PWD/db:/data/db \
  -d --name pro-file-server-shard23 mongo:4.0.0 \
  -f /etc/mongod/config.conf --shardsvr \
  --replSet "rs-file-server-shard2-server" --bind_ip_all
```

Associate shard21, shard22 and shard23 as shard service cluster 2

  • Connect via client

mongo 127.0.0.1:10041

  • Configure

```javascript
rs.initiate({
  _id: "rs-file-server-shard2-server",
  members: [
    { _id: 0, host: "192.168.50.100:10041" },
    { _id: 1, host: "192.168.50.100:10042" },
    { _id: 2, host: "192.168.50.100:10043" }
  ]
});
```

Mongos services

Install mongos1

```shell
cd node1/mongos1
docker run --restart=always --privileged=true -p 10011:27017 \
  -v $PWD/config:/etc/mongod -v $PWD/db:/data/db \
  -d --entrypoint mongos --name pro-file-server-mongos1 mongo:4.0.0 \
  -f /etc/mongod/config.conf \
  --configdb rs-file-server-config-server/192.168.50.100:10021,192.168.50.100:10022,192.168.50.100:10023 \
  --bind_ip_all
```

Installing mongos2

```shell
cd node2/mongos2
docker run --restart=always --privileged=true -p 10012:27017 \
  -v $PWD/config:/etc/mongod -v $PWD/db:/data/db \
  -d --entrypoint mongos --name pro-file-server-mongos2 mongo:4.0.0 \
  -f /etc/mongod/config.conf \
  --configdb rs-file-server-config-server/192.168.50.100:10021,192.168.50.100:10022,192.168.50.100:10023 \
  --bind_ip_all
```

Install mongos3

```shell
cd node3/mongos3
docker run --restart=always --privileged=true -p 10013:27017 \
  -v $PWD/config:/etc/mongod -v $PWD/db:/data/db \
  -d --entrypoint mongos --name pro-file-server-mongos3 mongo:4.0.0 \
  -f /etc/mongod/config.conf \
  --configdb rs-file-server-config-server/192.168.50.100:10021,192.168.50.100:10022,192.168.50.100:10023 \
  --bind_ip_all
```

Register the shards

  • Mongodb client connection

mongo 127.0.0.1:10011

  • Configure

```javascript
sh.addShard("rs-file-server-shard1-server/192.168.50.100:10031,192.168.50.100:10032,192.168.50.100:10033")
sh.addShard("rs-file-server-shard2-server/192.168.50.100:10041,192.168.50.100:10042,192.168.50.100:10043")
```

Test

  • Mongodb client connection

mongo 127.0.0.1:10011

  • Enable sharding on the test database

sh.enableSharding("test")

  • Shard the collection and set the shard key

sh.shardCollection("test.user", {"_id": "hashed" })
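A hashed shard key spreads documents by hashing the key value instead of using its natural order, so even monotonically increasing `_id`s land on both shards. The idea can be sketched in plain JavaScript (FNV-1a is used here purely for illustration; MongoDB's real hashed index uses a different hash of the key):

```javascript
// Illustrative stand-in for a hashed shard key (not MongoDB's actual hash).
function fnv1a(str) {
  let h = 2166136261 >>> 0;
  for (const ch of str) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 16777619) >>> 0;
  }
  return h >>> 0;
}

// Route 1000 sequential ids onto 2 shards by hashed key.
const counts = { shard1: 0, shard2: 0 };
for (let id = 1; id <= 1000; id++) {
  const shard = fnv1a(String(id)) % 2 === 0 ? "shard1" : "shard2";
  counts[shard]++;
}
console.log(counts); // sequential ids end up spread across both shards
```

With a ranged (non-hashed) key, sequential inserts would all hit the chunk holding the current maximum, concentrating write load on one shard.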

  • Insert 1000 pieces of data

a. Switch to the sharded database

use test

b. Loop insertion

for (i = 1; i <= 1000; i = i + 1) { db.user.insert({ 'userIndex': i }) }

  • Verify the data

a. Check through the mongos routers: each shows all 1000 documents

After the insertion completes, connect to any of the three routers (127.0.0.1:10011, 127.0.0.1:10012 or 127.0.0.1:10013); each shows that the user collection in the test database contains 1000 documents. Query the record count with:

```javascript
db.getCollection('user').find({}).count()
```

The result is 1000.

b. Check the shards: the per-shard counts add up to 1000

Now connect to 127.0.0.1:10031 and 127.0.0.1:10041 and run the same count query; the record counts of the two shards add up to exactly 1000.

  • Spring Boot connection

In application.yml, configure access to the mongos routers:

```yaml
spring:
  data:
    mongodb:
      uri: mongodb://127.0.0.1:10011,127.0.0.1:10012,127.0.0.1:10013/test
```

Reference documents

https://blog.csdn.net/quanmaoluo5461/article/details/85164588
