Practice of building redis cluster environment

Time:2021-3-5

Preface

This article is a practice summary of learning Redis Cluster (based on Redis 6.0+). It walks through building a Redis Cluster environment step by step and then practices cluster scaling.

A brief introduction to Redis Cluster

Redis Cluster is the distributed database solution provided by Redis. It shards data across nodes and provides replication and failover. Compared with master-slave replication and Sentinel mode, Redis Cluster is a more complete high-availability solution: it removes the limit that storage capacity imposes on a single machine and allows write operations to be load balanced.




1. Construction of redis cluster environment

For convenience, all nodes of the cluster environment live on the same server and are distinguished by port number: six nodes in total, three masters and three slaves. The simple structure of the cluster is as follows:

(figure: cluster topology, three master nodes each with one slave)

Based on the latest Redis 6.0+, this article downloads the latest source code from GitHub and compiles it to obtain the usual tools redis-server and redis-cli. It is worth noting that since Redis 5.0, the cluster management software redis-trib.rb has been integrated into the redis-cli client tool (see the official cluster tutorial for details).

This section builds the cluster environment without the quick management offered by redis-trib.rb, following the standard steps one by one instead, in order to become familiar with the basics of cluster management. The cluster scaling practice section will then use redis-trib.rb to complete cluster resharding.

Building the cluster can be divided into four steps:

  1. Start nodes: start the nodes in cluster mode; at this point each node is still independent.
  2. Node handshake: connect the independent nodes into a network.
  3. Slot assignment: assign the 16384 slots to the master nodes so that database key-value pairs are stored in shards.
  4. Master-slave replication: specify a master node for each slave node.

1.1 Starting nodes

Each node initially starts as an ordinary master server; the only difference is that it starts in cluster mode, which requires changes to the configuration file. Taking the node on port 6379 as an example, the main changes are as follows:

# redis_6379_cluster.conf
port 6379
cluster-enabled yes
cluster-config-file "node-6379.conf"
logfile "redis-server-6379.log"
dbfilename "dump-6379.rdb"
daemonize yes

The cluster-config-file parameter specifies the location of the cluster configuration file. Each node maintains this file at runtime: whenever the cluster information changes (for example, nodes are added or removed), every node in the cluster writes the latest information to its file. When a node restarts, it reads the file to recover the cluster information, which makes it easy to rejoin the cluster. In other words, when a Redis node starts in cluster mode, it first checks whether a cluster configuration file exists; if so, it starts with the configuration in the file, otherwise it initializes the configuration and saves it to the file. The cluster configuration file is maintained by the Redis node itself and does not need to be edited manually.

After modifying the configuration files for all six nodes, start the six servers with redis-server redis_xxxx_cluster.conf (xxxx is the port number, matching the corresponding configuration file). Use the ps command to view the processes:

$ ps -aux | grep redis
... 800  0.1  0.0  49584  2444 ?        Ssl  20:42   0:00 redis-server 127.0.0.1:6379 [cluster]
... 805  0.1  0.0  49584  2440 ?        Ssl  20:42   0:00 redis-server 127.0.0.1:6380 [cluster]
... 812  0.3  0.0  49584  2436 ?        Ssl  20:42   0:00 redis-server 127.0.0.1:6381 [cluster]
... 817  0.1  0.0  49584  2432 ?        Ssl  20:43   0:00 redis-server 127.0.0.1:6479 [cluster]
... 822  0.0  0.0  49584  2380 ?        Ssl  20:43   0:00 redis-server 127.0.0.1:6480 [cluster]
... 827  0.5  0.0  49584  2380 ?        Ssl  20:43   0:00 redis-server 127.0.0.1:6481 [cluster]

1.2 Node handshake

After the nodes start as described in 1.1, they are independent of each other: each sits in a cluster containing only itself. Taking the server on port 6379 as an example, use CLUSTER NODES to view the nodes in its current cluster.

127.0.0.1:6379> CLUSTER NODES
37784b3605ad216fa93e976979c43def42bf763d :6379@16379 myself,master - 0 0 0 connected 449 4576 5798 7568 8455 12706

We need to connect the independent nodes into a multi-node cluster using the CLUSTER MEET <ip> <port> command.

$ redis-cli -p 6379 -c        # the -c option enables cluster mode
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6380
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6381
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6479
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6480
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6481
OK

Check the nodes in the cluster again:

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603632309283 4 connected
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603632308000 1 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603632310292 2 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 master - 0 1603632309000 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603632308000 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 master - 0 1603632311302 0 connected

All six nodes have now joined the cluster as master nodes. The fields in the CLUSTER NODES output mean the following:

<id> <ip:port@cport> <flags> <master> <ping-sent> <pong-recv> <config-epoch> <link-state> <slot> <slot> ... <slot>
  • Node ID: a 40-character hexadecimal string. The node ID is created only once, during cluster initialization, and then saved to the cluster configuration file (the cluster-config-file mentioned above); when the node restarts, the ID is read back from that file.
  • port@cport: the former is the ordinary port, used to serve clients; the latter is the cluster bus port, allocated as ordinary port + 10000, used only for communication between nodes.

For a detailed explanation of the rest, please refer to the official document cluster nodes.

1.3 Slot assignment

Redis Cluster stores the database's key-value pairs in shards: the whole key space is divided into 16384 slots, each key belongs to exactly one of them, and each node in the cluster can handle anywhere from 0 to all 16384 slots.

The slot is the basic unit of data management and migration. When all 16384 slots have been assigned to nodes, the cluster is online (ok); if any slot is left unassigned, the cluster is offline (fail).
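To make the key-to-slot mapping concrete, here is a small Python sketch (an illustration based on the Redis Cluster specification, not code from this article): the slot is CRC16(key) mod 16384, where CRC16 is the CRC-16/XMODEM variant, and a non-empty {...} hash tag, if present, restricts hashing to the tag's content.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only its content
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("name"))    # 5798
print(key_slot("fruits"))  # 14943
```

The printed values match the CLUSTER KEYSLOT results shown later in this article (name maps to slot 5798, fruits to 14943); hash tags let related keys be forced into the same slot.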

Note that only master nodes can handle slots. If the slot-assignment step were performed after master-slave replication and slots were assigned to a slave node, the cluster would not work normally (it would remain offline).

Assign the slots with the CLUSTER ADDSLOTS command:

redis-cli  -p 6379 cluster addslots {0..5000}
redis-cli  -p 6380 cluster addslots {5001..10000}
redis-cli  -p 6381 cluster addslots {10001..16383}
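The ranges above are assigned by hand and end up uneven (5001, 5000, and 6383 slots respectively). A small Python helper (hypothetical, not part of the article's workflow) can compute contiguous, nearly even ranges for any number of masters:

```python
def split_slots(n_masters: int, total: int = 16384):
    """Split slots 0..total-1 into n_masters contiguous, nearly equal ranges."""
    base, extra = divmod(total, n_masters)
    ranges, start = [], 0
    for i in range(n_masters):
        size = base + (1 if i < extra else 0)  # first `extra` masters get one more slot
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(split_slots(3))  # [(0, 5461), (5462, 10922), (10923, 16383)]
```

The output ranges could be fed to CLUSTER ADDSLOTS in place of the hand-picked ones; redis-cli --cluster rebalance performs a similar levelling on a live cluster.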

After slot assignment, the nodes in the cluster are as follows:

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603632880310 4 connected 5001-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603632879000 1 connected 0-5000
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603632879000 2 connected 10001-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 master - 0 1603632878000 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603632880000 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 master - 0 1603632881317 0 connected

127.0.0.1:6379> CLUSTER INFO
cluster_state:ok                        # the cluster is online
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_ping_sent:4763
cluster_stats_messages_pong_sent:4939
cluster_stats_messages_meet_sent:5
cluster_stats_messages_sent:9707
cluster_stats_messages_ping_received:4939
cluster_stats_messages_pong_received:4768
cluster_stats_messages_received:9707

1.4 Master-slave replication

After the steps above, all cluster nodes exist as master nodes, and Redis high availability is not yet achieved. Only once master-slave replication is configured is the cluster's high-availability capability truly realized.

CLUSTER REPLICATE <node_id> makes the node that receives the command a slave of the cluster node identified by node_id, and starts replicating that master:

redis-cli  -p 6479 cluster replicate 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52
redis-cli  -p 6480 cluster replicate c47598b25205cc88abe2e5094d5bfd9ea202335f
redis-cli  -p 6481 cluster replicate 51081a64ddb3ccf5432c435a8cf20d45ab795dd8

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603633105211 4 connected 5001-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603633105000 1 connected 0-5000
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603633105000 2 connected 10001-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603633107229 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 slave 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 0 1603633106221 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603633104000 4 connected

Incidentally, steps 1.2, 1.3 and 1.4 above can all be completed in one go: since Redis 5.0 the redis-trib.rb functionality is built into redis-cli. The reference command is as follows:

redis-cli --cluster create  127.0.0.1:6379 127.0.0.1:6479  127.0.0.1:6380 127.0.0.1:6480  127.0.0.1:6381 127.0.0.1:6481  --cluster-replicas 1

--cluster-replicas 1 indicates that the given list of nodes consists of master+slave pairs, one replica per master.

1.5 Executing commands in the cluster

The cluster is now online, and clients can send commands to its nodes. The node that receives a command computes which slot the key belongs to and checks whether that slot is assigned to itself.

  • If the key's slot is assigned to the current node, the command is executed directly.
  • Otherwise, the node returns a MOVED error to the client, redirecting it to the correct node, and the client resends the command.
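Cluster-aware clients handle this redirection themselves by parsing the error reply, which has the form MOVED <slot> <host>:<port>. A minimal, hedged sketch of such parsing in Python (illustrative, not the article's code):

```python
def parse_moved(error: str):
    """Parse a '-MOVED <slot> <host>:<port>' error into (slot, host, port).

    Returns None if the reply is not a MOVED redirection.
    """
    parts = error.strip().lstrip("-").split()
    if len(parts) != 3 or parts[0] != "MOVED":
        return None
    host, _, port = parts[2].rpartition(":")
    return int(parts[1]), host, int(port)

print(parse_moved("MOVED 5798 127.0.0.1:6380"))  # (5798, '127.0.0.1', 6380)
```

A real client would reconnect to the returned host:port, resend the command, and typically refresh its cached slot-to-node map; redis-cli does this automatically when started with -c.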

Here, CLUSTER KEYSLOT shows that the key name maps to slot 5798 (assigned to node 6380); operating on this key redirects us to that node. The key fruits behaves similarly.

127.0.0.1:6379> CLUSTER KEYSLOT name
(integer) 5798
127.0.0.1:6379> set name huey
-> Redirected to slot [5798] located at 127.0.0.1:6380
OK
127.0.0.1:6380>

127.0.0.1:6379> get fruits
-> Redirected to slot [14943] located at 127.0.0.1:6381
"apple"
127.0.0.1:6381>

It is worth noting that when we send a command to a slave node through the client, the command is likewise redirected to the corresponding master node.

127.0.0.1:6480> KEYS *
1) "name"
127.0.0.1:6480> get name
-> Redirected to slot [5798] located at 127.0.0.1:6380
"huey"

1.6 Cluster failover

When a master node in the cluster goes offline, the slave nodes replicating it elect a new master among themselves and complete the failover. As with plain master-slave replication, when the original master comes back online it exists in the cluster as a slave of the new master.

In the following, we simulate an outage of node 6379 (SHUTDOWN) and observe that its slave node 6479 takes over as the new master.

462:S 26 Oct 14:08:12.750 * FAIL message received from c47598b25205cc88abe2e5094d5bfd9ea202335f about 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52
462:S 26 Oct 14:08:12.751 # Cluster state changed: fail
462:S 26 Oct 14:08:12.829 # Start of election delayed for 595 milliseconds (rank #0, offset 9160).
462:S 26 Oct 14:08:13.434 # Starting a failover election for epoch 6.
462:S 26 Oct 14:08:13.446 # Failover election won: I'm the new master.
462:S 26 Oct 14:08:13.447 # configEpoch set to 6 after successful failover
462:M 26 Oct 14:08:13.447 # Setting secondary replication ID to d357886e00341b57bf17e46b6d9f8cf53b7fad21, valid up to offset: 9161. New replication ID is adbf41b16075ea22b17f145186c53c4499864d5b
462:M 26 Oct 14:08:13.447 * Discarding previously cached master state.
462:M 26 Oct 14:08:13.448 # Cluster state changed: ok

After node 6379 recovers from the outage, it exists as a slave of node 6479 (the new master), as the following output shows.

127.0.0.1:6379> CLUSTER NODES
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603692968000 2 connected 10001-16383
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603692968504 0 connected 5001-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603692967495 6 connected 0-5000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603692964000 1 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603692967000 4 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603692967000 5 connected

As mentioned earlier, the cluster-config-file records the state of the cluster nodes. Opening node 6379's file nodes-6379.conf, we can see that the information shown by CLUSTER NODES is saved there:

51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603694920206 2 connected 10001-16383
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603694916000 0 connected 5001-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603694920000 6 connected 0-5000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603694918000 1 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603694919000 4 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603694919200 5 connected
vars currentEpoch 6 lastVoteEpoch 0

2. Cluster scaling practice

The key to cluster scaling is resharding: migrating slots between nodes. This section practices slot migration by adding nodes to and removing nodes from the cluster.

We will use the redis-trib.rb functionality integrated into redis-cli; the tool's help menu is as follows:

$ redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port --cluster-search-multiple-owners --cluster-fix-with-unreachable-masters
  reshard        host:port --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters --cluster-timeout <arg>
                 --cluster-simulate --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port --cluster-from <arg>
                 --cluster-copy --cluster-replace
  backup         host:port backup_directory
  help
  
For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.

2.1 Cluster scaling: adding nodes

Consider adding two nodes to the cluster, with port numbers 6382 and 6482, where node 6482 replicates node 6382.
(1) Start nodes: start the 6382 and 6482 nodes following the steps described in 1.1.

(2) Node handshake: add nodes 6382 and 6482 with the redis-cli --cluster add-node command.

redis-cli --cluster add-node 127.0.0.1:6382 127.0.0.1:6379
redis-cli --cluster add-node 127.0.0.1:6482 127.0.0.1:6379

$ redis-cli --cluster add-node 127.0.0.1:6382 127.0.0.1:6379
>>> Adding node 127.0.0.1:6382 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379 slots: (0 slots) slave
    replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381 slots:[10001-16383] (6383 slots) master 1 additional replica(s)
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380 slots:[5001-10000] (5000 slots) master 1 additional replica(s)
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479 slots:[0-5000] (5001 slots) master 1 additional replica(s)
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481 slots: (0 slots) slave
    replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480 slots: (0 slots) slave
    replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6382 to make it join the cluster.
[OK] New node added correctly.

(3) Resharding: reshard the cluster with the redis-cli --cluster reshard command so that the slots of the masters are balanced (migrating part of the slots from nodes 6479/6380/6381 to node 6382). You need to specify:

  • Number of slots to move: each master should end up with 16384 / 4 = 4096 slots, so 4096 slots are moved in total.
  • Destination (receiving) node ID: the ID of node 6382.
  • Source node IDs: the IDs of the current masters 6479/6380/6381.
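The 4096 figure is simple arithmetic: with four masters, each should hold 16384 / 4 = 4096 slots, so the three existing masters together give up exactly 4096. A throwaway Python check (illustrative; the slot counts are those reported by the cluster check output):

```python
def reshard_plan(masters, total=16384):
    """Given current slot counts per master, return the even per-master target
    after adding one empty master, and each node's surplus to migrate to it."""
    target = total // (len(masters) + 1)
    moves = {node: count - target for node, count in masters.items()}
    return target, moves

# Current holdings: 6479 has 5001 slots, 6380 has 5000, 6381 has 6383
target, moves = reshard_plan({"6479": 5001, "6380": 5000, "6381": 6383})
print(target)               # 4096
print(sum(moves.values()))  # 4096 slots migrate to the new node
```

Note that the migrated slot ranges need not be contiguous; after this reshard, node 6382 ends up owning three separate ranges.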
$ redis-cli --cluster reshard 127.0.0.1:6479
>>> Performing Cluster Check (using node 127.0.0.1:6479)
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479 slots:[0-5000] (5001 slots) master 1 additional replica(s)
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480 slots: (0 slots) slave
  replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
M: 706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482 slots: (0 slots) master
M: af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382 slots: (0 slots) master
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381 slots:[10001-16383] (6383 slots) master 1 additional replica(s)
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481 slots: (0 slots) slave
  replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379 slots: (0 slots) slave
  replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380 slots:[5001-10000] (5000 slots) master 1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID?

(4) Set master-slave relationship:

redis-cli -p 6482 cluster replicate af81109fc29f69f9184ce9512c46df476fe693a3 

127.0.0.1:6482> CLUSTER NODES
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603694930000 0 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603694931000 2 connected 11597-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603694932000 2 connected
706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482@16482 myself,slave af81109fc29f69f9184ce9512c46df476fe693a3 0 1603694932000 8 connected
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603694932000 6 connected
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603694933678 0 connected 6251-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603694932669 6 connected 1250-5000
af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382@16382 master - 0 1603694933000 9 connected 0-1249 5001-6250 10001-11596

2.2 Cluster scaling: removing nodes

Here we remove the two newly added nodes, 6382 and 6482. The slots allocated to node 6382 must first be migrated to other nodes.

(1) Resharding: again with the redis-cli --cluster reshard command, move all the slots on node 6382 to node 6479.

$ redis-cli --cluster reshard 127.0.0.1:6382
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382 slots:[0-1249],[5001-6250],[10001-11596] (4096 slots) master 1 additional replica(s)
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381 slots:[11597-16383] (4787 slots) master 1 additional replica(s)
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379 slots: (0 slots) slave
    replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480 slots: (0 slots) slave
    replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479 slots:[1250-5000] (3751 slots) master 1 additional replica(s)
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380 slots:[6251-10000] (3750 slots) master 1 additional replica(s)
S: 706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482 slots: (0 slots) slave
    replicates af81109fc29f69f9184ce9512c46df476fe693a3
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481 slots: (0 slots) slave
    replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 4c23b25bd4bcef7f4b77d8287e330ae72e738883
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: af81109fc29f69f9184ce9512c46df476fe693a3
Source node #2: done

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603773540922 0 connected 6251-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773539000 1 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603773541000 10 connected 0-6250 10001-11596
706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482@16482 slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773541000 10 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603773539000 5 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603773541931 4 connected
af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382@16382 master - 0 1603773539000 9 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603773540000 2 connected 11597-16383

(2) Delete nodes: use the redis-cli --cluster del-node command to remove slave node 6482 and then master node 6382.

$ redis-cli --cluster del-node 127.0.0.1:6482 706f399b248ed3a080cf1d4e43047a79331b714f
>>> Removing node 706f399b248ed3a080cf1d4e43047a79331b714f from cluster 127.0.0.1:6482
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
$ redis-cli --cluster del-node 127.0.0.1:6382 af81109fc29f69f9184ce9512c46df476fe693a3
>>> Removing node af81109fc29f69f9184ce9512c46df476fe693a3 from cluster 127.0.0.1:6382
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603773679121 0 connected 6251-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773677000 1 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603773678000 10 connected 0-6250 10001-11596
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603773680130 5 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603773677099 4 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603773678112 2 connected 11597-16383

3 Summary

Building a Redis Cluster environment involves four main steps: starting the nodes, node handshake, slot assignment, and master-slave replication. Cluster scaling touches the same aspects. Managing the cluster environment with the redis-cli --cluster commands is not only more convenient, it also reduces the risk of operational errors.

Author: hueyxu
Link to the original text: https://www.cnblogs.com/hueyx…
Source: cnblogs

