Deploying a Redis cluster on K8s

Time: 2021-3-5

Redis introduction

Redis stands for Remote Dictionary Server; it is an open-source in-memory data store, usually used as a database, cache, or message broker. It can store and manipulate high-level data types such as lists, hashes, sets, and sorted sets.

Because Redis accepts keys in a wide variety of formats, operations can be performed directly on the server, which reduces the client's workload.
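
As a quick, illustrative redis-cli session (against any running Redis instance; the key names here are made up), server-side data-type operations look like this:

redis-cli LPUSH recent:logins alice bob        # store a list on the server
redis-cli LRANGE recent:logins 0 -1            # read it back without client-side processing
redis-cli SADD online:users alice carol        # store a set
redis-cli SADD premium:users carol dave        # store another set
redis-cli SINTER online:users premium:users    # the set intersection is computed inside Redis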

· It keeps the data entirely in memory and uses the disk only for persistence.

Redis is a popular data storage solution and is used by GitHub, Pinterest, Snapchat, Twitter, Stack Overflow, Flickr and other technology giants.

Why use redis

It’s very fast. It is written in ANSI C and can run on POSIX systems such as Linux, Mac OS X, and Solaris.

Redis is generally ranked as the most popular key/value database and the most popular NoSQL database used with containers.

Its caching solution reduces the number of calls to the cloud database back end.

The application can access it through its client API library.

All popular programming languages support redis.

· It’s open source and stable.

What is a Redis cluster

Redis Cluster is a group of Redis instances that scales the database by partitioning the data across them, making it more flexible.

Each member of the cluster, whether master or replica, manages a subset of the hash slots. If a master becomes unreachable, its replica is promoted to master. In the minimal Redis cluster of three master nodes, each master has one replica (for minimal failover), and each master is assigned a range of the hash slots between 0 and 16383: node A holds slots 0 to 5000, node B 5001 to 10000, and node C 10001 to 16383.
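
A key's slot is CRC16(key) mod 16384, and you can ask any cluster node which slot a key maps to. A small illustration (run against the cluster built later in this article; the key name is made up):

# Ask a cluster node which of the 16384 hash slots a key belongs to
kubectl exec -it redis-cluster-0 -n wiseco -- redis-cli cluster keyslot user:1000
# The returned slot number decides which master (and its replica) owns the key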

Communication within the cluster is carried out over an internal bus, using a gossip protocol to propagate information about the cluster or discover new nodes.

Process record of deploying a Redis cluster in Kubernetes

Deploying a Redis cluster in Kubernetes is a challenge, because each Redis instance depends on a configuration file that keeps track of the other cluster instances and their roles. To do this, we need to combine the StatefulSet controller with PersistentVolume storage.

Design principles of the StatefulSet model:

Topology state:

Multiple instances of an application are not completely equivalent. The instances must be started in a certain order; for example, master node A of the application must be started before slave node B. If the two pods A and B are deleted, they must be recreated in strictly that order. In addition, a newly created pod must have the same network identity as the original pod, so that existing clients can access the new pod in exactly the same way as before.
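
In Kubernetes this stable network identity comes from a headless service: every pod of a StatefulSet gets a DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local that survives pod re-creation. As a sketch, once the redis-cluster StatefulSet and headless service created later in this article are running (the image built below includes dnsutils), the name can be resolved from inside the cluster:

# Resolve the stable DNS name of pod redis-cluster-0 from another pod
kubectl exec -it redis-cluster-1 -n wiseco -- nslookup redis-cluster-0.redis-cluster.wiseco.svc.cluster.local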

Storage state:

Multiple application instances are each bound to their own storage data. For these instances, the data pod A reads the first time and the data it reads again ten minutes later should be the same, even if pod A has been recreated in the meantime. A typical example is the multiple storage instances of a database application.

Storage volume

After understanding the states a StatefulSet maintains, you know that a storage volume must be prepared for the data. It can be created statically or dynamically: the static way is to manually create the PV and PVC and then reference them from the pod. Here, NFS is used dynamically as the mounted volume, so an NFS-backed dynamic StorageClass needs to be deployed.
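
For comparison, here is a minimal sketch of the static way (manually creating a PV and a PVC backed by the same NFS export). It is illustrative only and not used in this deployment; the resource names are made up:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv-demo                 # hypothetical name, for illustration only
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.60.238
    path: /data/storage/k8s/redis
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc-demo                # hypothetical name, for illustration only
  namespace: wiseco
spec:
  storageClassName: ""                # empty class so the claim binds to the static PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi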

1. Using NFS to configure dynamic persistent storage for the StatefulSet

1) Create the shared directory for the Redis cluster on the NFS server (172.16.60.238)

[[email protected] ~]# mkdir -p /data/storage/k8s/redis

2) Create the RBAC for NFS

[[email protected] ~]# mkdir -p /opt/k8s/k8s_project/redis
[[email protected] ~]# cd /opt/k8s/k8s_project/redis
[[email protected] redis]# vim nfs-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: wiseco
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
   name: nfs-provisioner-runner
   namespace: wiseco
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: wiseco
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
Create and view
[[email protected] redis]# kubectl apply -f nfs-rbac.yaml
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
 
[[email protected] redis]# kubectl get sa -n wiseco|grep nfs
nfs-provisioner                1         24s
[[email protected] redis]# kubectl get clusterrole -n wiseco|grep nfs
nfs-provisioner-runner                                                 2021-02-04T02:21:11Z
[[email protected] redis]# kubectl get clusterrolebinding -n wiseco|grep nfs
run-nfs-provisioner                                    ClusterRole/nfs-provisioner-runner                                                 34s

3) Create the StorageClass for the Redis cluster

[[email protected] redis]# ll
total 4
-rw-r--r-- 1 root root 1216 Feb  4 10:20 nfs-rbac.yaml
 
[[email protected] redis]# vim redis-nfs-class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: redis-nfs-storage
  namespace: wiseco
provisioner: redis/nfs
reclaimPolicy: Retain
Create and view
[[email protected] redis]# kubectl apply -f redis-nfs-class.yaml
storageclass.storage.k8s.io/redis-nfs-storage created
 
[[email protected] redis]# kubectl get sc -n wiseco
NAME                PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
redis-nfs-storage   redis/nfs     Retain          Immediate           false

4) Create the NFS client provisioner for the Redis cluster

[[email protected] redis]# ll
total 8
-rw-r--r-- 1 root root 1216 Feb  4 10:20 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 10:24 redis-nfs-class.yaml
 
[[email protected] redis]# vim redis-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-nfs-client-provisioner
  namespace: wiseco
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: redis-nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: redis-nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: redis/nfs
            - name: NFS_SERVER
              value: 172.16.60.238
            - name: NFS_PATH
              value: /data/storage/k8s/redis
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.60.238
            path: /data/storage/k8s/redis 
Create and view
[[email protected] redis]# kubectl apply -f redis-nfs.yml
deployment.apps/redis-nfs-client-provisioner created
 
[[email protected] redis]# kubectl get pods -n wiseco|grep nfs
redis-nfs-client-provisioner-58b46549dd-h87gg   1/1     Running   0          40s

2. Deploying the Redis cluster

The namespace used in this case is wiseco

1) Prepare the image

The redis-trib.rb tool can be taken from the Redis source code; copy it into the current directory and then build the image.
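
One possible way to obtain redis-trib.rb (a sketch, assuming the 4.0.11 source tarball is still published at the usual Redis download location):

curl -O http://download.redis.io/releases/redis-4.0.11.tar.gz
tar xzf redis-4.0.11.tar.gz
cp redis-4.0.11/src/redis-trib.rb /opt/k8s/k8s_project/redis/image/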

[[email protected] redis]# pwd
/opt/k8s/k8s_project/redis
[[email protected] redis]# ll
total 12
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml
 
[[email protected] redis]# mkdir image
[[email protected] redis]# cd image
[[email protected] image]# ll
total 64
-rw-r--r-- 1 root root   191 Feb  4 18:14 Dockerfile
-rwxr-xr-x 1 root root 60578 Feb  4 15:49 redis-trib.rb
 
[[email protected] image]# cat Dockerfile
FROM redis:4.0.11
RUN apt-get update -y
RUN apt-get install -y  ruby \
rubygems
RUN apt-get clean all
RUN gem install redis
RUN apt-get install dnsutils -y
COPY redis-trib.rb /usr/local/bin/
Build the image and push it to the Harbor registry
[[email protected] image]# docker build -t 172.16.60.238/wiseco/redis:4.0.11 .
[[email protected] image]# docker push 172.16.60.238/wiseco/redis:4.0.11

2) Create the ConfigMap

The Redis configuration file is mounted as a ConfigMap. If the configuration were baked into the Docker image, we would have to rebuild the image every time the configuration is modified, which I find troublesome, so I mount the configuration via a ConfigMap instead.

[[email protected] redis]# pwd
/opt/k8s/k8s_project/redis
[[email protected] redis]# ll
total 12
drwxr-xr-x 2 root root   45 Feb  4 18:14 image
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml
 
[[email protected] redis]# mkdir conf
[[email protected] redis]# cd conf/
 
[[email protected] conf]# vim redis-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  namespace: wiseco
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/data/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e '/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/'${POD_IP}'/' ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    cluster-node-timeout 10000
    protected-mode no
    daemonize no
    pidfile /var/run/redis.pid
    port 6379
    tcp-backlog 511
    bind 0.0.0.0
    timeout 3600
    tcp-keepalive 1
    loglevel verbose
    logfile /data/redis.log
    databases 16
    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /data
    #requirepass yl123456
    appendonly yes
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    lua-time-limit 20000
    slowlog-log-slower-than 10000
    slowlog-max-len 128
    #rename-command FLUSHALL  ""
    latency-monitor-threshold 0
    notify-keyspace-events ""
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    list-max-ziplist-entries 512
    list-max-ziplist-value 64
    set-max-intset-entries 512
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    hll-sparse-max-bytes 3000
    activerehashing yes
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    hz 10
    aof-rewrite-incremental-fsync yes
Note: the fix-ip.sh script is used when a pod in the Redis cluster is rebuilt and its IP changes; it replaces the old pod IP in /data/nodes.conf with the new one. Otherwise, the cluster will run into problems.
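
To illustrate what that sed line does, here is a stand-alone sketch using a made-up nodes.conf line (the node ID and IPs are fake sample data):

POD_IP=172.30.85.99
echo 'abc123 172.30.85.217:6379@16379 myself,master - 0 0 1 connected 0-5460' > /tmp/nodes.conf
sed -i.bak -e '/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/'${POD_IP}'/' /tmp/nodes.conf
cat /tmp/nodes.conf    # the "myself" line now carries 172.30.85.99 instead of the old IP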
 
Create and view
[[email protected] conf]# kubectl apply -f redis-configmap.yaml
 
[[email protected] conf]# kubectl get cm -n wiseco|grep redis
redis-cluster                     2      8m55s

3) Prepare the StatefulSet

volumeClaimTemplates is used in StatefulSet controller scenarios so that each pod gets its own PersistentVolumeClaim.

[[email protected] redis]# pwd
/opt/k8s/k8s_project/redis
[[email protected] redis]# ll
total 12
drwxr-xr-x 2 root root   34 Feb  4 18:52 conf
drwxr-xr-x 2 root root   45 Feb  4 18:14 image
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml
 
[[email protected] redis]# mkdir deploy
[[email protected] redis]# cd deploy/
[[email protected] deploy]# cat redis-cluster.yml
---
apiVersion: v1
kind: Service
metadata:
  namespace: wiseco
  name: redis-cluster
spec:
  clusterIP: None
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: wiseco
  name: redis-cluster
spec:
  serviceName: redis-cluster
  podManagementPolicy: OrderedReady
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: 172.16.60.238/wiseco/redis:4.0.11
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis/
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: "redis-nfs-storage"
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
Create and view
[[email protected] deploy]# kubectl apply -f redis-cluster.yml
 
[[email protected] deploy]# kubectl get pods -n wiseco|grep redis-cluster
redis-cluster-0                                 1/1     Running   0          10m
redis-cluster-1                                 1/1     Running   0          10m
redis-cluster-2                                 1/1     Running   0          10m
redis-cluster-3                                 1/1     Running   0          10m
redis-cluster-4                                 1/1     Running   0          9m35s
redis-cluster-5                                 1/1     Running   0          9m25s
 
[[email protected] deploy]# kubectl get svc -n wiseco|grep redis-cluster
redis-cluster    ClusterIP   None                     6379/TCP,16379/TCP           10m
View PV, PVC
[[email protected] deploy]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                         STORAGECLASS        REASON   AGE
pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-0   redis-nfs-storage            19m
pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-2   redis-nfs-storage            12m
pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-1   redis-nfs-storage            12m
pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d   10Gi       RWX            Delete           Terminating   wiseco/data-redis-cluster-5   redis-nfs-storage            11m
pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-4   redis-nfs-storage            11m
pvc-e5aa9802-b983-471c-a7da-32eebc497610   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-3   redis-nfs-storage            12m
 
[[email protected] deploy]# kubectl get pvc -n wiseco
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
data-redis-cluster-0   Bound    pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562   10Gi       RWX            redis-nfs-storage   19m
data-redis-cluster-1   Bound    pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-2   Bound    pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-3   Bound    pvc-e5aa9802-b983-471c-a7da-32eebc497610   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-4   Bound    pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a   10Gi       RWX            redis-nfs-storage   11m
data-redis-cluster-5   Bound    pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d   10Gi       RWX            redis-nfs-storage   11m

4) View NFS shared storage

On the NFS server (172.16.60.238), view the shared directory /data/storage/k8s/redis
[[email protected] redis]# pwd
/data/storage/k8s/redis
[[email protected] redis]# ll
total 0
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-0-pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-1-pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-2-pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-3-pvc-e5aa9802-b983-471c-a7da-32eebc497610
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-4-pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-5-pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d
[[email protected] redis]# ls ./*
./wiseco-data-redis-cluster-0-pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562:
appendonly.aof  nodes.conf  redis.log
 
./wiseco-data-redis-cluster-1-pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc:
appendonly.aof  nodes.conf  redis.log
 
./wiseco-data-redis-cluster-2-pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de:
appendonly.aof  nodes.conf  redis.log
 
./wiseco-data-redis-cluster-3-pvc-e5aa9802-b983-471c-a7da-32eebc497610:
appendonly.aof  nodes.conf  redis.log
 
./wiseco-data-redis-cluster-4-pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a:
appendonly.aof  nodes.conf  redis.log
 
./wiseco-data-redis-cluster-5-pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d:
appendonly.aof  nodes.conf  redis.log

3. Initialize the Redis cluster

Next, form the Redis cluster: run the following command and type yes to accept the configuration.

Cluster layout: the first three nodes become master nodes, and the last three become slave nodes.

Note:

redis-trib.rb must be given IP addresses when initializing the Redis cluster. If you use domain names instead, it reports the following error: *******/redis/client.rb:126:in `call’: ERR Invalid node address specified: redis-cluster-0.redis-headless.sts-app.svc.cluster.local:6379 (Redis::CommandError)

Here is the command to initialize the Redis cluster:

kubectl exec -it redis-cluster-0 -n wiseco -- redis-trib.rb create --replicas 1 $(kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
First, obtain the IP addresses of the pods of the six nodes in the redis cluster
[[email protected] redis]# kubectl get pods -n wiseco -o wide|grep redis-cluster
redis-cluster-0                                 1/1     Running   0          4h34m   172.30.217.83    k8s-node04              
redis-cluster-1                                 1/1     Running   0          4h34m   172.30.85.217    k8s-node01              
redis-cluster-2                                 1/1     Running   0          4h34m   172.30.135.181   k8s-node03              
redis-cluster-3                                 1/1     Running   0          4h34m   172.30.58.251    k8s-node02              
redis-cluster-4                                 1/1     Running   0          4h33m   172.30.85.216    k8s-node01              
redis-cluster-5                                 1/1     Running   0          4h33m   172.30.217.82    k8s-node04              
 
 
[[email protected] redis]# kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 '
172.30.217.83:6379 172.30.85.217:6379 172.30.135.181:6379 172.30.58.251:6379 172.30.85.216:6379 172.30.217.82:6379
 
Here is a special note:
There must be a space before the closing single quote of the command above!!
When the Redis cluster is initialized in the next step, the node IP:port pairs must be separated by spaces.
 
[[email protected] redis]# kubectl exec -it redis-cluster-0 -n wiseco -- redis-trib.rb create --replicas 1 $(kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.30.217.83:6379
172.30.85.217:6379
172.30.135.181:6379
Adding replica 172.30.58.251:6379 to 172.30.217.83:6379
Adding replica 172.30.85.216:6379 to 172.30.85.217:6379
Adding replica 172.30.217.82:6379 to 172.30.135.181:6379
M: e5a3154a17131075f35fb32953b8cf8d6cfc7df0 172.30.217.83:6379
   slots:0-5460 (5461 slots) master
M: 961398483262f505a115957e7e4eda7ff3e64900 172.30.85.217:6379
   slots:5461-10922 (5462 slots) master
M: 2d1440e37ea4f4e9f6d39d240367deaa609d324d 172.30.135.181:6379
   slots:10923-16383 (5461 slots) master
S: 0d7bf40bf18d474509116437959b65551cd68b03 172.30.58.251:6379
   replicates e5a3154a17131075f35fb32953b8cf8d6cfc7df0
S: 8cbf699a850c0dafe51524127a594fdbf0a27784 172.30.85.216:6379
   replicates 961398483262f505a115957e7e4eda7ff3e64900
S: 2987a33f4ce2e412dcc11c1c1daa2538591cd930 172.30.217.82:6379
   replicates 2d1440e37ea4f4e9f6d39d240367deaa609d324d
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 172.30.217.83:6379)
M: e5a3154a17131075f35fb32953b8cf8d6cfc7df0 172.30.217.83:6379
   slots:0-5460 (5461 slots) master
M: 961398483262f505a115957e7e4eda7ff3e64900 172.30.85.217:6379
   slots:5461-10922 (5462 slots) master
M: 2d1440e37ea4f4e9f6d39d240367deaa609d324d 172.30.135.181:6379
   slots:10923-16383 (5461 slots) master
M: 0d7bf40bf18d474509116437959b65551cd68b03 172.30.58.251:6379
   slots: (0 slots) master
   replicates e5a3154a17131075f35fb32953b8cf8d6cfc7df0
M: 8cbf699a850c0dafe51524127a594fdbf0a27784 172.30.85.216:6379
   slots: (0 slots) master
   replicates 961398483262f505a115957e7e4eda7ff3e64900
M: 2987a33f4ce2e412dcc11c1c1daa2538591cd930 172.30.217.82:6379
   slots: (0 slots) master
   replicates 2d1440e37ea4f4e9f6d39d240367deaa609d324d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered. 
From the above initialization information, we can see the cluster relationship:
redis-cluster-0 is a master node and redis-cluster-3 is its slave node.
redis-cluster-1 is a master node and redis-cluster-4 is its slave node.
redis-cluster-2 is a master node and redis-cluster-5 is its slave node.
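
This master/replica mapping can be double-checked at any time with the cluster nodes command, which lists each node's ID, role, and (for replicas) the ID of the master it follows (output omitted here):

kubectl exec -it redis-cluster-0 -n wiseco -- redis-cli cluster nodes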

4. Verify the Redis cluster deployment

[[email protected] redis]# kubectl exec -it redis-cluster-0 -n wiseco -- redis-cli cluster info                                       
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:130
cluster_stats_messages_pong_sent:137
cluster_stats_messages_sent:267
cluster_stats_messages_ping_received:132
cluster_stats_messages_pong_received:130
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:267
 
[[email protected] redis]# for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -n wiseco -- redis-cli role; echo; done
redis-cluster-0
master
168
172.30.58.251
6379
168
 
redis-cluster-1
master
168
172.30.85.216
6379
168
 
redis-cluster-2
master
182
172.30.217.82
6379
168
 
redis-cluster-3
slave
172.30.217.83
6379
connected
182
 
redis-cluster-4
slave
172.30.85.217
6379
connected
168
 
redis-cluster-5
slave
172.30.135.181
6379
connected
182
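
As a final smoke test (the key and value here are made up), write through one pod with -c so redis-cli follows the cluster's MOVED redirection, then read the value back from a different pod:

kubectl exec -it redis-cluster-0 -n wiseco -- redis-cli -c set testkey hello
kubectl exec -it redis-cluster-3 -n wiseco -- redis-cli -c get testkey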

Note: I wrote an article before about deploying single-node Redis on K8s, and today I found that the blogger “All the glitz” had written a cluster version, so this deployment builds on and upgrades that setup.
