Introduction to Redis – data types: a detailed explanation of Stream

Time:2021-11-14

Redis 5.0 adds a new data type, Stream, whose design draws on Kafka. It is a new, powerful, persistent message queue with multicast support. @pdai

Why was Stream designed?

Redis 5.0 adds a new data structure, Stream. Literally it is a "stream" type, but from a functional point of view it is best understood as Redis's full-fledged implementation of a message queue (MQ).

Anyone who has used Redis as a message queue knows that there are several ways to implement a message queue on top of Redis, for example:

  • Pub/Sub, the publish/subscribe mode
    • However, Pub/Sub cannot be persisted: if the network is disconnected or Redis goes down, messages are simply discarded;
  • Based on List (LPUSH + BRPOP) or based on Sorted Set (a minimal sketch of the List approach follows this list)
    • These support persistence, but do not support multicast, group consumption, and so on.
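For reference, here is a minimal sketch of the List-based approach; the key name myqueue and the payload are hypothetical, and the output is what one would typically expect:

#Producer pushes a job onto the list
127.0.0.1:6379> LPUSH myqueue "task1"
(integer) 1
#Consumer blocks for up to 5 seconds waiting for a job at the other end
127.0.0.1:6379> BRPOP myqueue 5
1) "myqueue"
2) "task1"

Once a value is popped it is gone, so a second group of consumers can never see it; this is exactly the multicast/group-consumption gap that Stream is meant to fill.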

Why can't the structures above cover the full range of MQ scenarios? This leads to a core question: if we want to design a data structure that implements a message queue, what does a message queue design have to consider? A few preliminary points come to mind quickly:

  • Message production
  • Message consumption
    • Unicast and multicast (many-to-many)
    • Blocking and non-blocking reads
  • Message ordering
  • Message persistence

What else should be considered? Borrowing the diagram from the Meituan technical team's article, Essentials of Message Queue Design:

Let's take a look at which of these design points Redis has addressed:

  • Serialization generation of message ID
  • Message traversal
  • Blocking and non blocking reads of messages
  • Group consumption of messages
  • Handling of unprocessed (un-acked) messages
  • Message queue monitoring

These are exactly the points we need to understand about Stream. Combined with the figure above, however, we should also realize that Redis Stream is an ultra-lightweight MQ: it does not implement every design point of a full-blown message queue, and that determines the scenarios it is suitable for.

Stream explanation

After sorting and summarizing, I think it is most appropriate to understand Stream from the following aspects: @pdai

  • Structural design of stream
  • Production and consumption
    • Basic addition, deletion, query and modification
    • Consumption by a single consumer
    • Consumption within a consumer group
  • Monitoring status

Structure of stream

Each stream has a unique name, which is the redis key. It is automatically created when we first use the xadd instruction to append messages.

Analysis of the above figure:

  • Consumer group: a consumer group is created with the xgroup create command. A consumer group contains multiple consumers, and these consumers compete for messages.
  • last_delivered_id: each consumer group maintains a cursor, last_delivered_id; whenever any consumer in the group reads a message, the cursor moves forward.
  • pending_ids: a per-consumer state variable that records the IDs of messages the client has read but not yet acknowledged (ack, acknowledge character). If the client never acks, the IDs accumulate in this variable; once a message is acked, the set starts to shrink. This pending_ids variable is officially called the PEL (Pending Entries List) in Redis. It is a core data structure: it guarantees that a client consumes each message at least once, so messages are not lost in network transit and left unprocessed.

In addition, we need to understand two points:

  • Message ID: a message ID has the form timestampInMillis-sequence, for example 1527846880572-5, meaning the message was generated at millisecond timestamp 1527846880572 and is the fifth message produced within that millisecond. A message ID can be generated automatically by the server or specified by the client, but it must keep the integer-integer form, and the ID of a newly added message must be greater than the previous message's ID (a minimal sketch of a client-specified ID follows this list).
  • Message content: the message content is a set of key-value pairs, just like the key-value pairs of a hash structure; nothing special here.
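As a minimal sketch of a client-specified ID (the key mystream is hypothetical and the IDs shown are only illustrative; an explicit ID must be greater than the stream's current last ID):

#Let the server generate the ID automatically first
127.0.0.1:6379> xadd mystream * color red
"1527850000000-0"
#Now specify an ID explicitly; it must be greater than the previous one
127.0.0.1:6379> xadd mystream 1527850001000-0 color blue
"1527850001000-0"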

Add, delete, modify and query

Message queue related commands:

  • Xadd – appends a message to the end of the stream
  • Xtrim – trims the stream, limiting its length (see the sketch after the transcript below)
  • Xdel – deletes a message
  • Xlen – gets the number of entries contained in the stream, i.e. the queue length
  • Xrange – gets a list of messages, automatically filtering out deleted ones
  • Xrevrange – gets the message list in reverse order, IDs from large to small
  • Xread – gets a list of messages in blocking or non-blocking mode
#The * sign means the server generates the message ID automatically, followed by a list of key/value pairs
127.0.0.1:6379> xadd codehole * name laoqian age 30  # name is laoqian, age is 30
1527849609889-0  # the generated message ID
127.0.0.1:6379> xadd codehole * name xiaoyu age 29
1527849629172-0
127.0.0.1:6379> xadd codehole * name xiaoqian age 1
1527849637634-0
127.0.0.1:6379> xlen codehole
(integer) 3
127.0.0.1:6379> xrange codehole - +  # - means the minimum ID, + means the maximum ID
1) 1) 1527849609889-0
   2) 1) "name"
      2) "laoqian"
      3) "age"
      4) "30"
2) 1) 1527849629172-0
   2) 1) "name"
      2) "xiaoyu"
      3) "age"
      4) "29"
3) 1) 1527849637634-0
   2) 1) "name"
      2) "xiaoqian"
      3) "age"
      4) "1"
127.0.0.1:6379> xrange codehole 1527849629172-0 +  # specify the minimum message ID to start from
1) 1) 1527849629172-0
   2) 1) "name"
      2) "xiaoyu"
      3) "age"
      4) "29"
2) 1) 1527849637634-0
   2) 1) "name"
      2) "xiaoqian"
      3) "age"
      4) "1"
127.0.0.1:6379> xrange codehole - 1527849629172-0  # specify the maximum message ID to end at
1) 1) 1527849609889-0
   2) 1) "name"
      2) "laoqian"
      3) "age"
      4) "30"
2) 1) 1527849629172-0
   2) 1) "name"
      2) "xiaoyu"
      3) "age"
      4) "29"
127.0.0.1:6379> xdel codehole 1527849609889-0
(integer) 1
127.0.0.1:6379> xlen codehole  # the length is not affected
(integer) 3
127.0.0.1:6379> xrange codehole - +  # the deleted message is gone
1) 1) 1527849629172-0
   2) 1) "name"
      2) "xiaoyu"
      3) "age"
      4) "29"
2) 1) 1527849637634-0
   2) 1) "name"
      2) "xiaoqian"
      3) "age"
      4) "1"
127.0.0.1:6379> del codehole  # delete the entire stream
(integer) 1
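xtrim, listed among the commands above but not shown in the transcript, caps a stream at a maximum length by evicting the oldest entries. A minimal sketch, assuming a hypothetical stream mystream that already holds 5 entries:

#Keep only the newest 2 entries
127.0.0.1:6379> xtrim mystream MAXLEN 2
(integer) 3  # number of entries evicted
127.0.0.1:6379> xlen mystream
(integer) 2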

Independent consumption

We can consume Stream messages independently, without defining a consumer group, and we can even block waiting when the stream has no new messages. Redis provides a standalone consumption command, xread, which lets us use a stream as an ordinary message queue (list). When using xread we can completely ignore the existence of consumer groups, as if the stream were just a plain list.

#Read two messages from the stream header
127.0.0.1:6379> xread count 2 streams codehole 0-0
1) 1) "codehole"
   2) 1) 1) 1527851486781-0
         2) 1) "name"
            2) "laoqian"
            3) "age"
            4) "30"
      2) 1) 1527851493405-0
         2) 1) "name"
            2) "yurui"
            3) "age"
            4) "29"
#Read a message from the tail of the stream. There is no doubt that no message will be returned here
127.0.0.1:6379> xread count 1 streams codehole $
(nil)
#Block at the tail of the stream; the following command will not return until a new message arrives
127.0.0.1:6379> xread block 0 count 1 streams codehole $
#We open a new window and insert messages into the stream
127.0.0.1:6379> xadd codehole * name youming age 60
1527852774092-0
#Then switch to the previous window, and we can see that the blocking is removed and a new message content is returned
#It also shows a waiting time, where we waited for 93s
127.0.0.1:6379> xread block 0 count 1 streams codehole $
1) 1) "codehole"
   2) 1) 1) 1527852774092-0
         2) 1) "name"
            2) "youming"
            3) "age"
            4) "60"
(93.11s)

If a client wants to consume sequentially with xread, it must remember where its consumption currently stands, i.e. the last returned message ID. On the next call to xread, pass that last returned message ID as the parameter to continue consuming the subsequent messages, as in the sketch below.
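A minimal sketch, reusing the messages already shown in the transcript above:

#First call: start from the very beginning (0-0) and remember the last returned ID
127.0.0.1:6379> xread count 1 streams codehole 0-0
1) 1) "codehole"
   2) 1) 1) 1527851486781-0
         2) 1) "name"
            2) "laoqian"
            3) "age"
            4) "30"
#Next call: pass the last returned ID; only messages with strictly greater IDs are returned
127.0.0.1:6379> xread count 1 streams codehole 1527851486781-0
1) 1) "codehole"
   2) 1) 1) 1527851493405-0
         2) 1) "name"
            2) "yurui"
            3) "age"
            4) "29"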

block 0 means block forever until a message arrives; block 1000 means block for 1 s, and if no message arrives within 1 s, nil is returned:

127.0.0.1:6379> xread block 1000 count 1 streams codehole $
(nil)
(1.07s)

Consumer group consumption

  • Consumer group consumption diagram

  • Related commands:

    • Xgroup create – creates a consumer group
    • Xreadgroup – reads messages within a consumer group
    • Xack – marks a message as processed
    • Xgroup setid – sets a new last-delivered message ID for the consumer group
    • Xgroup delconsumer – deletes a consumer
    • Xgroup destroy – deletes a consumer group
    • Xpending – displays information about pending messages
    • Xclaim – transfers ownership of a message
    • Xinfo – views information about streams and consumer groups
    • Xinfo groups – prints consumer group information
    • Xinfo stream – prints stream information
  • Creating a consumer group

A consumer group is created on a stream with the xgroup create command. It needs a start message ID parameter to initialize the last_delivered_id variable.

127.0.0.1:6379> xgroup create codehole cg1 0-0  # 0-0 means consume from the very beginning
OK
# $ means consume from the tail, accepting only new messages; all existing messages in the stream are ignored
127.0.0.1:6379> xgroup create codehole cg2 $
OK
127.0.0.1:6379> xinfo stream codehole  # get stream information
 1) length
 2) (integer) 3  # 3 messages in total
 3) radix-tree-keys
 4) (integer) 1
 5) radix-tree-nodes
 6) (integer) 2
 7) groups
 8) (integer) 2  # two consumer groups
 9) first-entry  # the first message
10) 1) 1527851486781-0
    2) 1) "name"
       2) "laoqian"
       3) "age"
       4) "30"
11) last-entry  # the last message
12) 1) 1527851498956-0
    2) 1) "name"
       2) "xiaoqian"
       3) "age"
       4) "1"
127.0.0.1:6379> xinfo groups codehole  # get the stream's consumer group information
1) 1) name
   2) "cg1"
   3) consumers
   4) (integer) 0 # this consumer group has no consumers
   5) pending
   6) (integer) 0 # the consumer group has no messages being processed
2) 1) name
   2) "cg2"
   3) consumers
   4) (integer) 0  # this consumer group has no consumers yet
   5) pending
   6) (integer) 0 # the consumer group has no messages being processed
  • Consuming within a consumer group

Stream provides the xreadgroup command for consuming within a consumer group. It requires the consumer group name, a consumer name, and a start message ID. Like xread, it can also block while waiting for new messages. After a new message is read, its ID enters the consumer's PEL (the list of messages being processed). Once the client has finished processing, it uses the xack command to notify the server, and the message ID is removed from the PEL.

#The > sign means read from the consumer group's last_delivered_id onwards
#Every time a consumer reads a message, the last_delivered_id variable advances
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 1 streams codehole >
1) 1) "codehole"
   2) 1) 1) 1527851486781-0
         2) 1) "name"
            2) "laoqian"
            3) "age"
            4) "30"
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 1 streams codehole >
1) 1) "codehole"
   2) 1) 1) 1527851493405-0
         2) 1) "name"
            2) "yurui"
            3) "age"
            4) "29"
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 2 streams codehole >
1) 1) "codehole"
   2) 1) 1) 1527851498956-0
         2) 1) "name"
            2) "xiaoqian"
            3) "age"
            4) "1"
      2) 1) 1527852774092-0
         2) 1) "name"
            2) "youming"
            3) "age"
            4) "60"
#If you continue reading, there will be no new messages
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 1 streams codehole >
(nil)
#Then wait
127.0.0.1:6379> xreadgroup GROUP cg1 c1 block 0 count 1 streams codehole >
#Open another window and insert a message into the stream
127.0.0.1:6379> xadd codehole * name lanying age 61
1527854062442-0
#Go back to the previous window and find that the blocking has been removed and a new message has been received
127.0.0.1:6379> xreadgroup GROUP cg1 c1 block 0 count 1 streams codehole >
1) 1) "codehole"
   2) 1) 1) 1527854062442-0
         2) 1) "name"
            2) "lanying"
            3) "age"
            4) "61"
(36.54s)
127.0.0.1:6379> xinfo groups codehole  # observe the consumer group information
1) 1) name
   2) "cg1"
   3) consumers
   4) (integer) 1  # one consumer
   5) pending
   6) (integer) 5  # 5 pending messages that have not been acked yet
2) 1) name
   2) "cg2"
   3) consumers
   4) (integer) 0  # consumer group cg2 is unchanged, because we have only been operating on cg1
   5) pending
   6) (integer) 0
#If there are multiple consumers in the same consumer group, we can observe each consumer's status with the xinfo consumers command
127.0.0.1:6379> xinfo consumers codehole cg1  # currently there is 1 consumer
1) 1) name
   2) "c1"
   3) pending
   4) (integer) 5  # 5 pending messages
   5) idle
   6) (integer) 418715  # idle time in ms since this consumer last read a message
#Next we ack a message
127.0.0.1:6379> xack codehole cg1 1527851486781-0
(integer) 1
127.0.0.1:6379> xinfo consumers codehole cg1
1) 1) name
   2) "c1"
   3) pending
   4) (integer) 4  # drops from 5 to 4
   5) idle
   6) (integer) 668504
#Now ack all the remaining messages
127.0.0.1:6379> xack codehole cg1 1527851493405-0 1527851498956-0 1527852774092-0 1527854062442-0
(integer) 4
127.0.0.1:6379> xinfo consumers codehole cg1
1) 1) name
   2) "c1"
   3) pending
   4) (integer) 0 # pel is empty
   5) idle
   6) (integer) 745505

Information monitoring

Stream provides the xinfo command for monitoring. You can query:

  • View queue information
127.0.0.1:6379> Xinfo stream mq
 1) "length"
 2) (integer) 7
 3) "radix-tree-keys"
 4) (integer) 1
 5) "radix-tree-nodes"
 6) (integer) 2
 7) "groups"
 8) (integer) 1
 9) "last-generated-id"
10) "1553585533795-9"
11) "first-entry"
12) 1) "1553585533795-3"
    2) 1) "msg"
       2) "4"
13) "last-entry"
14) 1) "1553585533795-9"
    2) 1) "msg"
       2) "10"
  • Consumer group information
127.0.0.1:6379> Xinfo groups mq
1) 1) "name"
   2) "mqGroup"
   3) "consumers"
   4) (integer) 3
   5) "pending"
   6) (integer) 3
   7) "last-delivered-id"
   8) "1553585533795-4"
  • Consumer group member information
127.0.0.1:6379> XINFO CONSUMERS mq mqGroup
1) 1) "name"
   2) "consumerA"
   3) "pending"
   4) (integer) 1
   5) "idle"
   6) (integer) 18949894
2) 1) "name"
   2) "consumerB"
   3) "pending"
   4) (integer) 1
   5) "idle"
   6) (integer) 3092719
3) 1) "name"
   2) "consumerC"
   3) "pending"
   4) (integer) 1
   5) "idle"
   6) (integer) 23683256

At this point, we have covered the main message queue commands.

Deeper understanding

Let's see how Redis solves common MQ problems, to understand Redis Stream more deeply.

What scenarios can Stream be used in?

It can be used for real-time communication, big data analysis, off-site data backup, and so on.

Consumer clients can be scaled out smoothly to increase processing capacity.

Does the message ID design take clock rollback into account?

In distributed ID-generation algorithms, a common design problem is clock rollback. Does Redis take this problem into account in its message ID design?

The ID 1553439850328-0 produced by xadd is a message ID generated by Redis, consisting of two parts: timestamp-sequence. The timestamp is in milliseconds and is the time of the Redis server that generated the message; it is a 64-bit integer (int64). The sequence number is the message's sequence within that millisecond, also a 64-bit integer.

That the sequence number increments can be verified with a MULTI/EXEC batch:

127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> XADD memberMessage * msg one
QUEUED
127.0.0.1:6379> XADD memberMessage * msg two
QUEUED
127.0.0.1:6379> XADD memberMessage * msg three
QUEUED
127.0.0.1:6379> XADD memberMessage * msg four
QUEUED
127.0.0.1:6379> XADD memberMessage * msg five
QUEUED
127.0.0.1:6379> EXEC
1) "1553441006884-0"
2) "1553441006884-1"
3) "1553441006884-2"
4) "1553441006884-3"
5) "1553441006884-4"

Because Redis commands execute very quickly, you can see that messages within the same millisecond timestamp are distinguished by an increasing sequence number.

To keep messages ordered, the IDs generated by Redis are monotonically increasing. Because the ID contains a timestamp, and to avoid problems caused by server clock errors (for example the clock moving backwards), Redis maintains a latest_generated_id attribute for each stream, recording the ID of the last generated message. If the current timestamp is found to have moved backwards (i.e. it is smaller than the one recorded in latest_generated_id), Redis keeps the timestamp unchanged and increments the sequence number to form the new message ID (this is also why an int64 is used for the sequence number, so there are always enough sequence numbers), thereby preserving the monotonically increasing property of IDs.

It is strongly recommended to let Redis generate the message ID, because this monotonically increasing timestamp + sequence scheme meets almost every need. But remember that custom IDs are also supported, as sketched below.
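A minimal sketch of a custom ID that violates the monotonicity rule; the key idtest and the IDs are hypothetical, and the exact error text may differ between Redis versions:

127.0.0.1:6379> xadd idtest 5-1 msg a  # explicitly specify ID 5-1
"5-1"
127.0.0.1:6379> xadd idtest 3-1 msg b  # 3-1 is smaller than the stream's current last ID, so it is rejected
(error) ERR The ID specified in XADD is equal or smaller than the target stream top item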

Will messages be lost if a consumer crashes?

To solve the problem of messages being lost when a consumer crashes while reading or processing them, Stream maintains a pending list that records messages that have been read but not yet processed (acked). The xpending command retrieves the unprocessed messages of a consumer group, or of a single consumer within that group. The demonstration is as follows:

127.0.0.1:6379> XPENDING mq mqGroup  # pending status of mqGroup
1) (integer) 5  # 5 messages read but not yet processed
2) "1553585533795-0"  # start ID
3) "1553585533795-4"  # end ID
4) 1) 1) "consumerA"  # consumerA has 3
      2) "3"
   2) 1) "consumerB"  # consumerB has 1
      2) "1"
   3) 1) "consumerC"  # consumerC has 1
      2) "1"

127.0.0.1:6379> XPENDING mq mqGroup - + 10  # use the start/end/count arguments to get details
1) 1) "1553585533795-0"  # message ID
   2) "consumerA"  # the consumer
   3) (integer) 1654355  # idle: 1654355 ms since the message was delivered
   4) (integer) 5  # delivery counter: the message has been delivered 5 times
2) 1) "1553585533795-1"
   2) "consumerA"
   3) (integer) 1654355
   4) (integer) 4
#There are 5 in total, and the remaining 3 are omitted

127.0.0.1:6379> XPENDING mq mqGroup - + 10 consumerA  # add the consumer argument to get a specific consumer's pending list
1) 1) "1553585533795-0"
   2) "consumerA"
   3) (integer) 1641083
   4) (integer) 5
#There are 3 in total, and the remaining 2 are omitted

Each pending message has four properties:

  • Message ID
  • Consumer
  • Idle, the time elapsed since the message was last delivered to the consumer
  • Delivery counter, the number of times the message has been delivered

From the results above we can see that all the messages we read earlier are recorded in the pending list, meaning they have been read but not yet processed. So how do we tell Redis that a consumer has finished processing a message? Use the xack command to acknowledge completion. The demonstration is as follows:

127.0.0.1:6379> XACK mq mqGroup 1553585533795-0  # notify that processing of this message is finished, identified by its ID
(integer) 1

127.0.0.1:6379> XPENDING mq mqGroup  # view the pending list again
1) (integer) 4  # the number of read-but-unprocessed messages has dropped to 4
2) "1553585533795-1"
3) "1553585533795-4"
4) 1) 1) "consumerA"  # consumerA now has 2 pending messages
      2) "2"
   2) 1) "consumerB"
      2) "1"
   3) 1) "consumerC"
      2) "1"
127.0.0.1:6379>

With this pending mechanism, a message that a consumer has read but not yet processed is not lost: after the consumer comes back online, it can re-read its pending list and continue processing, which keeps messages ordered and prevents loss, as sketched below.
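A minimal sketch, assuming the group cg1 and consumer c1 from the earlier transcript: passing an explicit ID such as 0 instead of > makes xreadgroup return messages from this consumer's own pending list (already delivered but not yet acked) rather than new ones.

#Re-read c1's own pending (delivered but un-acked) messages instead of new ones
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 10 streams codehole 0
#...process each returned entry, then xack it as usual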

How are a consumer's pending messages transferred to other consumers after it goes down for good?

Another problem: if a consumer goes down and never comes back online, its pending messages need to be handed over to other consumers. This is message transfer.

Transferring a message moves it into another consumer's pending list. This is done with the xclaim command, which takes the group, the target consumer, and the message ID to transfer, plus an idle threshold (min-idle-time): only messages that have gone unprocessed longer than this threshold can be transferred. The demonstration is as follows:

#Currently, message 1553585533795-1, which belongs to consumerA, has gone unprocessed for 15907787 ms
127.0.0.1:6379> XPENDING mq mqGroup - + 10
1) 1) "1553585533795-1"
   2) "consumerA"
   3) (integer) 15907787
   4) (integer) 4

#Transfer message 1553585533795-1, idle for more than 3600 s (3600000 ms), to consumerB's pending list
127.0.0.1:6379> XCLAIM mq mqGroup consumerB 3600000 1553585533795-1
1) 1) "1553585533795-1"
   2) 1) "msg"
      2) "2"

#Message 1553585533795-1 has been transferred to consumerB's pending list
127.0.0.1:6379> XPENDING mq mqGroup - + 10
1) 1) "1553585533795-1"
   2) "consumerB"
   3) (integer) 84404 # note that idle has been reset
   4) (integer) 5  # note that the delivery count is also incremented by one

The code above completes one message transfer. Besides the message ID, the transfer must also specify a min-idle-time, which guarantees that only messages that have gone unprocessed for at least that long can be transferred. The idle time of a transferred message is reset, which prevents the same message from being transferred repeatedly: if concurrent operations try to hand an expired message to several consumers at the same time, the later transfers fail because the idle condition is no longer met. For example, of the two consecutive transfers below, the second one does not succeed.

127.0.0.1:6379> XCLAIM mq mqGroup consumerB 3600000 1553585533795-1
127.0.0.1:6379> XCLAIM mq mqGroup consumerC 3600000 1553585533795-1

That is message transfer. So far we have used three attributes of a pending message: its ID, its consumer, and its idle time. The remaining attribute is the delivery counter, the number of times the message has been delivered (including transfers). It is mainly used to judge whether the message is bad data.

Dead letter

As mentioned above, if a message can never be processed by any consumer, i.e. it is never xacked, it stays in the pending list for a long time and keeps being transferred from consumer to consumer. Each transfer increments its delivery counter (as seen in the example in the previous section). Once the counter exceeds a preset threshold, we treat the message as a bad message (also called a dead letter, an undeliverable message). With that criterion, we can dispose of the dead letter by deleting it. To delete a message, use the xdel command. The demonstration is as follows:

#Delete messages in the queue
127.0.0.1:6379> XDEL mq 1553585533795-1
(integer) 1
#The message is no longer in the queue
127.0.0.1:6379> XRANGE mq - +
1) 1) "1553585533795-0"
   2) 1) "msg"
      2) "1"
2) 1) "1553585533795-2"
   2) 1) "msg"
      2) "3"

Note that in this example the message's entry in the pending list is not deleted, so it will still show up if you check xpending. To mark it as handled, you can still execute xack on it, as sketched below.
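A minimal sketch of cleaning up that leftover pending entry (the acknowledgement works even though the entry was already removed from the stream with xdel, because the pending list is kept separately):

#Ack the deleted message so that it also disappears from the pending list
127.0.0.1:6379> XACK mq mqGroup 1553585533795-1
(integer) 1
127.0.0.1:6379> XPENDING mq mqGroup  # 1553585533795-1 no longer appears in the output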


Related articles

First, we learn the conceptual basis of redis to understand its applicable scenarios.

  • Introduction to redis – redis concept and foundation
    • Redis is a storage system that supports key-value and several other data structures. It can be used for caching, event publish/subscribe, high-speed queues, and other scenarios. It supports network access, provides string, hash, list, queue, and set structures with direct access, is memory-based, and supports persistence.

Secondly, these scenarios are built on the data types Redis supports, so we need to learn those data types. At the same time, Redis optimizes its underlying data structures, so we also need to understand how some of those underlying structures are designed and implemented.

In addition, you need to learn the core features Redis supports, including persistence, messaging, transactions, and high availability; high availability covers master-slave replication, Sentinel, and so on; and high scalability, such as the sharding mechanism.

  • Advanced redis persistence: detailed explanation of RDB and AOF mechanisms
    • To prevent data loss and to recover data when the service restarts, Redis supports persistence, mainly via two mechanisms, RDB and AOF; in practice a hybrid of the two is often used.
  • Advanced redis – messaging: detailed explanation of publish subscribe mode
    • Redis publish / subscribe (Pub / sub) is a message communication mode: the sender (PUB) sends messages and the subscriber (sub) receives messages.
  • Advanced redis event: detailed explanation of redis event mechanism
    • Redis uses an event-driven mechanism to handle a large amount of network I/O. Instead of relying on mature open-source solutions such as libevent or libev, it implements its own very concise event-driven library, ae.
  • Advanced redis transaction: detailed explanation of redis transaction
    • Redis transaction is essentially a collection of commands. Transactions support the execution of multiple commands at a time, and all commands in a transaction will be serialized. During transaction execution, the commands in the execution queue will be serialized in order, and the command requests submitted by other clients will not be inserted into the transaction execution command sequence.
  • Advanced redis – high availability: detailed explanation of master-slave replication
    • We know that to avoid a single point of failure, that is, to ensure high availability, we need to provide cluster services in a redundant (Replica) manner. Redis provides the master-slave database mode to ensure the consistency of data copies. The master-slave database adopts the method of read-write separation. This article mainly describes the master-slave replication of redis.
  • Advanced redis – high availability: detailed explanation of redis sentinel mechanism
    • Building on master-slave replication, what happens if the master node fails? In a Redis master-slave cluster, the Sentinel mechanism is the key to automatic master-slave switching; it effectively solves the failover problem of the master-slave replication mode.
  • Advanced redis – high scalability: detailed explanation of redis cluster
    • The previous two articles covered master-slave replication and the Sentinel mechanism, which ensure high availability. With read/write separation, slave nodes extend read concurrency, but write capacity and storage capacity cannot grow beyond what a single master node can carry. To face massive data, master nodes must be sharded into a cluster while keeping high availability (master-slave replication plus Sentinel), i.e. each master shard also needs slave nodes. This is the typical horizontal scaling (cluster sharding) in distributed systems, and the corresponding design, introduced in Redis 3.0, is Redis Cluster.

Finally, the specific practice and the problems and solutions encountered in practice: there are different characteristics in different versions, so you still need to understand the version; And performance optimization, large factory practice, etc.

