Redis relies mainly on a master-slave architecture to achieve high read concurrency.
For most projects, one master and several slaves are enough: the single master handles writes (a single machine can sustain on the order of tens of thousands of write QPS), while multiple slaves serve reads and can together provide on the order of 100,000 read QPS.
If you need both high concurrency and a large amount of data, you need a Redis cluster, which can provide hundreds of thousands of read/write operations per second.
1. Redis master-slave architecture
A stand-alone Redis instance can carry on the order of tens of thousands of QPS. As a cache, Redis is generally used to support high read concurrency, so the architecture is made master-slave: one master and multiple slaves. The master handles writes and replicates the data to the slave nodes; the slaves handle reads, and all read requests go to them. This also makes it easy to scale out horizontally and support high read concurrency.
Redis replication -> master-slave architecture -> read-write separation -> horizontal scale-out to support high read concurrency
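The read-write split described above can be sketched as a tiny router. This is an illustrative sketch only, using in-memory dictionaries as stand-ins for Redis instances; the `Node` and `ReadWriteSplitter` names are hypothetical, not part of any Redis client library.

```python
import itertools

class Node:
    """Minimal in-memory stand-in for a Redis instance (illustration only)."""
    def __init__(self):
        self.data = {}

class ReadWriteSplitter:
    """Route writes to the single master, reads to the slaves round-robin."""
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = itertools.cycle(slaves)

    def set(self, key, value):
        self.master.data[key] = value  # writes always hit the master

    def get(self, key):
        # reads are spread across the slaves; assumes replication kept them current
        return next(self.slaves).data.get(key)

master = Node()
slaves = [Node(), Node()]
router = ReadWriteSplitter(master, slaves)
router.set("k", "v")
for s in slaves:          # stand-in for asynchronous replication
    s.data.update(master.data)
print(router.get("k"))    # -> v
```

Adding another slave to the pool is all it takes to raise read throughput, which is the horizontal scale-out the arrow chain above refers to.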
The core mechanisms of Redis replication
- Redis replicates data to slave nodes asynchronously; starting from Redis 2.8, slave nodes periodically report the offset of the data they have replicated.
- One master node can be configured with multiple slave nodes.
- A slave node can also replicate from another slave node (cascading replication).
- Replication does not block the master node: the master keeps serving requests normally while slaves are syncing.
- Replication does not block the slave node's own queries either; it keeps serving with its old dataset. However, once the copy is finished, the old dataset must be deleted and the new one loaded, and during that load the slave briefly stops serving.
- Slave nodes are mainly used for horizontal scale-out and read-write separation: each added slave node increases read throughput.
Note that with a master-slave architecture it is recommended to enable persistence on the master node. Do not rely on the slave nodes as the master's hot backup: if you turn off persistence on the master and it crashes and restarts, its dataset will be empty, and after replication the slave nodes' data will be wiped out as well.
In addition, set up proper backups of the master. If all local files are lost, pick an RDB from the backups to restore the master, so that it starts with data. Even with the high-availability mechanism described later, where a slave node can automatically take over the master, the master may restart before the sentinels detect the failure; if persistence is off, it comes back empty, and all the slave nodes' data is then cleared by replication.
1. Core principle of Redis master-slave replication
When a slave node starts, it sends a PSYNC command to the master node. If this is the first time the slave connects to this master, a full resynchronization (full replication) is triggered. The master then forks a background process to generate an RDB snapshot file, while caching all write commands newly received from clients in memory. Once the RDB file is generated, the master sends it to the slave; the slave writes it to local disk first and then loads it into memory. The master then sends the write commands cached in memory to the slave, and the slave applies them as well. If the slave loses its network connection to the master, it reconnects automatically; after reconnecting, the master copies only the missing data to the slave.
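The full-sync flow above (snapshot, then replay buffered writes) can be sketched as a small simulation. This is not Redis code; the `Master`/`Slave` classes are hypothetical stand-ins, and the deep copy plays the role of the fork-based point-in-time snapshot.

```python
import copy

class Master:
    def __init__(self):
        self.data = {}
        self.repl_buffer = None   # write commands received while the RDB is being built

    def start_full_sync(self):
        # analogous to forking for bgsave: freeze a point-in-time snapshot
        snapshot = copy.deepcopy(self.data)
        self.repl_buffer = []     # start buffering new writes
        return snapshot

    def write(self, key, value):
        self.data[key] = value
        if self.repl_buffer is not None:
            self.repl_buffer.append(("SET", key, value))

class Slave:
    def __init__(self):
        self.data = {}

    def load_snapshot(self, snapshot):
        self.data = dict(snapshot)   # replaces the old dataset

    def apply(self, cmd):
        _, key, value = cmd
        self.data[key] = value

master, slave = Master(), Slave()
master.write("a", 1)
snap = master.start_full_sync()   # "RDB generation" begins
master.write("b", 2)              # arrives mid-snapshot, goes into the buffer
slave.load_snapshot(snap)
for cmd in master.repl_buffer:    # master drains the buffered writes to the slave
    slave.apply(cmd)
print(slave.data)  # -> {'a': 1, 'b': 2}
```

The key point the sketch shows: writes that arrive during snapshot generation are not lost; they are buffered and replayed, so the slave ends up consistent with the master.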
#### Master-slave replication breakpoint resume
Starting from Redis 2.8, master-slave replication supports resuming from a breakpoint: if the network connection drops during replication, it can continue from where the last copy left off instead of starting over from scratch.
The master node maintains a backlog in memory. Both master and slave keep a replica offset and the master's run ID; the offset is tracked against the backlog. If the network connection between master and slave drops, the slave asks the master to continue replicating from its last replica offset. If the corresponding offset is no longer in the backlog, a full resynchronization is performed instead.
Locating the master by host + ip alone is unreliable: if the master restarts or its data changes, it is effectively a different instance, so slave nodes distinguish masters by their run IDs.
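The partial-vs-full resync decision can be sketched with a tiny backlog model. This is an illustrative sketch, not the real PSYNC protocol: the `ReplicationBacklog` class and its method names are hypothetical, and a Python bytes object stands in for the fixed-size ring buffer.

```python
class ReplicationBacklog:
    """Fixed-size backlog: keeps the most recent `capacity` bytes of the
    replication stream, addressed by absolute offset."""
    def __init__(self, run_id, capacity):
        self.run_id = run_id
        self.capacity = capacity
        self.buffer = b""
        self.master_offset = 0    # absolute offset of the end of the stream

    def feed(self, data):
        self.master_offset += len(data)
        self.buffer = (self.buffer + data)[-self.capacity:]  # old bytes fall off

    def psync(self, slave_run_id, slave_offset):
        start = self.master_offset - len(self.buffer)
        if slave_run_id != self.run_id or not (start <= slave_offset <= self.master_offset):
            # unknown master, or the offset already fell out of the backlog
            return ("FULLRESYNC", None)
        missing = self.buffer[slave_offset - start:]
        return ("CONTINUE", missing)

backlog = ReplicationBacklog(run_id="abc", capacity=8)
backlog.feed(b"0123456789")       # 10 bytes fed, only the last 8 retained
print(backlog.psync("abc", 4))    # offset still inside the backlog -> partial resync
print(backlog.psync("abc", 0))    # offset already evicted -> full resync
print(backlog.psync("xyz", 4))    # different run ID -> full resync
```

This is why both the offset and the run ID matter: matching only on the offset could silently continue against a restarted, different master.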
Diskless replication

The master can create the RDB directly in memory and send it to the slaves without writing it to its local disk first. Just enable repl-diskless-sync in the configuration file.
repl-diskless-sync yes
# wait 5 seconds before starting replication, so that more slaves can reconnect first
repl-diskless-sync-delay 5
Expired key processing
A slave node does not expire keys on its own; it waits for the master to expire them. When the master expires a key (or evicts one via LRU), it synthesizes a DEL command and sends it to the slaves.
Complete process of replication
When a slave node starts, it saves the master node's information locally, including the master's host and port, but the replication process has not yet started.
A scheduled task inside the slave node checks every second whether there is a new master node to connect to and replicate from. If it finds one, it establishes a socket connection with the master and sends it a PING command. If requirepass is set on the master, the slave must authenticate with the masterauth password. The master then performs a full replication for the first time, sending all of its data to the slave; afterwards, the master continues to asynchronously replicate write commands to the slave.
The master executes bgsave and generates an RDB snapshot file locally.
The master node sends the RDB snapshot file to the slave node. If the RDB transfer takes longer than 60 seconds (repl-timeout), the slave node considers the replication failed; this parameter can be raised as appropriate (a machine with a gigabit NIC typically transfers about 100 MB per second, so a 6 GB file can easily exceed 60 s).
While generating the RDB, the master node caches all new write commands in memory; after the slave node has saved the RDB, the master sends these new write commands to it.
If during replication the output buffer exceeds 256 MB at once (hard limit), or stays above 64 MB for 60 consecutive seconds (soft limit), replication stops and fails:

client-output-buffer-limit slave 256mb 64mb 60
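The hard/soft limit semantics of that setting can be sketched as follows. This is an illustrative model of the rule, not Redis source; the `OutputBufferLimiter` class is hypothetical.

```python
class OutputBufferLimiter:
    """Mirrors `client-output-buffer-limit slave 256mb 64mb 60`:
    disconnect if the buffer ever exceeds the hard limit, or stays above
    the soft limit for longer than soft_seconds."""
    MB = 1024 * 1024

    def __init__(self, hard=256 * MB, soft=64 * MB, soft_seconds=60):
        self.hard, self.soft, self.soft_seconds = hard, soft, soft_seconds
        self.soft_since = None    # time the buffer first crossed the soft limit

    def check(self, buffer_bytes, now):
        if buffer_bytes > self.hard:
            return "disconnect"              # hard limit: fail immediately
        if buffer_bytes > self.soft:
            if self.soft_since is None:
                self.soft_since = now        # start the soft-limit timer
            elif now - self.soft_since > self.soft_seconds:
                return "disconnect"          # soft limit held too long
        else:
            self.soft_since = None           # dropped below soft limit: reset
        return "ok"

MB = 1024 * 1024
lim = OutputBufferLimiter()
print(lim.check(300 * MB, now=0))    # over 256 MB at once -> disconnect
lim = OutputBufferLimiter()
print(lim.check(100 * MB, now=0))    # over 64 MB, timer starts -> ok
print(lim.check(100 * MB, now=61))   # still over 64 MB after 60 s -> disconnect
```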
After receiving the RDB, the slave node flushes its old data and loads the RDB into its own memory (while the transfer is still in progress, it keeps serving requests from the old dataset).
If the slave node has AOF enabled, bgrewriteaof will be executed immediately to rewrite the AOF.
Incremental replication

If the master-slave network connection drops during full replication, incremental replication is triggered when the slave reconnects to the master.
The master fetches the missing data from its own backlog and sends it to the slave node; the default backlog size is 1 MB.
The master locates the data in the backlog using the offset carried in the PSYNC command sent by the slave.
Heartbeat

Master and slave nodes both send heartbeat information to each other.
By default, the master sends a heartbeat every 10 seconds, and the slave node sends one every second.
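The master-side interval corresponds to a redis.conf setting; to the best of my knowledge it is `repl-ping-slave-period` in the Redis versions this article discusses (renamed `repl-ping-replica-period` in Redis 5). The slave's once-per-second REPLCONF ACK is not configurable.

```
# master pings its slaves every 10 seconds (the default)
repl-ping-slave-period 10
```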
Asynchronous replication

Each time the master receives a write command, it first writes the data internally and then asynchronously sends it to the slave nodes.