Learning Redis from Scratch

Date: 2020-02-18

Preface

This article is also available in my GitHub repository; stars are welcome: https://github.com/bin3923282
The best time to plant a tree was ten years ago; the second best time is now.
I know many of you don’t use QQ any more, but you are still welcome to join my “Six Meridian Divine Sword” Java beginners learning group (group number: 549684836), where we encourage each other along the road of technical blogging.


Chatter

The Carefree Heavenly Realm takes the Way of Heaven as its power, and all things echo it.
This is an advanced chapter. For the basics, see my earlier articles:
Learning Redis from Scratch (Part 1)
Learning Redis from Scratch (Part 2)
The first article covers a lot of basic concepts and may put you to sleep; the second is full of practical material worth studying.
This article, too, is packed with practical material.

This article mainly covers Redis’s eviction policies and its persistence mechanisms.

Redis’s eviction policies

If your Redis instance can only hold 8 GB of data and you write 11 GB into it, how does Redis decide which 3 GB to evict?
See the relevant configuration in redis.conf:

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
# Maximum memory policy: when the memory limit is reached, how does Redis choose which keys to evict? You can select among the behaviors below.

# (i.e., when memory is not enough to hold newly written data)

# volatile-lru -> remove the key with an expire set using an LRU algorithm
# volatile-lru: evict the least recently used key among keys with an expire set. Typically used when Redis serves as both a cache and a persistent store.

# allkeys-lru -> remove any key according to the LRU algorithm
# allkeys-lru: evict the least recently used key among all keys (recommended).

# volatile-random -> remove a random key with an expire set
# volatile-random: evict a random key among keys with an expire set; not recommended.

# allkeys-random -> remove a random key, any key
# allkeys-random: evict a random key from the entire key space; not recommended.

# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# volatile-ttl: among keys with an expire set, evict the key with the nearest expiration time (smallest TTL) first.

# noeviction -> don't expire at all, just return an error on write operations
# noeviction: evict nothing; write operations simply return an error. Not recommended.

# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
# With any of the above policies, Redis returns an error on write operations when there is no suitable key to evict. The write commands affected are listed below:
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort

# The default policy is:
# maxmemory-policy noeviction
  • noeviction: when memory is not enough to hold newly written data, new write operations report an error.
  • allkeys-lru: when memory is not enough to hold newly written data, evict the least recently used key in the whole key space.
  • allkeys-random: when memory is not enough to hold newly written data, evict a random key from the whole key space.
  • volatile-lru: when memory is not enough to hold newly written data, evict the least recently used key among keys with an expiration time set.
  • volatile-random: when memory is not enough to hold newly written data, evict a random key among keys with an expiration time set.
  • volatile-ttl: when memory is not enough to hold newly written data, among keys with an expiration time set, evict keys with the nearest expiration time first.

Personally, I think volatile-lru is the better choice: there is little point in just returning errors, and you should set up memory alerts anyway. If memory is still not enough, consider a master-slave setup.
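To make the LRU idea concrete, here is a minimal sketch of an allkeys-lru style cache in Python. Note this is an illustration, not Redis’s actual implementation: real Redis uses an approximate LRU that samples a few keys (controlled by maxmemory-samples) rather than keeping a strict ordering.

```python
from collections import OrderedDict

class LRUCache:
    """Toy allkeys-lru cache: evicts the least recently used key when full."""

    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            # drop the least recently used key and report it
            evicted, _ = self.data.popitem(last=False)
            return evicted
        return None

cache = LRUCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")              # touch "a": it is now the most recently used
evicted = cache.set("c", 3)
print(evicted)              # "b" is evicted, not the recently touched "a"
```

volatile-lru works the same way, except that only keys with an expiration time set are candidates for eviction.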

Redis’s persistence

Redis provides two persistence mechanisms:

  • RDB: stores snapshots of your data at specified intervals.
  • AOF: logs every write operation received by the server; on restart, these commands are replayed to rebuild the original data.

To use persistence, we first need to know how to enable it. redis.conf contains the following settings:


Configuration of RDB

#Time strategy
save 900 1
save 300 10
save 60 10000

#File name
dbfilename dump.rdb

#File save path
dir /home/work/app/redis/data/

#Whether the main process stops accepting writes when a background save fails
stop-writes-on-bgsave-error yes

#Whether to compress the RDB file
rdbcompression yes

#Whether to verify the checksum when loading an RDB file
rdbchecksum yes

save 900 1 means that if there is at least one write command within 900 seconds, a snapshot is triggered; you can think of it as a backup.
save 300 10 means that 10 writes within 300 seconds generate a snapshot.

The remaining rule works the same way. Why configure several rules? Because Redis’s read/write load is not evenly distributed over time. To balance performance and data safety, we can freely customize when a backup is triggered, tuning the rules to match the actual write pattern.
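The save m n rules above boil down to a simple periodic check; here is a simplified sketch of the logic (the function name and parameters are illustrative, not Redis internals):

```python
import time

# (seconds, min_changes) pairs, mirroring the redis.conf example above
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(dirty, last_save, now=None):
    """Return True if any save rule is satisfied.

    dirty     -- number of writes since the last snapshot
    last_save -- unix timestamp of the last snapshot
    """
    if now is None:
        now = time.time()
    elapsed = now - last_save
    # a snapshot fires as soon as one rule's time window has passed
    # AND at least that many writes have accumulated
    return any(elapsed >= seconds and dirty >= changes
               for seconds, changes in SAVE_RULES)

# 10 writes in the last 400 seconds -> the "save 300 10" rule fires
print(should_snapshot(dirty=10, last_save=0, now=400))   # True
# 5 writes in 100 seconds -> no rule fires yet
print(should_snapshot(dirty=5, last_save=0, now=100))    # False
```

Real Redis performs this check inside its periodic serverCron task and resets the dirty counter and timestamp after each successful snapshot.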

stop-writes-on-bgsave-error yes
This setting is also very important: when the background save fails, the main process stops accepting new writes, to protect the consistency of the persisted data. If your business has a thorough monitoring system, you may disable this; otherwise, keep it enabled.

As for the compression setting rdbcompression yes, my suggestion is not to enable it: Redis is already a CPU-intensive server, and compression adds further CPU overhead. Compared with the cost of disk space, CPU time is more valuable.

Of course, disabling RDB is easy: just add save "" after the last save line. (RDB is enabled by default.)

The principle of RDB

In Redis, RDB persistence can be triggered in two ways: manually, or automatically by Redis itself.
For manual triggering, you can use:

- save: blocks the Redis server until persistence completes; it should not be used in production.
- bgsave: forks a child process that performs the persistence, so blocking only happens during the fork.

The scenarios of automatic triggering mainly include the following:

- Automatically, according to the save m n rules we configured;
- During a full replication, the master triggers bgsave and sends the resulting RDB file to the slave to complete the copy;
- When shutdown is executed and AOF is not enabled, a snapshot is also triggered.

Since save is essentially never used, let’s focus on how bgsave completes RDB persistence.


Note that the fork operation blocks, degrading Redis’s read/write performance. We can limit the maximum memory of a single Redis instance to keep the time Redis spends in fork as short as possible, lower the automatic trigger frequency mentioned above to reduce the number of forks, or trigger persistence manually on our own schedule.

With saving covered, let’s see how to restore data: copy the backup file into Redis’s working directory (the dir configured above), then restart the service.

Configuration of AOF


#Open AOF or not
appendonly yes

#File name
appendfilename "appendonly.aof"

#Synchronization mode
appendfsync everysec

#Synchronization during AOF rewrite
no-appendfsync-on-rewrite no

#Override trigger configuration
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

#How to deal with errors when loading AOF
aof-load-truncated yes

#Fsync incrementally while rewriting the AOF file
aof-rewrite-incremental-fsync yes

Again, let’s focus on a few key settings:

appendfsync actually has three modes:

  • always: fsync every write command to the AOF immediately; slow but safest
  • everysec: fsync once per second, a compromise (this is the default)
  • no: Redis leaves syncing to the OS; very fast, but also the least safe
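The trade-off between these modes can be pictured with a toy writer that appends commands to a file and only forces them onto disk (fsync) once per interval. This is a simplified model, not Redis’s actual code; the class and parameter names are illustrative:

```python
import os
import tempfile
import time

class AofWriter:
    """Toy AOF writer: appends commands, fsyncs at most once per interval.

    fsync_interval=0 behaves like "always"; 1.0 behaves like "everysec".
    """

    def __init__(self, path, fsync_interval=1.0):
        self.f = open(path, "ab")
        self.fsync_interval = fsync_interval
        self.last_fsync = time.monotonic()

    def append(self, command):
        self.f.write(command.encode() + b"\n")   # lands in the OS page cache
        now = time.monotonic()
        if now - self.last_fsync >= self.fsync_interval:
            self.f.flush()
            os.fsync(self.f.fileno())            # force data onto disk
            self.last_fsync = now

    def close(self):
        self.f.flush()
        os.fsync(self.f.fileno())
        self.f.close()

path = os.path.join(tempfile.mkdtemp(), "appendonly.aof")
w = AofWriter(path, fsync_interval=0)  # fsync on every append, like "always"
w.append("SET key1 hello")
w.append("SET key2 world")
w.close()
print(open(path).read())
```

With a nonzero interval, a crash between fsyncs loses whatever sits only in the page cache, which is exactly why everysec can lose up to one second of writes.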

The principle of AOF

Broadly speaking, AOF works in two steps: first, commands are written in real time (with appendfsync everysec, up to one second of data may be lost); second, the AOF file is rewritten.

For incremental appends, the main flow is: command write -> append to aof_buf -> sync to the AOF file on disk. Why write to a buffer first instead of syncing to disk directly? Writing to disk in real time would cause very heavy disk I/O and hurt overall performance.

AOF rewriting exists to shrink the AOF file. It can be triggered manually or automatically; see the configuration section above for the automatic rules. A fork also happens in the rewrite step, which blocks the main process.

Manual trigger: bgrewriteaof. Automatic triggering follows the configured rules; its exact timing also depends on the frequency of Redis’s periodic tasks.
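The automatic rewrite rules from the config above (auto-aof-rewrite-percentage 100, auto-aof-rewrite-min-size 64mb) reduce to a size check like the following sketch (a simplification; the function name is illustrative):

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """Decide whether an automatic AOF rewrite should fire.

    current_size -- AOF size now, in bytes
    base_size    -- AOF size right after the last rewrite

    A rewrite triggers when the file exceeds the minimum size AND has
    grown by `percentage` percent over its post-rewrite size.
    """
    if current_size < min_size:
        return False
    growth = (current_size - base_size) * 100 // max(base_size, 1)
    return growth >= percentage

mb = 1024 * 1024
print(should_rewrite_aof(130 * mb, 64 * mb))  # True: 64mb -> 130mb is >100% growth
print(should_rewrite_aof(32 * mb, 16 * mb))   # False: below the 64mb minimum
```

The min-size floor prevents constant rewrites of a small file, while the percentage threshold keeps a large file from being rewritten until it has genuinely doubled.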


Now that data is backed up and persisted, how do we restore it from these files? And if a server has both an RDB file and an AOF file, which one is loaded?


At startup, Redis checks whether an AOF file exists; if not, it loads the RDB file. Why is the AOF loaded first? Because the data it holds is more complete: as analyzed above, AOF loses at most about one second of data.
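That startup decision fits in a few lines; here is a sketch using the file names from the configs above (it assumes AOF is enabled, since otherwise Redis goes straight to the RDB):

```python
import os
import tempfile

def choose_restore_file(data_dir):
    """Mimic Redis's startup choice: prefer the AOF, fall back to the RDB."""
    aof = os.path.join(data_dir, "appendonly.aof")
    rdb = os.path.join(data_dir, "dump.rdb")
    if os.path.exists(aof):
        return aof   # AOF is more complete: at most ~1s of data lost
    if os.path.exists(rdb):
        return rdb
    return None      # nothing to restore; start with an empty dataset

d = tempfile.mkdtemp()
open(os.path.join(d, "dump.rdb"), "w").close()
print(os.path.basename(choose_restore_file(d)))   # dump.rdb
open(os.path.join(d, "appendonly.aof"), "w").close()
print(os.path.basename(choose_restore_file(d)))   # appendonly.aof
```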

Performance optimization of redis persistence

From the analysis above, we know that both RDB snapshots and AOF rewrites require a fork, which is a heavyweight operation and blocks Redis. To avoid affecting the responsiveness of the main process, we should reduce this blocking as much as possible.

- Reduce the fork frequency, e.g. by manually triggering RDB snapshots and AOF rewrites;
- Limit the maximum memory used by each Redis instance so that forks don’t take too long;
- Use more powerful hardware;
- Configure Linux’s memory allocation (overcommit) policy properly, to avoid fork failures due to insufficient physical memory.

Some production experience

- If the data in Redis is not particularly sensitive, or can be regenerated some other way, persistence can be disabled entirely; lost data is then rebuilt through other means;
- Set up a policy to check Redis regularly, and trigger backups and rewrites manually;
- Add master-slave replicas: let one slave handle backups while the other machines serve client commands normally.

Ending

That’s all for today on Redis’s memory eviction policies and persistence. I wanted to write more, but I’d rather not tire you out with one long article, so a shorter one it is.
Ha ha, there’s more to come: Lua scripting, master-slave replication, and Sentinel. See you later!

Since I’m still a fairly new developer, I’m learning as I write. My goal is one article every three days to a week, and I hope to keep it up for a year. Please share your opinions so I can learn more, and let’s make progress together.

The daily request for likes

All right, everyone, that’s the whole of this article. Everyone who made it this far is a real talent.

Creating content isn’t easy; your support and recognition are the biggest motivation for my writing. See you in the next article!

If there are any mistakes in this post, please point them out in the comments. Many thanks!