Redis persistence and memory optimization
Most of the persistence and memory-optimization settings below are made through the redis configuration file. If anything here is wrong, corrections are welcome.
1. Why persistence
If user data is kept only in memory, that data is cleared whenever the server loses power or goes down, emptying the cache.
2. How to use persistent files
After installing redis, all configuration lives in the redis.conf file, which holds the various settings for the RDB and AOF persistence mechanisms.
Commands:

- SAVE: persist the current data set immediately, generating an RDB file. While SAVE runs, clients cannot continue SET operations, so it blocks the server.
- BGSAVE: persist the in-memory data in the background, generating an RDB file. BGSAVE tells redis to persist at its own pace, so the server does not freeze while users keep working, similar in spirit to GC (a garbage-collection mechanism).

Rules for persistent files: while the program runs normally, persistent files are generated; if the server restarts after going down, data is recovered from the persistent file specified in the configuration file.
3. RDB mode
3.1 Description

Advantages:
- RDB mode is the default persistence strategy of redis, with compact files and full backup, which is very suitable for backup and disaster recovery.
- When generating an RDB file, the main redis process will fork() a child process to handle all the saving work; the main process performs no disk IO itself.
- RDB can recover large datasets faster than AOF.
- RDB records a point-in-time snapshot of the in-memory data, and the resulting persistent file is compact.
Disadvantages:

1. Time-consuming, memory-consuming, and IO-intensive. Dumping all in-memory data to the hard disk takes time; in bgsave mode, the fork()ed child process occupies extra memory, and the heavy disk reads and writes consume IO performance.
2. Uncontrollable data loss. If the server goes down, any memory data written after the last snapshot is lost.
3.2 modify the name of persistent file and the path to save persistent file
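Both settings live in redis.conf; a minimal sketch (the directory path here is just an illustrative choice):

```conf
dbfilename dump.rdb   # name of the generated RDB file
dir /var/lib/redis    # directory where persistence files are written
```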
3.3 RDB mode default snapshot mode
If at least 1 key changes within 900 seconds, the snapshot is written to disk. If at least 10 keys change within 300 seconds, the snapshot is written to disk. If at least 10000 keys change within 60 seconds, the snapshot is written to disk.
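These defaults correspond to the following lines in redis.conf:

```conf
save 900 1      # snapshot if at least 1 key changed within 900 s
save 300 10     # snapshot if at least 10 keys changed within 300 s
save 60 10000   # snapshot if at least 10000 keys changed within 60 s
```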
4. AOF mode
4.1 what is AOF
The full backup of redis's RDB mode is always time-consuming, and AOF mode supplements it. When AOF mode is enabled (when both modes are enabled, AOF takes priority over RDB), every write command redis executes is recorded in a log (a file ending in .aof). When redis fails, the data can be recovered simply by replaying the log.
4.2 open AOF mode
In the redis configuration file, set the appendonly option to yes.
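Concretely, the relevant redis.conf lines look like this:

```conf
appendonly yes                    # enable AOF persistence
appendfilename "appendonly.aof"   # name of the AOF log file
```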
4.3 persistence strategy of AOF
- appendfsync always: fsync to appendonly.aof after every data change; safest but slowest.
- appendfsync everysec: the default mode; synchronize to appendonly.aof once per second.
- appendfsync no: never fsync explicitly; the OS decides when to flush, so unflushed data may be lost.
- no-appendfsync-on-rewrite no: whether to skip fsync while a background rewrite (bgrewriteaof) is running.
The everysec policy is usually used; it is also the default AOF policy.
4.4 AOF log rewriting
When the AOF log file grows too large, redis rewrites it automatically. While rewriting, redis keeps appending incoming updates to the old log file and, at the same time, creates a new log file to which subsequent records are appended.
AOF rewriting can greatly reduce the final log size, which lowers disk consumption and speeds up data recovery. For example, consider a counting service with many increment operations: incrementing a key up to 100 million means 100 million INCR entries in the AOF file, whereas after rewriting the AOF records only a single entry.
4.5 two methods of AOF rewriting
- The bgrewriteaof command triggers AOF rewriting
The redis client sends the bgrewriteaof command, and the redis server forks a child process to carry out the AOF rewrite. The rewrite regenerates the AOF from the data currently in redis memory, producing a new AOF file that replaces the old one; it does not edit the old AOF file in place.
- AOF rewrite configuration triggers rewriting automatically
- auto-aof-rewrite-min-size: the minimum AOF file size required before a rewrite is considered
- auto-aof-rewrite-percentage: the AOF file growth percentage that triggers a rewrite
- aof_current_size: the current size of the AOF file (bytes)
- aof_base_size: the size of the AOF file at the last start or rewrite (bytes)
Automatic AOF rewriting triggers when both of the following conditions hold at the same time:

- aof_current_size > auto-aof-rewrite-min-size
- (aof_current_size - aof_base_size) / aof_base_size > auto-aof-rewrite-percentage
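The two conditions can be sketched in Python (an illustration of the trigger logic only, not redis source code; the percentage is given as a number such as 100 for 100%):

```python
def should_rewrite(aof_current_size: int, aof_base_size: int,
                   auto_aof_rewrite_min_size: int,
                   auto_aof_rewrite_percentage: float) -> bool:
    """Return True when both automatic AOF-rewrite conditions hold."""
    # Condition 1: the AOF must be larger than the configured minimum.
    if aof_current_size <= auto_aof_rewrite_min_size:
        return False
    # Condition 2: growth since the last rewrite must exceed the percentage.
    growth = (aof_current_size - aof_base_size) / aof_base_size
    return growth > auto_aof_rewrite_percentage / 100

# e.g. with a 64 MB minimum and a 100% growth threshold:
print(should_rewrite(150 * 2**20, 64 * 2**20, 64 * 2**20, 100))  # True
print(should_rewrite(100 * 2**20, 64 * 2**20, 64 * 2**20, 100))  # False
```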
5. Comparison of RDB and AOF
| Dimension | RDB | AOF | Notes |
| --- | --- | --- | --- |
| Startup priority | low | high | When both RDB and AOF are enabled, redis restarts using AOF for recovery; in most cases it holds more up-to-date data than RDB |
| Volume | small | large | RDB is stored compressed in binary form; AOF, even after AOF rewriting, stays relatively large, since it is a log after all |
| Recovery speed | fast | slow | RDB is small and recovers quickly; AOF is large and recovers slowly |
| Data safety | loses data | depends on policy | RDB loses whatever was written after the last snapshot; AOF loses data according to the always / everysec / no policy |
| Weight | heavy | light | AOF is an append-only log, a lightweight operation; RDB is CPU-intensive and consumes a lot of disk and memory |
6. Redis memory optimization strategy
Redis supports a variety of memory eviction strategies to limit memory usage, some of which are based on LRU and LFU algorithms.
6.1 LRU algorithm
LRU stands for Least Recently Used, a commonly used page-replacement algorithm: the page that has gone unused for the longest time is evicted. Each page carries an access field recording the time t since it was last accessed; when a page must be evicted, the page with the largest t value, i.e., the least recently used one, is chosen.
Time t: the time elapsed since the last access.
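The idea can be sketched with a tiny cache (an illustration of classic LRU only, not redis's actual approximate-LRU sampling):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used key at capacity."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)      # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")           # "a" becomes most recently used
cache.set("c", 3)        # evicts "b", the least recently used
print(cache.get("b"))    # None
print(cache.get("a"))    # 1
```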
6.2 LFU algorithm
Note: redis has supported LFU eviction since version 4.0.
LFU (Least Frequently Used) is a page-replacement algorithm that, when a page must be replaced, evicts the page with the smallest reference count, on the reasoning that frequently used pages accumulate more references. However, some pages are referenced many times at the beginning and then never again, yet would linger in memory for a long time. To counter this, the reference-count register can be shifted right periodically, producing an exponentially decaying average use count.
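The right-shift decay can be sketched as follows (an illustration of the idea only; redis's real LFU uses a probabilistic 8-bit counter tuned by lfu-log-factor and lfu-decay-time):

```python
class LFUCounters:
    """Toy LFU bookkeeping: hit counters with periodic exponential decay."""

    def __init__(self):
        self.counts = {}

    def touch(self, key):
        # Count one access for this key.
        self.counts[key] = self.counts.get(key, 0) + 1

    def decay(self):
        # Shifting right by one bit halves every counter, so old
        # hits fade while recent frequent access keeps a key "hot".
        for key in self.counts:
            self.counts[key] >>= 1

    def coldest(self):
        # The key with the smallest counter is the eviction candidate.
        return min(self.counts, key=self.counts.get)

lfu = LFUCounters()
for _ in range(8):
    lfu.touch("hot")
lfu.touch("cold")
lfu.decay()              # hot: 8 -> 4, cold: 1 -> 0
print(lfu.coldest())     # cold
```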
6.3 redis memory optimization
| Policy | Behavior |
| --- | --- |
| noeviction (default) | Return an error when a write would push memory past the limit |
| allkeys-lru (recommended if unsure) | Evict the least recently used key among all keys |
| allkeys-lfu | Evict the least frequently used key among all keys |
| allkeys-random | Evict a random key among all keys |
| volatile-lru | Evict the least recently used key among keys with an expiration set |
| volatile-lfu | Evict the least frequently used key among keys with an expiration set |
| volatile-random | Evict a random key among keys with an expiration set |
| volatile-ttl | Evict the key with the shortest remaining lifetime among keys with an expiration set |
6.4 Modify the redis configuration file
Set the maxmemory-policy attribute and choose the appropriate policy.
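A minimal sketch of the relevant redis.conf lines (the 100mb limit is an arbitrary example value):

```conf
maxmemory 100mb                # cap on redis memory usage (example value)
maxmemory-policy allkeys-lru   # eviction policy applied once the cap is hit
```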