Redis is an in-memory database: it keeps its data in memory, so its read and write performance is far faster than that of traditional databases that store data on disk. For efficient, long-term, stable use of Redis, it is therefore important to monitor its memory consumption and to understand the Redis memory model.
Memory usage statistics
Redis memory-related metrics can be obtained with the `info memory` command. The most important metrics are explained below:
| Property name | Description |
|---|---|
| used_memory | Total bytes allocated by the Redis allocator, i.e. the memory used to store all data |
| used_memory_human | used_memory in a human-readable format |
| used_memory_rss | Total physical memory occupied by the Redis process, as seen by the operating system |
| used_memory_rss_human | used_memory_rss in a human-readable format |
| used_memory_peak | Peak value of used_memory, i.e. the maximum amount of memory ever used |
| used_memory_peak_human | used_memory_peak in a human-readable format |
| used_memory_lua | Memory consumed by the Lua engine |
| mem_fragmentation_ratio | Ratio of used_memory_rss to used_memory; indicates the memory fragmentation rate |
| maxmemory | Maximum memory Redis may use, in bytes; 0 means no limit |
| maxmemory_policy | Eviction policy applied when maxmemory is reached: noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, or volatile-ttl. The default is noeviction, meaning nothing is evicted. |
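As a quick illustration of how these counters relate, the fragmentation ratio can be recomputed from the raw `info memory` fields. This is a minimal sketch: the parsing follows the standard `key:value` INFO format, and the sample values below are made up for demonstration.

```python
def parse_info(info_text):
    """Parse the 'key:value' lines of a Redis INFO section into a dict."""
    stats = {}
    for line in info_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and section headers like "# Memory"
        key, _, value = line.partition(":")
        stats[key] = value
    return stats

# Sample output as captured from `redis-cli info memory` (values illustrative).
sample = """# Memory
used_memory:1014528
used_memory_human:990.75K
used_memory_rss:1048576
maxmemory:0
maxmemory_policy:noeviction
"""

stats = parse_info(sample)
# mem_fragmentation_ratio = used_memory_rss / used_memory
ratio = int(stats["used_memory_rss"]) / int(stats["used_memory"])
print(f"mem_fragmentation_ratio = {ratio:.2f}")
```

A ratio near 1.03, as in this sample, is the normal case discussed later; values well above 1 indicate fragmentation, values below 1 indicate swapping.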
When mem_fragmentation_ratio > 1, some memory is not being used for data storage but is consumed by fragmentation; the larger the value, the more serious the fragmentation.

When mem_fragmentation_ratio < 1, the operating system has usually swapped part of Redis's memory out to disk. This situation deserves special attention: since disk is much slower than memory, Redis performance degrades badly or the instance may even appear to hang.

When Redis's memory exceeds the available physical memory, the operating system starts swapping, writing old pages out to disk. Reading and writing from disk is roughly five orders of magnitude slower than from memory. The used_memory metric can be used to judge whether Redis is at risk of being swapped, or has already been swapped.
In the article “Redis Administration” (link at the end of this article), it is suggested to set up a swap area the same size as memory. Without a swap area, once Redis suddenly needs more memory than the operating system has available, it will be killed outright by the Linux kernel's OOM killer. Although Redis performs worse when its data is swapped out, that is still better than being killed.
Redis uses the maxmemory parameter to limit the maximum available memory. The main purposes of memory restriction are as follows:
- It is used for caching scenarios. When maxmemory is exceeded, delete strategies such as LRU are used to free space.
- Prevent the memory used from exceeding the physical memory of the server, causing the process to be killed by the system after oom.
maxmemory limits the memory Redis actually uses for data, i.e. the memory counted by used_memory. Actual consumption may be larger than the maxmemory setting, and that extra memory can still trigger an OOM kill, so be careful. On a machine with 10GB of memory, it is best to set maxmemory to 8GB or 9GB.
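For example, on a server with 10GB of physical memory, the limit and an eviction policy might be set in redis.conf as follows (the values are illustrative, and allkeys-lru is just one of the policies listed in the table above):

```
maxmemory 8gb
maxmemory-policy allkeys-lru
```

The same settings can also be applied at runtime with `config set maxmemory 8gb`.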
Memory consumption breakdown
Memory consumed inside the Redis process mainly comprises: the process's own memory + object memory + buffer memory + memory fragmentation. An empty Redis process consumes very little memory: used_memory_rss is typically around 3MB and used_memory around 800KB, so the process's own footprint can be ignored.
Object memory is the largest part of Redis's memory consumption: it stores all user data. All Redis data is key-value; each key-value pair creates at least two objects, a key object and a value object. Object memory consumption can be roughly understood as the sum of these two objects' memory (plus metadata such as expiration time). Key objects are always strings, and when using Redis it is easy to overlook the memory impact of keys, so avoid overly long keys. For details of the Redis object system, see my earlier article, “12 figures to show you the data structures and object system of Redis”.
Buffer memory mainly includes: client buffers, the replication backlog buffer, and the AOF rewrite buffer.
Client buffers are the input and output buffers of all TCP connections to the Redis server.
The input buffer cannot be configured; its maximum size is fixed at 1GB, and a connection exceeding it is closed. Moreover, the input buffer is not governed by maxmemory. Suppose an instance has maxmemory set to 4GB and already stores 2GB of data; if the input buffers then consume 3GB, total usage exceeds the maxmemory limit, which may lead to data loss, key eviction, or OOM.
An oversized input buffer usually means that Redis cannot process commands as fast as they arrive, or that the commands entering the buffer contain large numbers of bigkeys.
The output buffer is controlled by the client-output-buffer-limit parameter, in the following format:

```
client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
```
Hard limit means that once the buffer reaches this threshold, Redis closes the connection immediately. The soft limit works together with soft seconds: for example, with a soft limit of 64MB and soft seconds of 60, Redis closes the connection only when the buffer stays above 64MB for 60 consecutive seconds.
Normal clients are all connections other than replication and pub/sub connections. The default configuration is `client-output-buffer-limit normal 0 0 0`, i.e. Redis does not limit the output buffers of ordinary clients, whose memory consumption can usually be ignored. However, when many slow clients are connected, this consumption can no longer be ignored; maxclients can be set as a limit. In particular, commands that output large amounts of data which cannot be pushed to the client in time, such as monitor, can easily cause the Redis server's memory to spike. For a related case, see “Some of the pits Meituan stepped on with Redis, part 3: Redis memory usage soared”.
Slave clients are used for master-slave replication: the master establishes a separate connection to each slave node for command propagation. The default configuration is `client-output-buffer-limit slave 256mb 64mb 60`. When the network latency between master and slaves is high, or many slaves hang off one master, this buffer can consume a large amount of memory. It is recommended to attach no more than 2 slaves per master and to avoid deploying master and slaves across poor networks, such as remote cross-datacenter links, to prevent overflow caused by slow replication connections. Two kinds of buffers relate to master-slave replication: the slave clients' output buffers, and the replication backlog buffer described below.
Subscription clients are used for the publish/subscribe feature, and each connection uses a separate output buffer. The default configuration is `client-output-buffer-limit pubsub 32mb 8mb 60`. When messages are produced faster than subscribers consume them, the output buffer backs up and can overflow memory.
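Putting the three client classes together, the defaults described above correspond to the following redis.conf fragment (these are the stock Redis defaults, shown here for reference):

```
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
```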
Input and output buffers can easily get out of control under heavy traffic, making Redis's memory usage unstable, so they need close monitoring. The client list command can be executed periodically to inspect each client's input/output buffer sizes and other details.
| Property name | Description |
|---|---|
| qbuf | Length of the query (input) buffer in bytes (0 means no query buffer is allocated) |
| qbuf-free | Remaining free space in the query buffer in bytes (0 means no space left) |
| obl | Length of the output buffer in bytes (0 means no output buffer is allocated) |
| oll | Number of objects in the output list (command replies are queued as string objects when no space is left in the output buffer) |
```
127.0.0.1:6379> client list
id=3 addr=127.0.0.1:58161 fd=8 name= age=1408 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 events=r cmd=client
```
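A periodic monitor can parse this output and flag clients with oversized buffers. Below is a minimal sketch: it parses the space-separated `key=value` format of `client list` and flags clients whose output-buffer memory (`omem`) exceeds a threshold. The sample output and the 64KB threshold are illustrative, not Redis defaults.

```python
def parse_client_list(output):
    """Parse `client list` output: one client per line of key=value pairs."""
    clients = []
    for line in output.strip().splitlines():
        fields = dict(pair.split("=", 1) for pair in line.split() if "=" in pair)
        clients.append(fields)
    return clients

# Illustrative `client list` output with two clients (values made up).
sample = (
    "id=3 addr=127.0.0.1:58161 name= age=1408 idle=0 qbuf=26 qbuf-free=32742 "
    "obl=0 oll=0 omem=0 cmd=client\n"
    "id=4 addr=127.0.0.1:58170 name= age=12 idle=2 qbuf=0 qbuf-free=0 "
    "obl=16384 oll=12 omem=524288 cmd=monitor\n"
)

clients = parse_client_list(sample)
# Flag clients whose output buffer memory (omem) exceeds 64KB.
heavy = [c for c in clients if int(c["omem"]) > 65536]
for c in heavy:
    print(c["addr"], c["cmd"], c["omem"])
```

In this sample, the client running `monitor` is flagged, which matches the caution above about monitor causing output buffers to soar.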
The client list command is slow to execute, and when there are many clients it may block Redis. The info clients command can be used instead to get the maximum client buffer sizes.
```
127.0.0.1:6379> info clients
# Clients
connected_clients:1
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
```
The replication backlog buffer, introduced in Redis 2.8, is a reusable fixed-size buffer used to implement partial resynchronization. Its size is controlled by the repl-backlog-size parameter, which defaults to 1MB. A master has only one backlog buffer, shared by all slaves, so a generous size such as 100MB can be set to effectively avoid full resynchronization. For details of the replication backlog buffer, see my earlier article on the Redis replication process.
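Enlarging the backlog as suggested above is a one-line redis.conf change (100mb is an illustrative value; size it to cover the write volume during the longest disconnection you expect to tolerate):

```
repl-backlog-size 100mb
```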
AOF rewrite buffer: this space holds the write commands that arrive while Redis is rewriting the AOF. Its size is not user-configurable; it depends on the rewrite duration and the write volume, but is usually small. For details of AOF persistence, see my earlier article on Redis AOF persistence.
Redis memory fragmentation
Redis's default memory allocator is jemalloc; libc malloc and tcmalloc are optional alternatives. To manage and reuse memory efficiently, allocators generally allocate memory in fixed-size classes (the specific allocation strategy will be explained in a later article). A normal Redis fragmentation ratio is around 1.03. However, when the lengths of stored values vary widely, the following scenarios easily cause high fragmentation:
- Frequent updates, such as append, setrange and other in-place update operations on existing keys.
- Deletion of large numbers of expired keys: the space freed after the key objects are deleted cannot always be reused, raising the fragmentation ratio.
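If you run Redis 4.0 or later with the jemalloc allocator, active defragmentation can mitigate these scenarios. A minimal redis.conf sketch (the parameter values shown are the illustrative stock defaults):

```
activedefrag yes
# Do not start defragmenting until this much fragmentation waste accumulates
active-defrag-ignore-bytes 100mb
# Start defragmenting at 10% fragmentation; apply maximum effort at 100%
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
```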
jemalloc will be explained in detail in a later article, since many frameworks, such as Netty, also use custom memory allocators.
Subprocess memory consumption
Subprocess memory consumption mainly refers to the child processes Redis creates for AOF rewriting and RDB saving. The child created by Redis's fork operation has the same memory image as the parent, so in theory twice the physical memory is needed to complete the operation. In practice, Linux's copy-on-write mechanism lets parent and child share the same physical memory pages: when the parent handles a write request, it copies the affected page before modifying it, while the child still reads the parent's memory snapshot as it was at fork time.
As shown in the figure above, fork copies only the page table; a page itself is copied only when it is modified.
However, since version 2.6.38 the Linux kernel has included the Transparent Huge Pages (THP) mechanism. Put simply, it makes pages larger: instead of 4KB, a page becomes 2MB once THP is enabled. While this speeds up fork (fewer page-table entries to copy), it also raises the unit of copy-on-write from 4KB to 2MB. If the parent process handles many write commands, modifying even a small part of a page forces the whole, now much larger, page to be copied, greatly increasing memory consumption. For example, compare the following two memory-consumption logs from AOF rewriting:
```
// THP enabled
C * AOF rewrite: 1039 MB of memory used by copy-on-write
// THP disabled
C * AOF rewrite: 9 MB of memory used by copy-on-write
```
Both logs come from the same Redis process, with used_memory at 1.5GB in total and about 200 write commands per second while the child process ran. With THP on versus off, the child's memory consumption differs enormously. In high-concurrency write scenarios with THP enabled, a child process may consume several times the parent's memory, overflowing the machine's physical memory.
So a Redis child process does not necessarily consume twice the parent's memory; actual consumption depends on the write volume during the fork period. Still, some memory should be reserved to prevent overflow, and it is recommended to disable the system's THP to avoid excessive memory consumption during copy-on-write. This applies not only to Redis: machines running MySQL commonly disable THP as well.
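Disabling THP on Linux is typically done as follows (a common sketch: the sysfs paths can vary by distribution, the commands must be run as root, and they should also be added to a boot script such as /etc/rc.local to persist across reboots):

```
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```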
- Redis Administration: https://redis.io/topics/admin