Operations and maintenance interview questions – Redis (reprinted)


1. Redis is a high-performance, in-memory key-value database.

2. What are the advantages of redis over memcached?

  • All memcached values are simple strings; redis supports much richer data types
  • Redis is much faster than memcached
  • Redis can persist its data

3. Redis is single-threaded

Redis uses queue technology to turn concurrent access into serial access, which eliminates the overhead of traditional database concurrency control.

4. Five common data types in redis

  • string, list, set, sorted set, hash


6. Redis memory eviction policies

  • noeviction: do not evict. When the maximum memory limit is reached, commands that need more memory return an error directly (most write commands cause more memory to be consumed, with a few exceptions).
  • allkeys-lru: over all keys, evict the least recently used (LRU) keys first.
  • volatile-lru: only among keys with an expiration set, evict the least recently used (LRU) keys first.
  • allkeys-random: over all keys, evict keys at random.
  • volatile-random: only among keys with an expiration set, evict keys at random.
  • volatile-ttl: only among keys with an expiration set, evict keys with the shortest time to live (TTL) first.
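Which policy is in effect is chosen with the maxmemory-policy directive (alongside a maxmemory limit) in redis.conf; the values below are only illustrative:

```
maxmemory 2gb
maxmemory-policy allkeys-lru
```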

7. How to solve the concurrent competition problem of redis?

In the single-process, single-thread model, a queue turns concurrent access into serial access. Redis itself has no concept of a lock, and multiple client connections do not contend inside redis; when applications need a lock across clients, setnx can be used to implement one.

8. Redis is written in C.

9. Redis foreground start command

redis-server (optionally with a config file path, e.g. redis-server /etc/redis/redis.conf); with daemonize no, the default, the server runs in the foreground.

10. Languages supported by redis:

Java, C, C#, C++, PHP, Node.js, Go, etc.

11. Redis persistence schemes:

RDB (point-in-time snapshots) and AOF (an append-only log of write commands); the two can also be enabled together.

12. Master slave replication of redis

Persistence ensures that no data is lost even if the redis service restarts, because after a restart the persisted data on disk is restored into memory. However, when the redis server's disk is damaged, data may still be lost. With redis's master-slave replication mechanism, this single point of failure can be avoided.

13. Redis is single threaded, but why is redis so fast?

1. It is completely memory-based. Most requests are pure memory operations, which are very fast. Data is stored in memory, similar to a HashMap, whose lookup and update operations have O(1) time complexity;

2. The data structure is simple and easy to operate. The data structure in redis is specially designed;

3. A single thread avoids unnecessary context switching and race conditions; there is no CPU cost from multi-process or multi-thread switching, no locking problems to consider, no lock acquire/release operations, and no performance cost from possible deadlocks;

4. It uses an I/O multiplexing model with non-blocking I/O. Here "multiplexing" means that multiple network connections are handled by, and thus reuse, the same thread;

5. It uses its own underlying models: the underlying implementation and the application protocol for communicating with clients differ from other systems. Redis builds its own VM mechanism directly, because calling general system functions wastes a certain amount of time moving data and making requests;

14. Why is redis single threaded?

Redis is a memory-based system; the CPU is not its bottleneck. The bottleneck is most likely the machine's memory size or network bandwidth. Since a single-threaded design is easy to implement and the CPU will not become a bottleneck, adopting the single-threaded scheme is only logical (after all, multithreading brings plenty of trouble!).

15. Redis command to view memory info: info memory

16. Redis memory model

used_memory: the total memory allocated by the redis allocator (in bytes), including any virtual memory (swap) in use; the allocator is introduced below. used_memory_human is the same value in a human-friendly form.

used_memory_rss: the memory the redis process occupies from the operating system (in bytes), consistent with the values shown by the top and ps commands; besides the memory allocated by the allocator, used_memory_rss also includes the memory the process itself needs and memory fragmentation, but does not include virtual memory.

mem_fragmentation_ratio: the memory fragmentation ratio, i.e. used_memory_rss divided by used_memory.

mem_allocator: the memory allocator redis uses, fixed at compile time; it can be libc, jemalloc, or tcmalloc, and defaults to jemalloc.

17. Redis memory partition


Data memory

As a database, the data itself is the most important part; the memory this part occupies is counted in used_memory.

Memory required for the process itself to run

The main redis process itself must occupy memory, e.g. for code and the constant pool; this part is only a few megabytes and, compared with the memory occupied by redis data, can be ignored in most production environments. It is not allocated by jemalloc, so it is not counted in used_memory.

Buffer memory

Buffer memory includes the client buffers, the replication backlog buffer, the AOF buffer, etc. Client buffers store the input and output buffers of client connections; the replication backlog buffer serves the partial-resynchronization feature; the AOF buffer saves the most recent write commands during an AOF rewrite. The details of these buffers are not needed before learning the corresponding features; this memory is allocated by jemalloc, so it is counted in used_memory.

Memory fragmentation

Memory fragmentation is produced as redis allocates and reclaims physical memory. For example, when data is changed frequently and values vary greatly in size, the space redis frees may neither be returned to the operating system nor be effectively reusable by redis, which results in fragmentation. Memory fragmentation is not counted in used_memory.

18. There are five types of redis objects

Regardless of the type, redis does not store values directly, but wraps them in a redisObject structure.

19. Redis does not use C strings directly

That is, it does not use null-terminated character arrays (ending with '\0') as its default string representation. Instead it uses SDS, short for simple dynamic string.

20. Redis's SDS adds free and len fields on top of the C string

21. Redis master-slave replication

Replication is the basis of highly available redis; both Sentinel and Cluster build on replication to achieve high availability. Replication mainly provides multi-machine data backup, load balancing of read operations, and simple fault recovery. Defects: failure recovery cannot be automated; write operations cannot be load balanced; storage capacity is limited to a single machine.

22. Redis Sentinel

Building on replication, Sentinel adds automatic failure recovery. Defects: write operations cannot be load balanced; storage capacity is limited to a single machine.

23. Redis persistence trigger conditions

The trigger of RDB persistence can be divided into manual trigger and automatic trigger.

24. Redis opens AOF

The redis server turns on RDB and turns off AOF by default. To turn on AOF, you need to configure the following in the configuration file:

appendonly yes

25. Summary of common AOF configurations

Here are the common configuration items of AOF, as well as the default values; we will not describe them in detail here.

  • appendonly no: whether to enable AOF (off by default)
  • appendfilename "appendonly.aof": AOF file name
  • dir ./: directory for the RDB and AOF files
  • appendfsync everysec: fsync persistence policy
  • no-appendfsync-on-rewrite no: whether to suppress fsync during an AOF rewrite; enabling it reduces the CPU and (especially) disk load during the rewrite, but data written during the rewrite may be lost, so a balance between load and safety is needed
  • auto-aof-rewrite-percentage 100: one of the file-rewrite trigger conditions
  • auto-aof-rewrite-min-size 64mb: the other file-rewrite trigger condition
  • aof-load-truncated yes: whether redis still loads an AOF file whose end is damaged when it starts

26. Advantages and disadvantages of RDB and AOF

RDB persistence

Advantages: RDB file compact, small size, fast network transmission, suitable for full replication; recovery speed is much faster than AOF. Of course, one of the most important advantages of RDB over AOF is its relatively small impact on performance.

Disadvantages: the fatal drawback of RDB lies in its snapshot style of persistence, which rules out real-time persistence. As data becomes more and more important, losing large amounts of it is often unacceptable, so AOF persistence has become the mainstream. In addition, RDB files must follow a specific format, and compatibility is poor (for example, old versions of redis are not compatible with RDB files from new versions).

AOF persistence

Corresponding to RDB persistence, AOF has the advantages of supporting second level persistence and good compatibility, but it has the disadvantages of large file size, slow recovery speed and great impact on performance.

27. Persistence strategy selection

(1) If losing the data in redis does not matter at all (for example, redis is used purely as a cache in front of the DB layer), then no persistence is needed, whether stand-alone or master-slave.

(2) In a stand-alone environment (for individual developers, this may be more common), if you can accept more than ten minutes of data loss, choosing RDB is more beneficial to the performance of redis; if you can only accept seconds of data loss, you should choose AOF.

(3) However, in most cases, we will configure the master-slave environment. The existence of slave can not only realize the hot standby of data, but also separate read and write, share redis read requests, and continue to provide services after the master is down.

28. Handling redis cache breakdown

Use a mutex. In short, when the cache misses (the value fetched is judged empty), instead of loading from the DB immediately, first use a cache operation that reports success or failure (such as redis's setnx or Memcache's add) to set a mutex key; if that operation succeeds, load from the DB and reset the cache; otherwise, retry the whole get-from-cache flow.
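As a sketch of the mutex pattern above, here is a hypothetical in-process stand-in: ConcurrentHashMap.putIfAbsent plays the role of setnx, a plain map plays the cache, and loadFromDb is an invented backing store.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the "mutex key" pattern: putIfAbsent stands in for Redis SETNX.
public class MutexCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, Boolean> mutex = new ConcurrentHashMap<>();
    public int dbLoads = 0; // counts how often the backing store is hit

    private String loadFromDb(String key) {
        dbLoads++;
        return "value-of-" + key; // hypothetical backing store
    }

    public String get(String key) {
        String value = cache.get(key);
        while (value == null) {
            // try to take the mutex key; only one caller wins (the SETNX analogue)
            if (mutex.putIfAbsent("mutex:" + key, Boolean.TRUE) == null) {
                try {
                    value = loadFromDb(key);
                    cache.put(key, value); // rebuild the cache
                } finally {
                    mutex.remove("mutex:" + key); // release (the DEL analogue)
                }
            } else {
                // lost the race: back off briefly, then re-read the cache
                try { Thread.sleep(10); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                value = cache.get(key);
            }
        }
        return value;
    }
}
```

Only one caller per key ever hits the backing store at a time; the losers simply retry the cache read.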

29. Advanced tools provided by redis

Slow query analysis, performance testing, pipelines, transactions, custom Lua commands, bitmaps, HyperLogLog, publish/subscribe, Geo, and other specialized features.

30. Redis common management commands

#dbsize returns the number of keys in the current database.
#info returns the current redis server status and some statistics.
#monitor monitors and prints all requests received by the redis server in real time.
#shutdown saves the data to disk synchronously and shuts down the redis service.
#config get parameter gets a redis configuration parameter. (a few parameters may be unavailable)
#config set parameter value sets a redis configuration parameter. (a few parameters may be unavailable)
#config resetstat resets the statistics of the info command. (the reset covers: keyspace hits,
#keyspace misses, number of commands processed, connections received, and expired keys)
#debug object key gets debugging information for a key.
#debug segfault crashes the server on purpose.
#flushdb deletes all keys in the current database; it never fails. Use with caution
#flushall deletes all keys in all databases; it never fails. Use with caution

31. Redis tool commands

#redis-server: the daemon startup program of the redis server
#redis-cli: the redis command-line tool. You can also use telnet, following its plain-text protocol
#redis-benchmark: the redis performance-testing tool, which tests the read/write performance of redis on your system with your configuration
$ redis-benchmark -n 100000 -c 50 simulates 50 clients concurrently sending 100,000 set/get queries
#redis-check-aof: AOF log check
#redis-check-dump: local database (RDB) check

32. Why persistence?

Redis is an in-memory database: while the server runs, the system allocates part of its memory to store data. Once the server hangs or suddenly goes down, the data in the database is lost. To keep the data even if the server shuts down unexpectedly, the data must be saved from memory to disk through persistence.

33. Judge whether the key exists

exists key-name

34. Delete key

del key1 key2 ...

35. Data consistency between cache and database

Distributed environments (to say nothing of stand-alone ones) are very prone to consistency problems between the cache and the database. Given this, if your project requires strong consistency with the cache, then do not use a cache. We can only adopt suitable strategies to reduce the probability of cache/database inconsistency; we cannot guarantee strong consistency between them. Suitable strategies include an appropriate cache-update policy, updating the cache promptly after updating the database, and adding a retry mechanism for cache failures, for example via a message queue such as MQ.

36. Bloom filter

A Bloom filter is similar to a hash set and is used to quickly determine whether an element exists in a set. A typical application is to quickly check whether a key exists in a container, returning early if it does not. The keys to a Bloom filter are its hash algorithm and its container size.
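A minimal illustrative Bloom filter in Java; the double-hashing scheme (h1 + i*h2), the bit-array size, and the probe count here are arbitrary choices for the sketch, not what any Redis module uses.

```java
import java.util.BitSet;

// Minimal Bloom filter sketch: k hash probes over an m-bit array.
public class BloomFilter {
    private final BitSet bits;
    private final int m; // container size in bits
    private final int k; // number of hash functions

    public BloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    private int probe(String item, int i) {
        int h1 = item.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9e3779b9; // second hash, arbitrary mix
        return Math.floorMod(h1 + i * h2, m);
    }

    public void add(String item) {
        for (int i = 0; i < k; i++) bits.set(probe(item, i));
    }

    // false -> definitely absent; true -> possibly present (false positives allowed)
    public boolean mightContain(String item) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(probe(item, i))) return false;
        }
        return true;
    }
}
```

The asymmetry is the point: "absent" answers are exact, "present" answers only probable, which is why the hash choice and container size control the false-positive rate.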

37. Cache avalanche

A large number of keys expire (fail) at the same moment, and the following wave of requests falls on the database instantly, exhausting its connections.


1. Similar to the solutions for cache penetration.

2. Maintain a backup cache: cache A and cache B; set a timeout on A and none on B. Read from cache A first; on a miss, read from B, then update both A and B;

38. Cache concurrency

Concurrency here refers to the problems caused by multiple redis clients setting keys at the same time. An effective solution is to put the redis.set operations into a queue to serialize them, so they must execute one by one; the specific code is omitted here. Locking also works, of course. As for why not to use redis transactions for this, that is left for you to think about and explore.

39. Redis distributed

Redis supports master-slave mode. Principle: master will synchronize data to slave, while slave will not synchronize data to master. When slave starts, it will connect to master to synchronize data.

This is a typical distributed read-write separation model. We can use the master for inserting data and the slaves to serve reads, which effectively reduces the concurrent access pressure on any single machine.

40. Read write separation model

By increasing the number of slave dB, the read performance can grow linearly. In order to avoid a single point of failure of the master dB, the cluster will generally use two master DB as hot standby, so the read and write availability of the whole cluster is very high. The drawback of the read-write separation architecture is that each node, whether it is master or slave, must save complete data. If there is a large amount of data, the scalability of the cluster is still limited by the storage capacity of a single node, and the read-write separation architecture is not suitable for write intensive applications.

41. Data fragmentation model

In order to solve the defects of the read-write separation model, the data fragmentation model can be applied.

Each node can be regarded as an independent master, and then data fragmentation can be realized through business.

Combined with the above two models, each master can be designed as a model composed of one master and multiple slaves.

42. Common performance problems and solutions of redis

It is better for the master not to do any persistence work, such as RDB memory snapshots or AOF log files

If the data is important, a slave enables AOF backup data, and the policy is set to synchronize once per second

For the speed of master-slave replication and the stability of connection, master and slave should be in the same LAN

Try to avoid adding slave libraries to the main library with great pressure

43. Redis communication protocol

RESP is the communication protocol used between the redis client and server; RESP is characterized by simple implementation, fast parsing, and good readability.

44. Implementation of redis distributed lock

First use setnx to contend for the lock, then add an expiration time with expire so that a lock is still released if its holder forgets. **What happens if the process crashes unexpectedly after setnx, or is restarted for maintenance, before it executes expire?** The set command takes extra parameters that combine setnx and expire into one atomic instruction (set key value nx ex seconds)!
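The combined set-if-absent-with-TTL semantics can be sketched in-process with a map and explicit timestamps. SimpleLock, setNxPx, and the millisecond clock parameter are all made up for illustration; the point is that the value and the expiry are written in one step, so a crash cannot leave a lock with no expiry.

```java
import java.util.HashMap;
import java.util.Map;

// Simulates SET key value NX PX <ms>: set-if-absent plus a time-to-live, applied atomically.
public class SimpleLock {
    private static class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> store = new HashMap<>();

    // Returns true if the lock was acquired; "now" is passed in to keep the sketch deterministic.
    public boolean setNxPx(String key, String value, long ttlMillis, long now) {
        Entry e = store.get(key);
        if (e != null && e.expiresAt > now) return false; // held and not yet expired
        store.put(key, new Entry(value, now + ttlMillis)); // value and TTL set in one step
        return true;
    }

    public String get(String key, long now) {
        Entry e = store.get(key);
        return (e == null || e.expiresAt <= now) ? null : e.value;
    }
}
```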

45. Redis as asynchronous queue

Generally the list structure is used as the queue: rpush produces messages and lpop consumes them. When lpop returns no message, sleep briefly and retry later. Disadvantage: if consumers are offline, produced messages are lost, so a dedicated message queue such as RabbitMQ is more reliable. **Can a message be produced once and consumed many times?** Using the pub/sub topic-subscriber mode, a 1:N message queue can be achieved.
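The rpush/lpop queue pattern above can be sketched locally with a deque standing in for the Redis list; ListQueue and its method names mirror the Redis commands but are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the RPUSH/LPOP queue pattern on a local deque;
// a real producer/consumer would issue the same-named Redis commands instead.
public class ListQueue {
    private final Deque<String> list = new ArrayDeque<>();

    // producer side: append to the tail of the list
    public void rpush(String msg) { list.addLast(msg); }

    // consumer side: pop from the head; null means empty (caller sleeps and retries)
    public String lpop() { return list.pollFirst(); }
}
```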

46. Correct operation of massive data in redis

Scan series commands (scan, sscan, hscan, zscan) are used to complete data iteration.

47. Precautions for scan series commands

  • scan itself takes no key argument, because its iteration object is the data of the whole DB;
  • the return value is an array whose first element is the cursor for the next iteration;
  • time complexity: O(1) per call, O(N) for a full iteration, where N is the number of elements;
  • available since version 2.8.0.
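The cursor-based calling pattern can be sketched as follows. Note that real Redis cursors index hash-table buckets rather than positions, so this positional cursor is a simplification; what the sketch shows is the loop shape — call, collect the batch, feed the returned cursor back in, stop at 0.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified SCAN loop: each step yields the next cursor plus a batch of keys;
// a returned cursor of 0 ends the iteration.
public class ScanDemo {
    public static List<List<String>> scanAll(List<String> keys, int count) {
        List<List<String>> batches = new ArrayList<>();
        long cursor = 0;
        do {
            int from = (int) cursor;
            int to = Math.min(from + count, keys.size());
            batches.add(new ArrayList<>(keys.subList(from, to))); // one COUNT-sized batch
            cursor = (to == keys.size()) ? 0 : to; // 0 => iteration finished
        } while (cursor != 0);
        return batches;
    }
}
```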

48. Redis pipeline

In some scenarios we need to execute multiple commands in one operation. Executing them one at a time wastes a lot of network round-trip time; batching them into a pipeline and sending them to Redis in one go saves much of that overhead. Note, however, that the commands in a pipeline are not executed atomically: by the time they reach the Redis server they may be interleaved with other clients' commands.

49. Transactions do not support rollback

50. An LRU algorithm

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int CACHE_SIZE;

    /**
     * Pass in the maximum amount of data that can be cached.
     * @param cacheSize cache size
     */
    public LRUCache(int cacheSize) {
        // true means the LinkedHashMap is ordered by access: the least recently
        // used entry comes first in iteration order and is evicted first.
        super((int) Math.ceil(cacheSize / 0.75) + 1, 0.75f, true);
        CACHE_SIZE = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // When the map holds more entries than the cache size, the oldest is removed automatically.
        return size() > CACHE_SIZE;
    }
}

51. Multi node redis distributed lock: redlock algorithm

Get the current time (start).

Request the lock from all n Redis nodes in turn, in the same way a lock is acquired from a single Redis node. To keep the algorithm running even when some Redis node is unavailable, each lock request must use a timeout that is far smaller than the lock's validity time; only then can the client, after failing to acquire the lock from one Redis node, immediately try the next.

Compute how long acquiring the lock took (consumeTime = end - start). The client considers the lock acquired only if it succeeded on a majority of Redis nodes (>= n / 2 + 1) and the total time spent does not exceed the lock's validity time; otherwise acquisition fails.

If the lock was acquired successfully, its effective validity time should be reset to the lock's original validity time minus consumeTime.

If acquisition ultimately fails, the client should immediately send a release-lock request to all the Redis nodes.
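The bookkeeping in the steps above can be sketched as a small helper. The class and method names are made up; only the quorum and remaining-validity arithmetic come from the algorithm.

```java
// Sketch of the RedLock decision step: given n node results and timing,
// decide whether the lock was acquired and how much validity remains.
public class RedlockCheck {
    // Returns the remaining validity in ms, or -1 if acquisition failed
    // (in which case the client must release the lock on all nodes).
    public static long remainingValidity(int n, int acquired,
                                         long lockTtlMillis, long consumeTimeMillis) {
        int quorum = n / 2 + 1;                        // majority of the n nodes
        long remaining = lockTtlMillis - consumeTimeMillis;
        if (acquired < quorum || remaining <= 0) {
            return -1;
        }
        return remaining;
    }
}
```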

52. There are four ways to set the expiration time in redis

  • expire key seconds: the key expires after N seconds;
  • pexpire key milliseconds: the key expires after N milliseconds;
  • expireat key timestamp: the key expires at the given timestamp (accurate to seconds);
  • pexpireat key milliseconds-timestamp: the key expires at the given timestamp (accurate to milliseconds).

53. Three different deletion strategies of redis

Timed deletion: when setting a key's expiration time, also create a timer task; when the key reaches its expiration time, delete it immediately.

Lazy deletion: let keys expire on their own, but every time a key is fetched from the key space, check whether it has expired; delete it if it has, and return it if it has not.

Periodic deletion: every so often, the program checks the database and deletes expired keys; how many expired keys to delete and how many databases to check is decided by the algorithm.
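The lazy and periodic strategies can be sketched together over a plain map with per-key deadlines. ExpiringStore and its explicit now parameter are illustrative assumptions, not Redis internals; they just make the two trigger points visible: the check on read, and the sweep on a timer.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch of lazy vs. periodic expiry over a plain map with per-key deadlines.
public class ExpiringStore {
    private final Map<String, String> data = new HashMap<>();
    private final Map<String, Long> expireAt = new HashMap<>();

    public void set(String key, String value, long deadline) {
        data.put(key, value);
        expireAt.put(key, deadline);
    }

    // Lazy deletion: the expiry check happens only when the key is read.
    public String get(String key, long now) {
        Long deadline = expireAt.get(key);
        if (deadline != null && deadline <= now) {
            data.remove(key);
            expireAt.remove(key);
            return null;
        }
        return data.get(key);
    }

    // Periodic deletion: sweep expired keys on a timer; returns how many were removed.
    public int sweep(long now) {
        int removed = 0;
        Iterator<Map.Entry<String, Long>> it = expireAt.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() <= now) {
                data.remove(e.getKey());
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    public int size() { return data.size(); }
}
```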

54. Timed deletion

  • Advantage: memory-friendly; the timed deletion strategy guarantees that expired keys are deleted as soon as possible, freeing the memory they occupy promptly.
  • Disadvantage: unfriendly to CPU time; when there are many expired keys, the deletion tasks can take up a large share of CPU time. When memory is not tight but CPU time is, spending CPU time deleting expired keys unrelated to the current task hurts the server's response time and throughput.

55. Periodic deletion

Because timed deletion takes up too much CPU time and hurts the server's response time and throughput, while lazy deletion wastes too much memory and risks memory leaks, a periodic deletion strategy emerged that integrates and trades off the two.

  1. The periodic deletion strategy performs the delete-expired-keys operation at intervals, and limits the duration and frequency of the deletion operation to reduce its impact on CPU time.
  2. It also effectively reduces the memory waste caused by expired keys.

56. Lazy deletion

  • Advantage: friendly to CPU time. Expiry is checked, and the key deleted, only when a key is retrieved from the key space, and the deletion targets only the key currently being handled; this strategy spends no CPU time on other, unrelated deletion tasks.
  • Disadvantage: unfriendly to memory; an expired key may never be deleted, so the memory it occupies is never released. This can even amount to a memory leak: when there are many expired keys that are never accessed again, they may stay in memory indefinitely.

57. Redis management tool: redis Manager 2.0

GitHub address

58. Several common caching strategies of redis

  • Cache-Aside
  • Read-Through
  • Write-Through
  • Write-Behind

59. Implementation of Bloom filter with redis module

Redis modules are a new feature supported since redis 4.0. Many universities and institutions provide modules; we only need to compile one and load it into redis to easily gain the functionality we need. Some common modules appear on redis's official module list; here is a brief look.

  • neural-redis does neural-network machine learning; it can be integrated into redis for some model training. Try it if you are interested
  • RediSearch mainly provides rich full-text search
  • RedisBloom supports Bloom filters in a distributed environment

60. How does redis realize “people nearby”


GEOADD key longitude latitude member [longitude latitude member ...]

Adds the given location objects (longitude, latitude, name) to the specified key, where key is the name of the set and member is the object bound to that longitude and latitude. In practice, when the number of objects to store is too large, the object set can be sharded across multiple keys, avoiding an oversized single set.

Return value after successful insertion:

(integer) N

Where n is the number of successful inserts.
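For reference, a sketch of the flow in redis-cli; the key name drivers, the member names, and the coordinates are made up for illustration:

```
GEOADD drivers 116.40 39.90 "driver:1"
GEOADD drivers 116.45 39.92 "driver:2"
GEORADIUS drivers 116.41 39.91 10 km WITHDIST ASC
```

GEORADIUS then returns the members within the radius, sorted by distance from the given point.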