Cache penetration, cache concurrency and cache avalanche

Time: 2021-7-26


Cache penetration, cache concurrency and cache avalanche are common caching problems caused by high concurrency. Their causes and solutions are recorded here.

Cache penetration is caused by malicious attacks or unintentional misuse; cache concurrency is caused by insufficient design; cache avalanche is caused by a large number of cache entries expiring at the same time.


1. Cache penetration

Concept:

Cache penetration refers to a large number of highly concurrent queries for keys that do not exist, so the cache can never be hit. Every request therefore penetrates through to the back-end database, putting excessive pressure on it and potentially crushing the database service.


Solution:

1. We usually cache null values: when the same query request is received again, a cache hit with an empty value is returned directly instead of penetrating to the database, which avoids cache penetration.
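A minimal sketch of null-value caching, assuming the go-redis client (github.com/redis/go-redis/v9); queryDB is a hypothetical database helper, and the sentinel string and TTLs are illustrative:

```go
package cache

import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

// nullSentinel marks "the database has no row for this key".
const nullSentinel = "<null>"

var errNotFound = errors.New("record not found")

// getWithNullCache looks up key in Redis; on a database miss it caches
// a null sentinel so repeated queries for the same missing key stop
// reaching the database.
func getWithNullCache(ctx context.Context, rdb *redis.Client, key string) (string, error) {
	val, err := rdb.Get(ctx, key).Result()
	if err == nil {
		if val == nullSentinel {
			return "", errNotFound // cached "not found": answered without the DB
		}
		return val, nil
	}
	if err != redis.Nil {
		return "", err // a real Redis error, not a cache miss
	}

	// Cache miss: fall through to the database.
	val, err = queryDB(ctx, key)
	if errors.Is(err, errNotFound) {
		// Cache the empty result with a short TTL so a row inserted
		// later becomes visible quickly.
		rdb.Set(ctx, key, nullSentinel, 60*time.Second)
		return "", errNotFound
	}
	if err != nil {
		return "", err
	}
	rdb.Set(ctx, key, val, 10*time.Minute)
	return val, nil
}

// queryDB is a hypothetical placeholder for the real database lookup.
func queryDB(ctx context.Context, key string) (string, error) {
	return "", errNotFound
}
```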


2. Of course, a malicious attacker may guess that we use this scheme and query with different parameters every time, which requires us to filter the input parameters. For example, if we query by ID, we can analyze the ID format: if it does not comply with the rules for generating IDs, we reject it directly; or we can embed time information in the ID and use it to judge whether the ID is legal, that is, whether it is an ID we actually generated, so as to intercept invalid requests.
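A sketch of such a filter, assuming a hypothetical ID layout of "u-&lt;unix-seconds&gt;-&lt;sequence&gt;" with an embedded timestamp:

```go
package cache

import (
	"strconv"
	"strings"
	"time"
)

// validID rejects malformed IDs, and IDs whose embedded timestamp lies
// outside the window in which we could have issued them, before they
// ever reach the cache or the database.
func validID(id string, serviceLaunched time.Time) bool {
	parts := strings.Split(id, "-")
	if len(parts) != 3 || parts[0] != "u" {
		return false // wrong shape: not an ID we generate
	}
	ts, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return false
	}
	issued := time.Unix(ts, 0)
	// An ID stamped before the service existed, or in the future,
	// cannot have been generated by us.
	return !issued.Before(serviceLaunched) && !issued.After(time.Now())
}
```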


2. Cache concurrency

Concept:

Cache concurrency usually occurs in high-concurrency scenarios. When a hot cache key expires, the many requests accessing that key discover the expiration at the same time, so multiple requests query the database for the latest data and write it back to the cache simultaneously. This increases the load on both the application and the database and degrades performance; under high concurrency the database may even be crushed.


Solution:

1. Distributed lock

Using a distributed lock ensures that, for each key, only one thread queries the back-end service at a time; threads that fail to acquire the lock simply wait. This method transfers the pressure of high concurrency onto the distributed lock, so it is a great test for the lock implementation.
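A simplified sketch of this pattern using Redis SET NX as the lock, again assuming go-redis; rebuild is a hypothetical loader that reads the latest value from the database, and a production lock would also need an ownership token so an expired holder cannot delete someone else's lock:

```go
package cache

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// loadWithLock rebuilds an expired cache entry under a Redis-based
// distributed lock. Only the caller that wins the lock queries the
// database; the others poll the cache until the winner writes back.
func loadWithLock(ctx context.Context, rdb *redis.Client, key string,
	rebuild func(context.Context) (string, error)) (string, error) {

	lockKey := "lock:" + key
	for {
		// Try to take the lock; the TTL guards against a crashed holder.
		ok, err := rdb.SetNX(ctx, lockKey, "1", 5*time.Second).Result()
		if err != nil {
			return "", err
		}
		if ok {
			defer rdb.Del(ctx, lockKey) // release when done
			val, err := rebuild(ctx)
			if err != nil {
				return "", err
			}
			rdb.Set(ctx, key, val, 10*time.Minute)
			return val, nil
		}
		// Lost the race: wait briefly, then check whether the winner
		// has already refilled the cache.
		time.Sleep(50 * time.Millisecond)
		if val, err := rdb.Get(ctx, key).Result(); err == nil {
			return val, nil
		}
	}
}
```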


2. Local lock

Similar to the distributed lock, we can use a local lock to restrict database queries to a single thread; other threads wait until that thread has fetched the data and then read it from the cache. However, a local lock only constrains one service node to one querying thread. If the service has multiple nodes, there will still be one database query per node, so with a large number of nodes the cache concurrency problem is not completely solved.
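In Go, the idiomatic form of this per-key local lock is the singleflight package (golang.org/x/sync/singleflight), which collapses concurrent rebuilds of the same key into one call. A sketch, with rebuild again a hypothetical database loader:

```go
package cache

import (
	"context"

	"golang.org/x/sync/singleflight"
)

// group deduplicates concurrent rebuilds of the same key within this
// process only; other service nodes still issue their own queries.
var group singleflight.Group

// loadLocal lets exactly one goroutine per key run rebuild; the other
// goroutines for that key block and share its result.
func loadLocal(ctx context.Context, key string,
	rebuild func(context.Context) (string, error)) (string, error) {

	v, err, _ := group.Do(key, func() (interface{}, error) {
		return rebuild(ctx)
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}
```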


3. Soft expiration

Soft expiration means the expiration time is stored inside the cached data itself rather than relying on the expiration provided by the cache server: the business layer keeps expiry information alongside the data, and the business program judges whether it has expired and needs updating. When the data is found to be close to expiring, the cache lifetime is extended and a single thread is dispatched to fetch the latest data from the database; other threads see the extended expiration and continue using the old data until the dispatched thread writes back the fresh value. Alternatively, an asynchronous update service can refresh soft-expired entries, so the application layer does not need to care about cache concurrency at all.
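One way to sketch this in-process, with an atomic flag (Go 1.19+ atomics) ensuring only one refresher goroutine is dispatched; the deadlines and TTLs are illustrative and rebuild is a hypothetical database loader:

```go
package cache

import (
	"context"
	"sync/atomic"
	"time"
)

// entry carries its own soft deadline instead of relying on the cache
// server's TTL; refreshing guarantees a single rebuild goroutine.
type entry struct {
	value      atomic.Value // holds a string
	expiresAt  atomic.Int64 // soft deadline, unix nanoseconds
	refreshing atomic.Bool
}

// get returns the current value. If the soft deadline has passed, it
// extends the deadline, dispatches one background goroutine to fetch
// fresh data, and keeps serving the stale value in the meantime.
func (e *entry) get(ctx context.Context,
	rebuild func(context.Context) (string, error)) string {

	if time.Now().UnixNano() > e.expiresAt.Load() &&
		e.refreshing.CompareAndSwap(false, true) {

		// Extend the soft expiry so concurrent readers keep old data.
		e.expiresAt.Store(time.Now().Add(30 * time.Second).UnixNano())
		go func() {
			defer e.refreshing.Store(false)
			if v, err := rebuild(ctx); err == nil {
				e.value.Store(v)
				e.expiresAt.Store(time.Now().Add(10 * time.Minute).UnixNano())
			}
		}()
	}
	v, _ := e.value.Load().(string)
	return v
}
```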


3. Cache avalanche

Concept:

Cache avalanche refers to the situation where the cache server restarts or a large number of cache entries expire within the same period, so the load lands on the back-end database all at once and may even crush it.


Solution:

The usual solution is to use different expiration times for different data, and even different expiration times for the same data across different requests. For example, when caching user data, we set a different cache expiration for each user's data: define a base time, say 10 seconds, then add a random offset of up to two seconds, giving expirations between 10 and 12 seconds. Spreading out the expirations this way avoids a cache avalanche.
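The jitter itself is one line; a sketch, using the illustrative base and jitter values from the text:

```go
package cache

import (
	"math/rand"
	"time"
)

// jitteredTTL returns a fixed base expiration plus a random offset, so
// entries written together do not all expire together. With base = 10s
// and jitter = 2s it yields TTLs in [10s, 12s); jitter must be > 0.
func jitteredTTL(base, jitter time.Duration) time.Duration {
	return base + time.Duration(rand.Int63n(int64(jitter)))
}

// Usage sketch: rdb.Set(ctx, key, val, jitteredTTL(10*time.Second, 2*time.Second))
```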

