A cache avalanche occurs when a large number of requests cannot be served from the redis cache and are instead sent straight to the database, causing a surge in database load. This can even bring down the database and, in turn, the whole system, cascading like an avalanche.
The causes of cache avalanche are generally as follows:
1. A large number of keys in the cache expire at the same time
2. The redis instance has gone down and cannot process requests
For cause 1, scenarios where a large number of keys expire at the same time should be avoided in practice. If the business does require such a pattern, you can add a small random offset to the expiration times of these keys so that they expire at slightly different moments.
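The jitter idea can be sketched as follows. This is a minimal illustration, not a tuned policy: the base TTL and jitter range are assumed values, and with a real client the computed TTL would be passed to something like `SETEX`.

```python
import random

BASE_TTL = 3600      # base expiration of one hour (illustrative value)
JITTER_RANGE = 300   # spread expirations across an extra 0-5 minutes (assumed range)

def ttl_with_jitter(base_ttl: int = BASE_TTL, jitter: int = JITTER_RANGE) -> int:
    """Return an expiration time offset by a random amount, so that keys
    written together do not all expire at the same instant."""
    return base_ttl + random.randint(0, jitter)

# With a real redis client this would look like:
#   r.setex(key, ttl_with_jitter(), value)
```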
For cause 2, the redis master-slave cluster mentioned earlier helps: after the master instance goes down, a slave can be quickly promoted to master and continue providing service.
Of course, the above are preventive measures. If a cache avalanche has already occurred, service circuit breaking or request rate limiting can be used to keep the flood of requests from crashing the database.
Circuit breaking means suspending the redis-backed service to the business until redis recovers, and only then serving requests again; the downside is that the business is effectively offline in the meantime. A milder approach is request rate limiting. As the name suggests, it limits the flow of requests, randomly discarding some of them to ensure that too many requests are not pushed onto the database at the same time.
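The random-discard policy described above can be sketched like this. The `allow_ratio` of 20% is an assumed number purely for illustration; real limiters are usually more sophisticated (token buckets, sliding windows), but random dropping is the simplest form.

```python
import random

class RandomDropLimiter:
    """Randomly discard a fraction of requests so the database is not
    flooded while the cache is unavailable (a simplified, assumed policy)."""

    def __init__(self, allow_ratio: float):
        self.allow_ratio = allow_ratio  # fraction of requests let through

    def allow(self) -> bool:
        # Each request independently passes with probability allow_ratio.
        return random.random() < self.allow_ratio

limiter = RandomDropLimiter(allow_ratio=0.2)  # let roughly 20% of requests through
results = [limiter.allow() for _ in range(10_000)]
```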
Cache breakdown refers to a single piece of hot data suddenly becoming invalid in the cache, after which all requests for that hot data go straight to the database.
Cache breakdown is generally caused by a hot key expiring in redis. The most direct remedy is simply not to set an expiration time on hot keys.
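The "no TTL for hot keys" idea can be illustrated with a tiny in-memory stand-in for redis. The `MockCache` class and the key names are assumptions made for the sketch; with a real client, a hot key would simply be written with `SET` and no `EX` option.

```python
import time

class MockCache:
    """Tiny in-memory stand-in for redis, just to illustrate TTL handling."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        # ttl=None means the key never expires (the hot-key strategy).
        expires_at = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]  # lazily evict an expired key
            return None
        return value

cache = MockCache()
cache.set("hot:homepage", "rendered-page")      # hot key: no TTL, never expires
cache.set("normal:item:42", "payload", ttl=60)  # ordinary key keeps a TTL
```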
Cache penetration means the requested data exists neither in redis nor in the database. On every such request, redis finds no matching key and forwards the request to the database, which also finds nothing; redis is then mere decoration and serves no purpose. If someone maliciously attacks the system by making frequent requests with null or otherwise nonexistent values, this also puts great pressure on the database.
To avoid cache penetration, we can:
1. Cache null or default values for keys that are missing from the database
2. Use a bloom filter to judge in advance whether the data exists.
A bloom filter passes the key through several different hash functions (say three), producing that many positions, and sets the bit at each position to 1 in a bit array. When a new request arrives, its key is hashed the same way and each corresponding bit is checked; as long as even one bit is not 1, the key has definitely never been cached before.
The bloom filter does have a defect: a key that passes the filter's check is not guaranteed to exist (false positives are possible). However, any key that fails the check definitely does not exist, so the filter can still screen out most requests for nonexistent keys.
As mentioned above, the bloom filter has this defect. If its bit array is too small, most positions will likely end up set to 1, at which point the filter loses its usefulness. So when we find that most of the filter's bits are 1, we should enlarge the bit array.
3. In actual business code, validate request parameters first and require them to fall within the expected range. In engineering practice, parameter validation is in fact the main way most invalid requests are filtered out.
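The bloom filter described above can be sketched in a few lines. The sizes here (a 1024-bit array, 3 hash functions built by salting SHA-256) are illustrative assumptions, not tuned parameters; production systems size the array from the expected item count and target false-positive rate.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: k hash positions per key over an m-bit array."""

    def __init__(self, m: int = 1024, k: int = 3):
        self.m = m            # number of bits (illustrative size)
        self.k = k            # number of hash functions
        self.bits = [0] * m

    def _positions(self, key: str):
        # Derive k independent positions by salting one hash function.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key: str) -> None:
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key: str) -> bool:
        # False is definitive (key was never added); True may be a false positive.
        return all(self.bits[pos] == 1 for pos in self._positions(key))

bf = BloomFilter()
bf.add("user:1001")
```

Before hitting the database, the service would call `might_contain(key)` and reject the request outright when it returns False.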
When redis is used as a cache in front of the database, the two can become inconsistent.
When a piece of data needs to be modified, both the database and the cache must be changed. Two questions need to be distinguished here: should you modify the database first or the cache first? And on the cache side, should you update the data in place or simply delete it?
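One common answer to these questions, shown here only as an assumed example and not as the article's verdict, is the cache-aside pattern: write the database first, then delete the cache entry so the next read repopulates it with fresh data. The dict-based `db` and `cache` below are stand-ins for the real database and redis.

```python
db = {"item:1": "v1"}     # stand-in for the database
cache = {"item:1": "v1"}  # stand-in for redis

def update_item(key: str, value: str) -> None:
    """Cache-aside write: update the database first, then delete the
    cached entry so the next read reloads the fresh value."""
    db[key] = value
    cache.pop(key, None)

def read_item(key: str):
    """Cache-aside read: serve from cache if present, otherwise load
    from the database and repopulate the cache."""
    if key in cache:
        return cache[key]
    value = db.get(key)
    if value is not None:
        cache[key] = value
    return value

update_item("item:1", "v2")  # cache entry is now gone; next read refills it
```

Deleting rather than updating the cache avoids writing a value that might immediately be overwritten again, at the cost of one extra database read on the next access.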