I hear you don’t know how to cache?

Time: 2020-11-25

Soul-searching questions

  • What is caching? (If you have never heard of it, you can stop reading here.)
  • Which scenarios need caching?
  • How can caches be classified?
  • What are the implementations of caching?

Cache

A cache is essentially a buffer for data exchange; it can also be described as a component for storing data. Its main purpose is to reduce the speed mismatch between the two sides of a data exchange.

Caching is a common and important concept in the computing world, and it appears in almost every field you can think of: the CPU's L1 and L2 caches, the browser's cache, and so on. When we use a cache, we should be clear that cached data has an expiration date, that is, it may disappear at any time. Some readers will point out that components such as Redis provide data persistence, so the data will not disappear. On that point I would make two observations:

  • When a component persists data, disk IO is unavoidable, and disk IO greatly reduces the performance of the cache component. At that point, how much value does the cache still provide?
  • By definition, cached data is temporary. Once it is persisted it is no longer temporary in any meaningful sense, and it also consumes disk storage space.

The most common storage medium for a cache is memory, but that does not mean only memory can hold cache data; this is a common misconception among beginners. The job of a cache is to provide fast reads and writes, so in theory any device that is fast enough can serve as one. For example, an SSD can be used as the cache medium in scenarios where performance requirements are not strict. For the speed gap between the various pieces of computer hardware, see the earlier article:

Why do high-concurrency developers prefer in-process caching

Cache application scenarios

In theory, a cache can be added at any point in the system where access speed needs to be improved.

However, adding a cache module increases the complexity of the system to some extent, so whether to introduce a cache must be weighed against the business scenario. In general, data with the following characteristics is a good candidate for caching:

Data that rarely changes

This kind of data is the best fit for caching, because it involves almost no complex cache-update logic: you simply load it into the cache once and serve it from there. Typical examples are resources that are generated once and then rarely change, such as CSS files.

Speaking of data that rarely changes, CDN services deserve a mention. Many large websites use a CDN to speed up access to static resources such as images and videos. Fetching these resources directly is slow because user requests may cross multiple backbone networks, and a CDN makes up for exactly this weakness, so a CDN can also be regarded as a cache service.

Hot data

This kind of data is the main reason we add caching during development, and it is also the kind most likely to bring a system down. Its defining characteristic is that neither the timing nor the peak volume of the traffic can be predicted. You may still vaguely remember Weibo going down over two celebrity affair scandals; although the Weibo architecture was later reworked to withstand several such scandals at once, nothing can be done about the uncertainty of when they will happen.

The cache for hot data is not easy to design because it has a single-point property. What does that mean? Suppose our cache cluster has 100 nodes, a piece of hot news breaks, and its cache entry lives on node 0. A flood of requests will be routed to node 0, which is likely to crash it; once node 0 is down, the failover strategy shifts the traffic to another node, which then crashes as well, and so on down the line. Although caching improves the overall throughput of the system, this kind of targeted traffic peak has to be handled separately.

This is, in fact, exactly the kind of problem distributed systems are meant to solve: if one node cannot withstand the peak, the system can be designed so that multiple nodes fight it together. For the hot-data scenario above, the simplest and crudest approach is cache copies: one piece of cached data gets multiple copies, similar in spirit to MySQL's read-write splitting, with all copies serving reads at the same time (a small sketch follows below). In this scenario I also recommend an in-process cache over a distributed cache, because an in-process cache is accessed far faster than a distributed cache that has to cross the network. For details, see the earlier post:

Why do high-concurrency developers prefer in-process caching
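
To make the cache-copy idea concrete, here is a minimal sketch. The `CacheClient` interface, the `REPLICAS` count, and the key-suffix scheme are all illustrative assumptions rather than any particular library's API: the writer stores the same hot value under several suffixed keys, and each reader picks one copy at random, so the read load spreads across copies (and, in a distributed cache, across nodes) instead of piling onto one.

```java
import java.util.concurrent.ThreadLocalRandom;

public class HotKeyReplicator {
    private static final int REPLICAS = 3; // number of copies kept per hot key

    private final CacheClient cache; // stand-in for whatever cache client you use

    public HotKeyReplicator(CacheClient cache) {
        this.cache = cache;
    }

    // Write the same value under several suffixed keys so that
    // several slots/nodes each hold a copy of the hot entry.
    public void put(String key, String value) {
        for (int i = 0; i < REPLICAS; i++) {
            cache.set(key + "#" + i, value);
        }
    }

    // Each reader picks one copy at random, spreading read traffic
    // across the copies instead of hammering a single entry.
    public String get(String key) {
        int slot = ThreadLocalRandom.current().nextInt(REPLICAS);
        return cache.get(key + "#" + slot);
    }

    // Minimal interface standing in for a real cache client.
    public interface CacheClient {
        void set(String key, String value);
        String get(String key);
    }
}
```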

Time-consuming operations

In some cases, the cost of obtaining the data is very high. Why add qualifiers? Because if the system has strict consistency requirements on that data and it changes frequently, then even though obtaining it is expensive, you still have to weigh the side effects a cache would bring. Take the report service we all use: generating a report is time-consuming, so if the report data is relatively stable, we can consider caching it to improve system performance.
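
As a rough illustration of caching a time-consuming operation, here is a minimal in-process sketch. `buildReport` stands in for whatever expensive report generation your system actually performs, and the 10-minute TTL is an arbitrary assumption about how stale a report is allowed to be.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReportCache {
    // Cached value plus the moment it expires.
    private record Entry(String report, Instant expiresAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final Duration ttl = Duration.ofMinutes(10); // acceptable staleness

    public String getReport(String reportId) {
        Entry entry = cache.get(reportId);
        if (entry != null && Instant.now().isBefore(entry.expiresAt())) {
            return entry.report(); // cache hit: skip the expensive generation
        }
        String fresh = buildReport(reportId); // the slow part we want to avoid repeating
        cache.put(reportId, new Entry(fresh, Instant.now().plus(ttl)));
        return fresh;
    }

    private String buildReport(String reportId) {
        // Placeholder for the real, time-consuming report query/aggregation.
        return "report-" + reportId;
    }
}
```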

Cache eviction

The device that stores the cache puts a hard limit on its size: if 16 GB of memory is used for the cache, the theoretical upper limit is 16 GB (in practice it is considerably less). In addition, cached data has a limited lifetime. So when the data to be cached exceeds the capacity of the medium, an eviction strategy is needed to make room so that new data can still be cached.

The ideal eviction strategy evicts the data the system no longer needs, but deciding which data is useless is precisely the hard part: user behavior is uncertain, so it is difficult for a program to predict. In practice there are several mainstream eviction strategies:

  • LFU (least frequently used): the cache records how often each entry is accessed and evicts the least frequently used entries first.
  • LRU (least recently used): the cache records when each entry was last accessed and evicts the entries that have gone unused the longest (a minimal sketch follows this list).
  • ARC (adaptive replacement cache): this algorithm tracks both recency and frequency, combining LRU and LFU information to decide which entries to evict, and is considered one of the best-performing cache algorithms. It sits between LRU and LFU and adapts itself to the workload, at the cost of a more complex implementation.
  • FIFO (first in, first out): a queue-based eviction algorithm. It is simple, but few business scenarios match it, so it is rarely used in practice.
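
Here is the minimal LRU sketch referenced above, built on Java's `LinkedHashMap` in access-order mode. A production cache would add concurrency control, TTLs, and metrics, but the eviction idea is the same.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: every get() moves the entry to the tail of the
// access order, and the eldest (least recently used) entry is evicted
// once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true enables LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict once we grow past the limit
    }
}
```

Usage is simply `new LruCache<String, String>(100)`: once the 101st entry is inserted, the least recently accessed one is dropped automatically.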

Cache implementation

There are two main ways to implement caching in a system.

In-process caching

In-process caching means the cache lives in the same process as the application, so obtaining cached data never crosses the network; this makes it the fastest form of cache access.

In-process caching is usually used in standalone or small systems, but it can also work in large systems provided the overall architecture keeps routing consistent. For example, with multiple server nodes, if user A's information is cached on node 0 and there is a mechanism guaranteeing that all of user A's requests reach only node 0, then in-process caching poses no problem (a routing sketch follows below). The most typical application of this idea is the actor model; see the earlier article:

The actor model is so excellent under distributed high concurrency
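
Here is the routing sketch mentioned above, assuming a fixed, hypothetical list of node names: hashing the user id to a node index keeps all of one user's requests on the same node, which is what makes that node's in-process cache safe to use. Note that a plain modulo remaps many users whenever a node is added or removed; consistent hashing is the usual way to reduce that churn.

```java
import java.util.List;

public class UserRouter {
    private final List<String> nodes; // e.g. ["node-0", "node-1", ...]

    public UserRouter(List<String> nodes) {
        this.nodes = nodes;
    }

    // All requests for the same userId hash to the same node, so that
    // node can safely keep the user's data in its in-process cache.
    public String nodeFor(String userId) {
        // Math.floorMod keeps the index non-negative even for negative hash codes.
        int index = Math.floorMod(userId.hashCode(), nodes.size());
        return nodes.get(index);
    }
}
```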

Out-of-process caching

As the name suggests, out-of-process caching means the cached data and the application are isolated in different processes. Out-of-process caches can be further divided into standalone and distributed versions. The standalone version will not be discussed here, since it suffers from single-machine failure.

The distributed version of an out-of-process cache is usually just called a distributed cache, an architectural model built on distributed-systems theory. Although its access speed is much slower than an in-process cache, it is still far faster than disk IO, which is why many large systems use a distributed cache to improve performance. Redis, the most widely used example, has offered a cluster solution since version 3.0.
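
As a minimal sketch of using a distributed cache, the snippet below assumes a Redis server reachable at localhost:6379 and the Jedis client (exact method signatures vary slightly between Jedis versions); any other client or cluster setup would follow the same set-with-TTL / get pattern.

```java
import redis.clients.jedis.Jedis;

public class RedisCacheExample {
    public static void main(String[] args) {
        // Assumes a Redis server running at localhost:6379.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Cache a value with a 60-second expiration.
            jedis.setex("user:42:profile", 60, "{\"name\":\"alice\"}");

            // Later reads cross the network, but are still far faster than disk IO.
            String cached = jedis.get("user:42:profile");
            System.out.println(cached);
        }
    }
}
```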

Final thoughts

Alongside the advantages a cache brings to a system, we should also pay attention to its drawbacks:

  • Consistency between the cache and the data source
  • Cache hit-rate problems
  • Cache avalanche and cache penetration problems (a small mitigation sketch follows this list)
  • Concurrent contention over cached entries
  • Caching suits systems with many reads and few writes
  • Introducing a cache component adds complexity to the system design
  • Caching increases operations, maintenance, and troubleshooting costs
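
Here is the small mitigation sketch promised above: a cache-aside read that also caches a sentinel for keys that do not exist, so that repeated lookups of missing keys (cache penetration) stop hammering the database. The in-process map, the sentinel value, and `loadFromDatabase` are all illustrative assumptions; in practice the sentinel entry would be given a short TTL.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideExample {
    private static final String NULL_SENTINEL = "__NULL__"; // marks "key known to be absent"

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public Optional<String> get(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            // Hit: either a real value or the sentinel for a missing key.
            return NULL_SENTINEL.equals(cached) ? Optional.empty() : Optional.of(cached);
        }
        String fromDb = loadFromDatabase(key); // the slow, authoritative source
        // Cache even the "not found" result so repeated lookups of a
        // non-existent key do not keep hitting the database (penetration).
        cache.put(key, fromDb != null ? fromDb : NULL_SENTINEL);
        return Optional.ofNullable(fromDb);
    }

    private String loadFromDatabase(String key) {
        // Placeholder for a real query; returns null when the key does not exist.
        return null;
    }
}
```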

Although caching introduces quite a few problems, the performance improvement it brings is undeniable. When we design a high-concurrency system, a cache has become a necessary part of the design; only by designing the various caching strategies correctly can we get the full benefit of the cache.
