Why I prefer in-process caching

Time: 2021-02-22


In-process caching means the cache lives in the same address space as the application, that is, in the same process. A distributed cache lives in a different process from the application, usually deployed on a separate server.
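As a minimal illustration (my sketch in Java, since the article names no language; `loadFromDb` is a hypothetical loader): an in-process cache can be as simple as a concurrent map inside the application, where a hit is a plain method call with no network hop.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-process cache: just a map in the application's own address space.
// A distributed cache would put this map behind a network call instead.
public class LocalCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String get(String key) {
        // load on a miss; loadFromDb stands in for a real database read
        return cache.computeIfAbsent(key, this::loadFromDb);
    }

    private String loadFromDb(String key) {
        return "value-for-" + key;
    }
}
```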

Once upon a time, there was an organization whose owner was called CPU. This organization specialized in sending servants out to fetch things and then process them accordingly. Here is a typical day at the organization:

(Illustration: the CPU's servants fetching data from each device, with estimated access times.)

The story above uses purely estimated figures; the real numbers will vary with hardware configuration and network environment.

Through this informal story, we can get a rough sense of the speed gap between the CPU and each device. As for sister YY's question, the answer should now be clear as well.

  1. First, loading data from disk into memory to act as a cache is the right move; after all, disk IO is far slower than memory access. On most of the PCs and servers we use today, the disk is often the performance bottleneck (see the timing sketch after this list).
  2. If your conditions or framework allow an in-process cache, I still recommend it. After all, key-value stores such as Redis rarely sit on the same server as the application, and although a LAN round trip looks fast to the naked eye, to the CPU it is still a long holiday.
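To make the disk-versus-memory gap in point 1 concrete, here is a rough, unscientific sketch (file name and sizes are arbitrary). Note that the OS page cache will narrow the gap on repeated runs, so treat the output as illustrative only.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Rough sketch, not a real benchmark: time scanning 1 MB already in
// memory versus reading the same 1 MB back from a temp file on disk.
public class MemoryVsDisk {
    public static void main(String[] args) throws Exception {
        byte[] data = new byte[1024 * 1024];
        Path file = Files.createTempFile("cache-demo", ".bin");
        Files.write(file, data);

        long t0 = System.nanoTime();
        long sum = 0;
        for (byte b : data) sum += b;               // every byte, from RAM
        long memMicros = (System.nanoTime() - t0) / 1_000;

        t0 = System.nanoTime();
        byte[] fromDisk = Files.readAllBytes(file); // same bytes, via disk IO
        long diskMicros = (System.nanoTime() - t0) / 1_000;

        System.out.printf("memory scan: %d us, disk read: %d us (sum=%d, %d bytes)%n",
                memMicros, diskMicros, sum, fromDisk.length);
        Files.delete(file);
    }
}
```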

As for the conditions under which in-process caching is suitable, I think the following points deserve attention:

  1. The same request, or any request with the same cache key, is handled by the same program on the same server every time. That way, under normal circumstances, only one copy of the cache exists for that request. If identical requests are routed to different servers, multiple copies of the cache appear, and keeping them consistent is costly (see the consistent-hashing sketch after this list).
  2. When a server node joins or exits, no avalanche may occur; having all cache requests penetrate to the database is fatal. For an example, see Caicai's earlier article: a clear path of distributed caching (with code attached).
  3. If the server handling the cache changes, for example the initial request was handled by server A, server A goes down, and server B now handles it, the correctness and consistency of the data must be guaranteed while the cache is transferred.
  4. The program's in-process cache must have an expiration policy so that limited memory is used sensibly. I recommend the LRU eviction algorithm to make sure memory does not blow up (a minimal LRU sketch also follows this list).
  5. When concurrency is heavy and performance requirements are high, in-process caching is worth considering.
  6. For a small set of read-only data with a large volume of access, such as dictionary data, an in-process cache is a good fit.
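For point 1, a common way to route the same key to the same server is consistent hashing. The sketch below is my own minimal illustration, not the article's code; node names, virtual-node count, and the hash choice are all arbitrary.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// Minimal consistent-hash ring: the same cache key always routes to the
// same node, so each entry's in-process cache lives in exactly one place,
// and virtual nodes soften the impact when a node joins or leaves.
public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public void addNode(String node) {
        for (int i = 0; i < 100; i++) {              // 100 virtual nodes per server
            ring.put(hash(node + "#" + i), node);
        }
    }

    public String nodeFor(String key) {              // assumes at least one node
        Map.Entry<Long, String> e = ring.ceilingEntry(hash(key));
        return e != null ? e.getValue() : ring.firstEntry().getValue();
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF); // first 8 bytes
            return h;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A gateway that routes each request to `nodeFor(cacheKey)` satisfies point 1, and because only roughly 1/N of the key space moves when a node changes, it also limits the avalanche described in point 2.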
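For point 4, an access-ordered `LinkedHashMap` gives a minimal LRU cache in a few lines (again my sketch; the capacity is arbitrary, and a production version would also want TTL-based expiration and thread safety, for example `Collections.synchronizedMap` or a library such as Caffeine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: with accessOrder=true the map keeps entries in
// least-recently-used order and evicts the eldest once capacity is hit.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);                      // true = access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;                    // evict beyond capacity
    }
}
```

Usage: `Map<String, String> cache = new LruCache<>(10_000);` caps the cache at 10,000 entries so memory stays bounded.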

What advantages does in-process caching have over a distributed cache such as Redis?

  1. In-process caching performs better: latency is lower and bandwidth is saved, since a network call to a distributed cache is far slower than a local call.
  2. Because the cache lives in the same process as the application and shares the same virtual memory, state is easier to maintain.
  3. An in-process cache is not designed for network transport, so there is no serialization step, which is another performance win.
  4. An in-process cache can hold almost any type the language supports, so its data type design is far more flexible than that of most distributed caches (see the sketch after this list).
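A minimal sketch of points 3 and 4 (my illustration, not the article's code): an in-process cache can hold values exactly as the language represents them, such as functions or object graphs, and hands the same reference back with no serialization; a networked cache would need every value encoded to bytes first.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class RichValueCacheDemo {
    public static void main(String[] args) {
        // The cached value can be any in-language type (here a function
        // and an immutable list), which a networked cache could not hold
        // without a custom serialization scheme.
        Map<String, Function<Integer, Integer>> rules = new ConcurrentHashMap<>();
        rules.put("double", x -> x * 2);

        Map<String, List<String>> dict = new ConcurrentHashMap<>();
        dict.put("colors", List.of("red", "green", "blue"));

        System.out.println(rules.get("double").apply(21)); // 42, no deserialization
        System.out.println(dict.get("colors"));            // same object, by reference
    }
}
```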

When dealing with high concurrency, given a suitable environment, Caicai still thinks in-process caching is the first choice. Beyond that, programs should avoid thread switching where they can and be asynchronous where possible. If you can, estimate the size of the cached data in advance so that memory does not overflow.

Of course, a distributed cache has advantages of its own: it does better on monitoring, disaster recovery, scalability, ease of use, and so on. As for whether to use an in-process or a distributed cache, there is no final verdict; solving the business pain point is the best outcome.

Written at the end

If a program wants to maximize concurrency and shorten response time, it should put the data a user needs as close to that user as possible.
