ngx_srcache + Lua asynchronous cache updates: 10x faster responses




Nginx has long been used as a reverse proxy server: it is event-driven and fast, and its reverse proxy module provides excellent support for page caching and load balancing.

Page caching

Nginx has a built-in proxy_cache feature, which is file based. If you put the cache path on a ramdisk, you can get read and write speeds comparable to an in-memory cache. That solves the slow file I/O problem, but the cache still cannot be shared across machines in a distributed deployment.
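As a rough sketch of that setup (the zone name, sizes, and tmpfs path are illustrative; /dev/shm is a ramdisk on most Linux distributions), a file-based proxy_cache on a ramdisk might be configured like this:

```nginx
# Hypothetical proxy_cache setup with the cache directory on a ramdisk.
# /dev/shm is tmpfs on most Linux systems, so cache reads and writes
# happen at memory speed; names and sizes here are illustrative.
proxy_cache_path /dev/shm/nginx_cache levels=1:2 keys_zone=page_cache:10m
                 max_size=512m inactive=10m;

server {
    listen 80;

    location / {
        proxy_cache       page_cache;
        proxy_cache_key   $scheme$host$request_uri;
        proxy_cache_valid 200 302 5m;
        proxy_pass        http://backend;
    }
}
```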


The ngx_srcache module, written by Yichun Zhang (agentzh), works together with a companion redis module to write the page cache into a redis cluster. It is a good solution because:

  1. The cache can now be shared across machines.

  2. The cache size is no longer limited by a single machine's resources.

  3. Redis itself provides fast read and write speeds.
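A minimal sketch of that wiring, based on the examples in the srcache-nginx-module documentation (the redis address, key scheme, and expiry are illustrative; set_escape_uri/set_unescape_uri come from the ngx_set_misc module and $echo_request_body from the echo module):

```nginx
# Sketch: page cache stored in redis via srcache + redis2 (illustrative values).
location = /redis-get {
    internal;
    set_unescape_uri $key $arg_key;          # from ngx_set_misc
    redis2_query get $key;
    redis2_pass 127.0.0.1:6379;
}

location = /redis-set {
    internal;
    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;
    redis2_query set $key $echo_request_body;  # response body to cache
    redis2_query expire $key $exptime;
    redis2_pass 127.0.0.1:6379;
}

location / {
    set $key $uri$is_args$args;
    set_escape_uri $escaped_key $key;
    srcache_fetch GET /redis-get key=$escaped_key;
    srcache_store PUT /redis-set key=$escaped_key&exptime=300;
    proxy_pass http://backend;
}
```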


At first glance, this looks perfect.
In actual use, however, there are still areas that need improvement:

  1. When the cache does not exist:
    The first request queries redis and misses, goes to the backend, and then has to store the result back into redis. That is two extra redis accesses, one read and one write, compared with no cache at all. Worse, under concurrent access many requests may hit the backend at once; if the backend is not robust enough, this leads to an “avalanche effect”. (With the stock proxy_cache module this can be avoided using the proxy_cache_use_stale directive, but the srcache module implements no equivalent.)

  2. When the cache expires:
    We usually set a cache TTL for each page. When the cache expires it must be regenerated, and the same problems as in the first case appear.
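For reference, this is roughly how the stock proxy_cache mitigates both situations, combining proxy_cache_use_stale with proxy_cache_lock to collapse concurrent misses (a sketch; zone name and times are illustrative):

```nginx
# Sketch: stock proxy_cache mitigation for misses and expiry.
location / {
    proxy_cache            page_cache;
    proxy_cache_valid      200 5m;
    # serve a stale copy while a single request refreshes the entry
    proxy_cache_use_stale  updating error timeout;
    # on a cold miss, let only one request through to the backend
    proxy_cache_lock       on;
    proxy_pass             http://backend;
}
```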

Of these two cases, the first is rare and the second common: redis automatically deletes expired entries, so expiry-driven cache misses dominate.

The solution

Because nginx does not know how long the data in redis will live, cache misses are frequent.
Knowing the cause, we can address it: let nginx know, and even participate in managing, the cache TTL. That effectively avoids passive cache misses and allows the cache to be updated proactively.


Here we need to mention a third-party nginx module, lua-nginx-module, which embeds Lua into nginx, extends the configuration file syntax, and lets you write logic in the Lua language directly in the configuration file.
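For example, with this module a location can be handled entirely in Lua; a trivial illustration:

```nginx
# Minimal illustration of embedding Lua in the nginx configuration.
location = /hello {
    content_by_lua_block {
        ngx.say("hello from Lua, requested uri: ", ngx.var.uri)
    }
}
```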


I have open-sourced a library from one of my projects on GitHub. It can:

  1. Let nginx actively check how soon a cached entry expires.

  2. When the entry is about to expire, return the old cached data directly.

  3. Refresh the cache with an asynchronous task.

Points 2 and 3 together mean the cache is updated asynchronously as it nears expiry, so end users never perceive the refresh.
After adopting this caching mechanism in the project, the average response time perceived by users improved tenfold (based on nginx access-log analysis), with the performance of the original system otherwise unchanged. Isn't that a bit incredible?


Here is how the speedup works:

  1. On a cache miss, the srcache_fetch step is skipped and the request goes straight to the backend server. At the same time a shared-memory lock is taken, so concurrent requests for the same resource are not sent to the backend but instead wait for that first request to return.

  2. When the cache is written, a stale margin is added on top of the expiration time, so redis does not delete the data the moment the logical expiry arrives.

  3. If the data is detected as logically expired but redis can still return the “stale” copy, that stale copy is returned to the end user immediately.

  4. At the end of the request, an asynchronous task updates the cache; to cope with concurrency, this too takes a shared-memory lock.
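The four steps above could be sketched in Lua roughly like this. This is my own illustrative sketch under assumptions, not the library's actual API: the shared dict name, 60-second stale margin, and key scheme are all made up, and lua-resty-redis is assumed as the redis client:

```nginx
# Illustrative stale-while-revalidate sketch (not the library's real code).
lua_shared_dict locks 1m;          # shared memory for the update locks

server {
    location / {
        content_by_lua_block {
            local redis = require "resty.redis"   -- lua-resty-redis
            local red = redis:new()
            red:connect("127.0.0.1", 6379)

            local key  = "cache:" .. ngx.var.uri
            local data = red:get(key)
            local ttl  = red:ttl(key)             -- seconds left, incl. stale margin

            if data ~= ngx.null then
                ngx.say(data)                     -- return (possibly stale) data at once
                -- steps 2-4: inside the 60s stale margin, refresh asynchronously;
                -- the shared-dict lock lets only one worker run the update
                if ttl >= 0 and ttl < 60 and
                   ngx.shared.locks:add("lock:" .. key, true, 10) then
                    ngx.timer.at(0, function()
                        -- fetch a fresh copy from the backend, then:
                        -- red:set(key, fresh_body, "EX", real_ttl + 60)
                    end)
                end
                return
            end

            -- step 1: true miss -- go to the backend under a lock (omitted here)
        }
    }
}
```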

This way, even when the cache has expired, users rarely have to wait for data from the backend server, which saves considerable time and greatly improves speed.


This library has been running for several months and is very stable, which of course also owes much to nginx's own stability.
It must be said, though, that after adopting it, most of the requests the backend server receives are sent by the asynchronous update task, while users get their data straight from the cache. The backend's performance has not improved at all, yet everything feels much faster to users!