LastModified: 10:37:39 June 14, 2019
The rate limiter is implemented mainly with Redis + Lua. Lua is used so that multiple commands can be combined into a single atomic operation, which removes most concurrency concerns.
The counter algorithm allows a fixed number of requests to pass within a window of time, e.g., 10 per second or 500 per 30 seconds.
The finer the time granularity, the smoother the rate limiting.
The Lua script used:

```lua
-- Counter rate limiting. The smallest unit of time supported here is the second;
-- millisecond granularity is possible by changing expire to pexpire.
-- KEYS[1] string rate limiting key
-- ARGV[1] int    limit
-- ARGV[2] int    unit time (seconds)
local cnt = tonumber(redis.call("incr", KEYS[1]))
if (cnt == 1) then
    -- cnt == 1 means the key did not exist before, so set its expiration time
    redis.call("expire", KEYS[1], tonumber(ARGV[2]))
elseif (cnt > tonumber(ARGV[1])) then
    return -1
end
return cnt
```
A return value of -1 means the limit has been exceeded; otherwise the number of requests that have passed in the current unit of time is returned.
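To test the counter logic without a Redis server, here is a minimal pure-Python sketch of the same fixed-window semantics (the `FixedWindowCounter` class and its method names are illustrative, not part of the original script):

```python
import time

class FixedWindowCounter:
    """In-memory sketch of the counter script: at most `limit` requests
    per `window` seconds; returns -1 once the limit is exceeded."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.count = 0
        self.expires_at = 0.0  # end of the current window, mirrors the key's TTL

    def acquire(self, now=None):
        now = time.time() if now is None else now
        if now >= self.expires_at:       # key "expired": start a new window
            self.count = 0
            self.expires_at = now + self.window
        self.count += 1                  # INCR
        if self.count > self.limit:      # over the limit
            return -1
        return self.count                # requests used so far in this window

limiter = FixedWindowCounter(limit=3, window=1)
print([limiter.acquire(now=0.5) for _ in range(4)])  # [1, 2, 3, -1]
print(limiter.acquire(now=2.0))                      # new window: 1
```

The `now` parameter stands in for the wall clock so the window boundary behavior is easy to exercise deterministically.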
The key may be composed of (but is not limited to) the following:
- IP + interface
- user_id + interface
- Simple to implement
- When the granularity is not fine enough, up to double the limit can pass within a single window-length span across a window boundary
- Mitigation: keep the time granularity as fine as possible
E.g., limiting to 1000 requests per 3 seconds:
Extreme case 1:
- Second 1: 10 requests
- Second 2: 10 requests
- Second 3: 980 requests
- Second 4: 900 requests
- Second 5: 100 requests
- Second 6: 0 requests
Note that the total number of requests across seconds 3-5 is as high as 1980.
Extreme case 2:
- Second 1: 1000 requests
- Second 2: 0 requests
- Second 3: 0 requests
Here the whole quota is consumed in the first second, so a large number of requests are rejected during the next 2 seconds.
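The boundary problem in extreme case 1 can be checked numerically: each fixed 3-second window stays within the 1000-request limit, yet a 3-second span straddling the window boundary sees nearly double that (numbers taken from the example above):

```python
# Requests per second from extreme case 1 (seconds 1..6)
per_second = [10, 10, 980, 900, 100, 0]

# The fixed 3-second windows each respect the 1000/3s limit...
windows = [sum(per_second[i:i + 3]) for i in (0, 3)]
print(windows)  # [1000, 1000]

# ...but the sliding 3-second span covering seconds 3-5 does not.
print(sum(per_second[2:5]))  # 1980
```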
Token Bucket Mode
- The bucket holds tokens, has a maximum capacity, and starts full.
- Each request consumes tokens (different requests can consume different numbers of tokens).
- At a fixed interval (i.e., at a fixed rate), tokens are added to the bucket.
Token bucket implementations can be further divided into:
- Pre-consumable: a request may borrow against future tokens, leaving a deficit for later requests to absorb
- Non-pre-consumable: when there are not enough tokens, the request is rejected immediately
The non-pre-consumable token bucket is implemented here. Lua code:
```lua
-- Token bucket rate limiting: pre-consumption is not supported; the bucket starts full.
-- KEYS[1] string rate limiting key
-- ARGV[1] int    maximum bucket capacity
-- ARGV[2] int    number of tokens added each time
-- ARGV[3] int    token addition interval (seconds)
-- ARGV[4] int    current timestamp
local bucket_capacity = tonumber(ARGV[1])
local add_token = tonumber(ARGV[2])
local add_interval = tonumber(ARGV[3])
local now = tonumber(ARGV[4])

-- Key that stores the time the bucket was last updated
local LAST_TIME_KEY = KEYS[1].."_time";

-- Number of tokens currently in the bucket
local token_cnt = redis.call("get", KEYS[1])
-- Maximum time needed to refill the bucket completely
local reset_time = math.ceil(bucket_capacity / add_token) * add_interval;

if token_cnt then -- the token bucket exists
    -- Time the bucket was last updated
    local last_time = redis.call('get', LAST_TIME_KEY)
    -- Number of whole refill intervals that have elapsed
    local multiple = math.floor((now - last_time) / add_interval)
    -- Number of tokens recovered in that time
    local recovery_cnt = multiple * add_token
    -- Make sure the bucket capacity is not exceeded
    local token_cnt = math.min(bucket_capacity, token_cnt + recovery_cnt) - 1

    if token_cnt < 0 then
        return -1;
    end

    -- Refresh the expiration time so the keys do not expire mid-use
    redis.call('set', KEYS[1], token_cnt, 'EX', reset_time)
    redis.call('set', LAST_TIME_KEY, last_time + multiple * add_interval, 'EX', reset_time)
    return token_cnt
else -- the token bucket does not exist
    token_cnt = bucket_capacity - 1
    -- Set an expiration time so the keys do not linger forever
    redis.call('set', KEYS[1], token_cnt, 'EX', reset_time);
    redis.call('set', LAST_TIME_KEY, now, 'EX', reset_time + 1);
    return token_cnt
end
```
The key to the token bucket is the following parameters:
- Maximum bucket capacity
- Number of tokens added each time
- The interval at which tokens are added
Unlike the counter mode, the token bucket implementation does not allow double the traffic to pass within a unit of time.
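The refill logic above can be sketched in pure Python for testing without Redis (the `TokenBucket` class is illustrative; it mirrors the Lua script's state and arithmetic, with the two Redis keys replaced by two attributes):

```python
class TokenBucket:
    """Sketch of the non-pre-consumable token bucket: the bucket starts
    full and refills add_token tokens every add_interval seconds,
    capped at capacity."""

    def __init__(self, capacity, add_token, add_interval, now=0):
        self.capacity = capacity
        self.add_token = add_token
        self.add_interval = add_interval
        self.tokens = capacity    # mirrors KEYS[1]: the bucket starts full
        self.last_time = now      # mirrors LAST_TIME_KEY

    def acquire(self, now):
        # Whole refill intervals elapsed since the last update
        multiple = (now - self.last_time) // self.add_interval
        recovered = multiple * self.add_token
        tokens = min(self.capacity, self.tokens + recovered) - 1
        if tokens < 0:
            return -1             # not enough tokens: reject, state unchanged
        self.tokens = tokens
        self.last_time += multiple * self.add_interval
        return tokens

bucket = TokenBucket(capacity=2, add_token=1, add_interval=1, now=0)
print(bucket.acquire(now=0))  # 1  (bucket full with 2 tokens, one consumed)
print(bucket.acquire(now=0))  # 0
print(bucket.acquire(now=0))  # -1 (empty, rejected)
print(bucket.acquire(now=1))  # 0  (one token refilled, then consumed)
```

Note that, as in the Lua script, a rejected request does not advance `last_time`, so refill credit accrues from the last successful update.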