Using the Django REST Framework throttling feature

Time:2022-6-21
catalogue
  • Body start
    • 1. Throttling in DRF
    • 2. Advanced throttling configuration
    • 3. Throttling design analysis
    • 4. Source code analysis
    • 5. Other notes
  • Reference material

    Body start

    Let’s start with the concept of throttling, which appeared first on the front end. A typical scenario: when a user types into a search box, you do not want to call the back-end interface on every keystroke, only once the input pauses. This matters on both sides: it reduces the pressure of front-end requests and rendering, and it reduces the load on the back-end interface. A front-end throttle function looks roughly like this:

    // Front-end throttle example
    function throttle(fn, delay) {
        var timer;
        return function () {
            var _this = this;
            var args = arguments;
            if (timer) {
                return;
            }
            timer = setTimeout(function () {
                fn.apply(_this, args);
                timer = null;
            }, delay)
        }
    }

    Back-end throttling serves a similar purpose, but the implementation is different. Let’s look at how Django REST Framework (DRF) does it.

    1. Throttling in DRF

    Project configuration

    # demo/settings.py
    
    REST_FRAMEWORK = {
        # ...
        'DEFAULT_THROTTLE_CLASSES': (
            'rest_framework.throttling.AnonRateThrottle',
            'rest_framework.throttling.UserRateThrottle',
            'rest_framework.throttling.ScopedRateThrottle',
        ),
        'DEFAULT_THROTTLE_RATES': {
            'anon': '10/day',
            'user': '2/day'
        },
    }
    
    # article/views.py
    
    from rest_framework import viewsets
    from rest_framework.decorators import api_view, throttle_classes
    from rest_framework.throttling import UserRateThrottle
    
    # ViewSet-based throttling
    class ArticleViewSet(viewsets.ModelViewSet, ExceptionMixin):
        """
        API endpoint that allows users to view or edit articles.
        """
        queryset = Article.objects.all()
        # Use the default user throttle
        throttle_classes = (UserRateThrottle,)
        serializer_class = ArticleSerializer
    
    # Function-based view throttling (the view name and body below are illustrative)
    @api_view(['GET'])
    @throttle_classes([UserRateThrottle])
    def article_list(request):
        ...

    Because the user I configured can only make two requests per day, the third request raises a 429 Too Many Requests exception; the exception detail says the next request will be available in 86398 seconds.
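
    For reference, the throttled response looks roughly like the following (the exact wording and remaining wait time depend on the DRF version and when the request is made):

    $ curl -H 'Accept: application/json; indent=4' -u root:root http://127.0.0.1:8000/api/article/1/
    {
        "detail": "Request was throttled. Expected available in 86398 seconds."
    }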

    2. Advanced throttling configuration

    The throttling configuration demonstrated above applies per user. For example, if I switch to another user, that user still has two requests available.

    $ curl -H 'Accept: application/json; indent=4' -u root:root   http://127.0.0.1:8000/api/article/1/ 
    {
        "id": 1,
        "creator": "admin",
        "Tag": "modern poetry",
        "Title": "if",
        "Content": "in this life, I will never think of you again \n except \n in some wet nights because of tears if \n if you like"
    }

    A quick introduction to the three throttle classes:

    • AnonRateThrottle limits how often unauthenticated (anonymous) requests can hit an interface; the client IP identifies the caller
    • UserRateThrottle limits how often an authenticated user can hit an interface
    • ScopedRateThrottle limits access to groups of interfaces, identified by a named scope

    The three classes therefore fit different business scenarios; pick whichever matches your case and configure the rate for the corresponding scope, as in the sketch below.
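
    As a quick sketch of the scoped case (the scope names and views here are illustrative, not part of the project above), ScopedRateThrottle ties each view to a named scope through a throttle_scope attribute:

    # settings.py
    REST_FRAMEWORK = {
        'DEFAULT_THROTTLE_CLASSES': (
            'rest_framework.throttling.ScopedRateThrottle',
        ),
        'DEFAULT_THROTTLE_RATES': {
            'uploads': '20/day',
            'contacts': '1000/day',
        },
    }

    # views.py
    from rest_framework.views import APIView

    class UploadView(APIView):
        # Requests to this view count against the 'uploads' rate
        throttle_scope = 'uploads'

    class ContactListView(APIView):
        # Requests to this view count against the 'contacts' rate
        throttle_scope = 'contacts'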

    3. Throttling design analysis

    Think about it: how would you implement this requirement if you had to code it yourself?

    The feature itself is not hard. The core parameters are time, number of calls, and the scope it applies to. The following example limits the number of times a function can be called.

    from functools import wraps
    
    TOTAL_RATE = 2
    
    FUNC_SCOPE = ['test', 'test1']
    
    
    def rate_count(func):
        func_num = {
            # Note: function names must be unique here
            func.__name__: 0
        }
    
        @wraps(func)
        def wrapper():
            if func.__name__ in FUNC_SCOPE:
                if func_num[func.__name__] >= TOTAL_RATE:
                    raise Exception(f"{func.__name__} function call exceeded the set number of times")
                result = func()
                func_num[func.__name__] += 1
                print(f"Number of function {func.__name__} calls: {func_num[func.__name__]}")
                return result
            else:
                #Functions outside the count limit are not restricted
                return func()
    
        return wrapper
    
    
    @rate_count
    def test1():
        pass
    
    
    @rate_count
    def test2():
        print("test2")
        pass
    
    
    if __name__ == "__main__":
        try:
            test2()
            test2()
            test1()
            test1()
            test1()
        except Exception as e:
            print(e)
        test2()
        test2()
        
    """
    test2
    test2
    Number of function test1 calls: 1
    Number of function test1 calls: 2
    test1 function call exceeded the set number of times
    test2
    test2
    """

    Here the number of calls to each function is tracked, and only the functions listed in FUNC_SCOPE are subject to the limit; an exception is raised once a function exceeds the threshold. What is still missing is the time dimension: the counter never resets.
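
    To add the missing time dimension, a minimal sketch (not DRF's code, but the same idea as the source below) is to record a timestamp per call and discard entries that fall outside a sliding window:

    import time
    from functools import wraps

    WINDOW_SECONDS = 60   # length of the sliding window
    MAX_CALLS = 2         # calls allowed per window


    def rate_limit(func):
        history = []  # timestamps of recent calls to this function

        @wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Drop timestamps that are older than the window
            while history and history[0] <= now - WINDOW_SECONDS:
                history.pop(0)
            if len(history) >= MAX_CALLS:
                raise Exception(f"{func.__name__} exceeded {MAX_CALLS} calls in {WINDOW_SECONDS}s")
            history.append(now)
            return func(*args, **kwargs)

        return wrapper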

    4. Source code analysis

    We have just seen how to limit the number of calls to a function; doing it per request is a little more involved. Here is how DRF implements it in rest_framework/throttling.py:

    class SimpleRateThrottle(BaseThrottle):
       
        # ......
        
        def allow_request(self, request, view):
            """
            Implement the check to see if the request should be throttled.
    
            On success calls `throttle_success`.
            On failure calls `throttle_failure`.
            """
            if self.rate is None:
                return True
    
            self.key = self.get_cache_key(request, view)
            if self.key is None:
                return True
    
            self.history = self.cache.get(self.key, [])
            self.now = self.timer()
    
            # Drop history entries that fall outside the configured duration
            while self.history and self.history[-1] <= self.now - self.duration:
                self.history.pop()
            # Core logic: check whether the number of requests in the window has reached the limit
            if len(self.history) >= self.num_requests:
                return self.throttle_failure()
            return self.throttle_success()
        
        # ......
        
    class UserRateThrottle(SimpleRateThrottle):
        """
        Limits the rate of API calls that may be made by a given user.
    
        The user id will be used as a unique cache key if the user is
        authenticated.  For anonymous requests, the IP address of the request will
        be used.
        """
        scope = 'user'
    
        def get_cache_key(self, request, view):
            if request.user.is_authenticated:
                ident = request.user.pk
            else:
                # For unauthenticated users, fall back to the same identifier AnonRateThrottle uses (the client IP)
                ident = self.get_ident(request)
            # Build the cache key from the configured scope and the identifier
            return self.cache_format % {
                'scope': self.scope,
                'ident': ident
            }

    In summary:

    • The core logic is to look up the caller's request history in the cache and check whether the number of requests within the configured duration exceeds the threshold for that scope.
    • Different throttle classes differ mainly in how the cache key is designed; for anonymous requests the default identifier is the request's REMOTE_ADDR. The key format is shown below.
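
    For reference, the key is built from SimpleRateThrottle.cache_format, which in the DRF source is 'throttle_%(scope)s_%(ident)s', so different throttle classes and different callers never collide (the values below are illustrative):

    # Example keys produced by cache_format % {'scope': ..., 'ident': ...}
    #   UserRateThrottle, user pk=1      -> 'throttle_user_1'
    #   AnonRateThrottle, IP 10.0.0.5    -> 'throttle_anon_10.0.0.5'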

    5. Other notes

    • Because this implementation relies on the cache, note that a multi-instance deployment needs a shared cache backend such as Memcached or Redis (Django's default cache is a per-process in-memory cache).
    • Restarting the cache service may clear the existing counters. If your business logic has hard requirements here, implement the throttling logic yourself.
    • If you use a customized user table, you need to override the get_cache_key logic; see the sketch after this list.
    • If you need to statistically analyse throttled users, you will also have to redesign the throttling logic.
    • Use throttling with caution in production, because it restricts how users can use the product and is not user-friendly.
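
    A minimal sketch of overriding get_cache_key for a customized user table (the employee_id field is purely illustrative):

    from rest_framework.throttling import UserRateThrottle


    class CustomUserRateThrottle(UserRateThrottle):
        scope = 'user'

        def get_cache_key(self, request, view):
            if request.user and request.user.is_authenticated:
                # Assumes the custom user table exposes an employee_id field
                ident = request.user.employee_id
            else:
                # Unauthenticated requests fall back to the client IP
                ident = self.get_ident(request)
            return self.cache_format % {'scope': self.scope, 'ident': ident}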

    Reference material

    DRF throttling documentation
    Django cache framework documentation

    That covers the Django REST Framework throttling feature in detail. For more on DRF throttling, see the other related developeppaer articles.
