Comparison and analysis of redis-cli and Python redis operation syntax



R: redis-cli command
P: Python redis client

Preparation

pip install redis
pool = redis.ConnectionPool(host='', port=6379, db=1)
redis = redis.Redis(connection_pool=pool)
redis.<command> applies to all commands
I omit the redis. prefix in all the Python commands below; where a name conflicts with a Python built-in function, I write the redis. prefix explicitly

Global command

  • Dbsize (return the number of keys)

       R: dbsize
       P: print(redis.dbsize())
  • Exists (whether there is a key)

       R: exists name
       P: exists('name')
  • Keys (list all keys, use wildcards)

       R: keys na*
       P: keys('na*')
       Note: the time complexity is O(n)
  • Scan (the iterative counterpart of keys; retrieves all keys in batches)

       R: scan 0 match '*' count 4
       P: keys_iter = redis.scan_iter(match='*', count=4)
       Note: more scan-style APIs appear below, so I discuss them all together in the concluding remarks
  • Info (view resource information)

       R: info    # you can also pass a section, e.g. info memory, info cpu
       P: redis.info()    # or redis.info('CPU'), redis.info('MEMORY')
  • Type (get the type of a key)

    R: type name
       P: redis.type('name')    # type conflicts with the Python built-in, so I write the redis. prefix here
       The types in redis are: none, string, list, set, zset, hash

Expiration time

  • Expire (set)

       R: expire name seconds
       P: redis.expire('name', seconds)
  • TTL (query)

    R: ttl name     
       P: ttl('name')
       #Returns the remaining time to live in seconds
       #A return value of -2 means the key does not exist
       #A return value of -1 means the key exists but has no expiration set
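To keep those three cases straight, here is a tiny pure-Python helper (describe_ttl is a hypothetical name for this sketch, not part of redis-py) that interprets a TTL reply:

```python
def describe_ttl(t):
    """Interpret the integer reply of TTL (hypothetical helper)."""
    if t == -2:
        return 'key does not exist'
    if t == -1:
        return 'key exists but has no expiration set'
    return f'expires in {t} seconds'

print(describe_ttl(-2))
print(describe_ttl(-1))
print(describe_ttl(30))
```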
  • Persist (remove the expiration)

       R: persist name
       P: persist('name')
  • Increment and decrement

    Incr / incrby add an integer
           R: incr age or incrby age 1
           P: redis.incr('age') or redis.incrby('age', 1)    # Python's incr delegates to incrby, so either works
       Decr / decrby subtract an integer
       Incrbyfloat adds or subtracts a floating-point number

String related operations

  • Set value

    R: set name lin
       P: redis.set('name', 'lin') 
       Set option (atomic operation)
            Nx (set only if the key is absent)
               R: set name lin nx    
               P: redis.set('name', 'lin', nx=True)
                    Note: NX means the set succeeds only if the key does not exist; it is similar to Python dict's setdefault, i.e. it sets a default value for the key
           XX (update value)
               R: set name Tom xx
               P: redis.set('name', 'lin', xx=True)
                   Note: XX means that the value will be updated successfully only if the key exists. If the key does not exist, the update fails.
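To make the NX/XX semantics concrete without a live server, here is a pure-Python sketch that mimics them on a plain dict (set_sim is a hypothetical helper, not a redis-py API):

```python
def set_sim(d, key, value, nx=False, xx=False):
    """Mimic SET ... NX / XX semantics on a plain dict (sketch only)."""
    if nx and key in d:
        return False   # NX: only set when the key does NOT exist
    if xx and key not in d:
        return False   # XX: only set when the key DOES exist
    d[key] = value
    return True

d = {}
print(set_sim(d, 'name', 'lin', nx=True))   # True: key absent, so it is set
print(set_sim(d, 'name', 'tom', nx=True))   # False: key exists, NX refuses
print(set_sim(d, 'age', 18, xx=True))       # False: key absent, XX refuses
print(d)                                    # {'name': 'lin'}
```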
  • Get value

    R: get name
       P: redis.get('name')
       Note: the Python redis client's get returns bytes, so you must convert to the desired type manually
           (as mentioned above, incr, decr, etc. return an int directly rather than bytes)
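For example, converting the raw bytes by hand (redis-py can also be constructed with decode_responses=True to return str automatically; the byte values below are illustrative, not fetched from a server):

```python
raw = b'lin'                 # the kind of value redis.get('name') returns
name = raw.decode('utf-8')   # bytes -> str
age = int(b'18')             # int() accepts ASCII byte strings directly
print(name, age)             # lin 18
```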
  • Mset batch setting

       R: mset name lin age 18
       p: redis.mset( {'name': 'lin', 'age': 18} )
  • Mget batch acquisition

    R: mget name age
       P: redis.mget('name', 'age')    # returns a list of byte-string values
  • GetSet sets the new value and returns the old value

       R: getset name zhang
       P: print( redis.getset('name', 'zhang') )
  • Append string concatenation

       R: append name abc
       P: redis.append('name', 'abc')
  • Strlen get string length

    R: strlen name
       P: print( redis.strlen('name') )
       Note: unlike the usual string APIs of programming languages, strlen returns the byte length of the string in its stored encoding
           a Chinese character in UTF-8 takes 3 bytes
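The character-count vs byte-count difference is easy to verify in plain Python:

```python
s = '中文ab'                      # two Chinese characters plus two ASCII letters
print(len(s))                    # 4 characters
print(len(s.encode('utf-8')))    # 3 + 3 + 1 + 1 = 8 bytes, which is what strlen reports
```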
  • Getrange string slicing (starts at 0, inclusive at both ends)

       R: getrange name 1 2
       P: redis.getrange('name', 1, 2)
  • Setrange string assignment by index (override)

     R: setrange name 0 abc    # starting from position 0, overwrite character by character with abc; the rest stays unchanged
       P: redis.setrange('name', 0, 'abc')
  • Del delete key value

       R: del k1 k2
       P: redis.delete('k1', 'k2')

Hash related operations (like a document with attribute-value fields)

  • Hset sets 1 document and 1 attribute value

       R: hset user name lin
       P: redis.hset('user', 'name', 'lin')
  • Hget gets 1 document and 1 attribute

       R: hget user name
       P: print(redis.hget('user', 'name'))        
  • Hmset sets 1 document with multiple attribute values

       R: hmset user name lin age 18
       P: redis.hmset('user', {'name': 'lin', 'age': 18})    # newer redis-py versions prefer redis.hset('user', mapping={...})
  • Hmget gets 1 document with multiple attribute values

       R: hmget user name age
       P: print(redis.hmget('user', 'name', 'age'))
  • Hkeys gets all the keys

       R: hkeys user
       P: print(redis.hkeys('user'))
  • Hvals gets all the values

       R: hvals user
       P: print(redis.hvals('user'))
  • Hgetall gets a document, all attribute values (use with caution, see the next API)

     R: hgetall user    # returns a flat list: even indexes are keys, odd indexes are values (counting from 0)
       P: print(redis.hgetall('user'))    # returns a dict
       Note: hgetall fetches every key-value pair at once, so a large hash can hurt performance.
           How do we deal with a huge amount of data in Python?
           Iterators, of course. Python's redis module wraps an API for us, hscan_iter; see the next API
  • Hscan (hash iteration, which can be used instead of hgetall)

     R: hscan user 0 match * count 200
           #0 means the cursor starts from the beginning
           #match introduces the pattern
           #* is a wildcard for the key
           #count is the number of items fetched per batch
       P: result_iter = redis.hscan_iter('user', match='na*', count=2)
           #There is no cursor parameter in Python because it is fixed to 0 in the source code; the other parameters are as above
           #The result is an iterable object that can be traversed to get the values
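hscan_iter yields (field, value) pairs as bytes; a common pattern is to rebuild a decoded dict from them. The sample pairs below are illustrative, not fetched from a server:

```python
# a sample of what iterating hscan_iter('user') might yield
pairs = [(b'name', b'lin'), (b'age', b'18')]
user = {field.decode(): value.decode() for field, value in pairs}
print(user)   # {'name': 'lin', 'age': '18'}
```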
  • Hexists detects whether a key exists

     R: hexists user name1    # returns 1 if the field exists, 0 if not
       P: print(redis.hexists('user', 'name'))    # returns True if it exists
  • HLEN counts the total number of all attributes of a document

       R: hlen user
       P: print(redis.hlen('user'))        
  • HDEL delete the specified field

       R: hdel key field
       P: redis.hdel('key', 'field')

List related operations

  • Lpush (push from the left)

       R: lpush list1 1 2 3
       P: redis.lpush('list1', 1,2,3)
  • Rpush (push from the right; same as lpush, omitted)

  • Lpop (pop from the left)

       R: lpop list2
       P: print(redis.lpop('list2'))
  • Rpop (pop from the right; same as lpop, omitted)

  • Blpop (blocking pop from the left; blocks while the list is empty)

     R: blpop list2 1000    # 1000 is the timeout in seconds; blocking is released automatically after 1000 seconds, or as soon as a value arrives
       P: redis.blpop('list2', timeout=1000)
  • Brpop (blocking pop from the right; same as blpop, omitted)

  • Linsert (insert a value before or after a specified pivot value)

     R: linsert list2 before Tom Jerry
       P: redis.linsert('list2', 'after', 'b', 'Tom')    # insert Tom after b; 'after' means after the pivot
  • Lset (assign the value according to the index, pay attention to the index not to cross the boundary)

       R:lset list2 4 zhang
       P: redis.lset('list2', 4, 'zhang')
  • Lindex (get a value by index; the index can be positive or negative)

       R: lindex list2 -3
       P: print(redis.lindex('list2', 3))
  • Llen (get the number of list elements)

       R: llen list2
       P: print(redis.llen('list2'))    
  • Ltrim (Note: slices the original data in place; no value is returned)

     R: ltrim list2 3 10    # keep the list data at indexes 3-10 and delete the rest
       P: print(redis.ltrim('list2', 2, -1))    # indexes are inclusive at both ends, positive or negative
  • Lrem (delete the specified value)

     R: lrem list2 0 Tom
           #The parameter in the 0 position is the number of values to delete
               #0 means delete all, i.e. every Tom value
               #A positive number deletes n occurrences from left to right. E.g. lrem list2 5 Tom deletes 5 Toms from left to right
               #A negative number deletes n occurrences from right to left. E.g. lrem list2 -5 Tom deletes 5 Toms from right to left
       P: print(redis.lrem('list2', -5, 'Tom'))    # same as above
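The three count cases can be mimicked on a Python list; lrem_sim below is a hypothetical sketch, not a redis-py function:

```python
def lrem_sim(lst, count, value):
    """Mimic LREM: count > 0 removes from the left, count < 0 from the
    right, count == 0 removes every occurrence. Returns removals."""
    if count == 0:
        n = lst.count(value)
        lst[:] = [x for x in lst if x != value]
        return n
    order = range(len(lst)) if count > 0 else reversed(range(len(lst)))
    doomed, limit = [], abs(count)
    for i in order:
        if lst[i] == value and len(doomed) < limit:
            doomed.append(i)
    for i in sorted(doomed, reverse=True):
        del lst[i]
    return len(doomed)

data = ['Tom', 'a', 'Tom', 'b', 'Tom']
lrem_sim(data, -2, 'Tom')   # delete two 'Tom's from the right
print(data)                 # ['Tom', 'a', 'b']
```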
  • Lrange (traversal; positive and negative indexes, inclusive at both ends)

       R: lrange list1 0 -1 
       P: print(redis.lrange('list2', 0, -1))

Set related operations

  • Sadd (insert element)

       R: sadd set1 1 2 3
       P: redis.sadd('set1', *[1,2,3])
  • SREM (delete element with specified value)

       R: srem set1 Tom
       P: redis.srem('set1', 'Tom')
  • Scard (get the number of elements in the collection)

       R: scard set1
       P: redis.scard('set1')
  • Sismember (to determine whether an element is in a collection)

       R: sismember set1 Tom
       P: redis.sismember('set1', 'Tom')
  • Srandmember (randomly extract the specified number of elements in the collection)

    "Py" random.choices Notice the s "“
       R: Srandmember Set1 2 ා take two elements from the set randomly
       P: redis.srandmember('set1', 2)
  • Smembers (take out all elements in the collection)

    R: smembers set1
       P: redis.smembers('set1')
       Note: the same as hgetall, if it is taken out at one time, it may cause problems, so it needs to be retrieved iteratively. See sscan below
  • Sscan (cursor / iteration fetch all elements of the collection)

    R: sscan set1 0 match * count 200
       P: result_iter = redis.sscan_iter('set1', match='*', count=200)    # traverse the iterator to get the values
  • Sdiff (difference set)

       R: sdiff sset1 sset2
       P: print(redis.sdiff('sset1', 'sset2'))
  • Sinter (intersection)

       R: sinter sset1 sset2
       P: print(redis.sinter('sset1', 'sset2'))
  • Sunion (Union)

       R: sunion sset1 sset2
       P: print(redis.sunion('sset1', 'sset2'))

Zset ordered set related operations

  • Zadd (ordered insertion)

     R: zadd zset 100 Tom 90 Jerry    # 100 is the score and Tom is the member; note that in redis-cli the score comes first and the member second
       P: redis.zadd('zset', {'Tom': 100, 'Jerry': 90})    # note that the py syntax uses the score as the dictionary value
       Note special attention:
            zadd's default behavior: for an existing member with a different score, the member's score is updated
            E.g.: insert Tom again, this time with score 50 (zadd zset 50 Tom), and Tom's score is updated to 50
        This brings up two extra parameters (remember set's NX and XX? yes, zadd has them too)
        Nx: (add only if it doesn't exist; if it already exists, the update/add fails)
           R: zadd zset nx 1000 Tom            
           P: redis.zadd('zset',{'Tom': 1000}, nx=True)    
               If the value of Tom exists before, the 1000 will not be updated
               If it does not exist, a new one will be created and the 1000 will be set successfully
        Xx: (update only if it exists; if it doesn't exist, the update/add fails)
           R: zadd zset xx 1000 Tom            
            P: redis.zadd('zset', {'Tom': 1000}, xx=True)    
               If the value of Tom exists before, 1000 will be updated successfully
                If it doesn't exist, e.g. {'Zhangsan': 500} where Zhangsan is absent, then with XX he won't be added, let alone updated
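Again, the semantics can be sketched on a plain {member: score} dict (zadd_sim is a hypothetical helper, not a redis-py API):

```python
def zadd_sim(zset, mapping, nx=False, xx=False):
    """Mimic ZADD NX/XX on a {member: score} dict. Returns members added."""
    added = 0
    for member, score in mapping.items():
        if nx and member in zset:
            continue   # NX: never touch an existing member
        if xx and member not in zset:
            continue   # XX: never create a new member
        if member not in zset:
            added += 1
        zset[member] = score
    return added

z = {'Tom': 100, 'Jerry': 90}
zadd_sim(z, {'Tom': 1000}, nx=True)    # Tom exists -> score unchanged
print(z['Tom'])                        # 100
zadd_sim(z, {'Tom': 1000}, xx=True)    # Tom exists -> score updated
print(z['Tom'])                        # 1000
zadd_sim(z, {'Zhang': 500}, xx=True)   # absent + XX -> not added
print('Zhang' in z)                    # False
```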
  • Zrange (traversal)

    R: zrange zset 0 -1
       P: print(redis.zrange('zset', 0, -1))    # the return value is a list
       Withscores parameter (return the scores as well)
           R: zrange zset 0 -1 withscores    # note: in the reply, odd positions are values and even positions are scores
           P: print(redis.zrange('zset', 0, -1, withscores=True))    # returns a list of tuples, [(value, score)]
  • Zrevrange (descending traversal)

     This API has three extra letters, "rev": think of the familiar word reverse, like Python's built-in reversed. That's all it means
       The operation is the same as that of zrange
  • Zrangebyscore (traversal by score)

     R: zrangebyscore zset 40 99 limit 1 3    # find the data with scores within 40-99, starting from the first item and returning 3 items
           #40-99 is a closed interval at both ends. To make it open, write (40 (99
       P: print(redis.zrangebyscore('zset', 40, 99, start=1, num=3))
  • Zrevrangebyscore

    The operation is the same as that of zrangebyscore
       This API design is questionable; one command plus a reverse parameter would have been nicer. A gripe!!!
  • Zrem (delete a value)

     R: zrem zset Tom    # delete the member Tom
       P: print(redis.zrem('zset','Tom'))
  • Zremrangebyscore (delete members within a score range)

     R: zremrangebyscore zset 70 90
       P: redis.zremrangebyscore('zset', 70, 90)
  • Zremrangebyrank (delete members within an index range)

     R: zremrangebyrank zset 0 -1    # delete all members (indexes 0 to -1 mean everything!)
       P: redis.zremrangebyrank('zset', 0, -1)    # redis's API naming is really something... Python just keeps the same names
  • Zcard (get the number of all elements of an ordered set)

       R: zcard zset
       P: print(redis.zcard('zset'))
  • Zcount (count the elements of an ordered set within a score range)

     R: zcount zset 10 69    # also a closed interval by default (can be changed to open)
       P: print(redis.zcount('zset', 50, 69))
  • Zrank (get the index of an element)

     R: zrank zset Jerry    # no need to guess, indexes start at 0
       P: print(redis.zrank('zset', 'Jerry'))
  • Zrevrank (getting the index of an element in reverse order)

     Gets the index counting in reverse order; e.g. the last element has index 0
       The specific operation is the same as that of zrank
  • Zscore (get the score of an element)

       R: zscore zset Jerry
       P: print(redis.zscore('zset', 'Jerry'))
  • Zscan (iterate and return all elements and their scores)

            Eh? Does this look familiar?
            The scan, hscan, and sscan above and this zscan are all alike; they exist to handle large data sets iteratively
            The Python redis library wraps each of them in a simplified function ending in _iter, e.g. hscan_iter()
            These _iter functions do not require us to pass a cursor parameter. Why?
                1. Because Python has the generator/iterator mechanism! (the _iter functions are implemented with yield in the source)
                2. Cursors are hard to manage by hand
        R: zscan zset 0 match * count 5 
        P: zset_iter = redis.zscan_iter('zset', match='*', count=5)    # likewise returns an iterable object
        Two more notes:
            Match parameter:  
                Filters the queried data (in fact, once filtered the result set is small and scan is hardly needed; this parameter matters mainly for hscan and friends)
                So the match parameter can be omitted; match='*' and not passing it have the same effect
            Count parameter: 
                The py source describes count as a hint for the minimum number of returns
                i.e. one iteration fetches "at least count" items, but either way all the data is retrieved in the end!!
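Here is a rough sketch of how such an _iter wrapper works: a generator hides the cursor loop, yielding keys until the server reports cursor 0. FakeConn is a minimal in-memory stand-in for a connection, invented for this demo; the real redis-py source differs in detail:

```python
def scan_iter_sketch(conn, match=None, count=None):
    """Generator that hides the SCAN cursor loop (simplified sketch)."""
    cursor = None
    while cursor != 0:
        cursor, keys = conn.scan(cursor=cursor or 0, match=match, count=count)
        yield from keys

class FakeConn:
    """Minimal stand-in for a Redis connection (demo only)."""
    def __init__(self, keys, page=2):
        self.keys, self.page = keys, page
    def scan(self, cursor=0, match=None, count=None):
        batch = self.keys[cursor:cursor + self.page]
        nxt = cursor + self.page
        # Redis signals "iteration complete" with cursor 0
        return (0 if nxt >= len(self.keys) else nxt), batch

print(list(scan_iter_sketch(FakeConn(['a', 'b', 'c', 'd', 'e']))))
```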
  • Zpopmax (pop the highest-scored data pair; added in redis 5.+)

     R: zpopmax zset1 2    # 2 means pop the two key:score pairs with the highest scores; if omitted, only one pair is popped
       P: data = redis.zpopmax('zset1', count=None)    # same behavior as above
       Zpopmax is roughly equivalent to the following two commands combined: 
           data = redis.zrange('zset1', -1, -1)
           zrem('zset1', data)
       Note: whether or not count is given, py returns a list of (key, score) tuples.
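That zrange + zrem equivalence can be sketched on a {member: score} dict (zpopmax_sim is a hypothetical helper; a real server does this atomically):

```python
def zpopmax_sim(zset, count=1):
    """Pop the highest-scored members, like ZPOPMAX (sketch)."""
    out = []
    for _ in range(min(count, len(zset))):
        member = max(zset, key=zset.get)        # like: zrange zset -1 -1
        out.append((member, zset.pop(member)))  # pop = the zrem step
    return out

z = {'Tom': 100, 'Jerry': 90, 'Lin': 80}
print(zpopmax_sim(z, 2))   # [('Tom', 100), ('Jerry', 90)]
print(z)                   # {'Lin': 80}
```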
  • Zpopmin (pop the lowest-scored data pair; added in redis 5.+)

     The usage is the same as zpopmax
       Zpopmin is equivalent to the following two commands combined:
           data = redis.zrange('zset1', 0, 0)    # this is what changes: the default order is ascending, so the minimum starts at item 0
           zrem('zset1', data)
           Zpopmax and zpopmin are only available in redis 5.+.
           The zrange + zrem equivalent was mentioned above as well
           Clearly, a multi-command sequence has become one atomic operation.
           I think redis added these two commands to solve resource-competition problems!

Two persistence methods of redis

  • Generating RDB files (three methods)

     The RDB mechanism triggers the generation of an RDB file and writes redis data into it in binary form. There are three trigger methods:
        RDB basic configuration:
            vi /etc/redis/redis.conf
                dbfilename dump.rdb      # configure the RDB file name
                dir /var/lib/redis       # configure the RDB file directory (use ll to check whether dump.rdb has the latest timestamp)
                appendonly no            # if yes, the AOF file is preferred for recovery and RDB is not used
        With the configuration above, any of the three methods below generates the RDB file automatically, and redis restores from it on startup
  • Trigger mode 1: save (blocking)

       R: save
  • Trigger mode 2: bgsave (open fork process, asynchronous, non blocking)

       R: bgsave
       P: redis.bgsave()
  • Trigger mode 3: automatically and dynamically generate RDB file (configuration file)

     In addition to the basic RDB configuration above, add the following:
        vi /etc/redis/redis.conf
            save 100 10    # if 10 keys change within 100 seconds, an RDB file is generated automatically
  • Disadvantages of RDB

     Dumping a large data set takes time, writing the RDB file costs IO, and data since the last snapshot is lost on an uncontrolled crash
  • Generating AOF files (three methods)

    "AOF mechanism means that every command is executed, it will be recorded in the buffer. When it is refreshed to the AOF file according to a certain policy, there are three kinds of policies"
       Aof basic configuration:
           vi /etc/redis/redis.conf
               Please turn on the "appendonly yes" switch
               appendfilename " appendonly.aof "AOF file name
               Dir / var / lib / redis # AOF file directory (same as RDB)
  • Refresh strategy 1: always

     Always means every command entering the buffer is immediately flushed and appended to the AOF file (safe and reliable, but IO-heavy)
  • Refresh policy 2: everysec (default)

     Everysec means the buffered commands are flushed and appended to the AOF file once per second
                If redis goes down within that second, the data is lost... (1 second is uncontrollable)
  • Refresh strategy 3: no

     No means the flush timing is left entirely to the operating system (completely uncontrollable)
  • Aof rewriting mechanism (two methods, asynchronous)

  • Rewrite the cleaning process:

     As you can see, commands keep accumulating in the AOF file, and some of them are redundant:
            1. Key value overwritten: 
                    set name tom
                    set name jerry
            2. Keys that have since expired
            3. Multiple inserts (replaceable by a single command)
        These useless commands bloat the AOF file.
        The AOF rewrite strategy optimizes them away, shrinking the file and improving recovery speed.
  • Principle of rewriting (search data + personal understanding)

     1. Fork a child process to create a new AOF file; its task is to re-derive the current redis data according to the
            "rewrite cleaning process" above and record it in this new AOF file
        2. Meanwhile the main process keeps accepting user requests and modifications normally. (At this point the child's AOF may not match the database contents; read on)
        3. In fact, at the moment of the fork, a memory area A (called the rewrite buffer) is opened to record the user's new requests in parallel
        4. After the child finishes rewriting, the commands recorded in area A are appended to the new AOF (similar to resuming a copy from a breakpoint)
        5. The new AOF replaces the old AOF
        An analogy (for steps 2, 3, 4):
            You give me a task; while I'm working on it you pile on more tasks, which I certainly can't handle at the same time,
            so you keep a list of them, and when I'm finished we reconcile the list.
  • Rewriting method 1: bgrewriteaof

       R: bgrewriteaof
       P: redis.bgrewriteaof()
  • Rewriting method 2: automatic rewriting of configuration file

     In addition to the basic AOF configuration above, add the following:
        vi /etc/redis/redis.conf
            appendfsync everysec                # one of the three policies above: always / everysec / no
            auto-aof-rewrite-min-size 64mb
            auto-aof-rewrite-percentage 100     # 100 is the growth rate; a rewrite triggers when the file is 100% larger than last time, i.e. double
            no-appendfsync-on-rewrite yes       # yes means do NOT flush the "rewrite buffer" contents to disk during a rewrite
            Note this last parameter:
                This is the memory area A (rewrite buffer) from step 3 of the "rewrite principle" above
                If the rewrite buffer is not flushed and persisted to disk, its data is lost on a crash.
                How much is lost? Reportedly up to 30 seconds of data on Linux.
                If set to no, the rewrite buffer is flushed and persisted to disk just like the original AOF.
                But think about it: if the rewrite buffer and the original AOF both do persistent flushes,
                    they compete for IO and performance drops sharply; in extreme cases they may block.
                So it is a choice between performance (set to yes) and data integrity/safety (set to no).


This article compared redis-cli syntax with Python's redis operations in detail!!
Python's redis API is also very interesting; the function names almost completely mirror native redis!!

In the syntax part, the most memorable things are redis's "scan family" functions and Python's "scan_iter family" functions:
    Each of the data structures above has its own "traverse all data" operation, mentioned in its section
    But with a large amount of data those traversal functions fall apart and may cause OOM (out of memory) and similar problems
    For this, redis provides the "scan family" functions; of course, these functions need cursor control.
    Cursors are a headache, so Python had a humane idea:
        wrap the "scan family" as the "scan_iter family", letting us skip the cursor bookkeeping and program happily!
    Here is the whole family, with the corresponding original traversal functions:
        Original   redis    Python
        keys       scan     scan_iter
        hgetall    hscan    hscan_iter
        smembers   sscan    sscan_iter
        zrange     zscan    zscan_iter
        Following this correspondence, I noticed something:
            Why does list's lrange have no corresponding lscan?
        Feeling silly, I went online to check, and found that a foreign friend had the same question as me...
        "Instead, you should use lrange to iterate the list"
            Following the pattern had fixed my thinking; I forgot that lrange itself can iterate by index: lrange list 0 n
        At that point it occurred to me: isn't zrange the same syntax as lrange?
        Then why does zrange get its own zscan while list doesn't?
            (After checking the list's underlying implementation, I didn't want to keep reading...)
    The scan and _iter family functions are written up in their respective data-structure chapters, with a detailed analysis in the "zscan" section
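That "iterate with lrange" advice amounts to fetching the list in fixed-size index windows; sketched on a plain Python list (lrange_iter_sim is a hypothetical helper, with Python's half-open slices standing in for lrange's inclusive indexes):

```python
def lrange_iter_sim(lst, batch=2):
    """Yield a list's elements in lrange-sized windows instead of one big fetch."""
    start = 0
    while True:
        chunk = lst[start:start + batch]   # like: lrange key start stop
        if not chunk:
            break
        yield from chunk
        start += batch

print(list(lrange_iter_sim([1, 2, 3, 4, 5], batch=2)))   # [1, 2, 3, 4, 5]
```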