Spring Boot integrates Redis to implement distributed locks

Time: 2022-04-27

Redis FAQ

  • Cache penetration: a large number of requests query data that does not exist; because the null result is never cached, every request goes straight to the database and the database load spikes.

    • Solution: cache the null query result in Redis (see the sketch after this list)
  • Cache avalanche: a large number of cached keys expire at the same time, so all of their requests hit the database at once.

    • Solution: add a random offset when setting each key's expiration time
  • Cache breakdown: a large number of concurrent requests hit the same hot key just as it expires.

    • Solution: add a distributed lock so that only one request rebuilds the cache
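
A minimal sketch of the first two mitigations (caching null results and randomizing expiration), assuming a StringRedisTemplate bean; the service class, key prefix and queryFromDb method are illustrative:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class ItemCacheService {

    @Autowired
    StringRedisTemplate redisTemplate;

    public String getById(String id) {
        String key = "item:" + id;
        String cached = redisTemplate.opsForValue().get(key);
        if (cached != null) {
            // An empty string marks a cached null, so repeated lookups of a
            // missing record no longer hit the database (cache penetration)
            return cached.isEmpty() ? null : cached;
        }
        String value = queryFromDb(id); // hypothetical database lookup
        // Random offset on the TTL so keys written together do not all
        // expire at the same moment (cache avalanche)
        long ttlMinutes = 60 + ThreadLocalRandom.current().nextInt(30);
        redisTemplate.opsForValue()
                .set(key, value == null ? "" : value, ttlMinutes, TimeUnit.MINUTES);
        return value;
    }

    private String queryFromDb(String id) {
        return null; // placeholder for the real database query
    }
}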

1. Native mode

Reference documentation: https://github.com/redisson/redisson/wiki/Table-of-Content

1. Import dependency

<!-- Native Redisson -->
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.11.0</version>
</dependency>

<!-- For operating RedisTemplate -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

2. Create configuration

/**
 * All use of Redisson goes through the RedissonClient object
 * @return
 * @throws IOException
 */
@Bean(destroyMethod="shutdown")
public RedissonClient redisson() throws IOException {
    //Create the configuration
    Config config = new Config();
    //"rediss://" (instead of "redis://") enables an SSL connection; useSingleServer() means single-node mode
    config.useSingleServer().setAddress("redis://127.0.0.1:6379");
    //Create the RedissonClient instance from the config
    return Redisson.create(config);
}
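
For completeness, a sketch of the bean above inside a full configuration class with its imports (the class name MyRedissonConfig is illustrative):

import java.io.IOException;

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyRedissonConfig {

    @Bean(destroyMethod = "shutdown")
    public RedissonClient redisson() throws IOException {
        Config config = new Config();
        // Single-node mode; switch the scheme to "rediss://" for an SSL connection
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        return Redisson.create(config);
    }
}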

3. Test whether the RedissonClient object is created

@Autowired
RedissonClient redisson;

@Test
public void test(){
    System.out.println(redisson);
}

If a RedissonClient instance is printed (not null), the test passes.

4. Test distributed locks

Note: to avoid deadlock, all of Redisson's locks are designed as reentrant locks.

4.1. Reentrant lock (avoiding deadlock)

@Autowired
RedissonClient redisson;

@ResponseBody
@RequestMapping("/hello")
public String hello(){
    //1. Get a lock; as long as the name is the same, it is the same lock
    RLock lock = redisson.getLock("my-lock");
    //2. Lock it; the default lease time is 30s
    lock.lock(); // blocking wait
    //1) The lock is renewed automatically: if the business runs long, the watchdog keeps adding a fresh 30s lease,
    //   so there is no need to worry about the lock expiring in the middle of the business
    //2) Once the locked business finishes, the lock is no longer renewed; even without a manual unlock,
    //   it is deleted automatically after the default 30s

    /**
     * lock.lock(10, TimeUnit.SECONDS); // auto-unlock after 10 seconds; the lease time must be longer than the business
     * Problem: if a lease time is specified, the lock is NOT renewed automatically
     * 1. If we pass a lease time, Redisson sends a script to Redis to acquire the lock with exactly that expiration
     * 2. If we do not specify a lease time, the watchdog default of 30 * 1000 ms (lockWatchdogTimeout) is used;
     *    once the lock is acquired, a scheduled task resets the expiration back to the watchdog default,
     *    renewing roughly every 10 seconds
     */
    //Best practice
    //lock.lock(30, TimeUnit.SECONDS); // specify a lease time and unlock manually
    try {
        System.out.println("Lock acquired, executing business... " + Thread.currentThread().getId());
        Thread.sleep(30000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        System.out.println("Releasing lock... " + Thread.currentThread().getId());
        lock.unlock();
    }
    return "hello";
}
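
A variant on the best practice noted in the comments is tryLock, which bounds both the waiting time and the lease time. A sketch of an extra controller method (the /tryLock mapping is illustrative):

@ResponseBody
@RequestMapping("/tryLock")
public String tryLockDemo() throws InterruptedException {
    RLock lock = redisson.getLock("my-lock");
    // Wait up to 5 seconds for the lock and hold it for at most 30 seconds;
    // with an explicit lease time the watchdog does not renew the lock,
    // so the lease must be longer than the business operation
    boolean locked = lock.tryLock(5, 30, TimeUnit.SECONDS);
    if (!locked) {
        return "busy"; // could not acquire the lock within the wait time
    }
    try {
        // business logic
        return "locked";
    } finally {
        lock.unlock();
    }
}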

4.2. Read-write lock

@Autowired
RedissonClient redisson;

@Autowired
RedisTemplate redisTemplate;

//The write lock guarantees that readers always see the latest data. While data is being modified, the write lock
//is an exclusive lock (mutex); the read lock is a shared lock.
//As long as the write lock is not released, readers must wait.
//Write + read  (read while writing):  wait for the write lock to be released
//Write + write (write while writing): blocking
//Read + write  (write while reading): wait for the read lock to be released
//Read + read:  equivalent to no lock; the concurrent read locks are all recorded in Redis and all succeed at once
//Conclusion: as soon as a write is involved, it must wait for the previous lock to be released.
@ResponseBody
@RequestMapping("/write")
public String writeLock(){
    RReadWriteLock lock = redisson.getReadWriteLock("rw-lock");
    String s = "";
    RLock rLock = lock.writeLock();
    try {
        //Modifying data takes the write lock; reading data takes the read lock

        rLock.lock();
        s = UUID.randomUUID().toString();
        redisTemplate.opsForValue().set("writerValue",s);
        Thread.sleep(3000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        rLock.unlock();
    }
    return s;
}

@ResponseBody
@RequestMapping("/read")
public String readLock(){
    RReadWriteLock lock = redisson.getReadWriteLock("rw-lock");
    RLock rLock = lock.readLock();
    String s = "";
    try {
        //Read lock
        rLock.lock();
        s = redisTemplate.opsForValue().get("writerValue").toString();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        rLock.unlock();
    }
    return s;
}

4.3. Countdown latch (CountDownLatch)

/**
 * Closing the door and going home after work:
 * 1. A department leaves only when it is empty
 * 2. The door can be locked only after all five departments have left
 * @return
 * @throws InterruptedException
 */
@ResponseBody
@RequestMapping("/lockDoor")
public String lockDoor() throws InterruptedException {
    RCountDownLatch door = redisson.getCountDownLatch("door");
    door.trySetCount(5);
    door.await(); // wait until the count reaches zero
    return "Off duty, door locked...";
}

@ResponseBody
@RequestMapping("/gogogo/{id}")
public String gogogo(@PathVariable("id") String id){
    RCountDownLatch door = redisson.getCountDownLatch("door");
    door.countDown(); // decrease the count by 1
    return "Department " + id + " has left";
}

4.4. Semaphore

Note: semaphores can also be used for distributed rate limiting

/**
 * Garage parking (semaphore)
 * 3 parking spaces
 * Semaphores can also be used for distributed rate limiting
 * @return
 */
@ResponseBody
@RequestMapping("/park")
public String park() throws InterruptedException {
    RSemaphore park = redisson.getSemaphore("park");
    //park.acquire(); // acquire one permit, i.e. occupy one parking space (blocking)
    boolean b = park.tryAcquire(); // non-blocking: returns immediately whether or not a permit was obtained
    if (b){
        //execute business
    }
    return "ok";
}

@ResponseBody
@RequestMapping("/gogo")
public String gogo(){
    RSemaphore park = redisson.getSemaphore("park");
    park.release(); // release one permit, i.e. a car drives away and frees a space
    return "ok";
}
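
The park/gogo pair assumes the semaphore already holds permits. A minimal sketch of initializing the three parking spaces with trySetPermits, which only takes effect if the semaphore has not been set yet (the /initPark mapping is illustrative):

@ResponseBody
@RequestMapping("/initPark")
public String initPark() {
    RSemaphore park = redisson.getSemaphore("park");
    // Sets the number of permits only if the semaphore has not been initialized yet
    boolean initialized = park.trySetPermits(3);
    return initialized ? "3 parking spaces created" : "already initialized";
}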

4.5. Solve the cache consistency problem

There are two common approaches:

  • Double write mode: after modifying the database, update the value in the cache directly
  • Failure mode: after modifying the database, delete the value in the cache and let the next query reload it

Solving dirty data in double write mode:

  • Add a write lock around concurrent write operations
  • If the business tolerates temporary inconsistency, the problem can be ignored: the stale entry expires and is re-cached on the next query

Solving dirty data in failure mode (a sketch follows this list):

  • Add a write lock
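
A minimal sketch of the failure mode guarded by a write lock, assuming a hypothetical updateProductInDb method and reusing the redisson and redisTemplate beans shown earlier:

public void updateProduct(String id, String newValue) {
    RReadWriteLock rwLock = redisson.getReadWriteLock("product-lock");
    RLock writeLock = rwLock.writeLock();
    writeLock.lock();
    try {
        updateProductInDb(id, newValue);       // hypothetical database update
        redisTemplate.delete("product:" + id); // failure mode: drop the cached value
    } finally {
        writeLock.unlock();
    }
    // The next read misses the cache, reloads from the database and re-caches the value
}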

Either mode can still leave the cache temporarily inconsistent. What should we do?

  • If the data is relatively stable (for example order data and user data), concurrent writes are rare and cache inconsistency does not need to be considered: just add an expiration time to the cached entry and it will be refreshed on the next query.
  • For basic data such as menus and product introductions, you can use Canal to subscribe to the MySQL binlog and update the cache from it.
  • Caching data with an expiration time already meets the caching requirements of most businesses.
  • Guarantee consistent concurrent reads and writes with locking, such as a read-write lock. If the business does not involve core data, this problem can be ignored.

Summary (best practices)

  • For data that requires high real-time accuracy and consistency (heavily read and written), query the database directly.
  • When caching data, add an expiration time so that the data obtained is eventually refreshed to the latest value.
  • When reading and writing data, add a distributed read-write lock (except for data with frequent write operations).
  • Do not over-design; it only increases the complexity of the system.