AOP and Redis cache implementation

Time: 2021-2-9

1. Implementing a cache service with AOP

1.1 Business requirements

1) Define a custom annotation @CacheFind(key = "xxx", seconds = -1).
2) Mark business methods with the annotation and save each method's return value to the cache.
3) Use AOP to intercept the annotation and implement the caching logic with around advice.

1.2 Custom annotation @CacheFind

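The screenshot of the annotation did not survive extraction. Based on the attributes the aspect below reads (key() and seconds()), the annotation likely looks like this sketch (in the project it lives in package com.jt.anno):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.METHOD)         // the annotation marks business methods
@Retention(RetentionPolicy.RUNTIME) // must be visible to AOP at runtime
public @interface CacheFind {
    String key();                   // cache key prefix, e.g. "ITEM_CAT"
    int seconds() default -1;       // timeout in seconds; -1 means no expiry
}
```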

1.3 Annotation identification

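The identification screenshot is also missing. In the project, @CacheFind is simply placed on a service query method (for example @CacheFind(key = "ITEM_CAT", seconds = 90) - the key name here is a hypothetical illustration). The aspect then derives the full cache key from the annotation plus the method arguments, which can be sketched standalone:

```java
import java.util.Arrays;

public class CacheKeyDemo {
    /** Builds the cache key the same way the RedisAOP aspect below does. */
    static String buildKey(String prefix, Object[] args) {
        return prefix + "::" + Arrays.toString(args);
    }

    public static void main(String[] args) {
        // a call like findItemCatByParentId(163L) with key = "ITEM_CAT"
        System.out.println(buildKey("ITEM_CAT", new Object[]{163L})); // ITEM_CAT::[163]
    }
}
```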

1.4 Editing the AOP aspect

package com.jt.aop;

import com.jt.anno.CacheFind;
import com.jt.util.ObjectMapperUtil;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import redis.clients.jedis.Jedis;

import java.lang.reflect.Method;
import java.util.Arrays;

/*@Service
@Controller
@Repository*/
@Component // hands this class over to the Spring container for management
@Aspect    // marks this class as an aspect
public class RedisAOP {

    @Autowired
    private Jedis jedis;

    /*
     * AOP cache service implementation:
     * 1. Intercept the specified annotation
     * 2. Use around advice
     * Implementation steps:
     * 1. Build the key: get the annotation first, then read the key from it
     * 2. Check whether Redis already holds a value for the key
     *
     * Knowledge point:
     * Binding the annotation by parameter name ("@annotation(cacheFind)")
     * makes Spring inject the intercepted annotation at runtime;
     * the join point must be the first parameter of the advice method.
     */
    @Around("@annotation(cacheFind)")
    public Object around(ProceedingJoinPoint joinPoint, CacheFind cacheFind){
        Object result = null;
        //1. Build the key: business name + "::" + arguments
        String key = cacheFind.key();
        String args = Arrays.toString(joinPoint.getArgs());
        key = key + "::" + args;

        //2. Check whether there is a value
        if(jedis.exists(key)){
            String json = jedis.get(key);
            MethodSignature methodSignature = (MethodSignature) joinPoint.getSignature();
            Class returnType = methodSignature.getReturnType();

            result = ObjectMapperUtil.toObj(json,returnType);
            System.out.println("AOP query Redis cache");

        }else{
            //There is no data in redis, so you need to query the database and save the data in the cache
            try {
                result = joinPoint.proceed();
                String json = ObjectMapperUtil.toJSON(result);
                //decide whether to set a timeout
                if(cacheFind.seconds()>0){
                    jedis.setex(key, cacheFind.seconds(), json);
                }else{
                    jedis.set(key,json);
                }
                System.out.println("AOP query database");
            } catch (Throwable throwable) {
                throwable.printStackTrace();
            }
        }

        return result;
    }


    /* Alternative way to obtain the key: reflect on the target class to
     * find the intercepted method, then read the annotation from it.
     *
     *         //1. Get the target class
     *         Class targetClass = joinPoint.getTarget().getClass();
     *         //2. Get the method object (method name + parameter types)
     *         String methodName = joinPoint.getSignature().getName();
     *         Object[] args = joinPoint.getArgs();
     *         Class[] classArgs = new Class[args.length];
     *         for(int i=0;i<args.length;i++){
     *             classArgs[i] = args[i].getClass();
     *         }
     *         try {
     *             //3. Reflectively look up the method and its annotation
     *             Method method = targetClass.getMethod(methodName,classArgs);
     *             CacheFind cacheFind = method.getAnnotation(CacheFind.class);
     *             String key = cacheFind.key();
     *             System.out.println(key);
     *         } catch (NoSuchMethodException e) {
     *             e.printStackTrace();
     *         }
     */


    //Formula: AOP = pointcut expression + advice method
    //@Pointcut("bean(itemCatServiceImpl)")
    //@Pointcut("within(com.jt.service.*)")
    //@Pointcut("execution(* com.jt.service.*.*(..))")   // .* matches only the first-level sub-package of the given package
   /* @Pointcut("execution(* com.jt.service..*.*(..))")  // ..* matches all sub-packages of the given package
    public void pointCut(){

    }*/

    //How to get the related parameters of the target object?
    //ProceedingJoinPoint is only supported for around advice
   /* @Before("pointCut()")
    public void before(JoinPoint joinPoint){ // the join point
        Object target = joinPoint.getTarget();
        Object[] args = joinPoint.getArgs();
        String className = joinPoint.getSignature().getDeclaringTypeName();
        String methodName = joinPoint.getSignature().getName();
        System.out.println("target object: " + target);
        System.out.println("method arguments: " + Arrays.toString(args));
        System.out.println("class name: " + className);
        System.out.println("method name: " + methodName);
    }*/
}
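The ObjectMapperUtil helper used above is not shown in this post; a minimal sketch of what it likely wraps, assuming Jackson's ObjectMapper (the project places it in package com.jt.util; the unchecked-exception wrapping is an assumption):

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ObjectMapperUtil {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // serialize any object to its JSON representation
    public static String toJSON(Object obj) {
        try {
            return MAPPER.writeValueAsString(obj);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    // deserialize JSON back into the method's declared return type
    public static <T> T toObj(String json, Class<T> targetClass) {
        try {
            return MAPPER.readValue(json, targetClass);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }
}
```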

2. General Redis properties

2.1 Persistence in Redis – RDB (Redis Database)

2.1.1 Requirement description

Note: Redis keeps all of its data in memory, but memory is wiped on power loss. To make sure the cached data in Redis is not lost, the in-memory data must be persisted regularly.
Persistence: writing in-memory data to disk.

2.1.2 RDB mode

Characteristics:
1. RDB is Redis's default persistence mode
2. RDB records a snapshot of the Redis memory (only the latest snapshot is kept)
3. RDB persists periodically (the interval is configurable), so data may be lost
4. RDB has the highest backup efficiency
5. An RDB backup is blocking: other users may not operate during the backup, which keeps the data consistent
Commands:
1. save performs a foreground backup and blocks user operations
2. bgsave performs the backup asynchronously in the background and does not block

2.1.3 Persistence configuration

Open the redis.conf file:

cd /usr/local/src/redis/    # enter the redis directory
vim redis.conf              # open the redis.conf file
:set nu                     # display line numbers
:/save                      # search for "save"

save 900 1      # if at least 1 update happened within 900 seconds, persist once
save 300 10     # if at least 10 updates happened within 300 seconds, persist once
save 60 10000   # if at least 10000 updates happened within 60 seconds, persist once
save 1 1        # persist after every update within 1 second - very low performance!


2.1.4 Persistence file name setting

By default, the persistence file is named dump.rdb.


2.1.5 File storage directory

dir ./ means the current directory (the one Redis was started from); it is clearer to write this as an absolute path.


2.2 Persistence in Redis – AOF (Append Only File)

2.2.1 Characteristics of AOF

1) AOF mode is off by default and must be turned on manually.
2) AOF mode records the user's operations, so it can persist in near real time and ensure that data is not lost.
3) The persistence files AOF maintains occupy a lot of space, so persistence efficiency is lower and the files need periodic maintenance.
4) Once AOF mode is turned on, Redis reads its data mainly from the AOF file.

2.2.2 AOF configuration

1) Turn on AOF mode

In redis.conf, change appendonly from no to yes:

:/appendonly


2) Persistence policy

always:   persist after every single update by the user
everysec: persist once per second (more efficient)
no:       no active persistence; left to the operating system; almost never used
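Taken together, a typical redis.conf fragment for enabling AOF with the per-second policy looks like this (only the relevant lines, not the whole file):

```
appendonly yes          # turn AOF mode on (default: no)
appendfsync everysec    # persist once per second
# appendfsync always    # persist after every write
# appendfsync no        # leave flushing to the OS
```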

2.3 Redis interview questions

2.3.1 About the flushall operation

Business scenario:
Xiaoli is an intern at a company, and you are her project director. Because Xiaoli is not familiar with the business, she accidentally ran flushall in the production environment. How can the data be recovered?

Scenario 1:
The Redis service only has the default persistence policy, RDB mode, enabled.

Solution:
1. Shut down the running Redis server.
2. Check whether the RDB file has been overwritten; if the file is intact, restart Redis.
3. If a save was executed after the flushall command, the RDB file has been overwritten and RDB recovery is no longer possible.

cd  /usr/local/src/redis/shards/
redis-cli  -p  6379  shutdown
vim  dump.rdb

Scenario 2:
The Redis service has AOF mode enabled.

Solution:
1. Shut down the Redis server.
2. Edit the AOF persistence file, delete the flushall command, save and exit.
3. Restart the Redis server.

cd  /usr/local/src/redis/shards/
redis-cli  -p  6379  shutdown
vim  appendonly.aof

In practice, RDB mode and AOF mode are usually both enabled; an RDB snapshot can still be triggered manually with the save command.

2.3.2 Why is single-threaded Redis fast?

1) Redis runs entirely in memory: every operation is a pure memory operation.
2) Single-threaded operation avoids the overhead of frequent context switching.
3) It adopts non-blocking I/O: an I/O multiplexing mechanism (rather than blocking BIO/NIO per connection) that dynamically senses which connections are ready.
4) The latest Redis version is 6.0. Before 6.0, Redis ran in single-threaded mode; since 6.0 it also supports multi-threaded operation.

2.4 Redis memory optimization strategies

2.4.1 Business scenario

If Redis is used heavily and data keeps being added without ever being deleted, memory will eventually overflow. Can the memory strategy be optimized?
Can unused data be deleted automatically so that only hot data is kept in Redis?

2.4.2 LRU algorithm

LRU is the abbreviation of Least Recently Used, a commonly used page replacement algorithm: it evicts the page (data) that has gone unused for the longest time. The algorithm gives every page an access field that records the time t elapsed since the page was last accessed; when a page must be evicted, the page with the largest t, i.e. the least recently used page, is chosen.
Dimension of the calculation: the time t since the last access.

Note: LRU is considered the best general-purpose memory optimization algorithm.
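The eviction rule can be sketched in a few lines of Java: a LinkedHashMap in access order keeps entries sorted by time since last use, so the eldest entry is exactly the least recently used one (a textbook illustration of the algorithm, not how Redis implements it - Redis uses an approximate, sampled LRU):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: evicts the entry with the largest
// "time since last use" once capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // true = order entries by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");           // touch "a" so "b" becomes least recently used
        cache.put("c", "3");      // evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```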

2.4.3 LFU algorithm

LFU stands for Least Frequently Used, a page replacement algorithm that evicts the page with the smallest reference count, on the assumption that frequently used pages accumulate large counts. However, some pages are used heavily at the beginning and then never again, and such pages would otherwise stay in memory for a long time. Therefore the reference-count register is periodically shifted right by one bit, yielding an exponentially decaying average use count.
Dimension: the number of references.
Common sense: on a computer, a left shift multiplies a value, and a right shift divides it.
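The "shift right to divide" remark is exactly the decay step: periodically halving every page's reference count makes old popularity fade exponentially (a toy illustration of the idea, not Redis code):

```java
public class LfuDecayDemo {
    public static void main(String[] args) {
        int refCount = 40;            // a page that was hot in the past
        refCount >>= 1;               // periodic right shift: 40 / 2 = 20
        refCount >>= 1;               // after two decay periods: 20 / 2 = 10
        System.out.println(refCount); // the count decays exponentially
    }
}
```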

2.4.4 Random algorithm

Deletes data at random.

2.4.5 TTL algorithm

Note: sorts keys by their remaining time to live and deletes, ahead of time, the data that would expire soonest.

2.4.6 Redis default memory optimization strategy

By default, Redis uses periodic deletion + lazy deletion.

Explanation:

1. Periodic deletion policy: by default, Redis checks every 100 ms whether there are expired keys, sampling keys at random (not all of them, since that would be too slow).

Problem: because the amount of data is large, an expired key may never be sampled, so data past its timeout may not be deleted immediately.

2. Lazy deletion policy: when a user fetches a key, Redis first checks whether the data has passed its timeout; if it has, the data is deleted.

Problem: since users cannot possibly touch every key in memory, some expired data inevitably stays in memory and occupies resources.

3. The memory optimization policies below can be used to actively free memory.

Memory optimization policy description:

volatile-lru      apply the LRU algorithm to keys that have a timeout set
allkeys-lru       apply the LRU algorithm to all keys
volatile-lfu      apply the LFU algorithm to keys that have a timeout set
allkeys-lfu       apply the LFU algorithm to all keys
volatile-random   evict at random among keys that have a timeout set
allkeys-random    evict at random among all keys
volatile-ttl      evict keys with a timeout set, shortest remaining TTL first
noeviction        do not evict actively; return an error when memory overflows
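In redis.conf, one of these policies is selected with the maxmemory-policy directive, together with a memory cap, for example:

```
maxmemory 512mb                # memory cap that triggers eviction
maxmemory-policy volatile-lru  # apply LRU among keys that have a timeout set
```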


3. Redis sharding mechanism

3.1 Business requirements

Note: a single Redis instance has limited capacity. When massive amounts of cache data must be stored, a single Redis no longer suffices; a sharding mechanism can be used to scale the data out.

3.2 Implementation of the Redis sharding mechanism

The purpose of sharding is to expand the available Redis memory.

3.2.1 Setup strategy

Prepare three Redis instances: 6379 / 6380 / 6381.

3.2.2 Prepare the file directory


3.2.3 Copy the configuration files

Note: put the Redis configuration file into the shards directory.

Copy it and change the port number in each copy to 6380 and 6381 respectively.

Start the three Redis instances:
redis-server 6379.conf
redis-server 6380.conf
redis-server 6381.conf

Verify that the servers are running:

3.2.4 Getting started with Redis sharding

package com.jt.test;

import org.junit.jupiter.api.Test;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;

import java.util.ArrayList;
import java.util.List;

public class TestRedisShards {

    @Test
    public void testShards(){
        List<JedisShardInfo> shards = new ArrayList<>();
        shards.add(new JedisShardInfo("192.168.126.129",6379));
        shards.add(new JedisShardInfo("192.168.126.129",6380));
        shards.add(new JedisShardInfo("192.168.126.129",6381));
        ShardedJedis shardedJedis = new ShardedJedis(shards);
        //The three instances behave as one Redis with three times the memory capacity
        shardedJedis.set("shards", "redis shard test");
        System.out.println(shardedJedis.get("shards"));
    }
}

3.3 Consistent hash algorithm

3.3.1 Algorithm introduction

The consistent hash algorithm was proposed at MIT in 1997. It is a special hash algorithm designed to solve distributed caching problems: when a server is removed or added, it changes the existing mapping between requests and the servers that handle them as little as possible. Consistent hashing solves the dynamic scaling problems that simple hashing suffers from in distributed hash tables (DHTs).

3.3.2 Algorithm description

Common sense:

  1. If the data is the same, the hash result must be the same.
  2. A common hash value is an 8-digit hexadecimal number, giving 16^8 = 2^32 possible values.

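These facts are enough to sketch the ring: hash every node onto the 0..2^32-1 circle, and serve a key from the first node clockwise of the key's hash. A minimal standalone sketch (the MD5-based ring position is an illustration; ShardedJedis uses its own hash function plus many virtual nodes per shard):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash sketch: nodes sit on a 2^32 ring, and a key is
// served by the first node clockwise from the key's hash position.
public class ConsistentHashDemo {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    void addNode(String node) {
        ring.put(hash(node), node);
    }

    String nodeFor(String key) {
        // first node at or after the key's position; wrap around to the start
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    // first 4 bytes of MD5, read as an unsigned 32-bit ring position
    static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            return ((d[0] & 0xFFL) << 24) | ((d[1] & 0xFFL) << 16) | ((d[2] & 0xFFL) << 8) | (d[3] & 0xFFL);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ConsistentHashDemo ch = new ConsistentHashDemo();
        ch.addNode("192.168.126.129:6379");
        ch.addNode("192.168.126.129:6380");
        ch.addNode("192.168.126.129:6381");
        // the same key always maps to the same node
        System.out.println(ch.nodeFor("shards"));
    }
}
```

Adding a fourth node only remaps the keys that fall between the new node and its ring predecessor, which is exactly the monotonicity property described below.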

3.3.3 The four properties of consistent hashing

Balance:
Balance means that hash results can be distributed to all buffers as much as possible, so that all buffer spaces can be used. Many hash algorithms can satisfy this condition.

Monotonicity:
Monotonicity means that if some content has already been hashed to a buffer and new buffers are then added to the system, the hash result should guarantee that previously allocated content is mapped either to its original buffer or to one of the new buffers, but never to a different buffer in the old buffer set. In other words, when the buffer set grows, consistent hashing protects already-allocated content from being remapped arbitrarily.

Spread:
In a distributed environment, the terminal may not see all the buffers, but only a part of them. When the terminal wants to map the content to the buffer through the hash process, different terminals may see different buffer ranges, resulting in inconsistent hash results. The final result is that the same content is mapped to different buffers by different terminals. Obviously, this situation should be avoided, because it causes the same content to be stored in different buffers and reduces the efficiency of system storage. Dispersion is defined as the severity of the situation. A good hash algorithm should be able to avoid inconsistencies as much as possible, that is, to minimize the dispersion.

Load:
The load problem is actually looking at the problem of decentralization from another perspective. Since different terminals may map the same content to different buffers, a specific buffer may also be mapped to different content by different users. As with decentralization, this situation should be avoided, so a good hash algorithm should be able to reduce the load of buffer as much as possible.