Springboot notes (III) redis

Time:2022-5-25

Redis database

**Soul-searching question:** didn't we already learn MySQL for storing data? Why learn yet another database?

Previously we learned MySQL, a traditional relational database that lets us manage and organize our data well. In a small web application, MySQL plus MyBatis's built-in cache can handle most of the storage work. But MySQL's drawback is obvious: its data always lives on disk. For content that rarely changes, such as user profile information, MySQL storage is perfectly fine. For data that is updated rapidly or read extremely often, however, think of trending topics on Weibo or a Double 11 flash sale, the server must respond quickly and withstand millions or even tens of millions of requests in a short window. MySQL's disk I/O read-write performance cannot meet those demands; only memory can, because it is far faster than disk I/O.

Therefore, we need a better solution for storing this special kind of data, one that makes up for MySQL's shortcomings and stands up to the many tests of the big-data era.

Introduction to NoSQL

NoSQL stands for Not Only SQL. NoSQL databases are non-relational; compared with a traditional relational SQL database, a NoSQL database:

  • Does not guarantee the ACID properties of relational data
  • Does not follow the SQL standard
  • Drops the relationships between data

At first glance this sounds strictly worse than MySQL, doesn't it? Now look at its advantages:

  • Far surpassing the performance of traditional relational database
  • Very easy to expand
  • More flexible data model
  • High availability

Put this way, NoSQL's strengths are obvious at once: it is exactly the solution for high concurrency and massive data that we are looking for!

NoSQL databases are divided into the following types:

  • **Key-value databases:** all data is stored as key-value pairs, similar to the HashMap we learned earlier. Very simple and convenient to use, with very high performance.
  • **Column stores:** usually used for massive data in distributed storage. Keys still exist, but each key points to multiple columns.
  • **Document databases:** store data in a specific document format such as JSON. For complex data like web pages, a document database queries more efficiently than a plain key-value store.
  • **Graph databases:** store data in graph-like structures and combine them with graph algorithms for high-speed access.

Redis, the database we are about to learn, is an open-source key-value database that keeps all data in memory, so its performance is far beyond anything bound by disk I/O. It also supports data persistence, horizontal scaling, master-slave replication, and more.

In real production, Redis and MySQL are usually used together so that each plays to its own strengths and compensates for the other's weaknesses.

Redis installation and deployment

We will install the Redis server on Windows here, although the official recommendation is to run it on a Linux server; we will do that once we have learned Linux later. Since there is no official Windows build, we need a community-maintained one:

  • Official website address: https://redis.io
  • GitHub Windows version maintenance address: https://github.com/tporadowski/redis/releases

basic operation

Before using MySQL we must create a table, define its columns, and then add rows with `insert` statements. Redis has no such rigid table structure: it is a key-value database, so we add data as key-value pairs, much like putting entries into a HashMap.

In Redis, databases are identified by an integer index rather than a name. By default we land in database 0 after connecting. The total number of databases can be changed in the Redis configuration file; the default is 16.

We can switch with the `select` command:

select <index>

Data operation

Let’s see how to add data to redis database:

set <key> <value>
--set several key-value pairs at once
mset [<key> <value>]...

All values are stored as strings by default. Keys usually follow a naming convention so we can quickly tell which part of the application a piece of data belongs to, for example a user's data:

--use colons to separate segments; the following stores the name attribute of user <user id>, with value lbw
set user:info:<user id>:name lbw

We can read a stored value back through its key:

get <key>

Redis does more than just read and write values: it also supports setting an expiration time on data:

set <key> <value> EX <seconds>
set <key> <value> PX <milliseconds>

When the specified time is reached, the data is deleted automatically. We can also set an expiration time separately on an existing key:

expire <key> <seconds>

Query the remaining time to live of a key with:

ttl <key>
--in milliseconds
pttl <key>
--remove the expiration and make the key permanent
persist <key>
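To build some intuition for expiration, here is a tiny in-memory sketch in plain Java. The `ExpiringMap` class is invented for illustration and is not how Redis really does it (Redis combines lazy deletion on access with periodic scans), but the observable behaviour is similar:

```java
import java.util.HashMap;
import java.util.Map;

public class ExpiringMap {
    private static class Entry {
        final String value;
        final long expireAt; // epoch millis at which the entry dies
        Entry(String value, long expireAt) { this.value = value; this.expireAt = expireAt; }
    }

    private final Map<String, Entry> store = new HashMap<>();

    // like: set <key> <value> PX <ttlMillis>, relative to the supplied clock
    public void setPx(String key, String value, long ttlMillis, long now) {
        store.put(key, new Entry(value, now + ttlMillis));
    }

    // like: get <key> — lazily deletes the entry if it has expired
    public String get(String key, long now) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (now >= e.expireAt) { store.remove(key); return null; }
        return e.value;
    }

    public static void main(String[] args) {
        ExpiringMap map = new ExpiringMap();
        map.setPx("token", "abc", 1000, 0);          // expires at t = 1000 ms
        System.out.println(map.get("token", 500));   // abc (still alive)
        System.out.println(map.get("token", 1500));  // null (expired and removed)
    }
}
```

Passing the clock in explicitly keeps the example deterministic instead of sleeping.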

What if we want to delete the data directly? Just use:

del <key>...

The delete command accepts multiple keys and deletes them all at once.

To list all keys in the database:

keys *

You can also query whether a key exists:

exists <key>...

You can also take a key at random:

randomkey

We can move content from one database to another:

move <key> <database index>

Change one key to another:

Rename < key > < new name >
--The following will check whether the new name already exists
Rename < key > < new name >

If the stored data is a number, we can also increase and decrease it automatically:

--equivalent to a = a + 1
incr <key>
--equivalent to a = a + b
incrby <key> b
--equivalent to a = a - 1
decr <key>

Finally, check the data type of the value:

type <key>

Redis supports several data types beyond plain strings, and they map nicely onto collections we already know from Java.

Data type introduction

In addition to storing a value of string type, a key value pair also supports a variety of common data types.

Hash

This type is essentially a map nested inside the key space, i.e. a HashMap inside a HashMap. In Java terms:

//Redis stores plain strings by default, similar to:
Map<String, String> hash = new HashMap<>();
//a hash value stored in Redis is more like:
Map<String, Map<String, String>> hash = new HashMap<>();

It is well suited to storing object-like data: since the value is itself a map, we can put an object's fields and values into that map, one hash per object.
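As a rough analogy in plain Java (no Redis involved; the key name and user fields below are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class HashTypeDemo {
    public static void main(String[] args) {
        // outer map: key -> hash; inner map: field -> value (strings only, no further nesting)
        Map<String, Map<String, String>> db = new HashMap<>();

        // like: hset user:info:10001 name lbw age 19
        Map<String, String> user = new HashMap<>();
        user.put("name", "lbw");
        user.put("age", "19");
        db.put("user:info:10001", user);

        // like: hget user:info:10001 name
        System.out.println(db.get("user:info:10001").get("name"));
        // like: hlen user:info:10001
        System.out.println(db.get("user:info:10001").size());
    }
}
```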

We can add a hash type of data like this:

hset <key> [<field> <value>]...

We can get it directly:

hget <key> <field>
--get all fields and values at once
hgetall <key>

Similarly, we can also judge whether a field exists:

hexists <key> <field>

Delete a field from a hash:

hdel <key> <field>...

Notice that operating on a hash mostly means taking the normal command and prefixing it with `h`; the key-value pairs inside the hash are then manipulated in the same way. We won't list every command; let's look at a few special ones.

Now we want to know how many key value pairs are stored in the hash:

hlen <key>

We can also get the values of all fields at one time:

hvals <key>

The only thing to note is that only string values can be stored in hash, and nesting is not allowed.

List

Next, the list type. As the name says, it stores a sequence of strings; it supports index-based access and efficient operations at both ends, much like a LinkedList in Java.

We can directly add data to an existing or nonexistent list. If it does not exist, it will be created automatically:

--Add an element to the list header
lpush <key> <element>...
--Add an element to the end of the list
rpush <key> <element>...
--insert an element before/after the specified pivot element
linsert <key> before|after <pivot> <element>

Similarly, getting elements is very simple:

--get the element at an index
lindex <key> <index>
--get and remove the head element
lpop <key>
--get and remove the tail element
rpop <key>
--get the elements in the specified range
lrange <key> start stop

Note that indexes can be negative, counting from the tail of the list (Python users, this should look familiar):

--Get all elements in list a
lrange a 0 -1
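The negative-index rule is easy to mimic in Java. This small sketch (the `resolve` helper is hypothetical, just to show the arithmetic) maps `lrange a 0 -1` onto ordinary list indexes:

```java
import java.util.List;

public class NegativeIndexDemo {
    // translate a Redis-style index: negative values count from the tail (-1 = last element)
    static int resolve(int index, int size) {
        return index < 0 ? size + index : index;
    }

    public static void main(String[] args) {
        List<String> list = List.of("111", "222", "333");
        // lrange list 0 -1  ->  from index 0 through resolve(-1) = size - 1 = 2
        int start = resolve(0, list.size());
        int stop = resolve(-1, list.size());
        System.out.println(list.subList(start, stop + 1)); // [111, 222, 333]
    }
}
```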

Surprisingly, push and pop can even be combined in one command:

--pop an element from the tail of the source list, push it onto the head of the destination list, and return it
rpoplpush <source> <destination>

It also supports blocking operations, similar to producers and consumers. For example, we want to wait for data in the list before performing pop operations:

--if the list is empty, wait up to the timeout (in seconds) for data; pop as soon as an element arrives, or give up when the timeout expires. Several lists can be waited on at once: whichever receives an element first is popped
blpop <key>... timeout

Set and sortedset

A set is like Java's HashSet (recall from JavaSE that a HashSet is backed by a HashMap whose values are a fixed dummy object and whose keys are the elements). It forbids duplicate elements and has no index-based access, but the hash table gives it very fast lookups.

Add one or more values to the set:

sadd <key> <value>...

To see how many values are in the set set:

scard <key>

Judge whether the set contains:

--Whether to include the specified value
sismember <key> <value>
--List all values
smembers <key>

Operations between sets:

--difference of two sets
sdiff <key1> <key2>
--intersection of two sets
sinter <key1> <key2>
--union of two sets
sunion <key1> <key2>
--store the difference into a destination set
sdiffstore <destination> <key1> <key2>
--same, for the intersection
sinterstore <destination> <key1> <key2>
--same, for the union
sunionstore <destination> <key1> <key2>
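These set operations correspond directly to Java's bulk Set methods, which makes a quick offline sketch possible (TreeSet is used only so the printed order is deterministic):

```java
import java.util.Set;
import java.util.TreeSet;

public class SetOpsDemo {
    public static void main(String[] args) {
        Set<String> a = Set.of("1", "2", "3");
        Set<String> b = Set.of("2", "3", "4");

        // like: sdiff a b — elements in a but not in b
        Set<String> diff = new TreeSet<>(a);
        diff.removeAll(b);

        // like: sinter a b — elements in both
        Set<String> inter = new TreeSet<>(a);
        inter.retainAll(b);

        // like: sunion a b — elements in either
        Set<String> union = new TreeSet<>(a);
        union.addAll(b);

        System.out.println(diff);   // [1]
        System.out.println(inter);  // [2, 3]
        System.out.println(union);  // [1, 2, 3, 4]
    }
}
```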

Move a value to another set:

smove <source> <destination> <value>

Remove operation:

--remove and return a random member
spop <key>
--remove the specified value(s)
srem <key> <value>...

What if we need the members kept in an order we define? That is what the sorted set (zset) is for: every member carries a score, and the score determines the member's position, so the set stays ordered.

We can add members together with their scores:

zadd <key> [<score> <member>]...

Similarly:

--count the members
zcard <key>
--remove members
zrem <key> <member>...
--get the members in an index range
zrange <key> start stop

Since all values have a score, we can also obtain it according to the score segment:

--list members within a score range
zrangebyscore <key> min max [withscores] [limit <offset> <count>]
--count the members within a score range
zcount <key> min max
--get the rank of a member, ordered by score
zrank <key> <member>
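Conceptually, a sorted set is just a collection of members kept ordered by their scores. A minimal Java sketch (the `Member` record and player names are invented for illustration; internally Redis uses more efficient structures such as a skip list):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortedSetDemo {
    record Member(String value, double score) {}

    public static void main(String[] args) {
        // like: zadd ranking 100 playerA 300 playerB 200 playerC
        List<Member> zset = new ArrayList<>(List.of(
                new Member("playerA", 100),
                new Member("playerB", 300),
                new Member("playerC", 200)));
        zset.sort(Comparator.comparingDouble(Member::score)); // keep ordered by score

        // like: zrange ranking 0 -1 — members in ascending score order
        zset.forEach(m -> System.out.println(m.value()));

        // like: zrank ranking playerC — 0-based position in the ordering
        System.out.println(zset.indexOf(new Member("playerC", 200)));
    }
}
```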


Data types such as bitmap, HyperLogLog, and geospatial are not covered here; look them up if you are interested.


Persistence

We know Redis keeps its data in memory. That is very fast, but it has a serious weakness: if the power suddenly fails, all our data is gone, unlike data on disk, which survives a power cut.

At this time, we need persistence. We need to back up our data to the hard disk to prevent data loss caused by power failure or machine failure.

There are two ways to implement persistence. One is to save a snapshot of the currently stored data, essentially copying what is in memory onto disk, so that recovery is just reading it back. The other is to log every write operation we perform; to recover, we replay the whole log from the start, which reproduces exactly the state the database was in.

RDB

RDB is what we call the first solution, so how to save data locally? We can use the command:

save
--note: save blocks the server while it writes, which takes time; bgsave forks a child process to save in the background instead
bgsave

After it runs, a dump.rdb file appears in the server directory, holding everything that was in memory. When the server restarts, it automatically loads the file back into the corresponding databases. Once the save is done, we can shut the server down:

shutdown

After restart, you can see that the data still exists.


This method is convenient, but since the data is copied in full every time, a large database makes each save expensive. So we can save automatically at intervals instead. Also, if we mostly read and rarely write, saving only occasionally is enough: the data barely changes, and frequent saves would just rewrite the same bytes.

We can set automatic saving in the configuration file and set how much data to write in a period of time to perform a saving operation:

save 300 10      # at least 10 writes within 300 seconds (5 minutes) triggers a save
save 60 10000    # at least 10000 writes within 60 seconds (1 minute) triggers a save

The configured save uses bgsave to execute in the background.

AOF

RDB solves persistence well, but its drawbacks are obvious: it must serialize the entire database every time, background saving costs extra memory, and worst of all, it is not real-time. If the server crashes before the next automatic save triggers, the writes since the last save are still lost.

AOF (append-only file) is the other approach: every command we execute is appended to a log. When the server restarts, it replays all the commands in order to rebuild the data, which solves the real-time problem nicely.


But how often should the log be flushed to disk? We can configure the policy ourselves; there are three:

  • always: flush after every write operation, so nothing is ever lost
  • everysec: flush once per second (the default), so at most one second of data can be lost
  • no: let the operating system decide when to flush

You can configure in the configuration file:

#note: this must be changed to yes to enable AOF
appendonly yes

# appendfsync always
appendfsync everysec
# appendfsync no

After restarting the server, you will find a new appendonly.aof file in the server directory; it stores the commands we executed.

The drawbacks of AOF are also obvious: every startup must replay the whole log, which takes longer than loading an RDB file, and as operations accumulate, the AOF file can eventually grow huge. We need a way to mitigate these problems.

Redis has an AOF rewriting mechanism for optimization. For example, we execute the following statement:

lpush test 666
lpush test 777
lpush test 888

In fact, it can also be realized with one statement:

lpush test 666 777 888

Exactly. As long as replaying the rewritten log produces the same final state as the original commands, the statements can be transformed freely, so multiple commands can be compressed this way.
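The compression idea can be sketched in a few lines of Java. This toy `rewrite` function is hypothetical and much simpler than Redis's real rewrite (which regenerates commands from the current dataset rather than transforming the log), but it shows why the replayed result stays the same:

```java
import java.util.ArrayList;
import java.util.List;

public class AofRewriteDemo {
    // merge consecutive lpush/rpush commands on the same key into a single command;
    // replaying the merged command yields the same list as replaying the originals
    static List<String> rewrite(List<String> log) {
        List<String> out = new ArrayList<>();
        for (String cmd : log) {
            String[] parts = cmd.split(" ", 3); // [command, key, arguments]
            if (!out.isEmpty()) {
                String[] prev = out.get(out.size() - 1).split(" ", 3);
                if (prev[0].equals(parts[0]) && prev[1].equals(parts[1])
                        && (parts[0].equals("lpush") || parts[0].equals("rpush"))) {
                    // same command and key as the previous entry: append the arguments
                    out.set(out.size() - 1, out.get(out.size() - 1) + " " + parts[2]);
                    continue;
                }
            }
            out.add(cmd);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> log = List.of("lpush test 666", "lpush test 777", "lpush test 888");
        System.out.println(rewrite(log)); // [lpush test 666 777 888]
    }
}
```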

We can enter a command to manually perform the rewrite operation:

bgrewriteaof

Or configure automatic override in the configuration file:

#rewrite when the file has grown by this percentage since the last rewrite
auto-aof-rewrite-percentage 100
#When this size is reached, automatic rewriting is triggered
auto-aof-rewrite-min-size 64mb

So far, we have completed the introduction of two persistence schemes. Finally, let’s make a summary:

  • AOF:
    • Advantages: writes are fast and cheap, and storage is close to real time
    • Disadvantages: loading is slow and the file grows large
  • RDB:
    • Advantages: loading is fast and the file is compact
    • Disadvantages: saving is slow and resource-hungry, and recent data can be lost

Transaction and lock mechanism

Like mysql, redis also has a transaction mechanism. When we need to ensure that multiple commands are executed completely at one time without interference from other commands, we can use the transaction mechanism.

We can use the command to start the transaction directly:

multi

When we have entered all the commands to be executed, we can use the command to execute the transaction immediately:

exec

We can also cancel the transaction halfway:

discard

In fact, the whole transaction is a command queue. Unlike MySQL, where you can read intermediate results inside a transaction, Redis queues all the commands up front without executing them, and then executes the batch in one go when the transaction is committed.

lock

Locks again; the concept is familiar by now. In Redis, multiple clients may compete for the same data at the same time: if two commands both want to modify the value of a, a locking mechanism is what guarantees that only one of them operates at a time.

Redis does have a locking mechanism, but it is an optimistic lock, unlike MySQL, whose locks are pessimistic. So what are optimistic and pessimistic locks?

  • Pessimistic lock: assumes others will contend for the resource, so it blocks all outside access until the lock is released; strongly exclusive.
  • Optimistic lock: assumes nobody will contend, operates on the data directly, and only verifies during the operation whether someone else modified it.
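Java's atomic classes work in exactly this optimistic style: read the value, compute, then write only if nobody changed it in between, retrying on conflict. That makes them a good mental model for Redis's watch mechanism (this is plain Java, not Redis code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticLockDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger balance = new AtomicInteger(0);

        Runnable addOne = () -> {
            for (int i = 0; i < 1000; i++) {
                int seen;
                do {
                    seen = balance.get();                         // "watch" the current value
                } while (!balance.compareAndSet(seen, seen + 1)); // write only if unchanged; retry on conflict
            }
        };

        Thread t1 = new Thread(addOne);
        Thread t2 = new Thread(addOne);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(balance.get()); // 2000: no update was lost despite contention
    }
}
```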

In Redis, we can use watch to monitor a key; if the watched key is modified before the transaction executes, the transaction is aborted:

watch <key>...

We can open two clients for testing.

To cancel monitoring, you can use:

unwatch

So far, the basic content of redis has been explained. In the later stage of spring cloud, we will also explain the knowledge related to clusters, including master-slave replication, sentinel mode, etc.


Interact with redis using java

Now that we know how to operate redis database through the command window, how can we use java to operate it?

Here we use the Jedis client library, which lets Java talk to a Redis database. Add the dependency:

<dependencies>
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>4.0.0</version>
    </dependency>
</dependencies>

basic operation

Let’s see how to connect to the redis database. It’s very simple. Just create an object:

public static void main(String[] args) {
    //create a Jedis object
    Jedis jedis = new Jedis("localhost", 6379);

    //close the connection when done
    jedis.close();
}

Through the jedis object, we can directly call the method with the same name of the command to execute the redis command, such as:

public static void main(String[] args) {
    //try-with-resources closes the connection automatically
    try(Jedis jedis = new Jedis("192.168.10.3", 6379)){
        jedis.set("test", "lbwnb");             // equivalent to: set test lbwnb
        System.out.println(jedis.get("test"));  // equivalent to: get test
    }
}

The same is true for hash type data:

public static void main(String[] args) {
    try(Jedis jedis = new Jedis("192.168.10.3", 6379)){
        jedis.hset("hhh", "name", "sxc");   // equivalent to: hset hhh name sxc
        jedis.hset("hhh", "age", "19");     // equivalent to: hset hhh age 19
        jedis.hgetAll("hhh").forEach((k, v) -> System.out.println(k + ": " + v));
    }
}

Let’s move on to the list operation:

public static void main(String[] args) {
    try(Jedis jedis = new Jedis("192.168.10.3", 6379)){
        jedis.lpush("mylist", "111", "222", "333");  // equivalent to: lpush mylist 111 222 333
        jedis.lrange("mylist", 0, -1)
                .forEach(System.out::println);       // equivalent to: lrange mylist 0 -1
    }
}

In fact, we only need to call the method with the same name according to the corresponding operation. Jedis has helped us complete all the type encapsulation.

Spring boot integrates redis

Next, let's see how a Spring Boot project integrates Redis. All it takes is a starter; note that under the hood it uses Lettuce rather than Jedis:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

The default configuration provided by starter will connect to the local redis server and use database 0. Of course, you can also modify it manually:

spring:
  redis:
    # Redis server address
    host: 192.168.10.3
    # port
    port: 6379
    # which database index to use
    database: 0

Starter has provided us with two default template classes:

@Configuration(
    proxyBeanMethods = false
)
@ConditionalOnClass({RedisOperations.class})
@EnableConfigurationProperties({RedisProperties.class})
@Import({LettuceConnectionConfiguration.class, JedisConnectionConfiguration.class})
public class RedisAutoConfiguration {
    public RedisAutoConfiguration() {
    }

    @Bean
    @ConditionalOnMissingBean(
        name = {"redisTemplate"}
    )
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<Object, Object> template = new RedisTemplate();
        template.setConnectionFactory(redisConnectionFactory);
        return template;
    }

    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) {
        return new StringRedisTemplate(redisConnectionFactory);
    }
}

So how do we use these template classes? We can inject a StringRedisTemplate directly:

@SpringBootTest
class SpringBootTestApplicationTests {

    @Autowired
    StringRedisTemplate template;

    @Test
    void contextLoads() {
        ValueOperations<String, String> operations = template.opsForValue();
        operations.set("c", "xxxxx");               // set a value
        System.out.println(operations.get("c"));    // get the value

        template.delete("c");                       // delete the key
        System.out.println(template.hasKey("c"));   // check whether the key still exists
    }

}

In fact, all value operations are encapsulated in the ValueOperations object, while plain key operations go directly through the template object. Usage is essentially the same as with Jedis.

Now for transactions. Spring has no dedicated Redis transaction manager, so we borrow the JDBC one; that is fine, since we normally need it on the classpath anyway:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
@Service
public class RedisService {

    @Resource
    StringRedisTemplate template;

    @PostConstruct
    public void init(){
        template.setEnableTransactionSupport(true);   // transaction support must be enabled
    }

    @Transactional // this annotation needs to be added
    public void test(){
        template.multi();
        template.opsForValue().set("d", "xxxxx");
        template.exec();
    }
}

We can also configure a serializer for the RedisTemplate object so that objects are stored as JSON:

@Test
void contextLoad2() {
    //note: the Student class must implement the Serializable interface before it can be stored in Redis
    template.opsForValue().set("student", new Student());
    System.out.println(template.opsForValue().get("student"));
}

Using redis for caching

We can easily use Redis as the cache or storage backend for several frameworks.

Mybatis L2 cache

Remember the caching mechanism we covered while learning MyBatis? We introduced the L2 cache: a mapper-level cache shared across all sessions. But we also raised a problem back then: MyBatis's default L2 cache is per-machine only. If several servers access the same database, each one keeps its own private L2 cache, while what we really want is for them to share one, to avoid wasting resources.


If every server stores its L2 cache in the same Redis server instead, they can all share the cached data. To do that, we implement the Cache interface provided by MyBatis ourselves; here is a simple version:

//Implement the cache interface of mybatis
public class RedisMybatisCache implements Cache {

    private final String id;
    private static RedisTemplate<Object, Object> template;

   	//Note that the constructor must take a parameter of type string to receive the ID
    public RedisMybatisCache(String id){
        this.id = id;
    }

  	//on initialization, the RedisTemplate is handed in through a configuration class
    public static void setTemplate(RedisTemplate<Object, Object> template) {
        RedisMybatisCache.template = template;
    }

    @Override
    public String getId() {
        return id;
    }

    @Override
    public void putObject(Object o, Object o1) {
      	//Here, just drop the data directly into the redis database. O is the key, O1 is the value, and 60 seconds is the expiration time
        template.opsForValue().set(o, o1, 60, TimeUnit.SECONDS);
    }

    @Override
    public Object getObject(Object o) {
      	//Here, you can directly obtain the value from the redis database according to the key
        return template.opsForValue().get(o);
    }

    @Override
    public Object removeObject(Object o) {
      	//Delete according to key
        return template.delete(o);
    }

    @Override
    public void clear() {
        //the template has no wrapper for flushing, so we go through the connection
        template.execute((RedisCallback<Void>) connection -> {
            //perform the flush via the connection object
            connection.flushDb();
            return null;
        });
    }

    @Override
    public int getSize() {
      	//Here, the connection object is also used to obtain the current number of keys
        return template.execute(RedisServerCommands::dbSize).intValue();
    }
}

After the cache class is written, we will then write the configuration class:

@Configuration
public class MainConfiguration {
    @Resource
    RedisTemplate<Object, Object> template;

    @PostConstruct
    public void init(){
      	//Give redistemplate to redismybatiscache
        RedisMybatisCache.setTemplate(template);
    }
}

Finally, we can enable this cache on mapper:

//Just modify the implementation of the cache implementation class to our redismybatiscache
@CacheNamespace(implementation = RedisMybatisCache.class)
@Mapper
public interface MainMapper {

    @Select("select name from student where sid = 1")
    String getSid();
}

Finally, we provide a test case to check whether the current L2 cache is effective:

@SpringBootTest
class SpringBootTestApplicationTests {


    @Resource
    MainMapper mapper;

    @Test
    void contextLoads() {
        System.out.println(mapper.getSid());
        System.out.println(mapper.getSid());
        System.out.println(mapper.getSid());
    }

}

Open a client manually and look at the Redis database: you will see cache entries generated by MyBatis.

Token persistent storage

When we used Spring Security earlier, remember-me tokens supported persistent storage, and back then we stored them in the database. Can token information live in the cache instead? Of course; we can implement it by hand:

//Implement the persistenttokenrepository interface
@Component
public class RedisTokenRepository implements PersistentTokenRepository {
  	//Key name prefix, used to distinguish
    private final static String REMEMBER_ME_KEY = "spring:security:rememberMe:";
    @Resource
    RedisTemplate<Object, Object> template;

    @Override
    public void createNewToken(PersistentRememberMeToken token) {
      	//we store two mappings: series id -> token, and username -> series id, because deletion is done by username
        template.opsForValue().set(REMEMBER_ME_KEY+"username:"+token.getUsername(), token.getSeries());
        template.expire(REMEMBER_ME_KEY+"username:"+token.getUsername(), 1, TimeUnit.DAYS);
        this.setToken(token);
    }

  	//Get it first, then modify it, create a new one, and then put it in
    @Override
    public void updateToken(String series, String tokenValue, Date lastUsed) {
        PersistentRememberMeToken token = this.getToken(series);
        if(token != null)
           this.setToken(new PersistentRememberMeToken(token.getUsername(), series, tokenValue, lastUsed));
    }

    @Override
    public PersistentRememberMeToken getTokenForSeries(String seriesId) {
        return this.getToken(seriesId);
    }

  	//Find the seriesid through username and delete them directly
    @Override
    public void removeUserTokens(String username) {
        String series = (String) template.opsForValue().get(REMEMBER_ME_KEY+"username:"+username);
        template.delete(REMEMBER_ME_KEY+series);
        template.delete(REMEMBER_ME_KEY+"username:"+username);
    }

  
  	//PersistentRememberMeToken does not implement Serializable, so we store it as a hash; hence separate set and get helpers
    private PersistentRememberMeToken getToken(String series){
        Map<Object, Object> map = template.opsForHash().entries(REMEMBER_ME_KEY+series);
        if(map.isEmpty()) return null;
        return new PersistentRememberMeToken(
                (String) map.get("username"),
                (String) map.get("series"),
                (String) map.get("tokenValue"),
                new Date(Long.parseLong((String) map.get("date"))));
    }

    private void setToken(PersistentRememberMeToken token){
        Map<String, String> map = new HashMap<>();
        map.put("username", token.getUsername());
        map.put("series", token.getSeries());
        map.put("tokenValue", token.getTokenValue());
        map.put("date", ""+token.getDate().getTime());
        template.opsForHash().putAll(REMEMBER_ME_KEY+token.getSeries(), map);
        template.expire(REMEMBER_ME_KEY+token.getSeries(), 1, TimeUnit.DAYS);
    }
}

Then the verification service is implemented:

@Service
public class AuthService implements UserDetailsService {

    @Resource
    UserMapper mapper;

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        Account account = mapper.getAccountByUsername(username);
        if(account == null) throw new UsernameNotFoundException("");
        return User
                .withUsername(username)
                .password(account.getPassword())
                .roles(account.getRole())
                .build();
    }
}

The mapper and entity as well:

@Data
public class Account implements Serializable {
    int id;
    String username;
    String password;
    String role;
}
@CacheNamespace(implementation = RedisMybatisCache.class)
@Mapper
public interface UserMapper {

    @Select("select * from users where username = #{username}")
    Account getAccountByUsername(String username);
}

Finally, wire it all up in the security configuration:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
            .authorizeRequests()
            .anyRequest().authenticated()
            .and()
            .formLogin()
            .and()
            .rememberMe()
            .tokenRepository(repository);
}

@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    auth
            .userDetailsService(service)
            .passwordEncoder(new BCryptPasswordEncoder());
}

OK, start the server and verify it.


Three cache problems

**Note:** this part is optional.

Although we can use cache to greatly improve the data acquisition efficiency of our program, there are also some potential problems in using cache.

Cache penetration

(Image failed to load; it illustrated cache penetration: a request for a key that exists in neither the cache nor the database gets no cache hit, so every such request falls straight through to the database. A Bloom filter placed in front of the cache can intercept these requests.)

A Bloom filter can tell you that something definitely does not exist, or that it may exist.

A Bloom filter is essentially a bit array. To add a value, we run it through N different hash functions to get N positions, and set each of those bits to 1; in the figure above, three values A, B, and C have been added.

Now given a value D, we compute its N hash positions: if all of them are 1, then D may exist. Given another value E, if even one of its computed positions holds a 0, we can conclude immediately that E definitely does not exist.
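The mechanism described above fits in a few dozen lines. A minimal sketch (the seeds and hash function are arbitrary choices for illustration, not a production-quality filter):

```java
import java.util.BitSet;

public class BloomFilterDemo {
    private final int size = 1 << 16;           // number of bits in the array
    private final BitSet bits = new BitSet(size);
    private final int[] seeds = {7, 11, 13};    // one arbitrary seed per hash function

    // a simple seeded string hash, mapped onto a bit position
    private int hash(String value, int seed) {
        int h = seed;
        for (char c : value.toCharArray()) h = h * 31 + c;
        return Math.floorMod(h, size);
    }

    void add(String value) {
        for (int seed : seeds) bits.set(hash(value, seed)); // set all N positions to 1
    }

    // false means definitely absent; true only means possibly present
    boolean mightContain(String value) {
        for (int seed : seeds)
            if (!bits.get(hash(value, seed))) return false; // one zero bit proves absence
        return true;
    }

    public static void main(String[] args) {
        BloomFilterDemo filter = new BloomFilterDemo();
        filter.add("A");
        filter.add("B");
        filter.add("C");
        System.out.println(filter.mightContain("A")); // true: A may exist (here it does)
        System.out.println(filter.mightContain("Z")); // false: Z definitely does not exist
    }
}
```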

Cache breakdown

(Image failed to load; it illustrated cache breakdown: a single hot key expires, and the flood of concurrent requests for that key all hit the database at the same moment while the cache entry is being rebuilt.)

Cache avalanche


When the Redis server goes down, or a large number of keys expire at the same time, the cache is effectively gone. If many requests for different data arrive at that moment, they all go to the database at once to rebuild the cache, which can easily take the database down as well.

The best way to solve this problem is to set up high availability, that is, build redis cluster. Of course, some service degradation mechanisms can also be adopted. We will discuss these contents in the spring cloud stage.