Spring series: integrating the Druid connection pool and configuring its monitoring

Time: 2022-05-11

Preface

In the previous article we got familiar with the HikariCP connection pool and learned that its performance is very high. Today we talk about another popular connection pool: Druid, a database connection pool open-sourced by Alibaba. Its official site claims it was born for monitoring: it offers page-based monitoring showing SQL execution counts, execution time and slow-SQL details, can encrypt the database password, log monitoring results, and intercept sensitive operations to block SQL injection. Let's walk through how it integrates with Spring in detail and, along the way, learn its monitoring configuration.

Key points of the article:

  • Integrating Druid with Spring
  • Configuring the monitoring filters (stat, wall, config, log)
  • How to choose between HikariCP and Druid

How to integrate Druid

1. Add the related dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.2.6</version>
</dependency>
2. Configure the data source

@Configuration
public class DataSourceConfiguration {

    // Bind all spring.datasource.druid.* properties onto the DruidDataSource
    @ConfigurationProperties(prefix = "spring.datasource.druid")
    @Bean
    public DataSource dataSource(){
        return new DruidDataSource();
    }
}
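
To double-check which pool is actually in use, here is a minimal sketch (my addition, not from the original article) that logs the DataSource class at startup:

import javax.sql.DataSource;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceVerifier {

    // Should print "com.alibaba.druid.pool.DruidDataSource" if the Druid starter is wired correctly
    @Bean
    public CommandLineRunner printPoolClass(DataSource dataSource) {
        return args -> System.out.println("Connection pool in use: " + dataSource.getClass().getName());
    }
}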

3. Configuration in application.properties

# Or spring.datasource.url
spring.datasource.druid.url=jdbc:mysql://localhost:3306/chenrui
# Or spring.datasource.username
spring.datasource.druid.username=root
# Or spring.datasource.password
spring.datasource.druid.password=root

# Number of physical connections established at initialization. Initialization happens when init() is called explicitly or on the first getConnection()
spring.datasource.druid.initial-size=5
# Maximum number of connections in the pool
spring.datasource.druid.max-active=20
# Minimum number of idle connections in the pool
spring.datasource.druid.min-idle=5
# Maximum wait time for getting a connection, in milliseconds. Once maxWait is configured, a fair lock is used by default, which lowers concurrency; if necessary, switch to an unfair lock by setting useUnfairLock to true
spring.datasource.druid.max-wait=500
# Whether to cache PreparedStatements (PSCache). PSCache greatly improves performance on databases that support cursors, such as Oracle; it is recommended to disable it on MySQL.
spring.datasource.druid.pool-prepared-statements=false
# To enable PSCache, this must be greater than 0; when it is, poolPreparedStatements is automatically set to true. Druid does not suffer from Oracle's problem of PSCache using too much memory, so this can be set fairly high, e.g. 100
spring.datasource.druid.max-pool-prepared-statement-per-connection-size=-1
# SQL used to check whether a connection is valid; it must be a query, typically select 'x'. If validationQuery is null, testOnBorrow, testOnReturn and testWhileIdle have no effect.
spring.datasource.druid.validation-query=select 'x'
# Timeout for the validity check, in seconds. Internally this calls the JDBC Statement method void setQueryTimeout(int seconds)
spring.datasource.druid.validation-query-timeout=1
# Run validationQuery when borrowing a connection to check that it is valid. This reduces performance.
spring.datasource.druid.test-on-borrow=true
# Run validationQuery when returning a connection to check that it is valid. This reduces performance.
spring.datasource.druid.test-on-return=true
# Recommended: true. It does not affect performance and improves safety. On borrow, if a connection has been idle longer than timeBetweenEvictionRunsMillis, validationQuery is run to check that it is still valid
spring.datasource.druid.test-while-idle=true
# Has two meanings (default: 1 minute):
# 1) the interval at which the destroy thread checks connections: if a connection has been idle for at least minEvictableIdleTimeMillis, the physical connection is closed
# 2) the basis for the testWhileIdle check; see the testWhileIdle description above
spring.datasource.druid.time-between-eviction-runs-millis=60000
# Minimum time a connection may stay idle before it becomes evictable
spring.datasource.druid.min-evictable-idle-time-millis=600000
# Maximum time a connection may stay idle before it is evicted
spring.datasource.druid.max-evictable-idle-time-millis=900000
# Multiple filters are separated by commas
spring.datasource.druid.filters=stat,wall

# WebStatFilter configuration
# Whether to enable the WebStatFilter. Default: false
spring.datasource.druid.web-stat-filter.enabled=true
# URL pattern to match
spring.datasource.druid.web-stat-filter.url-pattern=/*
# Exclude URLs that don't need statistics, such as JS files, /jslib/, etc.
spring.datasource.druid.web-stat-filter.exclusions=*.js,*.gif,*.jpg,*.bmp,*.png,*.css,*.ico,/druid/*
# Session statistics can be turned off here
spring.datasource.druid.web-stat-filter.session-stat-enable=true
# sessionStatMaxCount defaults to 1000; adjust as needed
spring.datasource.druid.web-stat-filter.session-stat-max-count=1000
# Lets Druid know who the current session user is
spring.datasource.druid.web-stat-filter.principal-session-name=cross
# If your user information is stored in a cookie, configure principalCookieName so Druid knows who the current user is
spring.datasource.druid.web-stat-filter.principal-cookie-name=aniu
# Set profileEnable to monitor the list of SQL statements invoked by a single URL
spring.datasource.druid.web-stat-filter.profile-enable=


# StatViewServlet configuration, used to display Druid's statistics
# Whether to enable the StatViewServlet (monitoring page). Default: false (off by default for security; if you enable it, set a password or an allow list)
spring.datasource.druid.stat-view-servlet.enabled=true
spring.datasource.druid.stat-view-servlet.url-pattern=/druid/*
# Whether the statistics may be reset (cleared) from the page
spring.datasource.druid.stat-view-servlet.reset-enable=true
# Username for logging in to the monitoring page
spring.datasource.druid.stat-view-servlet.login-username=root
# Password for logging in to the monitoring page
spring.datasource.druid.stat-view-servlet.login-password=1234

# deny takes precedence over allow: an IP in the deny list is rejected even if it also appears in the allow list.
# If allow is not configured or is empty, all access is allowed
# Allowed IPs
# spring.datasource.druid.stat-view-servlet.allow=
# Denied IPs
#spring.datasource.druid.stat-view-servlet.deny=127.0.0.1

# Location of the mapper XML files
mybatis.mapper-locations=classpath:mapper/*Mapper.xml
# Map underscored column names to camelCase properties
mybatis.configuration.map-underscore-to-camel-case=true

4. The code

Database script

create table user_info
(
    id        bigint unsigned auto_increment
        primary key,
    user_id   int          not null comment 'user ID',
    user_name varchar(64)  not null comment 'real name',
    email     varchar(30)  not null comment 'user mailbox',
    nick_name varchar(45)  null comment 'nickname',
    status    tinyint      not null comment 'user status, 1-normal, 2-logout, 3-frozen',
    address   varchar(128) null
)
    comment 'basic user information';

Initialization data

INSERT INTO chenrui.user_info (id, user_id, user_name, email, nick_name, status, address) VALUES (1, 80001, 'Zhang Sanfeng', '[email protected]', 'third brother', 1, 'Wudang Mountain');
INSERT INTO chenrui.user_info (id, user_id, user_name, email, nick_name, status, address) VALUES (2, 80002, 'Zhang Wuji', '[email protected]', '', 1, null);

Writing the mapper XML

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<!-- The mapper wrapper tags were lost in extraction; reconstructed here to match the DAO interface below (the UserInfo package is an assumption) -->
<mapper namespace="com.example.springdataSourcedruid.dao.UserInfoDAO">

    <select id="findAllUser" resultType="com.example.springdataSourcedruid.entity.UserInfo">
        select * from user_info
    </select>

    <select id="getUserById" resultType="com.example.springdataSourcedruid.entity.UserInfo">
        select * from user_info where id = #{id}
    </select>

    <select id="getUserByIdEqualOne" resultType="com.example.springdataSourcedruid.entity.UserInfo">
        select * from user_info where id = 1
    </select>

    <select id="getUserByIdEqualTwo" resultType="com.example.springdataSourcedruid.entity.UserInfo">
        select * from user_info where id = 2
    </select>
</mapper>
Writing the DAO interface

public interface UserInfoDAO {
    List<UserInfo> findAllUser();

    UserInfo getUserById(@Param("id") int id);

    UserInfo getUserByIdEqualOne();

    UserInfo getUserByIdEqualTwo();
}
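
The UserInfo entity is not shown in the original article; a minimal Lombok sketch matching the table (the field types are my assumption) could look like this:

import lombok.Data;

@Data
public class UserInfo {
    private Long id;
    private Integer userId;
    private String userName;
    private String email;
    private String nickName;
    private Integer status;
    private String address;
}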

Test controller

@RestController
@Slf4j
public class UserInfoController {
    @Resource
    private UserInfoDAO userInfoDAO;

    @GetMapping(path = "/all")
    public List<UserInfo> getAllUser(){
        return userInfoDAO.findAllUser();
    }

    @GetMapping(path = "/getUser/{id}")
    public UserInfo getById(@PathVariable int id){
        return userInfoDAO.getUserById(id);
    }

    @GetMapping(path = "/getUser/one")
    public UserInfo getById1(){
        return userInfoDAO.getUserByIdEqualOne();
    }

    @GetMapping(path = "/getUser/two")
    public UserInfo getById2(){
        return userInfoDAO.getUserByIdEqualTwo();
    }
}

Startup class

@SpringBootApplication
@MapperScan(basePackages = "com.example.springdataSourcedruid.dao")
public class SpringDataSourceDruidApplication {

	public static void main(String[] args) {
		SpringApplication.run(SpringDataSourceDruidApplication.class, args);
	}
}

5. Start and verify

Visit http://127.0.0.1:8080/druid/ and a login page appears; the username and password are the ones set in our configuration file.

After logging in you can see plenty of monitoring views. Here we only look at the ones we care about this time: the data source, SQL monitoring and URL monitoring; the rest you can explore on your own.

[Screenshot: data source panel]
Here we can see that the data source information matches the configuration in our application.properties.

Next, call the prepared endpoints a few times to verify:
http://127.0.0.1:8080/all
http://127.0.0.1:8080/getUser/1
http://127.0.0.1:8080/getUser/2
http://127.0.0.1:8080/getUser/one
http://127.0.0.1:8080/getUser/two

Then take a look at what the monitoring information shows.
SQL monitoring

[Screenshot: SQL monitoring panel]
Here we can see our four statements and their execution details.
For each SQL monitoring entry, the execution time, rows read and rows updated are shown as interval distributions; execution time is split into 8 buckets:

  • 0 – 1: executions taking 0 to 1 ms
  • 1 – 10: executions taking 1 to 10 ms
  • 10 – 100: executions taking 10 to 100 ms
  • 100 – 1000: executions taking 100 to 1000 ms
  • 1000 – 10000: executions taking 1 to 10 seconds
  • 10000 – 100000: executions taking 10 to 100 seconds
  • 100000 – 1000000: executions taking 100 to 1000 seconds
  • 1000000 – : executions taking more than 1000 seconds

You may wonder: why are id = 1 and id = 2 listed separately? If I have 100 million ids, will the monitoring page hold 100 million entries? Shouldn't they all be merged into the form id = ? We'll get to the SQL merge monitoring configuration shortly.

URL monitoring

[Screenshot: URL monitoring panel]
Here you can clearly see the database activity involved in each URL.

Druid’s built-in filters

The Druid jar ships a set of built-in filters, declared in META-INF/druid-filter.properties as follows:

druid.filters.default=com.alibaba.druid.filter.stat.StatFilter
druid.filters.stat=com.alibaba.druid.filter.stat.StatFilter
druid.filters.mergeStat=com.alibaba.druid.filter.stat.MergeStatFilter
druid.filters.counter=com.alibaba.druid.filter.stat.StatFilter
druid.filters.encoding=com.alibaba.druid.filter.encoding.EncodingConvertFilter
druid.filters.log4j=com.alibaba.druid.filter.logging.Log4jFilter
druid.filters.log4j2=com.alibaba.druid.filter.logging.Log4j2Filter
druid.filters.slf4j=com.alibaba.druid.filter.logging.Slf4jLogFilter
druid.filters.commonlogging=com.alibaba.druid.filter.logging.CommonsLogFilter
druid.filters.commonLogging=com.alibaba.druid.filter.logging.CommonsLogFilter
druid.filters.wall=com.alibaba.druid.wall.WallFilter
druid.filters.config=com.alibaba.druid.filter.config.ConfigFilter
druid.filters.haRandomValidator=com.alibaba.druid.pool.ha.selector.RandomDataSourceValidateFilter

default, stat, wall and so on are filter aliases. In application.properties you can enable a filter by listing its alias in the spring.datasource.druid.filters property, or you can enable it through bean injection in Spring (a programmatic sketch follows). Next, let's look at the commonly used filters.
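
For the programmatic route, DruidDataSource accepts the same alias list directly via setFilters; a minimal sketch (my addition, with connection settings repeated inline so the example is self-contained):

import java.sql.SQLException;

import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProgrammaticDruidConfiguration {

    @Bean
    public DataSource programmaticDataSource() throws SQLException {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl("jdbc:mysql://localhost:3306/chenrui");
        dataSource.setUsername("root");
        dataSource.setPassword("root");
        // Same aliases as spring.datasource.druid.filters, resolved via META-INF/druid-filter.properties
        dataSource.setFilters("stat,wall");
        return dataSource;
    }
}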

Interceptor stat (default, counter)

Including stat in spring.datasource.druid.filters enables monitoring statistics. As we saw above, stat covers execution counts, execution time, the slowest SQL and so on. We also mentioned that because some SQL is not parameterized, the monitoring page ends up with many entries that are the same statement differing only in parameters; in that case we need to enable the SQL merge configuration.
Just add the following to application.properties:

# Enable SQL merging in monitoring, mark SQL taking longer than 2 ms as slow, and log slow SQL
spring.datasource.druid.connection-properties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=2;druid.stat.logSlowSql=true

Take a look at the running results:
[Screenshot: monitoring page after SQL merging]
1. The following two statements are merged on the monitoring page:

select * from user_info where id=1
select * from user_info where id=2
-- merged result:
SELECT * FROM user_info WHERE id = ?

2. Statements exceeding 2ms are displayed in red on the monitoring page
3. Slow SQL will be reflected in the log
[Screenshot: slow SQL recorded in the log]

Interceptor mergeStat

MergeStatFilter extends StatFilter; its behavior is essentially the same as stat, so we won't expand on it.

Interceptor encoding

For historical reasons, some databases store data in the wrong character encoding, so encoding conversion is needed.
It can be enabled as follows:

spring.datasource.druid.filters=stat,encoding
# Client encoding UTF-8, server encoding ISO-8859-1: data that is garbled in the database will not be garbled when queried.
spring.datasource.druid.connection-properties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=2;druid.stat.logSlowSql=true;clientEncoding=UTF-8;serverEncoding=ISO-8859-1

Interceptor log4j (log4j2, slf4j, commonLogging)

Druid provides four built-in log filters (Log4jFilter, Log4j2Filter, CommonsLogFilter and Slf4jLogFilter) to log JDBC activity

# Using log4j2 as the example here
spring.datasource.druid.filters=stat,log4j2
# druid.log.conn logs connections, druid.log.stmt logs statements, druid.log.rs logs result sets, druid.log.stmt.executableSql logs the executable SQL
spring.datasource.druid.connection-properties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=2;druid.stat.logSlowSql=true;druid.log.conn=true;druid.log.stmt=true;druid.log.rs=true;druid.log.stmt.executableSql=true
# For easy verification, set the following logger names to DEBUG
logging.level.druid.sql.Statement=DEBUG
logging.level.druid.sql.ResultSet=DEBUG
logging.level.druid.sql.Connection=DEBUG
logging.level.druid.sql.DataSource=DEBUG

[Screenshot: JDBC debug log output]
We can see the whole flow of an SQL execution: open connection -> borrow a connection from the pool -> build the SQL statement -> execute -> return the result set -> give the connection back to the pool.
Only log4j2 is demonstrated here; you can verify the others yourself.

Interceptor wall

The WallFilter defends against SQL injection attacks. It is based on SQL syntax analysis: it understands the SQL's semantics and then acts on it, so it is intelligent, accurate and has a low false-positive rate. To reduce risk, the wall interceptor is quite important; for example, when truncate or physical deletes must not be allowed, wall can enforce that. There are two ways to configure it:

First: default configuration

spring.datasource.druid.filters=stat,wall,log4j2

This is the default configuration; most statements will not be intercepted, which may not fit specific scenarios. For the default attribute values, see: https://www.bookstack.cn/read/Druid/ffdd9118e6208531.md

Second: attribute-based configuration

The advantage of this approach is that we can restrict specific scenarios: whether stored procedures may be used, whether physical deletes are allowed, whether comments are allowed in statements, and so on.

//Inject the WallFilter before the DruidDataSource is created
@ConfigurationProperties(prefix = "spring.datasource.druid")
@Bean
public DataSource dataSource(){
    DruidDataSource dataSource = new DruidDataSource();
    dataSource.getProxyFilters().add(wallFilter());
    return dataSource;
}

@Bean
@ConfigurationProperties("spring.datasource.druid.filter.wall.config")
public WallConfig wallConfig(){
    return new WallConfig();
}

@Bean
public WallFilter wallFilter(){
    WallFilter filter = new WallFilter();
    filter.setConfig(wallConfig());
    filter.setDbType("mysql");
    return filter;
}

# Physical deletes are not allowed
spring.datasource.druid.filter.wall.config.delete-allow=false

Try it out:
[Screenshot: log output showing the delete being rejected]
You can see in the log that the deletion is not allowed. This can stop code that violates the company's development standards from slipping through and reduces risk. You can test the other options yourself.
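
To reproduce this without the screenshot, here is a minimal sketch (my addition; the endpoint is hypothetical) that fires a DELETE so the wall filter can reject it:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import javax.annotation.Resource;
import javax.sql.DataSource;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class WallDemoController {

    @Resource
    private DataSource dataSource;

    // With delete-allow=false, Druid's WallFilter rejects the DELETE before it reaches MySQL
    @GetMapping(path = "/wall-demo")
    public String tryDelete() {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute("delete from user_info where id = 1");
            return "delete executed -- wall filter not active?";
        } catch (SQLException e) {
            return "blocked: " + e.getMessage();
        }
    }
}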

Interceptor config

The config filter has three functions: reading configuration from a local file, reading configuration from a remote HTTP file, and providing encryption for the database password.
The first two are of limited use; the crucial one is the third, because writing the database password directly in the configuration is a real challenge for operational security. Druid's ConfigFilter provides a way to encrypt the database password.
How to use it:

# Add config.file to the connectionProperties configuration in application.properties; it can be a local file or a remote one, e.g. config.file=http://127.0.0.1/druid-pool.properties
spring.datasource.druid.connection-properties=config.file=file:///Users/chenrui/druid-pool.properties

Encrypt our database password

Use the following command to generate the key pair and the ciphertext of the database password:

java -cp druid-1.0.16.jar com.alibaba.druid.filter.config.ConfigTools your_password

[Screenshot: ConfigTools output]

druid-pool.properties file content

[Screenshot: druid-pool.properties content]
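
The screenshot is not reproduced here; judging from Druid's ConfigFilter usage, such a file typically holds the decryption switch and the public key produced by ConfigTools, roughly like this (a sketch, with the key value as a placeholder):

# Sketch of druid-pool.properties; the key value comes from the ConfigTools output above
config.decrypt=true
config.decrypt.key=<publicKey printed by ConfigTools>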

The database password configuration item is then changed to the ciphertext:

spring.datasource.druid.password=kPYuT1e6i6M929mNvKfvhyBx3eCI+Fs0pqA3n7GQYIoo76OaWVg3KALr7EdloivFVBSeF0zo5IGIfpbGtAKa+Q==
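
One caveat (my note, not from the original): for the ciphertext to actually be decrypted, the config filter itself must be in the filter chain, for example:

spring.datasource.druid.filters=stat,wall,config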

Start the application yourself and you will find that everything works normally and the plain-text password problem is solved.

How to choose between Druid and HikariCP

There is a comparison chart circulating online. It suggests Druid lives up to its claim (born for monitoring), but there are plenty of monitoring middleware products and techniques on the market, and HikariCP can make up for its weaker monitoring through them.
[Image: connection pool comparison chart]
HikariCP claims to be the fastest connection pool, but Druid has withstood the test of Alibaba's Double 11, and its performance is also very good.

Opinions differ on which one to choose, but both are open-source products. Alibaba's Druid has a Chinese open-source community, which makes communication convenient, and having been proven across Alibaba's systems it is certainly stable; Hikari is Spring Boot 2.0's default connection pool and is widely used around the world. For most applications either one is fine, since the performance bottleneck is generally not in the connection pool. Choose freely according to your preferences.