Scaling Amazon Aurora reads and writes: the ShardingSphere-JDBC chapter

Date: 2022-05-24

Source: Amazon AWS official blog

Sun Jinhua is a senior solutions architect at Amazon Web Services, responsible for helping customers design and consult on cloud architecture. Before joining AWS, he founded a startup, where he was responsible for building an e-commerce platform and for the overall architecture design of an automotive enterprise's e-commerce platform. He previously worked as a senior engineer at a world-leading communications equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in high-concurrency and high-availability system architecture, microservice architecture, databases, middleware, and IoT.

1. Preface

Amazon Aurora is a relational database service developed by Amazon Web Services. It provides full compatibility with the open-source databases MySQL and PostgreSQL, along with performance and availability comparable to commercial databases. In terms of performance, Aurora MySQL delivers up to five times the throughput of standard open-source MySQL on the same configuration, and Aurora PostgreSQL up to three times that of standard open-source PostgreSQL. In terms of scalability, Aurora has enhanced and innovated on both the storage and compute dimensions, horizontally and vertically.

Aurora's storage now scales up to 128 TB and supports dynamic shrinking. On the compute side, Aurora offers scalable read-replica configurations supporting up to 15 read replicas within a region, a multi-master architecture supporting up to four write nodes in the same region, a serverless architecture with second-level vertical scaling at the instance level, and a global database for low-latency cross-region expansion.

Aurora thus already scales well as user data volumes grow. But can it be pushed further, to even larger data volumes and higher concurrency? One option is database and table sharding, with multiple Aurora clusters configured underneath. This blog series, of which this post is a part, introduces that approach, to serve as a reference when choosing between a proxy and JDBC for sharding.

1.1 Why shard databases and tables?

AWS Aurora offers single-instance, master-slave, multi-master, global-database and other managed architectures, covering the scenarios above. However, Aurora provides no direct support for database and table sharding. Sharding comes in vertical and horizontal forms, and while it raises data capacity further, it also introduces problems to solve: cross-node database joins, distributed transactions, SQL sorting, pagination, function computation, global primary keys, capacity planning, and re-sharding for later capacity expansion.

1.2 Sharding approaches

It is generally accepted in the industry that MySQL query times are best when a single table holds fewer than 10 million rows, because the height of its B+Tree index then stays between 3 and 5 levels. Through data segmentation we can reduce the row count of a single table and spread the read and write load across different data nodes. Data segmentation can be vertical or horizontal.
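To make the horizontal case concrete, modulo-based segmentation can be sketched in a few lines of Java. The `ShardRouter` class below is a hypothetical illustration, not part of any library: rows are routed to one of N physical tables by taking the sharding column modulo the table count.

```java
// Hypothetical illustration (not part of any library) of horizontal
// segmentation: a row is routed to one of N physical tables by taking
// its sharding-column value modulo the table count.
public class ShardRouter {
    private final int tableCount;

    public ShardRouter(int tableCount) {
        this.tableCount = tableCount;
    }

    // e.g. with 2 shards, order id 7 maps to "t_order_1"
    public String physicalTable(String logicalTable, long shardingValue) {
        return logicalTable + "_" + Math.floorMod(shardingValue, (long) tableCount);
    }
}
```

With two shards, orders 7 and 10 land in t_order_1 and t_order_0 respectively; this is the same idea that the inline sharding expressions later in this post express declaratively.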

  • Advantages of vertical segmentation

Decouples systems at the business level, making the business boundaries clear;

Like microservice governance, it allows the data of different businesses to be managed, maintained, monitored and scaled independently;

In high-concurrency scenarios, vertical segmentation relieves, to a degree, the bottlenecks of I/O, database connections and single-machine hardware resources.

  • Disadvantages of vertical segmentation

Joins are no longer possible across the split databases and must be replaced by aggregation at the interface level, which increases development complexity;

Distributed transaction handling becomes complex;

The problem of oversized single tables remains (horizontal segmentation is still required).

  • Advantages of horizontal segmentation

Removes the performance bottlenecks of large single-database data volumes and high concurrency, improving the stability and load capacity of the system;

Application-side changes are small, and business modules need not be split.

  • Disadvantages of horizontal segmentation

Transaction consistency across shards is hard to guarantee;

Cross-database join performance is poor;

Repeated expansion and maintenance of the data are difficult.

Based on the above analysis and a survey of common sharding middleware, we chose the open-source ShardingSphere together with Amazon Aurora, and will introduce how this combination supports the various forms of sharding and how it addresses some of the problems sharding brings.

2. ShardingSphere-JDBC functional testing

2.1 Example project description

Download the sample project code locally. To keep the test code stable, we use the shardingsphere-example-4.0.0 tag.

git clone https://github.com/apache/shardingsphere-example.git

Project Description:

shardingsphere-example
  ├── example-core
  │   ├── config-utility
  │   ├── example-api
  │   ├── example-raw-jdbc
  │   ├── example-spring-jpa                    # Spring + JPA integration: base entities and repositories
  │   └── example-spring-mybatis
  ├── sharding-jdbc-example
  │   ├── sharding-example
  │   │   ├── sharding-raw-jdbc-example
  │   │   ├── sharding-spring-boot-jpa-example      # basic ShardingSphere-JDBC sharding features
  │   │   ├── sharding-spring-boot-mybatis-example
  │   │   ├── sharding-spring-namespace-jpa-example
  │   │   └── sharding-spring-namespace-mybatis-example
  │   ├── orchestration-example
  │   │   ├── orchestration-raw-jdbc-example
  │   │   ├── orchestration-spring-boot-example     # basic ShardingSphere-JDBC governance features
  │   │   └── orchestration-spring-namespace-example
  │   ├── transaction-example
  │   │   ├── transaction-2pc-xa-example            # ShardingSphere-JDBC two-phase-commit (XA) distributed transaction example
  │   │   └── transaction-base-seata-example        # ShardingSphere-JDBC Seata distributed transaction example
  │   ├── other-feature-example
  │   │   ├── hint-example
  │   │   └── encrypt-example
  ├── sharding-proxy-example
  │   └── sharding-proxy-boot-mybatis-example
  └── src/resources
        └── manual_schema.sql

Configuration file description:

application-master-slave.properties              # read-write separation configuration
application-sharding-databases-tables.properties # database plus table sharding configuration
application-sharding-databases.properties        # database-sharding-only configuration
application-sharding-master-slave.properties     # sharding plus read-write separation configuration
application-sharding-tables.properties           # table-sharding-only configuration
application.properties                           # Spring Boot configuration

Code logic description:

The Spring Boot application entry class; run this class to start the project.

The demo execution logic is as follows:

2.2 Read-write separation verification

As business grows, separating write and read requests onto different database nodes effectively increases the processing capacity of the whole database cluster. Aurora's read/write endpoint satisfies writes and strongly consistent reads, while its single read-only endpoint satisfies reads that do not require strong consistency. Aurora's replication lag is usually at the millisecond level, far lower than MySQL's binlog-based logical replication, so a large share of the read load can be pointed directly at the read-only endpoint.

With a one-master-many-slaves configuration, query requests can be spread evenly across multiple data replicas, further increasing system capacity. Although read-write separation improves throughput and availability, it also introduces data-inconsistency concerns. Aurora provides this master-slave architecture fully managed, but the application interacting with Aurora must still manage multiple data sources and route SQL requests to different nodes according to each statement's read/write type and a routing strategy.

ShardingSphere-JDBC provides read-write separation as a built-in feature. By integrating it, the application moves the complex wiring between the application and the database cluster out of the application code: the shards are managed through configuration files, and combined with an ORM framework such as Spring JPA or MyBatis, all of this replication logic stays out of the code entirely. This greatly improves maintainability and reduces coupling between the code and the database.
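The read-routing idea can be pictured with a minimal sketch. The names here are hypothetical and ShardingSphere-JDBC's real router parses the SQL properly rather than inspecting a statement prefix; this only illustrates the behaviour of a round-robin load-balance strategy over the slaves.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of read-write splitting with a round-robin slave picker.
// Hypothetical illustration only: real middleware parses the SQL rather
// than checking a "select" prefix.
public class ReadWriteSplitter {
    private final String master;
    private final List<String> slaves;
    private final AtomicInteger cursor = new AtomicInteger();

    public ReadWriteSplitter(String master, List<String> slaves) {
        this.master = master;
        this.slaves = slaves;
    }

    // Decide which data source a statement should run on.
    public String route(String sql) {
        boolean isRead = sql.trim().toLowerCase().startsWith("select");
        if (!isRead || slaves.isEmpty()) {
            return master;  // writes always go to the master
        }
        // round robin: successive reads rotate through the slaves
        int index = Math.floorMod(cursor.getAndIncrement(), slaves.size());
        return slaves.get(index);
    }
}
```

Successive SELECT statements alternate between the slave data sources, matching the alternating DataSources visible in the SQL log later in this section.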

2.2.1 Database environment preparation

First, create an Aurora MySQL cluster for read-write separation using the db.r5.2xlarge instance class, with one writer node and two reader nodes, as shown in the figure below.


2.2.2 ShardingSphere-JDBC configuration

application.properties (Spring Boot main configuration file):

As shown in the following figure, replace the highlighted values with the configuration of your own environment.

#JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate the master-slave configuration item so that ShardingSphere-JDBC uses the master-slave configuration file
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-master-slave.properties (ShardingSphere-JDBC configuration file):

spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
#Data source master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=<your master DB password>
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=<your master DB JDBC URL>
spring.shardingsphere.datasource.ds_master.username=<your master DB username>
#Data source slave Library
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password=<your slave DB password>
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=<your slave DB JDBC URL>
spring.shardingsphere.datasource.ds_slave_0.username=<your slave DB username>
#Data source slave Library
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password=<your slave DB password>
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url=<your slave DB JDBC URL>
spring.shardingsphere.datasource.ds_slave_1.username=<your slave DB username>
#Routing policy configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
#Sharding JDBC configuration information storage method
spring.shardingsphere.mode.type=Memory
#Enable the ShardingSphere log; when enabled, the conversion from logical SQL to actual SQL is printed
spring.shardingsphere.props.sql.show=true

2.2.3 Test verification process

  • Test environment data initialization: Spring JPA automatically creates the tables used for testing
  • Write data on the master instance
    As the ShardingSphere SQL log below shows, write SQL executes on the ds_master data source.

  • Query data on the slave instances
    As the ShardingSphere SQL log below shows, read SQL executes on the ds_slave data sources in round-robin fashion.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_, 
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0 
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id 
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, 
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1

Note: as the code below shows, if a transaction contains both read and write operations, ShardingSphere-JDBC routes all of them to the master database; read requests outside a transaction are distributed across the read nodes according to the routing policy.

@Override
@Transactional // inside a transaction, all reads and writes go to the master database; outside a transaction, reads go to the slaves and writes go to the master
public void processSuccess() throws SQLException {
    System.out.println("-------------- Process Success Begin ---------------");
    List<Long> orderIds = insertData();
    printData();
    deleteData(orderIds);
    printData();
    System.out.println("-------------- Process Success Finish --------------");
}

2.2.4 Aurora failover scenario verification

Aurora database environment adopts the configuration in 2.2.1.

2.2.4.1 Verification process

  • Start the Spring Boot project
  • Perform a failover on the Aurora console
  • Execute REST API requests
  • POST to http://localhost:8088/save-user repeatedly until the API's writes to Aurora fail and then finally recover.
  • Observe the failover process in the execution log shown below. The log shows roughly 37 s between the last successful SQL write and the next successful write; that is, the application recovers from an Aurora failover automatically, in about 37 s.

2.3 Table-sharding-only verification

2.3.1 ShardingSphere-JDBC configuration

application.properties (Spring Boot main configuration file):

#JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate the sharding-tables configuration item
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-tables.properties (ShardingSphere-JDBC configuration file):

##Primary key policy configuration
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
#Configure the binding relationship between t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
#Configure broadcast table
spring.shardingsphere.sharding.broadcast-tables=t_address
#Sharding JDBC mode
spring.shardingsphere.mode.type=Memory
#Open shardingsphere log
spring.shardingsphere.props.sql.show=true
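The SNOWFLAKE key-generator type configured above can be pictured with this simplified sketch. It is an illustrative approximation, not ShardingSphere's exact implementation: real snowflake generators subtract a fixed epoch from the timestamp and handle clock rollback and sequence exhaustion.

```java
// Simplified sketch of a SNOWFLAKE-style key generator (illustrative
// approximation, not ShardingSphere's exact code). The 64-bit ID packs
// a millisecond timestamp, a worker id (cf. worker.id=123 above) and a
// per-millisecond sequence, so ids sort roughly by generation time.
public class SnowflakeSketch {
    private final long workerId;   // 10 bits in the classic layout
    private long lastMillis = -1L;
    private long sequence = 0L;    // 12-bit counter within one millisecond

    public SnowflakeSketch(long workerId) {
        this.workerId = workerId;
    }

    public synchronized long nextId() {
        long now = System.currentTimeMillis();
        if (now == lastMillis) {
            sequence = (sequence + 1) & 0xFFF;  // stay within the 12-bit field
        } else {
            sequence = 0L;
            lastMillis = now;
        }
        // timestamp | worker | sequence
        return (now << 22) | ((workerId & 0x3FF) << 12) | sequence;
    }
}
```

Because the timestamp occupies the high bits, ids generated later compare greater, which keeps the primary key monotonically increasing across shards without any central coordinator.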

2.3.2 Test verification process

  • DDL operation
    As shown in the figure below, JPA automatically creates the test tables. With the ShardingSphere-JDBC routing rules configured, DDL executed on the client side causes ShardingSphere-JDBC to create the corresponding physical tables according to the sharding rules: t_address is a broadcast table and, since there is only one data source, a single t_address is created; t_order is sharded by modulo, so creating t_order produces the two physical tables t_order_0 and t_order_1.

  • Write operation
    As shown in the figure below, the logical SQL inserts records into t_order; when ShardingSphere-JDBC executes it, the data is distributed into t_order_0 and t_order_1 according to the table-sharding rule.

When the binding relationship between t_order and t_order_item is configured, the associated order and order-item records are placed in the same physical shard.

  • Read operation
    A join query over the bound tables t_order and t_order_item, as shown in the figure below, is located precisely on the corresponding physical shard according to the binding relationship.

A join query over unbound tables, as shown in the figure below, traverses all shards.

2.4 Database-sharding-only verification

2.4.1 Database environment preparation

As shown in the figure below, create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project starts, the tables t_order, t_order_item and t_address are created on both Aurora instances

2.4.2 ShardingSphere-JDBC configuration

application.properties (Spring Boot main configuration file):

#JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#Activate sharding databases configuration item
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases.properties (ShardingSphere-JDBC configuration file):

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0

spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
#Sharding JDBC mode
spring.shardingsphere.mode.type=Memory
#Open shardingsphere log
spring.shardingsphere.props.sql.show=true

2.4.3 Test verification process

  • DDL operation
    JPA automatically creates the test tables, as shown in the figure below. With the ShardingSphere-JDBC database-sharding rules configured, DDL executed on the client side causes ShardingSphere-JDBC to create the corresponding tables: t_address is a broadcast table, and the three physical tables t_address, t_order and t_order_item are all created on both ds_0 and ds_1.
  • Write operation
    For the broadcast table t_address, every record written is written into the t_address tables of both ds_0 and ds_1

For the sharded tables t_order and t_order_item, records are written into the table on the corresponding instance according to the database-sharding column and routing policy.

  • Read operation
    As shown in the figure below, querying orders routes to the corresponding Aurora instance according to the database-sharding routing rule.

As shown in the figure below, when addresses are queried, since t_address is a broadcast table, one of the nodes is randomly selected to serve the query.

As shown in the figure below, a join query over the bound tables t_order and t_order_item is located precisely on the corresponding physical shard according to the binding relationship.
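The broadcast-table behaviour described above can be sketched with a small in-memory illustration. This is hypothetical code, not real ShardingSphere internals: the point is simply that each write to t_address fans out to every data source, so the full table exists on ds_0 and ds_1 alike.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory illustration of broadcast-table behaviour:
// each insert is replayed against every data source, so any node can
// serve a read of the table locally.
public class BroadcastTable {
    private final Map<String, List<String>> contents = new LinkedHashMap<>();

    public BroadcastTable(List<String> dataSources) {
        for (String ds : dataSources) {
            contents.put(ds, new ArrayList<>());
        }
    }

    public void insert(String row) {
        // fan the write out to every data source
        contents.values().forEach(rows -> rows.add(row));
    }

    public List<String> rowsOn(String dataSource) {
        return contents.get(dataSource);
    }
}
```

This is also why reads of a broadcast table can be routed to a randomly chosen node, as observed in the log above: every copy is complete.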

2.5 Database plus table sharding verification

2.5.1 Database environment preparation

As shown in the figure below, create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project starts, the physical tables t_order_0, t_order_1, t_order_item_0 and t_order_item_1, plus the broadcast table t_address, are created on both Aurora instances.

2.5.2 ShardingSphere-JDBC configuration

application.properties (Spring Boot main configuration file):

#JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#Activate the sharding-databases-tables configuration item
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases-tables.properties (ShardingSphere-JDBC configuration file):

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=jdbc:mysql://<your ds_0 endpoint>:3306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
#Default sub database policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
#Tables that do not match the database-sharding strategy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table-sharding strategy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table-sharding strategy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
#Sharding JDBC mode
spring.shardingsphere.mode.type=Memory
#Open shardingsphere log
spring.shardingsphere.props.sql.show=true
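The combined routing configured above can be sketched as follows. The `DatabaseTableRouter` class is a hypothetical illustration mirroring the inline expressions `ds_$->{user_id % 2}` and `t_order_$->{order_id % 2}`: the database shard is chosen by user_id and the physical table by order_id.

```java
// Hypothetical illustration of the two-level routing configured above:
// database shard by user_id % 2, physical table by order_id % 2,
// i.e. ds_$->{user_id % 2}.t_order_$->{order_id % 2}.
public class DatabaseTableRouter {
    public static String route(long userId, long orderId) {
        String database = "ds_" + Math.floorMod(userId, 2);
        String table = "t_order_" + Math.floorMod(orderId, 2);
        return database + "." + table;
    }
}
```

Note that the two sharding columns are independent: orders of the same user always sit in the same database, but may spread across both physical tables inside it.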

2.5.3 Test verification process

  • DDL operation
    JPA automatically creates the test tables, as shown in the figure below. With the ShardingSphere-JDBC database and table sharding rules configured, DDL executed on the client side causes ShardingSphere-JDBC to create the corresponding tables: t_address is a broadcast table and is created on both ds_0 and ds_1; t_order and t_order_item are sharded by modulo across databases and tables, and their physical tables are created on both ds_0 and ds_1.

  • Write operation
    For the broadcast table t_address, every record written is written into the t_address tables of both ds_0 and ds_1.

For the sharded tables t_order and t_order_item, records are written into the table on the corresponding instance according to the sharding columns and routing policy.

  • Read operation
    Read operations are similar to the database-sharding-only verification and are not repeated here

2.6 Sharding plus read-write separation verification

2.6.1 Database environment preparation

The database instances created and their corresponding physical tables are shown in the figure below.

2.6.2 ShardingSphere-JDBC configuration

application.properties (Spring Boot main configuration file):

#JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#Activate the sharding-master-slave configuration item
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave

application-sharding-master-slave.properties (ShardingSphere-JDBC configuration file):

Replace the database URLs, usernames and passwords with the parameters of your own databases.

spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0.username= 
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0_slave_0.username= 
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_0_slave_1.username= 
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1.username= 
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# Master-slave rules: map each sharded data source to its master and read replicas
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# Sharding-JDBC mode
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere SQL logging
spring.shardingsphere.props.sql.show=true
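The routing behavior encoded in the inline expressions above (`ds_$->{user_id % 2}` and `t_order_$->{order_id % 2}`) can be sketched in plain Java. This is only an illustration of the modulo logic, not the Sharding-JDBC API; the class and method names are hypothetical.

```java
// Sketch of how the inline sharding expressions in the configuration above
// resolve a logical row to a physical data source and table. The modulo
// arithmetic mirrors ds_$->{user_id % 2} and t_order_$->{order_id % 2}.
// Names here are illustrative only, not part of Sharding-JDBC itself.
public class InlineRoutingSketch {

    // Database-level routing: default-database-strategy on user_id
    static String routeDatabase(long userId) {
        return "ds_" + (userId % 2);
    }

    // Table-level routing for t_order: table-strategy on order_id
    static String routeOrderTable(long orderId) {
        return "t_order_" + (orderId % 2);
    }

    public static void main(String[] args) {
        // user_id = 7 is odd, so the row lands in ds_1; order_id = 10 is
        // even, so within that database it goes to t_order_0.
        System.out.println(routeDatabase(7) + "." + routeOrderTable(10));
    }
}
```

With this mapping, a query that carries both sharding columns touches exactly one physical table; a query missing them must be broadcast to every shard.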

2.6.3 Test verification process

  • DDL operation
    JPA automatically creates the tables used for testing. As shown in the figure below, once the sharding rules of Sharding-JDBC are configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding physical tables according to the sharding rules. Since t_address is a broadcast table, it is created on both ds_0 and ds_1; t_order and t_order_item are sharded across databases by modulo, so both tables are also created on ds_0 and ds_1.

  • Write operation
    For the broadcast table t_address, every record written is written to the t_address table on both ds_0 and ds_1.

For the sharded tables t_order and t_order_item, each record is written to the table on the corresponding instance according to the sharding column and routing strategy.

  • Read operation
    A join query on the binding tables t_order and t_order_item, as shown in the figure below.
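The value of declaring `binding-tables=t_order,t_order_item` shows up in this join: because both tables share the same `order_id` sharding, the router only needs to join shards with matching suffixes instead of the full cartesian product. The enumeration below is a minimal sketch of that pruning, not the actual routing engine.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: routed join count with and without binding tables. With t_order
// and t_order_item bound (both sharded by order_id % 2), only shards with
// matching suffixes are joined; without binding, every combination of
// shards must be tried. Illustrative only, not the Sharding-JDBC engine.
public class BindingJoinSketch {

    static List<String> routedJoins(int shards, boolean bound) {
        List<String> sql = new ArrayList<>();
        for (int i = 0; i < shards; i++) {
            for (int j = 0; j < shards; j++) {
                if (bound && i != j) continue; // binding prunes mismatched pairs
                sql.add("t_order_" + i + " JOIN t_order_item_" + j);
            }
        }
        return sql;
    }

    public static void main(String[] args) {
        System.out.println(routedJoins(2, true));  // 2 routed joins
        System.out.println(routedJoins(2, false)); // 4 cartesian joins
    }
}
```

So for two shards, binding cuts the actual SQL statements for this join from four to two, which is exactly why the two order tables are declared as binding tables in the configuration.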

3. Conclusion

As an open source product focused on database enhancement, ShardingSphere does relatively well in community activity, product maturity, and documentation. Sharding-JDBC is a client-side sharding solution: it supports all database and table sharding scenarios without introducing an intermediate layer such as a proxy, which reduces operational complexity; with one fewer hop, its latency is also theoretically lower than a proxy's. In addition, Sharding-JDBC supports SQL-standard relational databases such as MySQL, PostgreSQL, Oracle, and SQL Server. However, because Sharding-JDBC is embedded in the application, the supported language is currently limited to Java, which couples it to the application to some degree. On the other hand, since Sharding-JDBC keeps the sharding configuration separate from the application code, switching to other middleware later requires relatively few changes. In summary, Sharding-JDBC is a good choice if you do not want to introduce a middle tier, your system is developed in Java, and you need to connect to different relational databases.

To learn more, visit:

Apache ShardingSphere official website: https://shardingsphere.apache…

Apache ShardingSphere GitHub repository: https://github.com/apache/sha…

SphereEx official website: https://www.sphere-ex.com

You are also welcome to add the community manager on WeChat (ss_assistant_1) to exchange ideas with other ShardingSphere enthusiasts.