How can MySQL store trillions of data?

Time:2021-3-3

Preface

The industry has some basic requirements for system high availability, which can be summarized as follows.

  • There is no single point problem in the system architecture.
  • It can guarantee the availability of services to the maximum extent.

In general, the high availability of a system is measured in "nines": the percentage of total time during which the system can provide service. For example, 99.99% availability means the total downtime over a whole year cannot exceed roughly 52 minutes, as the quick calculation below shows.
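The annual downtime budget is simply the total minutes in a year multiplied by (1 - availability). A quick check on the command line, using bc if it is installed:

echo "365 * 24 * 60 * (1 - 0.9999)" | bc -l    # prints 52.5600, about 52 and a half minutes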

System high availability architecture

Since we need to realize the high availability architecture of the system, what kind of system architecture do we need to build? We can simplify the system architecture as shown in the figure below.

(Figure: simplified system high-availability architecture)

Server planning

Due to my limited computer resources, I build the high-availability environment on the servers listed below. You can expand the environment to more servers following this article; the setup steps are the same.

Host name    IP address         Installed services
binghe151    192.168.175.151    Mycat, Zookeeper, MySQL, HAProxy, Keepalived, Xinetd
binghe152    192.168.175.152    Zookeeper, MySQL
binghe153    192.168.175.153    Zookeeper, MySQL
binghe154    192.168.175.154    Mycat, MySQL, HAProxy, Keepalived, Xinetd
binghe155    192.168.175.155    MySQL

Note: HAProxy and Keepalived are best deployed on the same servers as Mycat.

Install MySQL

For the MySQL installation, you can follow the Binghe Technology WeChat official account and refer to the article "Compiling and installing MySQL 8.x from source + upgrading GCC + upgrading CMake (fully tested)".

Install JDK

Since both Mycat and Zookeeper require a JDK to run, we need to install a JDK on each server.

Here, I take the installation of JDK on binghe151 server as an example. The installation mode of other servers is the same as that on binghe151 server. The installation steps are as follows.

(1) Download JDK version 1.8 from the official website of JDK at the following address:https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Note: the JDK installation package I downloaded is jdk-8u212-linux-x64.tar.gz. If the JDK version has been updated since then, download the corresponding version.

(2) Upload the jdk-8u212-linux-x64.tar.gz installation package to the /usr/local/src directory of the binghe151 server, for example with scp as shown below.
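A minimal example, assuming the package sits in the current directory on the machine you are uploading from:

scp jdk-8u212-linux-x64.tar.gz root@192.168.175.151:/usr/local/src/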

(3) Decompress the jdk-8u212-linux-x64.tar.gz file, as shown below.

tar -zxvf jdk-8u212-linux-x64.tar.gz

(4) Move the decompressed jdk1.8.0_212 directory to the /usr/local directory on the binghe151 server, as shown below.

mv jdk1.8.0_212/ /usr/local/

(5) Configure JDK system environment variables as follows.

vim /etc/profile
JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH

Make the system environment variable take effect, as shown below.

source /etc/profile

(6) Check the JDK version, as shown below.

[root@binghe151 ~]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

The results show that the JDK version information is output correctly, which indicates that the JDK installation is successful.

Install MYCAT

Download the release version of Mycat 1.6.7.4, unzip it to the /usr/local/mycat directory of the server, configure Mycat's system environment variables, and then edit Mycat's configuration files. The final Mycat configuration is as follows.

  • schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

    <schema name="shop" checkSQLschema="false" sqlMaxLimit="1000">
        <!--<table name="order_master" primaryKey="order_id" dataNode = "ordb"/>-->
        <table name="order_master" primaryKey="order_id" dataNode = "orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" autoIncrement="true">
            <childTable name="order_detail" primaryKey="order_detail_id" joinKey="order_id" parentKey="order_id" autoIncrement="true"/>
        </table>
        <table name="order_cart" primaryKey="cart_id" dataNode = "ordb"/>
        <table name="order_customer_addr" primaryKey="customer_addr_id" dataNode = "ordb"/>
        <table name="region_info" primaryKey="region_id" dataNode = "ordb,prodb,custdb" type="global"/>
        <table name="serial" primaryKey="id" dataNode = "ordb"/>
        <table name="shipping_info" primaryKey="ship_id" dataNode = "ordb"/>
        <table name="warehouse_info" primaryKey="w_id" dataNode = "ordb"/>
        <table name="warehouse_proudct" primaryKey="wp_id" dataNode = "ordb"/>
        
        <table name="product_brand_info" primaryKey="brand_id" dataNode = "prodb"/>
        <table name="product_category" primaryKey="category_id" dataNode = "prodb"/>
        <table name="product_comment" primaryKey="comment_id" dataNode = "prodb"/>
        <table name="product_info" primaryKey="product_id" dataNode = "prodb"/>
        <table name="product_pic_info" primaryKey="product_pic_id" dataNode = "prodb"/>
        <table name="product_supplier_info" primaryKey="supplier_id" dataNode = "prodb"/>
        
        <table name="customer_balance_log" primaryKey="balance_id" dataNode = "custdb"/>
        <table name="customer_inf" primaryKey="customer_inf_id" dataNode = "custdb"/>
        <table name="customer_level_inf" primaryKey="customer_level" dataNode = "custdb"/>
        <table name="customer_login" primaryKey="customer_id" dataNode = "custdb"/>
        <table name="customer_login_log" primaryKey="login_id" dataNode = "custdb"/>
        <table name="customer_point_log" primaryKey="point_id" dataNode = "custdb"/>
        
    </schema>
    
    <dataNode name="mycat" dataHost="binghe151" database="mycat" />
     
    <dataNode name="ordb" dataHost="binghe152" database="order_db" />
    <dataNode name="prodb" dataHost="binghe153" database="product_db" />
    <dataNode name="custdb" dataHost="binghe154" database="customer_db" />
    
    <dataNode name="orderdb01" dataHost="binghe152" database="orderdb01" />
    <dataNode name="orderdb02" dataHost="binghe152" database="orderdb02" />
    <dataNode name="orderdb03" dataHost="binghe153" database="orderdb03" />
    <dataNode name="orderdb04" dataHost="binghe153" database="orderdb04" />
    
    <dataHost name="binghe151" maxCon="1000" minCon="10" balance="1"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe51" url="192.168.175.151:3306" user="mycat" password="mycat"/>
    </dataHost>
    
    <dataHost name="binghe152" maxCon="1000" minCon="10" balance="1"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe52" url="192.168.175.152:3306" user="mycat" password="mycat"/>
    </dataHost>
    
    <dataHost name="binghe153" maxCon="1000" minCon="10" balance="1"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe53" url="192.168.175.153:3306" user="mycat" password="mycat"/>
    </dataHost>
    
    <dataHost name="binghe154" maxCon="1000" minCon="10" balance="1"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe54" url="192.168.175.154:3306" user="mycat" password="mycat"/>
    </dataHost>
    
</mycat:schema>
  • server.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
    <system>
        <property name="useHandshakeV10">1</property>
        <property name="defaultSqlParser">druidparser</property>
        <property name="serverPort">3307</property>
        <property name="managerPort">3308</property>
        <property name="nonePasswordLogin">0</property>
        <property name="bindIp">0.0.0.0</property>
        <property name="charset">utf8mb4</property>
        <property name="frontWriteQueueSize">2048</property>
        <property name="txIsolation">2</property>
        <property name="processors">2</property>
        <property name="idleTimeout">1800000</property>
        <property name="sqlExecuteTimeout">300</property>
        <property name="useSqlStat">0</property>
        <property name="useGlobleTableCheck">0</property>
        <property name="sequenceHandlerType">1</property>
        <property name="defaultMaxLimit">1000</property>
        <property name="maxPacketSize">104857600</property>
        
        <property name="sqlInterceptor">
            io.mycat.server.interceptor.impl.StatisticsSqlInterceptor
        </property>
        <property name="sqlInterceptorType">
            UPDATE,DELETE,INSERT
        </property>
        <property name="sqlInterceptorFile">/tmp/sql.txt</property>
    </system>
    
    <firewall>
        <whitehost>
            <host user="mycat" host="192.168.175.151"></host>
        </whitehost>
        <blacklist check="true">
            <property name="noneBaseStatementAllow">true</property>
            <property name="deleteWhereNoneCheck">true</property>
        </blacklist>
    </firewall>
    
    <user name="mycat" defaultAccount="true">
        <property name="usingDecrypt">1</property>
        <property name="password">cTwf23RrpBCEmalp/nx0BAKenNhvNs2NSr9nYiMzHADeEDEfwVWlI6hBDccJjNBJqJxnunHFp5ae63PPnMfGYA==</property>
        <property name="schemas">shop</property>
    </user>

</mycat:server>
  • rule.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
    <tableRule name="order_master">
        <rule>
            <columns>customer_id</columns>
            <algorithm>mod-long</algorithm>
        </rule>
    </tableRule>
    
    <function name="mod-long" class="io.mycat.route.function.PartitionByMod">
        <property name="count">4</property>
    </function>
</mycat:rule>
  • sequence_db_conf.properties
#sequence stored in datanode
GLOBAL=mycat
ORDER_MASTER=mycat
ORDER_DETAIL=mycat

The Mycat configuration above is for reference only; you don't have to follow my business setup. The focus of this article is building a highly available Mycat environment.

In MySQL, create the account that Mycat uses to connect to MySQL, as shown below.

CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat'; 
GRANT SELECT, INSERT, UPDATE, DELETE,EXECUTE  ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
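Optionally, you can verify that the new account works by connecting to the server's LAN address with it (the host restriction '192.168.175.%' requires the connection to come from that network segment):

mysql -umycat -pmycat -h192.168.175.151 -P3306 -e "SELECT USER();"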

Install zookeeper cluster

After the JDK is installed and configured, the zookeeper cluster needs to be built. According to the server planning, the zookeeper cluster is now built on three servers: “binghe151”, “binghe152” and “binghe153”.

1. Download zookeeper

Download the installation package of zookeeper from the official Apache website. The download address of the installation package of zookeeper is:https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/. The details are shown in the figure below.

(Figure: the Zookeeper download page on the Apache mirror)

You can also directly download zookeeper-3.5.5 by executing the following command on binghe151 server.

wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz

Executing the above command downloads the apache-zookeeper-3.5.5-bin.tar.gz installation package directly to the binghe151 server.

2. Install and configure zookeeper

Note: steps (1), (2) and (3) are executed on the binghe151 server.

(1) Decompress the zookeeper installation package

Execute the following commands on the binghe151 server to unzip Zookeeper under the /usr/local/ directory and rename the resulting directory to zookeeper-3.5.5.

tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz
mv apache-zookeeper-3.5.5-bin zookeeper-3.5.5

(2) Configure zookeeper system environment variables

Similarly, Zookeeper's system environment variables need to be configured in the /etc/profile file, as follows:

ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
PATH=$ZOOKEEPER_HOME/bin:$PATH
export ZOOKEEPER_HOME PATH

Combined with the previously configured JDK system environment variables, the overall configuration of /etc/profile is as follows:

MYSQL_HOME=/usr/local/mysql
JAVA_HOME=/usr/local/jdk1.8.0_212
MYCAT_HOME=/usr/local/mycat
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
MPC_HOME=/usr/local/mpc-1.1.0
GMP_HOME=/usr/local/gmp-6.1.2
MPFR_HOME=/usr/local/mpfr-4.0.2
CLASS_PATH=.:$JAVA_HOME/lib
LD_LIBRARY_PATH=$MPC_HOME/lib:$GMP_HOME/lib:$MPFR_HOME/lib:$LD_LIBRARY_PATH
PATH=$MYSQL_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$MYCAT_HOME/bin:$PATH
export JAVA_HOME ZOOKEEPER_HOME MYCAT_HOME CLASS_PATH MYSQL_HOME MPC_HOME GMP_HOME MPFR_HOME LD_LIBRARY_PATH PATH

(3) Configure zookeeper

First, rename the zoo_sample.cfg file in the $ZOOKEEPER_HOME/conf directory ($ZOOKEEPER_HOME is Zookeeper's installation directory) to zoo.cfg. The specific commands are as follows:

cd /usr/local/zookeeper-3.5.5/conf/
mv zoo_sample.cfg zoo.cfg

Next, modify zoo.cfg. The contents after revision are as follows:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.5/data
dataLogDir=/usr/local/zookeeper-3.5.5/dataLog
clientPort=2181
server.1=binghe151:2888:3888
server.2=binghe152:2888:3888
server.3=binghe153:2888:3888

Create data and datalog folders in the installation directory of zookeeper.

mkdir -p /usr/local/zookeeper-3.5.5/data
mkdir -p /usr/local/zookeeper-3.5.5/dataLog

Switch to the new data directory and create a myid file with the number 1 as follows:

cd /usr/local/zookeeper-3.5.5/data
vim myid

Write the number 1 to the file myid.
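If you prefer not to open an editor, a single command achieves the same result, just like the later steps do for the other nodes:

echo "1" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
1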

3. Copy zookeeper and system environment variable files to other servers

Note: steps (1) and (2) are performed on the binghe151 server.

(1) Copy zookeeper to other servers

According to the server plan, now copy Zookeeper to the binghe152 and binghe153 servers. The specific operations are as follows:

scp -r /usr/local/zookeeper-3.5.5/ binghe152:/usr/local/
scp -r /usr/local/zookeeper-3.5.5/ binghe153:/usr/local/

(2) Copy system environment variable files to other servers

According to the server plan, copy the system environment variable file /etc/profile to the binghe152 and binghe153 servers. The specific operations are as follows:

scp /etc/profile binghe152:/etc/
scp /etc/profile binghe153:/etc/

The above operations may require a password; just enter it when prompted. If you prefer, you can set up passwordless SSH first, as sketched below.
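A minimal sketch of passwordless SSH from binghe151 to the other two servers (optional, assuming root SSH login is allowed):

ssh-keygen -t rsa          # press Enter to accept the defaults
ssh-copy-id root@binghe152
ssh-copy-id root@binghe153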

4. Modify the myid file on other servers

Modify the content of zookeeper’s myid file on binghe152 server to number 2, and modify the content of zookeeper’s myid file on binghe153 server to number 3. The details are as follows:

Perform the following operations on binghe152 server:

echo "2" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
2

Perform the following operations on binghe153 server:

echo "3" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
3

5. Make the environment variable effective

Perform the following operations on binghe151, binghe152, and binghe153 to make the system environment variables effective.

source /etc/profile

6. Start zookeeper cluster

Perform the following operations on binghe151, binghe152 and binghe153 to start the zookeeper cluster.

zkServer.sh start

7. Check the startup status of zookeeper cluster

  • Binghe151 server
[root@binghe151 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
  • Binghe152 server
[root@binghe152 local]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
  • Binghe153 server
[root@binghe153 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

As you can see, the role of zookeeper on binghe151 and binghe153 servers is follower, and the role of zookeeper on binghe152 servers is leader.

Initialize MYCAT configuration to zookeeper cluster

Note: the data in Zookeeper is initialized from the binghe151 server, because we previously installed Mycat on the binghe151 server.

1. View initialization script

An init_zk_data.sh script file is provided in the bin directory under the Mycat installation directory, as shown below.

[root@binghe151 ~]# ll /usr/local/mycat/bin/
total 384
-rwxr-xr-x 1 root root   3658 Feb 26 17:10 dataMigrate.sh
-rwxr-xr-x 1 root root   1272 Feb 26 17:10 init_zk_data.sh
-rwxr-xr-x 1 root root  15701 Feb 28 20:51 mycat
-rwxr-xr-x 1 root root   2986 Feb 26 17:10 rehash.sh
-rwxr-xr-x 1 root root   2526 Feb 26 17:10 startup_nowrap.sh
-rwxr-xr-x 1 root root 140198 Feb 28 20:51 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root  99401 Feb 28 20:51 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Feb 28 20:51 wrapper-linux-x86-64

The init_zk_data.sh script is used to initialize Mycat's configuration in Zookeeper. It reads the configuration files in the conf directory under the Mycat installation directory and writes them to the Zookeeper cluster.

2. Copy the MYCAT configuration file

First, let’s look at the file information in the conf directory under the MYCAT installation directory, as shown below.

[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload

Next, copy the schema.xml, server.xml, rule.xml, and sequence_db_conf.properties files in the conf directory under the Mycat installation directory to the zkconf directory under conf, as shown below.

cp schema.xml server.xml rule.xml sequence_db_conf.properties zkconf/

3. Write MYCAT configuration information to zookeeper cluster

Execute the init_zk_data.sh script file to initialize the configuration information into the Zookeeper cluster, as shown below.

[root@binghe151 bin]# /usr/local/mycat/bin/init_zk_data.sh
o2020-03-08 20:03:13 INFO JAVA_CMD=/usr/local/jdk1.8.0_212/bin/java
o2020-03-08 20:03:13 INFO Start to initialize /mycat of ZooKeeper
o2020-03-08 20:03:14 INFO Done

According to the above information, MYCAT successfully wrote initialization configuration information to zookeeper.

4. Verify whether Mycat configuration information is successfully written to Zookeeper

We can log in to Zookeeper with the Zookeeper client command zkCli.sh to verify whether Mycat's configuration information was successfully written to Zookeeper.

First, log in to zookeeper, as shown below.

[root@binghe151 ~]# zkCli.sh 
Connecting to localhost:2181
###################N-line output is omitted here######################
Welcome to ZooKeeper!

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]

Next, view the information of MYCAT on the zookeeper command line, as shown below.

[zk: localhost:2181(CONNECTED) 0] ls /
[mycat, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
[zk: localhost:2181(CONNECTED) 2] ls /mycat/mycat-cluster-1
[cache, line, rules, schema, sequences, server]
[zk: localhost:2181(CONNECTED) 3] 

You can see that there are six directories under /mycat/mycat-cluster-1. Next, check the information under the schema directory, as shown below.

[zk: localhost:2181(CONNECTED) 3] ls /mycat/mycat-cluster-1/schema
[dataHost, dataNode, schema]

Next, let’s look at the configuration of datahost, as shown below.

[zk: localhost:2181(CONNECTED) 4] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]

The format of the above output information is messy, but it can be seen that it is in JSON format. We can format the output information, and the formatted result is as follows.

[
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe151",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe51",
                "url": "192.168.175.151:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe152",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe52",
                "url": "192.168.175.152:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe153",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe53",
                "url": "192.168.175.153:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe154",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe54",
                "url": "192.168.175.154:3306",
                "password": "root",
                "user": "root"
            }
        ]
    }
]

As you can see, the dataHost node information we configured in Mycat's schema.xml file has been successfully written to Zookeeper.
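The formatted JSON above was produced by hand. If Python happens to be installed on the server (an assumption, not a requirement of this setup), you can also pretty-print the node data by saving the string printed by the get command to a file and running it through json.tool, for example:

# dataHost.json contains the JSON string printed by the get command above
python -m json.tool dataHost.json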

In order to verify whether the configuration information of MYCAT has been synchronized to other nodes of zookeeper, we can also log in to zookeeper on binghe152 and binghe153 servers to check whether the configuration information of MYCAT has been written successfully.

  • Binghe152 server
[root@binghe152 ~]# zkCli.sh 
Connecting to localhost:2181
#################Omit n lines of output information################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]

As you can see, the configuration information of MYCAT is successfully synchronized to zookeeper on binghe152 server.

  • Binghe153 server
[root@binghe153 ~]# zkCli.sh 
Connecting to localhost:2181
#####################N lines of output information are omitted here#####################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]

As you can see, the configuration information of MYCAT is successfully synchronized to zookeeper on binghe153 server.

Configure MYCAT to support zookeeper startup

1. Configure MYCAT on binghe151 server

On the binghe151 server, enter the conf directory of the MYCAT installation directory to view the file information, as shown below.

[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload

As you can see, there is a myid.properties file under Mycat's conf directory. Next, edit this file with the vim editor, as shown below.

vim myid.properties 

The edited contents of the myid.properties file are as follows.

loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_151
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server  booster  ;   booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1

Several important parameters are described below.

  • loadZk: whether to load the configuration from Zookeeper. true: yes; false: no.
  • zkURL: the Zookeeper connection address; multiple Zookeeper addresses are separated by commas.
  • clusterId: the ID of the current Mycat cluster. This ID must be the same as the directory name under the /mycat directory in Zookeeper, as shown below.
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
  • myid: the ID of the current Mycat node. Here I use the mycat_ prefix plus the last three digits of the server's IP address.
  • clusterSize: the number of Mycat nodes in the Mycat cluster. Here we deploy Mycat on the binghe151 and binghe154 nodes, so the number of Mycat nodes is 2.
  • clusterNodes: all Mycat node IDs (as configured in myid) in the Mycat cluster, separated by commas. The nodes configured here are mycat_151 and mycat_154.

2. Install new MYCAT on binghe154 server

Download the same version of Mycat on the binghe154 server as on the binghe151 server and unzip it to the /usr/local/mycat directory on the binghe154 server.

You can also directly enter the following command on binghe151 server to copy the installation directory of MYCAT to binghe154 server.

[root@binghe151 ~]# scp -r /usr/local/mycat binghe154:/usr/local

Note: don't forget to configure Mycat's system environment variables on the binghe154 server.

3. Modify the configuration of MYCAT on binghe154 server

On the binghe154 server, modify the myid.properties file, as shown below.

vim /usr/local/mycat/conf/myid.properties

The revised contents of the myid.properties file are as follows.

loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_154
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server  booster  ;   booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1

4. Restart MYCAT

Restart MYCAT on binghe151 server and binghe154 server respectively, as shown below.

Note: restart Mycat on the binghe151 server first, and then restart Mycat on the binghe154 server.

  • Binghe151 server
[root@binghe151 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
  • Binghe154 server
[root@binghe154 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...

Check the startup log of MYCAT on binghe151 and binghe154 servers respectively, as shown below.

STATUS | wrapper  | 2020/03/08 21:08:15 | <-- Wrapper Stopped
STATUS | wrapper  | 2020/03/08 21:08:15 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2020/03/08 21:08:15 | Launching a JVM...
INFO   | jvm 1    | 2020/03/08 21:08:16 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
INFO   | jvm 1    | 2020/03/08 21:08:16 |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
INFO   | jvm 1    | 2020/03/08 21:08:16 | 
INFO   | jvm 1    | 2020/03/08 21:08:28 | MyCAT Server startup successfully. see logs in logs/mycat.log

It can be seen from the output of the log that MYCAT is restarted successfully.

At this point, restart Mycat on the binghe151 server first, and then restart Mycat on the binghe154 server. After that, we will find that the schema.xml, server.xml, rule.xml and sequence_db_conf.properties files in Mycat's conf directory on the binghe154 server are the same as Mycat's configuration files on the binghe151 server. This is the result of Mycat on the binghe154 server reading the configuration files from Zookeeper.

From now on, we only need to modify Mycat's configuration in Zookeeper, and the configuration will be automatically synchronized to Mycat, which keeps the configuration of multiple Mycat nodes consistent. A sketch of this workflow is shown below.
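For example, based on the steps above, pushing an updated sharding configuration to all Mycat nodes could look roughly like this (run on binghe151, where the configuration was initialized; this is a sketch, not a prescribed procedure):

# 1. Edit the configuration files in the zkconf directory
cd /usr/local/mycat/conf/zkconf
vim schema.xml

# 2. Re-initialize the configuration in the Zookeeper cluster
/usr/local/mycat/bin/init_zk_data.sh

# 3. If the changes are not picked up automatically, restart the Mycat nodes
mycat restart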

Configure virtual IP

Configure virtual IP on binghe151 and binghe154 servers respectively, as shown below.

ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1

The effect of configuring virtual IP is as follows, taking binghe151 server as an example.

[root@binghe151 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45  
          inet addr:192.168.175.151  Bcast:192.168.175.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe10:a145/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:116766 errors:0 dropped:0 overruns:0 frame:0
          TX packets:85230 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:25559422 (24.3 MiB)  TX bytes:55997016 (53.4 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45  
          inet addr:192.168.175.110  Bcast:192.168.175.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:51102 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51102 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2934009 (2.7 MiB)  TX bytes:2934009 (2.7 MiB)

Note: after adding a VIP on the command line, the VIP will disappear when the server restarts. Therefore, it is better to write the commands that create the VIP into a script file, for example /usr/local/script/vip.sh, as shown below.

mkdir /usr/local/script
vim /usr/local/script/vip.sh

The contents of the document are as follows.

ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1

Next, add the /usr/local/script/vip.sh file to the server's startup items, as shown below.

echo /usr/local/script/vip.sh >> /etc/rc.d/rc.local
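Note that on some distributions (CentOS 7, for example) /etc/rc.d/rc.local is not executable by default, and the script itself also needs execute permission; if that applies to your environment, additionally run:

chmod +x /usr/local/script/vip.sh
chmod +x /etc/rc.d/rc.local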

Configure IP forwarding

Configure the system kernel's IP forwarding function on the binghe151 and binghe154 servers by editing the /etc/sysctl.conf file, as shown below.

vim /etc/sysctl.conf

Find the following line.

net.ipv4.ip_forward = 0

Modify it to the code shown below.

net.ipv4.ip_forward = 1

Save and exit the VIM editor, and run the following command to make the configuration take effect.

sysctl -p
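You can confirm that the setting is active as follows; a value of 1 means IP forwarding is enabled.

sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1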

Install and configure xinetd service

We need to install the xinetd service on the servers where HAProxy is installed, that is, on the binghe151 and binghe154 servers, in order to open port 48700.

(1) Execute the following command on the server command line to install xinetd service, as shown below.

yum install xinetd -y

(2) Edit the /etc/xinetd.conf file, as shown below.

vim /etc/xinetd.conf

Check whether the following configuration exists in the file.

includedir /etc/xinetd.d

If the above configuration does not exist in the /etc/xinetd.conf file, add it to the file; if it already exists, no modification is needed.

(3) Create the /etc/xinetd.d directory, as follows.

mkdir /etc/xinetd.d

Note: if the /etc/xinetd.d directory already exists, the following error will be reported when creating the directory.

mkdir: cannot create directory `/etc/xinetd.d': File exists

You don’t have to pay attention to this error message.

(4) Add the configuration file mycat_status for the Mycat status detection service in the /etc/xinetd.d directory, as shown below.

touch /etc/xinetd.d/mycat_status

(5) Edit the mycat_status file, as shown below.

vim /etc/xinetd.d/mycat_status

The edited contents of the mycat_status file are as follows.

service mycat_status
{
flags = REUSE
socket_type = stream
port = 48700
wait = no
user = root
server =/usr/local/bin/mycat_check.sh
log_on_failure += USERID
disable = no
}

Some xinetd configuration parameters are described below.

  • socket_type: the packet handling method; stream means TCP packets.
  • port: the port number that the xinetd service listens on.
  • wait: no means the service does not wait, that is, it runs in multi-threaded mode.
  • user: the user that runs the xinetd service.
  • server: the service script to be started.
  • log_on_failure: the log content recorded when the service fails.
  • disable: this item must be set to no for the xinetd service to be started.

(6) Add the mycat_check.sh service script in /usr/local/bin, as shown below.

touch /usr/local/bin/mycat_check.sh

(7) Edit the /usr/local/bin/mycat_check.sh file, as shown below.

vim /usr/local/bin/mycat_check.sh

The contents of the edited file are as follows.

#!/bin/bash
mycat=`/usr/local/mycat/bin/mycat status | grep 'not running' | wc -l`
if [ "$mycat" = "0" ]; then
/bin/echo -e "HTTP/1.1 200 OK\r\n"
else
/bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
/usr/local/mycat/bin/mycat start
fi

Grant executable permission to the mycat_check.sh file, as shown below.

chmod a+x /usr/local/bin/mycat_check.sh

(8) Edit the /etc/services file, as shown below.

vim /etc/services

Add the following at the end of the file.

mycat_status  48700/tcp        # mycat_status

The port number here must be the same as the port number configured in the /etc/xinetd.d/mycat_status file. You can verify the entry as shown below.
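A quick check that the entry was appended correctly:

grep mycat_status /etc/services
mycat_status  48700/tcp        # mycat_status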

(9) Restart the xinetd service, as shown below.

service xinetd restart

(10) Check whether the mycat_status service started successfully, as shown below.

  • Binghe151 server
[root@binghe151 ~]# netstat -antup|grep 48700
tcp    0   0 :::48700          :::*      LISTEN   2776/xinetd
  • Binghe154 server
[root@binghe154 ~]# netstat -antup|grep 48700
tcp    0   0 :::48700          :::*      LISTEN   6654/xinetd

The results show that the mycat_status service on both servers started successfully.

So far, xinetd service is installed and configured successfully, that is, MYCAT status check service is installed successfully.

Install and configure haproxy

We directly install haproxy on binghe151 and binghe154 servers with the following command.

yum install haproxy -y

After the installation, we need to configure HAProxy. HAProxy's configuration file directory is /etc/haproxy. Let's check the file information in this directory, as shown below.

[root@binghe151 ~]# ll /etc/haproxy/
total 4
-rw-r--r-- 1 root root 3142 Oct 21  2016 haproxy.cfg

We can see that there is a haproxy.cfg file in the /etc/haproxy/ directory. Next, modify the haproxy.cfg file; the modified contents are as follows.

global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen  admin_status
      bind 0.0.0.0:48800
      stats uri /admin-status
      stats auth  admin:admin
listen    allmycat_service
      bind 0.0.0.0:3366
      mode tcp
      option tcplog
      option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
      balance    roundrobin
      server    mycat_151 192.168.175.151:3307 check port 48700 inter 5s rise 2 fall 3
      server    mycat_154 192.168.175.154:3307 check port 48700 inter 5s rise 2 fall 3
listen    allmycat_admin
      bind 0.0.0.0:3377
      mode tcp
      option tcplog
      option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
      balance    roundrobin
      server    mycat_151 192.168.175.151:3308 check port 48700 inter 5s rise 2 fall 3
      server    mycat_154 192.168.175.154:3308 check port 48700 inter 5s rise 2 fall 3

Next, start haproxy on binghe151 server and binghe154 server, as shown below.

haproxy -f /etc/haproxy/haproxy.cfg 
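Before connecting through HAProxy, you can optionally confirm that it is listening on the configured ports (3366, 3377 and 48800), for example:

netstat -antup | grep haproxy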

Next, we use the mysql command to connect to Mycat through the virtual IP and port that HAProxy listens on, as shown below.

[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

As you can see, the connection to MYCAT is successful.

Install keepalived

1. Install and configure keepalived

Directly enter the following command on binghe151 and binghe154 servers to install keepalived.

yum install keepalived -y

After successful installation, a keepalived directory will be generated under the /etc directory. Next, configure the keepalived.conf file in the /etc/keepalived directory, as shown below.

vim /etc/keepalived/keepalived.conf
  • Binghe151 server configuration
! Configuration File for keepalived
vrrp_script chk_http_port {
  script "/etc/keepalived/check_haproxy.sh"
  interval 2
  weight 2
}
vrrp_instance VI_1 {
  state MASTER 
  interface eth0 
  virtual_router_id 51 
  priority 150 
  advert_int 1 
  authentication {
  auth_type PASS
  auth_pass 1111
}
track_script {
  chk_http_port
}
virtual_ipaddress { 
  192.168.175.110 dev eth0 scope global
}
}
  • Binghe154 server configuration
! Configuration File for keepalived
vrrp_script chk_http_port {
  script "/etc/keepalived/check_haproxy.sh"
  interval 2
  weight 2
}
vrrp_instance VI_1 {
  state BACKUP
  interface eth0 
  virtual_router_id 51 
  priority 120
  advert_int 1 
  authentication {
  auth_type PASS
  auth_pass 1111
}
track_script {
  chk_http_port
}
virtual_ipaddress { 
  192.168.175.110 dev eth0 scope global
}
}

2. Write script to detect haproxy

Next, create a check_haproxy.sh script in the /etc/keepalived directory on both the binghe151 and binghe154 servers, as shown below.

#!/bin/bash
STARTHAPROXY="/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg"
STOPKEEPALIVED="/etc/init.d/keepalived stop"
#STOPKEEPALIVED="/usr/bin/systemctl stop keepalived"
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "[check_haproxy status]" >> $LOGFILE
A=`ps -C haproxy --no-header |wc -l`
echo "[check_haproxy status]" >> $LOGFILE
date >> $LOGFILE
if [ $A -eq 0 ];then
   echo $STARTHAPROXY >> $LOGFILE
   $STARTHAPROXY >> $LOGFILE 2>&1
   sleep 5
fi
# If HAProxy is still not running after the restart attempt, report failure (exit 1);
# otherwise report success (exit 0) so keepalived treats this node as healthy.
if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
   exit 1
else
   exit 0
fi

Grant executable permission to the check_haproxy.sh script with the following command.

chmod a+x /etc/keepalived/check_haproxy.sh 
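You can also run the script by hand once to make sure it behaves as expected; with the script above, an exit code of 0 means HAProxy is running (or has just been started), and 1 means it could not be started.

/etc/keepalived/check_haproxy.sh
echo $?    # 0: HAProxy is running; 1: HAProxy could not be started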

3. Start keepalived

After the configuration is completed, we can start keepalived on binghe151 and binghe154 servers, as shown below.

/etc/init.d/keepalived start

Check whether keepalived is started successfully, as shown below.

  • Binghe151 server
[root@binghe151 ~]# ps -ef | grep keepalived
root       1221      1  0 20:06 ?        00:00:00 keepalived -D
root       1222   1221  0 20:06 ?        00:00:00 keepalived -D
root       1223   1221  0 20:06 ?        00:00:02 keepalived -D
root      93290   3787  0 21:42 pts/0    00:00:00 grep keepalived
  • Binghe154 server
[root@binghe154 ~]# ps -ef | grep keepalived
root       1224      1  0 20:06 ?        00:00:00 keepalived -D
root       1225   1224  0 20:06 ?        00:00:00 keepalived -D
root       1226   1224  0 20:06 ?        00:00:02 keepalived -D
root      94636   3798  0 21:43 pts/0    00:00:00 grep keepalived

As you can see, the keepalived service on the two servers is started successfully.

4. Verify the virtual IP bound by keepalived

Next, let’s check whether the keepalived on the two servers are bound with virtual IP.

  • Binghe151 server
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link 
       valid_lft forever preferred_lft forever

You can see the following line of code.

inet 192.168.175.110/32 scope global eth0

This indicates that keepalived on the binghe151 server is bound to the virtual IP 192.168.175.110.

  • Binghe154 server
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link 
       valid_lft forever preferred_lft forever

You can see that the keepalived on binghe154 server is not bound with virtual IP.

5. Test the drift of virtual IP

How to test the drift of virtual IP? First, we stop keepalived on the binghe151 server, as shown below.

/etc/init.d/keepalived stop

Next, check the virtual IP binding of keepalived on the binghe154 server, as shown below.

[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link 
       valid_lft forever preferred_lft forever

You can see that in the output result information, there is the following line of information.

 inet 192.168.175.110/32 scope global eth0

This indicates that the virtual IP 192.168.175.110 is now bound to keepalived on the binghe154 server; the virtual IP has drifted to the binghe154 server.

6. The binghe151 server preempts the virtual IP again

Next, we start keepalived on the binghe151 server, as shown below.

/etc/init.d/keepalived start

After successful startup, we will check the binding of virtual IP again, as shown below.

  • Binghe151 server
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link 
       valid_lft forever preferred_lft forever
  • Binghe154 server
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link 
       valid_lft forever preferred_lft forever

Since the priority of keepalived configured on binghe151 server is higher than that on binghe154 server, after starting keepalived on binghe151 server again, keepalived on binghe151 server will preempt virtual IP.

Configure MySQL master-slave replication

Here, for simplicity, I configure MySQL master-slave replication on the binghe154 and binghe155 servers. You can also configure MySQL master-slave replication on other servers according to your actual situation (note: what I configure here is master-slave mode).

1. Edit the my.cnf file

  • Binghe154 server
server_id = 154
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
  • Binghe155 server
server_id = 155
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery

2. Synchronize the data of MySQL on two servers

There is only one customer_db database on the binghe154 server. We use the mysqldump command to export the customer_db database, as shown below.

[root@binghe154 ~]# mysqldump --master-data=2 --single-transaction -uroot -p --databases customer_db > binghe154.sql
Enter password: 

Next, let's look at the binghe154.sql file.

more binghe154.sql

In the file, we can find the following information.

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;

It shows that the current binary log file of MySQL is mysql-bin.000042, and the location of the binary log file is 995.

Next, we copy the binghe154.sql file to the binghe155 server, as shown below.

scp binghe154.sql 192.168.175.155:/usr/local/src

On the binghe155 server, import the binghe154.sql script into MySQL, as shown below.

mysql -uroot -p < /usr/local/src/binghe154.sql

At this point, the initialization of data is completed.

3. Create master-slave copy account

In MySQL of binghe154 server, create a MySQL account for master-slave replication, as shown below.

mysql> CREATE USER 'repl'@'192.168.175.%' IDENTIFIED BY 'repl123456';
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER USER 'repl'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'repl123456';                           
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.175.%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

4. Configure the replication link

Log in to MySQL on binghe155 server and use the following command to configure the replication link.

mysql> change master to 
     > master_host='192.168.175.154',
     > master_port=3306,
     > master_user='repl',
     > master_password='repl123456',
     > MASTER_LOG_FILE='mysql-bin.000042',
     > MASTER_LOG_POS=995;

Here, MASTER_LOG_FILE='mysql-bin.000042' and MASTER_LOG_POS=995 are the values found in the binghe154.sql file.

5. Start the slave library

Start the slave library from the MySQL command line of binghe155 server, as shown below.

mysql> start slave;

Check whether the slave library is started successfully, as shown below.

mysql> SHOW slave STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.175.151
                  Master_User: binghe152
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 1360
               Relay_Log_File: relay-bin.000003
                Relay_Log_Pos: 322
        Relay_Master_Log_File: mysql-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
#################Omit part of the output result information##################

The results show that the values of both the Slave_IO_Running and Slave_SQL_Running options are Yes, which indicates that the MySQL master-slave replication environment has been built successfully.
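As an optional end-to-end check, you can write to the master and confirm that the change appears on the slave. A minimal sketch using a throwaway table (the table name test_repl is just an example, not part of the business schema):

# On binghe154 (master): create a test table and insert a row
mysql -uroot -p -e "CREATE TABLE customer_db.test_repl (id INT PRIMARY KEY); INSERT INTO customer_db.test_repl VALUES (1);"

# On binghe155 (slave): the row should appear within a moment
mysql -uroot -p -e "SELECT * FROM customer_db.test_repl;"

# Clean up on the master when you are done
mysql -uroot -p -e "DROP TABLE customer_db.test_repl;"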

Finally, don’t forget to create MYCAT users in MySQL of binghe155 server, as shown below.

CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat'; 
GRANT SELECT, INSERT, UPDATE, DELETE,EXECUTE  ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;

Configure MYCAT read / write separation

Modify Mycat's schema.xml file to implement read/write separation for MySQL on the binghe154 and binghe155 servers. In the conf/zkconf directory under the Mycat installation directory, modify the schema.xml file. The modified schema.xml file is shown below.

<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="shop" checkSQLschema="true" sqlMaxLimit="1000">
        <table name="order_master" dataNode="orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" primaryKey="order_id" autoIncrement="true">
            <childTable name="order_detail" joinKey="order_id" parentKey="order_id" primaryKey="order_detail_id" autoIncrement="true"/>
        </table>
        <table name="order_cart" dataNode="ordb" primaryKey="cart_id"/>
        <table name="order_customer_addr" dataNode="ordb" primaryKey="customer_addr_id"/>
        <table name="region_info" dataNode="ordb,prodb,custdb" primaryKey="region_id" type="global"/>
        <table name="serial" dataNode="ordb" primaryKey="id"/>
        <table name="shipping_info" dataNode="ordb" primaryKey="ship_id"/>
        <table name="warehouse_info" dataNode="ordb" primaryKey="w_id"/>
        <table name="warehouse_proudct" dataNode="ordb" primaryKey="wp_id"/>
        <table name="product_brand_info" dataNode="prodb" primaryKey="brand_id"/>
        <table name="product_category" dataNode="prodb" primaryKey="category_id"/>
        <table name="product_comment" dataNode="prodb" primaryKey="comment_id"/>
        <table name="product_info" dataNode="prodb" primaryKey="product_id"/>
        <table name="product_pic_info" dataNode="prodb" primaryKey="product_pic_id"/>
        <table name="product_supplier_info" dataNode="prodb" primaryKey="supplier_id"/>
        <table name="customer_balance_log" dataNode="custdb" primaryKey="balance_id"/>
        <table name="customer_inf" dataNode="custdb" primaryKey="customer_inf_id"/>
        <table name="customer_level_inf" dataNode="custdb" primaryKey="customer_level"/>
        <table name="customer_login" dataNode="custdb" primaryKey="customer_id"/>
        <table name="customer_login_log" dataNode="custdb" primaryKey="login_id"/>
        <table name="customer_point_log" dataNode="custdb" primaryKey="point_id"/>
    </schema>
    
    <dataNode name="mycat" dataHost="binghe151" database="mycat"/>
    <dataNode name="ordb" dataHost="binghe152" database="order_db"/>
    <dataNode name="prodb" dataHost="binghe153" database="product_db"/>
    <dataNode name="custdb" dataHost="binghe154" database="customer_db"/>
    <dataNode name="orderdb01" dataHost="binghe152" database="orderdb01"/>
    <dataNode name="orderdb02" dataHost="binghe152" database="orderdb02"/>
    <dataNode name="orderdb03" dataHost="binghe153" database="orderdb03"/>
    <dataNode name="orderdb04" dataHost="binghe153" database="orderdb04"/>
    
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe151" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe51" url="192.168.175.151:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe152" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe52" url="192.168.175.152:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe153" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe53" url="192.168.175.153:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe154" writeType="0" switchTymycate="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe54" url="192.168.175.154:3306" password="mycat" user="mycat">
            <readHost host="binghe55", url="192.168.175.155:3306" user="mycat" password="mycat"/>
        </writeHost>
         <writeHost host="binghe55" url="192.168.175.155:3306" password="mycat" user="mycat"/>
    </dataHost>
</mycat:schema>

Save and exit the VIM editor. Next, initialize the data in zookeeper, as shown below.

/usr/local/mycat/bin/init_zk_data.sh 

After the above command executes successfully, the configuration is automatically synchronized to the schema.xml file in the conf directory of the MYCAT installation directory on the binghe151 and binghe154 servers.
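
To verify that the new readHost really reached the local copies, you can simply grep the synchronized file on binghe151 or binghe154; the path below assumes the /usr/local/mycat installation directory used in this article.

grep readHost /usr/local/mycat/conf/schema.xml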

Next, start the MYCAT services on binghe151 and binghe154 servers respectively.

mycat restart
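
If you want to confirm that MYCAT restarted cleanly, you can check its wrapper log; the path below assumes the /usr/local/mycat installation directory used in this article, and the log file name may differ depending on your MYCAT version.

tail -n 50 /usr/local/mycat/logs/wrapper.log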

How to access the high availability environment

At this point, the entire high availability environment is configured. When an upper-layer application connects to this environment, it needs to connect to the IP address and port on which HAProxy listens. For example, use the mysql command to connect to the highly available environment, as shown below.

[[email protected] ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+----------+
| DATABASE |
+----------+
| shop     |
+----------+
1 row in set (0.10 sec)

mysql> use shop;
Database changed
mysql> show tables;
+-----------------------+
| Tables in shop        |
+-----------------------+
| customer_balance_log  |
| customer_inf          |
| customer_level_inf    |
| customer_login        |
| customer_login_log    |
| customer_point_log    |
| order_cart            |
| order_customer_addr   |
| order_detail          |
| order_master          |
| product_brand_info    |
| product_category      |
| product_comment       |
| product_info          |
| product_pic_info      |
| product_supplier_info |
| region_info           |
| serial                |
| shipping_info         |
| warehouse_info        |
| warehouse_proudct     |
+-----------------------+
21 rows in set (0.00 sec)
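
To confirm that read requests are actually being routed to the slave on binghe155, you can also inspect MYCAT’s data sources through its management port; the example below assumes the default management port 9066 and the same mycat account defined in server.xml, so adjust it if your configuration differs.

mysql -umycat -pmycat -h192.168.175.151 -P9066 -e "show @@datasource;"

In the output, 192.168.175.155 should appear among the data sources serving the binghe154 dataHost.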

Here, I have only extended the read/write separation environment for MySQL on the binghe154 server. You can also implement master-slave replication and read/write separation for MySQL on the other servers according to your actual situation. In this way, the whole environment achieves high availability for HAProxy, MYCAT, MySQL, Zookeeper and Keepalived.

OK, that’s all for today. I’m Glacier. See you next time!!
