Technology sharing | ZooKeeper rapidly deploys a dble cluster environment

Date: 2021-02-25

Author: Ye Xiaoli
This article assumes the reader is familiar with the basic principles of ZooKeeper and dble.
If you want to learn more about dble, see the dble open class at the end of the article.

ZooKeeper (a distributed coordination service) and dble (a distributed database middleware) are used together to build a cluster, which provides a stable distributed coordination environment for MySQL clusters and further achieves high service availability. Note: ZooKeeper (hereinafter "ZK") can manage a dble cluster as a single instance; in this article, we use ZK cluster mode (at least 3 nodes) to manage the dble cluster.

  • Environment architecture
  • Objective
  • Deployment dependencies
  • Installation of dble (3 nodes)
  • Deploy the ZooKeeper cluster
  • Deploy the dble cluster
  • Simple tests
  • Other

Environment architecture

[Figure: environment architecture diagram]

Environment version information:
System | Linux: CentOS Linux release 7.5.1804 (Core)
Middleware | dble: 2.19.05.0
Database | MySQL: 5.7.13
Coordinator | ZooKeeper: 3.4.12

Objective

A ZK cluster is used to keep the status and metadata of each node in the dble cluster consistent. When a node in the cluster fails, the other nodes can continue to provide services. The experiments below verify the feasibility of this architecture.

Deployment dependencies (important)

  1. Install JDK 1.8 or above and make sure JAVA_HOME is configured correctly
  2. Start the MySQL instances, create a user, and grant that user permission to log in remotely
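For example, the back-end MySQL user might be created like this (the user name, password, and host mask are placeholders to adapt; run on each back-end MySQL instance):

```sql
-- Hypothetical user and password; '%' allows remote login from any host
CREATE USER 'your_user'@'%' IDENTIFIED BY 'your_psw';
GRANT ALL PRIVILEGES ON *.* TO 'your_user'@'%';
FLUSH PRIVILEGES;
```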

Installation of dble (3 nodes)

  1. Download the installation package
wget https://github.com/actiontech/dble/releases/download/2.19.05.0%2Ftag/actiontech-dble-2.19.05.0.tar.gz
  2. Unzip the installation package
tar zxvf actiontech-dble-2.19.05.0.tar.gz
  3. Configure the dble back-end nodes: modify the schema.xml file in the dble/conf/ directory, configuring writeHost and readHost to point to your own MySQL instances, as follows:
<writeHost host="hostM1" url="ip1:3306" user="your_user" password="your_psw">
     <readHost host="hostS1" url="ip2:3306" user="your_user" password="your_psw"/>
</writeHost>
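For context, the writeHost element sits inside a dataHost; a minimal sketch of a surrounding schema.xml, based on dble 2.x's MyCat-style configuration (schema, dataNode, and pool-size values are placeholders; check element details against the schema.dtd shipped in dble/conf):

```xml
<?xml version="1.0"?>
<!DOCTYPE dble:schema SYSTEM "schema.dtd">
<dble:schema xmlns:dble="http://dble.cloud/">
    <!-- logical schema exposed by dble -->
    <schema name="schema1">
        <table name="test_global" dataNode="dn1,dn2" type="global"/>
    </schema>
    <!-- each dataNode maps to a physical database on a dataHost -->
    <dataNode name="dn1" dataHost="dh1" database="db_1"/>
    <dataNode name="dn2" dataHost="dh1" database="db_2"/>
    <dataHost name="dh1" maxCon="100" minCon="10" balance="1" switchType="-1" slaveThreshold="100">
        <heartbeat>show slave status</heartbeat>
        <writeHost host="hostM1" url="ip1:3306" user="your_user" password="your_psw">
            <readHost host="hostS1" url="ip2:3306" user="your_user" password="your_psw"/>
        </writeHost>
    </dataHost>
</dble:schema>
```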

Deploy the ZooKeeper cluster

  1. Download and unzip the installation package

http://archive.apache.org/dis…

  2. Configure ZOOKEEPER_HOME correctly
  3. Configure zoo.cfg: rename zoo_sample.cfg to zoo.cfg, then modify zoo.cfg as follows:
tickTime=2000
initLimit=10
syncLimit=5
#Port for ZooKeeper clients
clientPort=2181
#Path of ZooKeeper's data directory
dataDir=/opt/zookeeper/data
#Path of ZooKeeper's transaction log directory
dataLogDir=/opt/zookeeper/logs
#server.N: N is the server ID identifying this node in the cluster; the host part is the server's IP; 2888 is the quorum port and 3888 the leader-election port
server.1=zk1_server_ip:2888:3888
server.2=zk2_server_ip:2888:3888
server.3=zk3_server_ip:2888:3888
  4. Apply the same configuration on the other two ZK nodes
  5. Create the myid file: in the zookeeper-3.4.12/data directory, create a file named myid containing a single line with the server ID of that node
  6. Start the cluster: start the ZK service on each machine
cd zookeeper-3.4.12/bin && ./zkServer.sh start
  7. Verify that the ZK cluster was deployed successfully by checking each server's role in the cluster: one server is the leader and the other two are followers
cd zookeeper-3.4.12/bin && ./zkServer.sh status
#Expected: one node reports Mode: leader, the other two report Mode: follower
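Creating the myid file can be scripted; a minimal sketch using a placeholder data directory (it must match the dataDir configured in zoo.cfg, e.g. /opt/zookeeper/data):

```shell
# Write this node's server ID into dataDir/myid.
# DATADIR is a placeholder; it must match dataDir in zoo.cfg.
DATADIR=/tmp/zookeeper/data
SERVER_ID=1                     # use 2 and 3 on the other two nodes
mkdir -p "$DATADIR"
echo "$SERVER_ID" > "$DATADIR/myid"
cat "$DATADIR/myid"
```

The ID written here must match the N in the corresponding server.N line of zoo.cfg on every node.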

Deploy the dble cluster

  1. Edit the myid.properties file in the dble/conf directory, as follows:
cluster=zk
#client info
ipAddress=zk_server_ip:zk_port
#cluster namespace, please use the same one in one cluster
clusterId=cluster-1
#it must be different for every node in cluster
myid=1

Note: ipAddress can be set to the IP and port of one, several, or all ZK nodes in the cluster. When configuring multiple addresses, separate them with ",". For example: ipAddress=172.100.9.1:2181,172.100.9.2:2181. For details, see the documentation: https://github.com/actiontech…
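Since only the myid value differs between nodes, myid.properties can be generated per node; a minimal sketch (the conf path, ZK address, and node ID are placeholders to adapt):

```shell
# Generate myid.properties for one dble node.
# NODE_ID must be unique per node (1, 2, 3); ZK_ADDR points at the ZK ensemble.
NODE_ID=2
ZK_ADDR=zk_server_ip:2181
CONF_DIR=/tmp/dble/conf        # placeholder; normally dble/conf
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/myid.properties" <<EOF
cluster=zk
#client info
ipAddress=$ZK_ADDR
#cluster namespace, please use the same one in one cluster
clusterId=cluster-1
#it must be different for every node in cluster
myid=$NODE_ID
EOF
grep '^myid=' "$CONF_DIR/myid.properties"
```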

  2. Apply the same configuration on the other two dble nodes; note in particular that the myid value must be different for each dble node
  3. Start all dble nodes
cd dble/bin && ./dble start
  4. Verify through the ZK client that the dble cluster was deployed successfully: log in to the ZK client on any machine with ZK installed and check that all dble nodes are online. The dble cluster deployment is successful!
cd zookeeper-3.4.12/bin && ./zkCli.sh

[zk: localhost:2181(CONNECTED) 0] ls /dble/cluster-1/online
[001, 3, server_2]

Simple tests

Scenario 1: the reload @@config_all command

Objective: verify that the reload @@config_all command synchronizes the configuration to all nodes in the cluster

  • Step 1: modify dble-a's schema.xml and add a global table, as follows:
<table name="test_global" dataNode="dn1,dn2" type="global"/>
  • Step 2: connect to the dble-a management port and execute the management command: reload @@config_all;
  • Step 3: check schema.xml on dble-b and dble-c; observe that test_global has been synchronized to the other nodes in the cluster

Conclusion: configuration updated on one node in the dble cluster is pushed down to the other nodes by reload @@config_all.

Scenario 2: View synchronization

Objective: verify that a view created through dble is saved only in ZK and synchronized across all nodes of the cluster, and does not exist in MySQL

  • Step 1: connect to the dble-a service port and create a view, as follows:
mysql> create view view_test as select * from test_global;
Query OK, 0 rows affected (0.10 sec)
mysql> show tables;
+-------------------+
| Tables_in_schema1 |
+-------------------+
| test_global       |
| view_test         |
+-------------------+
2 rows in set (0.00 sec)
  • Step 2: log in to the ZK client; observe that the created view already exists in ZK's view directory
[zk: localhost:2181(CONNECTED) 5] ls /dble/cluster-1/view
[schema1:view_test]
  • Step 3: connect to the dble-b service port and observe that the view exists on this node
mysql> show tables;
+-------------------+
| Tables_in_schema1 |
+-------------------+
| test_global       |
| view_test         |
+-------------------+
2 rows in set (0.00 sec)
  • Step 4: connect to MySQL directly and observe that the created view does not exist in MySQL
mysql> show tables;
+-------------------+
| Tables_in_schema1 |
+-------------------+
| test_global       |
+-------------------+
1 row in set (0.00 sec)

Conclusion: views created on a dble node are stored in ZK and synchronized to all nodes in the cluster; they do not exist in MySQL.

Scenario 3: a node in the cluster goes offline

Objective: verify that when a node in the cluster fails, it does not affect the service provided by the cluster

  • Step 1: stop dble-a manually
  • Step 2: connect to the dble-c service port and modify the test_global table structure
mysql> alter table test_global add column name varchar(20);
Query OK, 0 rows affected (0.15 sec)
  • Step 3: connect to the dble-b service port and view the test_global table structure
mysql> desc test_global;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id    | int(11)     | YES  |     | NULL    |       |
| name  | varchar(20) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.01 sec)

Conclusion: in a dble cluster, when a node goes offline or fails, the other nodes can continue to provide services.

Other
To learn more about the operations and details of cluster status synchronization, refer to the open source documentation: https://actiontech.github.io/…
