Using Ansible to install and deploy TiDB

Time: 2019-12-02

Background knowledge

As a distributed database, TiDB requires configuring and installing services on multiple nodes, which is cumbersome to do by hand. To simplify the operation and ease management, it is a good choice to use an automation tool for batch deployment.

Ansible is an automated operations tool developed in Python. It combines the strengths of many established operations tools and provides batch operating-system configuration, batch program deployment, batch command execution, and so on. It is also simple to use: only the management workstation needs the Ansible program and the IP information of the controlled hosts; the controlled hosts need no agent. For these reasons, we choose Ansible to install and deploy TiDB in batches.
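To illustrate the "batch command execution" capability of this agentless model, here is a minimal sketch: only the control machine has Ansible installed, and a single command runs on every host listed in a small inventory. The file name hosts.ini, the group name cluster, and the tidb user are assumptions made for this example only.

#Hypothetical inventory file hosts.ini on the control machine
[cluster]
192.168.1.101
192.168.1.102

#Run one command on every host in the group over SSH; no agent is needed on the hosts
ansible -i hosts.ini cluster -m shell -a "uptime" -u tidb -k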

Below we show how to use Ansible to deploy TiDB.

The TiDB installation environment is configured as follows.

The operating system uses CentOS 7.2 or later, and the file system uses ext4.

Note: older operating system versions (such as CentOS 6.6) and the XFS file system have kernel bugs that affect performance, so we do not recommend them.

IP              Services
192.168.1.101   PD, Prometheus, Grafana, Pushgateway, Node_exporter
192.168.1.102   PD, TiDB, Node_exporter
192.168.1.103   PD, TiDB, Node_exporter
192.168.1.104   TiKV, Node_exporter
192.168.1.105   TiKV, Node_exporter
192.168.1.106   TiKV, Node_exporter

We choose to use 3 PD nodes, 2 TiDB nodes, and 3 TiKV nodes. Let us briefly explain why we deploy this way.

  • For PD: PD itself is a distributed system composed of multiple nodes, of which only one primary node provides external services. The primary node is determined by an election among the nodes, and the election requires an odd number of nodes (2n + 1). Running a single node carries a relatively high risk, so we choose to use three nodes.

  • For TiKV: the underlying storage of TiDB is distributed, and we recommend an odd number (2n + 1) of replicas, so that the data remains available after n replicas fail. With only 1 or 2 replicas, some data becomes unavailable as soon as one node fails, so we choose to use 3 nodes and set 3 replicas (the default).

  • For TiDB: TiDB is stateless. If the TiDB services of an existing cluster are under heavy pressure, you can simply add TiDB services on other nodes without extra configuration. We choose to use two TiDB nodes for high availability and load balancing.

  • Of course, if you are only testing the cluster, you can use one PD, one TiDB and three TiKV (if you use fewer than three TiKV, you need to modify the number of replicas, as sketched after this list).
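For reference, the replica count is the PD setting max-replicas (default 3). The following is a minimal sketch of lowering it for a single-TiKV test cluster using pd-ctl, run after PD is up; the PD address comes from the example topology above, and the exact pd-ctl flags may differ slightly between versions.

#Sketch: reduce the replica count to 1 for a single-TiKV test cluster
pd-ctl -u http://192.168.1.101:2379 -d config set max-replicas 1

#Check the current replication settings
pd-ctl -u http://192.168.1.101:2379 -d config show replication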

Download and unzip the TiDB installation package

#Create directory to store ansible installation package
mkdir /root/workspace                 

#Switch directories
cd /root/workspace                    

#Download the installation package
wget https://github.com/pingcap/tidb-ansible/archive/master.zip     

#Extract the compressed package to the current directory
unzip master.zip                      

#Check the structure of the installation package. The main contents are as follows
cd tidb-ansible-master && ls

Meaning of the main contents

ansible.cfg: Ansible configuration file
inventory.ini: configuration of groups and hosts
conf: TiDB-related configuration templates
group_vars: configuration of related variables
scripts: Grafana monitoring JSON templates
local_prepare.yml: used to download the related installation packages
bootstrap.yml: initialize each node of the cluster
deploy.yml: install the corresponding TiDB services on each node
roles: collection of Ansible tasks
start.yml: start all services
stop.yml: stop all services
unsafe_cleanup_data.yml: clear data
unsafe_cleanup.yml: destroy the cluster
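Before running any of these playbooks against real machines, you can preview them with standard ansible-playbook options. A small sketch (these are generic Ansible options, not something specific to tidb-ansible):

#Check a playbook for syntax errors without running it
ansible-playbook -i inventory.ini deploy.yml --syntax-check

#List the tasks a playbook would execute
ansible-playbook -i inventory.ini deploy.yml --list-tasks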

Modify the configuration file

The inventory.ini file mainly configures the distribution of the cluster nodes and the installation path.

The TiDB service will be installed on the machines in the tidb_servers group (and similarly for the other groups). By default, all services are installed under the path given by the deploy_dir variable.

#The nodes where the TiDB service will be installed
[tidb_servers]
192.168.1.102
192.168.1.103

#The nodes where the TiKV service will be installed
[tikv_servers]
192.168.1.104
192.168.1.105
192.168.1.106

#The nodes where the PD service will be installed
[pd_servers]
192.168.1.101
192.168.1.102
192.168.1.103

#The node where the Prometheus service will be installed
# Monitoring Part
[monitoring_servers]
192.168.1.101

#The node where the Grafana service will be installed
[grafana_servers]
192.168.1.101

#The nodes where the node_exporter service will be installed
[monitored_servers:children]
tidb_servers
tikv_servers
pd_servers

[all:vars]
#The service installation path is the same for each node. It is configured according to the actual situation
deploy_dir = /home/tidb/deploy

## Connection
#Mode 1: install using root
# ssh via root:
# ansible_user = root
# ansible_become = true
# ansible_become_user = tidb

#Mode 2: install using ordinary users (sudo permission is required)
# ssh via normal user
ansible_user = tidb

#The name of the cluster, which can be customized
cluster_name = test-cluster

# misc
enable_elk = False
enable_firewalld = False
enable_ntpd = False

# binlog trigger
#Whether to enable Pump, which collects the binlog of TiDB
#If you need to synchronize data out of this TiDB cluster, set it to True
enable_binlog = False
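After editing inventory.ini, it is worth confirming that the control machine can actually reach every configured node before initializing anything. A hedged sketch using Ansible's built-in ping module against the monitored_servers group defined above (the ansible_user from the inventory is used; drop -k once SSH mutual trust is in place):

#Verify SSH connectivity to every node in the inventory
ansible -i inventory.ini monitored_servers -m ping -k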

The installation can be performed either as the root user or as an ordinary user. It is best to have the root user: modifying system parameters, creating directories and so on will not run into insufficient permissions, and the installation can proceed directly.
However, some environments do not hand out root access directly. In that scenario an ordinary user has to perform the installation. To keep the configuration simple, we suggest that all nodes use the same ordinary user; to meet the permission requirements, this user also needs sudo permission (see the sketch after this paragraph).
The following describes the detailed process of the two installation methods. After the installation, you need to start the services manually.
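A common way to satisfy the sudo requirement for the ordinary-user path is a passwordless sudoers entry for that user on every node. A minimal sketch, assuming the user is named tidb; NOPASSWD is optional, and if sudo still asks for a password you simply keep the -K parameter described in the appendix.

#On every node, as root: grant the tidb user passwordless sudo
echo 'tidb ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/tidb
chmod 440 /etc/sudoers.d/tidb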

1. Install as the root user

  • Download the binary packages to the downloads directory, extract and copy them to resources/bin; the binaries under resources/bin are what will later be installed on the nodes

ansible-playbook -i inventory.ini local_prepare.yml
  • Initialize each node of the cluster. This checks the inventory.ini configuration file, Python version, network status, operating system version, etc., modifies some kernel parameters, and creates the corresponding directories.

    • Modify the configuration file as follows

    ## Connection
    # ssh via root:
    ansible_user = root
    # ansible_become = true
    ansible_become_user = tidb
    
    # ssh via normal user
    # ansible_user = tidb
    • Execute initialization command

    ansible-playbook -i inventory.ini bootstrap.yml -k

    The -k option is described in the appendix.
  • Install the services. This step installs the corresponding services on each server and automatically sets up the configuration files and required scripts.

    • Modify the configuration file as follows

    ## Connection
    # ssh via root:
      ansible_user = root
      ansible_become = true
      ansible_become_user = tidb
    
    # ssh via normal user
    # ansible_user = tidb
    • Execute installation command

    ansible-playbook -i inventory.ini deploy.yml -k

2. Install as an ordinary user

  • Download the binary packages to the control machine

ansible-playbook -i inventory.ini local_prepare.yml
  • Initialize each node of the cluster.

    • Modify the configuration file as follows

    ## Connection
    # ssh via root:
    # ansible_user = root
    # ansible_become = true
    # ansible_become_user = tidb
    
    # ssh via normal user
    ansible_user = tidb
    • Execute initialization command

    ansible-playbook -i inventory.ini bootstrap.yml -k -K
  • Install the services

ansible-playbook -i inventory.ini deploy.yml -k -K

Start and stop services

  • Start all services

ansible-playbook -i inventory.ini start.yml -k
  • Stop all services

ansible-playbook -i inventory.ini stop.yml
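After start.yml completes, a quick way to confirm that the cluster is serving requests is to connect with a MySQL client; by default TiDB listens on port 4000 and the root account has an empty password (defaults, adjust if you changed them). Grafana, deployed as above, is reachable on the monitoring node.

#Connect to one of the TiDB nodes with any MySQL client (default port 4000, empty root password)
mysql -u root -h 192.168.1.102 -P 4000

#Grafana dashboards (default port 3000) on the monitoring node
#http://192.168.1.101:3000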

Appendix

ansible-playbook -i inventory.ini xxx.yml -k -K

-k: after adding this option you will be prompted for the password of the SSH connection user. If mutual trust has been set up between the control machine and all nodes, this parameter is not needed (see the sketch below).

-K: after adding this option you will be prompted for the sudo password. If you install as root, or sudo does not require a password, this parameter is not needed.
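The mutual trust mentioned above can be set up once from the control machine with standard OpenSSH tools, after which -k can be dropped. A sketch, assuming the tidb user exists on every node:

#On the control machine: generate a key pair (if one does not exist yet) and copy it to each node
ssh-keygen -t rsa
ssh-copy-id tidb@192.168.1.101
ssh-copy-id tidb@192.168.1.102
#...repeat for the remaining nodes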
