Using Docker to build a MySQL monitoring system
[toc]
Brief introduction
Prometheus and Grafana are used to build a monitoring system that collects, stores, and displays metrics. MySQL runs directly on the host machine; Prometheus and Grafana run in Docker containers.
Service | Start mode | Private IP | Port | Remarks |
---|---|---|---|---|
mysql | VM (host) | 172.17.0.1 | 3306 | |
grafana | docker | 172.17.0.3 | 3000:3000 | |
prometheus | docker | 172.17.0.2 | 9090:9090 | |
mysqld-exporter | docker | 172.17.0.1 | 9104 | host networking |
node-exporter | docker | 172.17.0.1 | 9100 | host networking |
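The private IPs in the table can be checked once the containers are running; a quick sketch, assuming the default bridge network (replace `<container>` with the actual container name or ID):

```bash
# The host's address on the Docker bridge (the 172.17.0.1 used throughout)
ip addr show docker0

# Private IP of a running container, e.g. the Prometheus or Grafana one
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>
```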
Prometheus building
Prometheus is an open-source monitoring, alerting, and time-series database toolkit originally developed at SoundCloud. As it matured, more and more companies and organizations adopted it and the community became very active; it eventually became a standalone open-source project maintained independently of any single company. Google's SRE book also mentions that Prometheus is similar in design to their internal Borgmon monitoring system, and Kubernetes, now the most common container management system, is usually monitored with Prometheus.
Powerful multidimensional data model (a sample query is sketched after this list):
- Time series are identified by a metric name and a set of key/value label pairs.
- Every metric can carry arbitrary multidimensional labels.
- The data model is flexible: metric names do not have to be forced into dot-separated strings.
- The data can be aggregated, sliced, and diced along those label dimensions.
- Sample values are double-precision floats, and label values can be any Unicode string.
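To illustrate the label-based model, a time series can be selected by metric name plus label matchers through the HTTP API; a small sketch, assuming Prometheus is reachable on localhost:9090 and node-exporter (set up below) is already being scraped:

```bash
# Instant query: select the node_cpu_seconds_total metric, filtered by the mode="idle" label
curl -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=node_cpu_seconds_total{mode="idle"}'
```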
Using Docker to start Prometheus
```bash
# Map Prometheus's configuration directory to the host so the configuration can be edited later
docker run -d -p 9090:9090 -v ~/docker/prometheus/:/etc/prometheus/ prom/prometheus
```
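The mounted host directory should already contain the prometheus.yml shown below when the container starts; since the bind mount hides the image's built-in configuration, the container will otherwise most likely exit immediately. A minimal preparation sketch, using the same path as above:

```bash
# Create the host-side configuration directory and put prometheus.yml into it
mkdir -p ~/docker/prometheus
vi ~/docker/prometheus/prometheus.yml   # paste the configuration from the next section
```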
The configuration file is ~/docker/prometheus/prometheus.yml. 172.17.0.1 is the Docker bridge IP of the host, so the exporter jobs below scrape the host through that address.
```yaml
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'server'
    static_configs:
      - targets: ['172.17.0.1:9100']

  - job_name: 'mysql'
    static_configs:
      - targets: ['172.17.0.1:9104']
```
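After changing the configuration, the running container has to pick it up; a sketch, assuming the Prometheus container from the command above (replace `<container>` with its name or ID):

```bash
# Restart Prometheus so it reloads the mounted configuration
docker restart <container>

# All three scrape targets should appear and report "health": "up"
curl http://localhost:9090/api/v1/targets
```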
Building the Prometheus exporters
Exporters report the status of the machine itself: disk, CPU, network traffic, and so on. The Prometheus ecosystem already provides many ready-made exporters, so we use node-exporter directly; in Prometheus terminology a machine is a node, and node-exporter reports the status of the current node. mysqld-exporter is dedicated to monitoring MySQL and exposing the collected data to Prometheus.
Both exporters are started with host networking, so they share the host's IP address, which makes it easy to reach the local MySQL instance.
```bash
# Exporter that collects host (node) information
docker run -d \
  --net="host" \
  --pid="host" \
  -v "/:/host:ro,rslave" \
  quay.io/prometheus/node-exporter \
  --path.rootfs=/host
```
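A quick check that node-exporter is serving metrics on its default port (a sketch, assuming the container started successfully):

```bash
# node-exporter listens on 9100; node_* metrics should be listed
curl -s http://localhost:9100/metrics | grep '^node_cpu' | head
```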
```bash
# Exporter that collects MySQL (database) information
# DATA_SOURCE_NAME="<user>:<password>@(<mysql_ip>:<port>)/" -- replace the placeholders with real credentials
docker run -d \
  -p 9104:9104 \
  --net="host" \
  --pid="host" \
  -e DATA_SOURCE_NAME="root:<password>@(172.17.0.1:3306)/" \
  prom/mysqld-exporter
```
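The MySQL exporter can be checked on port 9104 in the same way; mysql_up should be 1 when the credentials in DATA_SOURCE_NAME are valid (a sketch):

```bash
# mysqld-exporter listens on 9104; mysql_up = 1 means it can reach MySQL
curl -s http://localhost:9104/metrics | grep '^mysql_up'
```

For anything beyond a test setup, a dedicated MySQL user with only the PROCESS, REPLICATION CLIENT, and SELECT privileges is preferable to root for the exporter.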
Grafana building
Starting Grafana
```bash
docker run -d -p 3000:3000 grafana/grafana
```
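To confirm Grafana is up before configuring it (a sketch; the default login is admin/admin, and a named volume can be added so dashboards survive container restarts):

```bash
# Grafana answers on port 3000; the health endpoint returns a small JSON document
curl http://localhost:3000/api/health

# Optional: persist dashboards and settings in a named volume
# docker run -d -p 3000:3000 -v grafana-storage:/var/lib/grafana grafana/grafana
```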
Grafana configuration
1. Configure the Prometheus data source. The URL is the private IP of the Prometheus container (http://172.17.0.2:9090), and the Server access mode is selected: since both Grafana and Prometheus run in Docker, Grafana fetches the data itself over the private bridge network. (The data source can also be created through the HTTP API, as sketched after this list.)
2. Import the MySQL monitoring dashboard: the template ID is 7362. Enter the ID in Grafana's dashboard import dialog and select the Prometheus data source.
3. Import the host monitoring dashboard in the same way; its template ID is 8919.
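Instead of the UI, the data source can be created through Grafana's HTTP API; a sketch, assuming the default admin/admin credentials and the Prometheus container IP from the table above:

```bash
# Create a Prometheus data source in "proxy" (Server) access mode
curl -X POST http://admin:admin@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://172.17.0.2:9090","access":"proxy"}'
```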