Great! Here are 6 essential MySQL tips for you!

Time: 2020-10-22

Author: Lao Zhang
Original text: http://blog.51cto.com/sumongo…

Today I’d like to share an article about essential tools for MySQL DBAs. They help us manage our databases conveniently and make our work more efficient.

The toolkit comes from Percona, an important branch of MySQL. It is called Percona Toolkit and consists of a set of commands. Today I’d like to introduce the ones most commonly used in production environments.

Download the toolkit from: https://www.percona.com/downl…

The installation process is very simple. First unzip:
tar -zxvf percona-toolkit-3.0.3_x86_64.tar.gz
Since it is a binary package, the tools can be used directly from the percona-toolkit-3.0.3/bin directory after decompression.
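
For convenience, you can add the bin directory to your PATH and check that the tools run; a minimal sketch (the unpack location is an assumption, adjust it to yours):

export PATH=$PATH:/root/percona-toolkit-3.0.3/bin   # assumed unpack location
pt-online-schema-change --version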

Tips 1:

pt-online-schema-change

Function: alter table structures online, reclaim fragmented space, and add columns and indexes to large tables, avoiding the read/write blocking caused by table locks. On MySQL 5.7, many DDL operations can be done online natively without this command.
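
For reference, on MySQL 5.7 the same kind of change can usually be done with native online DDL; a minimal sketch using the demo table below (the ALGORITHM and LOCK clauses simply make MySQL fail fast if the operation cannot be performed online):

mysql -uroot -p -e "ALTER TABLE test.su ADD COLUMN city_id INT, ALGORITHM=INPLACE, LOCK=NONE;"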

The demonstration is as follows.
Because this is a test environment, we do not create a table with a large amount of data; the point is just to understand the process.
Here are the row count and table structure of the table:

mysql> select count(*) from su;  
+----------+  
| count(*) |  
+----------+  
|   100000 |  
+----------+  
1 row in set (0.03 sec)  
mysql> desc su;  
+-------+------------------+------+-----+-------------------+-----------------------------+  
| Field | Type             | Null | Key | Default           | Extra                       |  
+-------+------------------+------+-----+-------------------+-----------------------------+  
| id    | int(10) unsigned | NO   | PRI | NULL              | auto_increment              |  
| c1    | int(11)          | NO   |     | 0                 |                             |  
| c2    | int(11)          | NO   |     | 0                 |                             |  
| c3    | int(11)          | NO   |     | 0                 |                             |  
| c4    | int(11)          | NO   |     | 0                 |                             |  
| c5    | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |  
| c6    | varchar(200)     | NO   |     |                   |                             |  
+-------+------------------+------+-----+-------------------+-----------------------------+  

The process of adding a column online:

[root@node3 bin]# ./pt-online-schema-change --user=root --password=root123 \
--host=localhost --alter="ADD COLUMN city_id INT" D=test,t=su --execute
No slaves found.  See --recursion-method if host node3 has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:  
  analyze_table, 10, 1  
  copy_rows, 10, 0.25  
  create_triggers, 10, 1  
  drop_triggers, 10, 1  
  swap_tables, 10, 1  
  update_foreign_keys, 10, 1
  
Altering `test`.`su`...
Creating new table...
Created new table test._su_new OK.
Altering new table...
Altered `test`.`_su_new` OK.
2017-08-10T14:53:59 Creating triggers...
2017-08-10T14:53:59 Created triggers OK.
2017-08-10T14:53:59 Copying approximately 100163 rows...
2017-08-10T14:54:00 Copied rows OK.
2017-08-10T14:54:00 Analyzing new table...
2017-08-10T14:54:00 Swapping tables...
2017-08-10T14:54:00 Swapped original and new tables OK.
2017-08-10T14:54:00 Dropping old table...
2017-08-10T14:54:00 Dropped old table `test`.`_su_old` OK.
2017-08-10T14:54:00 Dropping triggers...
2017-08-10T14:54:00 Dropped triggers OK.
Successfully altered `test`.`su`.

Check the result: a new city_id column has been added:

mysql> desc su;
+---------+------------------+------+-----+-------------------+-----------------------------+
| Field   | Type             | Null | Key | Default           | Extra                       |
+---------+------------------+------+-----+-------------------+-----------------------------+
| id      | int(10) unsigned | NO   | PRI | NULL              | auto_increment              | 
| c1      | int(11)          | NO   |     | 0                 |                             | 
| c2      | int(11)          | NO   |     | 0                 |                             | 
| c3      | int(11)          | NO   |     | 0                 |                             | 
| c4      | int(11)          | NO   |     | 0                 |                             | 
| c5      | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP | 
| c6      | varchar(200)     | NO   |     |                   |                             | 
| city_id | int(11)          | YES  |     | NULL              |                             | 
+---------+------------------+------+-----+-------------------+-----------------------------+
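
In production, it is a good habit to run the same command with --dry-run first, which makes pt-online-schema-change create and alter the new table without copying rows or swapping tables, and only then switch to --execute. A sketch based on the command above:

./pt-online-schema-change --user=root --password=root123 --host=localhost --alter="ADD COLUMN city_id INT" D=test,t=su --dry-run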

Tips 2:

pt-query-digest

Function: capture the top 10 slow SQL statements online.
We all know that most database performance problems are caused by SQL statements, so we should hunt them down like criminals and optimize them promptly.

The demonstration is as follows.
You can sample slow SQL statements over a time window; the --since value is adjustable:

[root@node3 bin]# ./pt-query-digest --since=24h /data/mysql/slow.log > 1.log

Check the SQL report to summarize the slow statements and see how much time they consume.

The following is only part of the report:

cat 1.log
# Profile
# Rank Query ID           Response time Calls R/Call  V/M   Item
# ==== ================== ============= ===== ======= ===== ==============
#    1 0x040ADBE3A1EED0A2 16.8901 87.2%     1 16.8901  0.00 CALL insert_su
#    2 0x8E44F4ED46297D4C  1.3013  6.7%     3  0.4338  0.18 INSERT SELECT test._su_new test.su
#    3 0x12E7CAFEA3145EEF  0.7431  3.8%     1  0.7431  0.00 DELETE su
# MISC 0xMISC              0.4434  2.3%     3  0.1478   0.0  <3ITEMS>
# Query 1: 0 QPS, 0x concurrency, ID 0x040ADBE3A1EED0A2 at byte 19060 ____
# Scores: V/M = 0.00
# Time range: all events occurred at 2017-08-02 12:12:07
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count          2       1
# Exec time     47     18s     18s     18s     18s     18s       0     18s
# Lock time      0   103us   103us   103us   103us   103us       0   103us
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine   0       0       0       0       0       0       0       0
# Query size     0      21      21      21      21      21       0      21
# String:
# Databases    test
# Hosts        localhost
# Users        root
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#    1s
#  10s+ 
################################################################
call insert_su(50000)\G

As you can see, the report lists each SQL statement’s share of total response time and its execution time, making it easy to see at a glance which statements are problematic. (Only one SQL statement is shown here.)
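
Besides --since, pt-query-digest also accepts --until to close the sampling window and --limit to control how many queries appear in the profile; a minimal sketch (the timestamps and the 2.log output file are made up for illustration):

./pt-query-digest --since='2017-08-10 00:00:00' --until='2017-08-10 12:00:00' --limit=10 /data/mysql/slow.log > 2.log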

Tips 3:

pt-heartbeat
Function: monitor master-slave delay, i.e. how long the slave database lags behind the master database.
Environment: 192.168.56.132 is the master, 192.168.56.133 is the slave.

The operation is as follows:
Execute on the master database:

[root@node3 bin]# ./pt-heartbeat --database test --update \
--create-table --daemonize -uroot -proot123

Here, test is the database being monitored for replication; a monitoring table named heartbeat is created in it, and the background process updates this table periodically.
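
You can also query the monitoring table directly to confirm that the background process is writing to it; a simple check using the same credentials as above:

mysql -uroot -proot123 -e "SELECT * FROM test.heartbeat\G"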

Execute the following on the slave to monitor the master-slave replication delay.
--master-server-id is the server ID of the master database, and -h is the master's IP:

[root@node4 bin]# ./pt-heartbeat --master-server-id=1323306 \
--monitor --database test -uzs -p123456 -h 192.168.56.132
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]

The time is 0s. There is no delay at present.
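
If you only want a one-off reading instead of a continuously refreshing display, pt-heartbeat also provides a --check option that prints the current lag once and exits; a sketch with the same connection parameters:

./pt-heartbeat --master-server-id=1323306 --check --database test -uzs -p123456 -h 192.168.56.132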

Tips 4:

pt-table-checksum
Function: check master-slave replication consistency.

Principle: on the master it executes checksum queries that generate REPLACE statements into a checksums table; these statements reach the slave through replication, and the master's checksum values are then filled in via an UPDATE. Finally, by comparing the this_crc and master_crc values on the slave, it determines whether replication is consistent.

Compare the test database between the master and the slave:

[root@node3 bin]# ./pt-table-checksum --no-check-binlog-format --nocheck-replication-filters \
--databases=test --replicate=test.checksums --host=192.168.56.132 -uzs -p123456
                    TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
08-10T16:01:02      0      0        1       1       0   0.013 test.heartbeat
08-10T16:01:02      0      0        0       1       0   0.015 test.su
08-10T16:01:02      0      0        0       1       0   0.011 test.t

As you can see, DIFFS is 0 for every table, which shows that the test database has no differences between master and slave.
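
The result can also be verified by hand on the slave by querying the checksums table directly; the query below is adapted from the Percona documentation (the root credentials on the slave are an assumption):

mysql -uroot -proot123 -e "
  SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
  FROM test.checksums
  WHERE master_cnt <> this_cnt
     OR master_crc <> this_crc
     OR ISNULL(master_crc) <> ISNULL(this_crc)
  GROUP BY db, tbl;"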

To see which tables in the test database differ (add --replicate-check-only), execute the following on the master:

[root@node3 bin]# ./pt-table-checksum --no-check-binlog-format \
--nocheck-replication-filters --databases=test --replicate=test.checksums \
--replicate-check-only --host=192.168.56.132 -uzs -p123456
Differences on node4
TABLE CHUNK CNT_DIFF CRC_DIFF CHUNK_INDEX LOWER_BOUNDARY UPPER_BOUNDARY
test.t 1 1 1

As you can see, the data in table test.t is inconsistent between the master and the slave.
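
Once such a difference is found, the companion tool pt-table-sync can generate the statements needed to bring the slave back in line (and apply them with --execute). A cautious sketch that only prints the SQL, assuming the slave connection details from the environment above:

./pt-table-sync --replicate=test.checksums --sync-to-master h=192.168.56.133,u=zs,p=123456 --print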

Tips 5:

pt-slave-restart
Function: monitor replication errors on the slave and try to restart MySQL replication automatically.
Note: skipping errors resolves the case where the slave has extra rows (error code 1062). If the slave is missing rows (error code 1032), merely skipping the error does not solve the synchronization problem at its root; you first need to find out what data is missing, and if a large amount of data is missing, rebuilding the replication environment is recommended.

A 1062 error on the slave:

Slave_IO_Running: Yes
Slave_SQL_Running: No
Last_Errno: 1062
Last_Error: Could not execute Write_rows event on table test.t;
Duplicate entry '1' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; 
the event's master log mysql-bin.000006, end_log_pos 757482

Execute the following on the slave:

[root@node4 bin]# ./pt-slave-restart -uroot -proot123 --error-numbers=1062
2017-08-10T16:28:12 p=...,u=root node4-relay-bin.000002      751437 1062

After skipping the error, check the replication status:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

The synchronization state is consistent again.
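
pt-slave-restart can also keep running in the background and watch for this class of error; a sketch using its --daemonize and --log options (the log path is an assumption):

./pt-slave-restart -uroot -proot123 --error-numbers=1062 --daemonize --log=/tmp/pt-slave-restart.log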

Tips 6:

pt-ioprofile
Function: conveniently locate IO problems by showing the IO throughput of each file.

[root@node3 bin]# ./pt-ioprofile
Thu Aug 10 16:33:47 CST 2017
Tracing process ID 3907     
    total       read     pwrite      write      fsync filename
13.949355   0.839006   0.000000   0.286556  12.823793 /data/mysql/mysql-bin.000006
 7.454844   0.000000   2.913702   0.000000   4.541142 /data/mysql/ib_logfile0
 0.000193   0.000000   0.000000   0.000193   0.000000 /data/mysql/slow.log

read: reads data from a file, identified by a file descriptor, into a predefined buffer.
write: writes the buffer data to a file.
pread: because the kernel may suspend the process between separate lseek and read calls, a synchronization problem can arise; calling pread is equivalent to calling lseek and read in sequence as one atomic operation.
pwrite: likewise, calling pwrite is equivalent to calling lseek and write in sequence as one atomic operation.
fsync: ensures that all modified contents of the file have been correctly synchronized to disk; the call blocks until the device reports that the IO has completed.
filename: the name of the file that interacts with the disk.

From this report, we can see which file consumes the most IO time and interacts with the disk most heavily, which makes it easy to pinpoint IO problems.
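
pt-ioprofile samples the mysqld process for a short period by default; you can point it at a specific PID, set the sampling time, and report sizes instead of times with its --profile-pid, --run-time and --cell options. A sketch using the PID from the output above:

./pt-ioprofile --profile-pid=3907 --run-time=30 --cell=sizes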

Because this toolkit contains a lot of commands, I have only introduced some of the more commonly used ones here; you can study the rest on your own.
Official address: https://www.percona.com/doc/p…

Welcome to leave comments. If this article has been helpful, please like and share it.

Welcome to follow the official account of the migrant worker brother: Migrant Workers’ Technology Road
