How to prevent a slow query from blowing up the server

Time: 2022-11-25

"Hey, xxx, what's wrong with your service? The machines are down again!"
"Huh? How many are down?"
"Two of the 40 you borrowed are down!"
"Hang on, let me take a look."

The server was smoking again~~~ Here's what happened:

Some time ago, the project was heading into its Qixi Festival (Chinese Valentine's Day) traffic peak, and the SQL behind one interface originally looked like this:

mysql> explain SELECT *,sum(num) AS sum FROM search WHERE search_time >= '2016-08-30' AND type = 0 AND state = 1 GROUP BY keyword ORDER BY sum DESC LIMIT 50;
+----+-------------+-----------+------+--------------------------+------+---------+-------+--------+----------------------------------------------+
| id | select_type | table     | type | possible_keys            | key  | key_len | ref   | rows   | Extra                                        |
+----+-------------+-----------+------+--------------------------+------+---------+-------+--------+----------------------------------------------+
|  1 | SIMPLE      | search | ref  | type,search_time,keyword | type | 2       | const | 651114 | Using where; Using temporary; Using filesort |
+----+-------------+-----------+------+--------------------------+------+---------+-------+--------+----------------------------------------------+

search_time, type, and state are all indexed. type and state each have a very limited range of values, so their indexes are of little use; the query mainly relies on search_time. But the EXPLAIN output shows that no effective index was chosen. With 1.3 million+ rows of real data, the statement took about 5 seconds on average, which was clearly intolerable.

What about forcing the index? Let's try it:

mysql> explain SELECT *,sum(num) AS sum FROM search FORCE INDEX (search_time) WHERE search_time >= '2016-08-30' AND type = 0 AND state = 1 GROUP BY keyword ORDER BY sum DESC LIMIT 50;
+----+-------------+-----------+-------+---------------------+-------------+---------+------+--------+---------------------------------------------------------------------+
| id | select_type | table     | type  | possible_keys       | key         | key_len | ref  | rows   | Extra                                                               |
+----+-------------+-----------+-------+---------------------+-------------+---------+------+--------+---------------------------------------------------------------------+
|  1 | SIMPLE      | search | range | search_time,keyword | search_time | 4       | NULL | 290616 | Using index condition; Using where; Using temporary; Using filesort |
+----+-------------+-----------+-------+---------------------+-------------+---------+------+--------+---------------------------------------------------------------------+

It worked: rows dropped to about 290k. In theory, scanning 290k rows shouldn't be all that slow, but as you all know, the rows value in EXPLAIN is only an estimate. In practice the query still took about 3 seconds, which was still unbearable.

This database machine also serves other businesses, all of them fairly large in volume, so the server load was already high. Qixi hadn't even arrived yet, but this SQL was already making the server smoke, and its slow queries delayed the other businesses too, causing a chain reaction of slow executions.

The read path was already cached, but the logs still showed bursts of slow-query requests within certain time windows. At first we suspected the cache had been invalidated, but it turned out to be high concurrency: while one request was spending ~5 seconds running the SQL to rebuild the cache, a flood of concurrent requests all missed the cache and hit the database directly.
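This pattern is often called a cache stampede. Here is a minimal Python sketch (key names, timings, and thread count are hypothetical, not from the original service) showing why naive cache-aside lets every concurrent miss reach the database while the slow query is still running:

```python
import threading
import time

cache = {}           # stand-in for memcache
db_calls = 0         # how many requests actually reached the database
counter_lock = threading.Lock()

def slow_query():
    """Stand-in for the ~5 s SQL statement."""
    global db_calls
    with counter_lock:
        db_calls += 1
    time.sleep(0.5)  # much shorter than 5 s, but long enough to show the race
    return "top-50 keywords"

def handle_request(barrier):
    barrier.wait()                      # make all requests arrive together
    if "search:top50" not in cache:     # naive cache-aside: check, then fill
        cache["search:top50"] = slow_query()
    return cache["search:top50"]

n = 20
barrier = threading.Barrier(n)
threads = [threading.Thread(target=handle_request, args=(barrier,))
           for _ in range(n)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db_calls)  # far more than 1: every concurrent miss ran the slow query
```

Because all 20 requests arrive while the first query is still sleeping, nearly every one of them sees an empty cache and runs the query itself — exactly the burst of slow queries seen in the logs.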

Let's get straight to the solutions:

  1. Narrow the query window: query 1 day of data instead of the previous 3 days, bringing the data set down to around 1.3 million+ rows.

  2. Force the index, which shortens the query time to some extent.

  3. Write a script that saves the query results to memcache. This is mainly to prevent a flood of database hits in the short window, under high concurrency, while the value is waiting to be written to mc.

  4. Cache the results of database reads.

  5. Cache the interface result.
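Steps 3–5 can be sketched together. The sketch below is an illustration under stated assumptions, not the original script: it uses a minimal in-memory stand-in for a memcached client (a real client such as pymemcache exposes the same get/set/add/delete calls), and it uses add — which only succeeds if the key is absent — as a lock, so that only one caller rebuilds the cache while the rest wait briefly and re-read it:

```python
import threading
import time

class FakeMemcache:
    """Minimal in-memory stand-in for a memcached client.
    expire is accepted but ignored in this stand-in."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return self._data.get(key)

    def set(self, key, value, expire=0):
        with self._lock:
            self._data[key] = value

    def add(self, key, value, expire=0):
        # Atomic "set only if absent": the property that makes add usable as a lock.
        with self._lock:
            if key in self._data:
                return False
            self._data[key] = value
            return True

    def delete(self, key):
        with self._lock:
            self._data.pop(key, None)

mc = FakeMemcache()
db_calls = 0
counter_lock = threading.Lock()

# The narrowed, index-forced query from steps 1-2 (date passed as a parameter).
SQL = ("SELECT *, SUM(num) AS sum FROM search FORCE INDEX (search_time) "
       "WHERE search_time >= %s AND type = 0 AND state = 1 "
       "GROUP BY keyword ORDER BY sum DESC LIMIT 50")

def run_query():
    """Stand-in for executing SQL against MySQL (~3 s in the article)."""
    global db_calls
    with counter_lock:
        db_calls += 1
    time.sleep(0.2)
    return ["keyword1", "keyword2"]

def get_results():
    cached = mc.get("search:top50")
    if cached is not None:
        return cached                        # steps 4-5: serve from cache
    if mc.add("search:top50:lock", 1, expire=10):
        try:                                 # this caller won the lock: rebuild
            result = run_query()
            mc.set("search:top50", result, expire=60)
            return result
        finally:
            mc.delete("search:top50:lock")
    while True:                              # everyone else waits and re-reads
        time.sleep(0.01)
        cached = mc.get("search:top50")
        if cached is not None:
            return cached

results = []
results_lock = threading.Lock()

def worker():
    r = get_results()
    with results_lock:
        results.append(r)

threads = [threading.Thread(target=worker) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db_calls)  # only one (or at most a few) of the 20 requests hit the database
```

The key design point is that add is atomic on the memcached server, so it doubles as a distributed lock across all web machines — no matter how many concurrent requests miss the cache, only the lock winner runs the slow SQL.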

After these 5 steps, my mom no longer has to worry about my server smoking~~

Note: I will gradually migrate my other blog posts here, and going forward this is where I will mainly write.