1. When querying, data types may be implicitly converted (for example, if a varchar value is not quoted, it may be converted to int and the index becomes unusable). How can implicit conversion be avoided?
Example explanation of implicit conversion of MySQL
Implicit conversion of MySQL
MySQL implicit conversion solution
MySQL implicit conversion problem (case 1)
On MySQL implicit type conversion
MySQL implicit conversion and collation
Take the following table as an example:

#Sname is of varchar type; query with a numeric value
explain select sno from s where sname=123;
#Sname is of varchar type; query with a string value
explain select sno from s where sname='123';
The first query above will not use the index; the second one will.
Because the types on the two sides of the equals sign are inconsistent, an implicit conversion occurs: MySQL casts the indexed column (sname) with CAST(... AS SIGNED) and then compares the result with 123. Since both '123' and '123abc' convert to 123, MySQL cannot use the index and can only scan the whole table, resulting in a slow query.
Solution: always use string literals (quote the value) when comparing against a varchar column.
Attached: EXPLAIN field details
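The conversion rule can be checked directly. A minimal sketch, reusing the s/sname example above:

```sql
-- MySQL keeps the leading numeric prefix and drops the rest, so both
-- of these return 123 (the second with a truncation warning):
SELECT CAST('123' AS SIGNED);
SELECT CAST('123abc' AS SIGNED);

-- Quoting the literal keeps both sides varchar, so no conversion
-- happens and the index on sname can be used:
SELECT sno FROM s WHERE sname = '123';
```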
2. Leftmost prefix matching principle
Detailed explanation of MySQL leftmost matching principle statement
The leftmost matching principle of MySQL explained in detail through examples (very clear)
Why use composite indexes
#Create a test_user table
CREATE TABLE `test_user` (
  `name` varchar(20) DEFAULT NULL,
  `province` int(11) DEFAULT NULL,
  `sex` varchar(20) DEFAULT NULL,
  `birthday` int(11) DEFAULT NULL,
  `phone` double DEFAULT NULL
) ENGINE = InnoDB DEFAULT CHARSET = utf8;
#Create a composite index on (name, phone, province)
CREATE INDEX test_user_name_phone_province_index ON test_user (name, phone, province);
The index usage of the following queries is shown in the figure below:

explain select name from test_user where name = 'Zhang San';
explain select name from test_user where name = 'Zhang San' and phone = 32432;
explain select name from test_user where name = 'Zhang San' and phone = 32432 and province = 2;
explain select name from test_user where phone = 32432;
explain select name from test_user where

The first three queries can use the composite index, because their conditions start from the leftmost column, name; the query on phone alone cannot.
3. Differences between B+ trees, B-trees, red-black trees and binary trees
Trees in data structures — binary tree / B-tree / B+ tree / red-black tree and related algorithms
On the principles and applications of AVL trees, red-black trees, B-trees and B+ trees
An entertaining guide to all kinds of trees (very detailed)
Introduction to the MySQL index structure and why it chooses a B+ tree instead of a B-tree
A deep look at why database indexes use B-trees and B+ trees
Summary of B-trees and B+ trees
An ordinary binary search tree can degenerate into a linked list in extreme cases; an AVL tree is a binary balanced tree with stricter balancing than a red-black tree, at the cost of more rotations on insertion and deletion.
A red-black tree has its own set of balancing rules; for example, the root node must be black.
A B-tree is a multi-way tree: the extra fanout reduces the tree's height, and the data being searched for is stored in the nodes themselves.
A B+ tree builds on the B-tree: non-leaf nodes store no real data, only keys (the index); only leaf nodes store real data, and the leaf nodes are linked together in a list.
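A rough back-of-the-envelope calculation shows why keeping data out of internal nodes matters. Assuming (these numbers are illustrative) a 16 KB InnoDB page, an 8-byte bigint key plus a 6-byte child pointer per entry, and rows of about 1 KB:

```latex
% fanout of a B+ tree internal node: entries per 16 KB page
f \approx \frac{16384}{8 + 6} \approx 1170
% rows reachable with 3 levels (each leaf page holds ~16 rows of 1 KB)
N \approx 1170 \times 1170 \times 16 \approx 2 \times 10^{7}
```

So a three-level B+ tree can index on the order of twenty million rows, while a B-tree whose internal nodes also carried the 1 KB rows would have a fanout of only about 16 per page and would be far deeper, costing far more disk I/O per lookup.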
4. Covering index
Covering index explanation (very detailed)
Covering index summary
First, you need to know what a clustered index and a secondary index are.
An index is an efficient way to find rows. When the desired data can be obtained by reading only the index, there is no need to read the rows in the data table. If an index contains (covers) all the fields required by the query's select list and conditions, it is called a covering index.
Clustered index (primary key index):
A clustered index builds a B+ tree on each table's primary key, and the complete row data of the table is stored in the leaf nodes.
The leaf nodes of a clustered index are called data pages. Secondary index (auxiliary index):
A non-primary-key index, whose leaf node = key value + bookmark. In the InnoDB storage engine, the bookmark is the primary key value of the corresponding row.
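A minimal sketch of a covering index in action (table and index names are illustrative). When every column the query needs is contained in a secondary index, EXPLAIN reports "Using index" in the Extra column and the clustered index is never touched:

```sql
CREATE TABLE user_order (
  id INT PRIMARY KEY,
  user_id INT,
  amount DECIMAL(10, 2),
  note VARCHAR(200),
  KEY idx_user_amount (user_id, amount)
) ENGINE = InnoDB;

-- Covered: user_id and amount are both in idx_user_amount,
-- so no bookmark lookup back to the clustered index is needed
EXPLAIN SELECT user_id, amount FROM user_order WHERE user_id = 42;

-- Not covered: note is absent from the index, so each matching row
-- requires a lookup via the primary key
EXPLAIN SELECT note FROM user_order WHERE user_id = 42;
```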
5. What is the difference between drop, truncate and delete? Which is the fastest and why
The difference between drop, delete and truncate (which is the fastest way to delete a table)
The difference between drop, truncate and delete
Drop deletes the table itself; truncate deletes the data in the table, and on the next insert the auto-increment ID starts again from 1; delete deletes data from the table and can take a where clause.
Replace delete with truncate
Truncate does not log individual row deletions, while delete writes a log record for every row, so truncate is faster than delete.
However, data removed with truncate cannot be recovered by rollback. Truncate removes all the data in the table; a where condition cannot be added.
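A side-by-side sketch of the three statements (the table name t is illustrative):

```sql
-- delete: DML, removes rows one by one, supports WHERE,
-- can be rolled back inside a transaction
DELETE FROM t WHERE created_at < '2020-01-01';

-- truncate: DDL, removes all rows at once, resets AUTO_INCREMENT,
-- takes no WHERE clause, cannot be rolled back
TRUNCATE TABLE t;

-- drop: removes the table definition together with its data
DROP TABLE t;
```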
6. MySQL index type
Detailed explanation of MySQL index types
MySQL index summary
MySQL index concise formula
MySQL index selection
From the perspective of data structure
1. B+ tree index (O(log n)): for the B+ tree index, refer to Data structure and algorithm principles behind MySQL indexes
2. Hash index:
a. It can only serve "=", "IN" and "<=>" queries; range queries cannot use it.
b. Its retrieval efficiency is very high: a hash index locates a record in a single step, unlike a B+ tree index, which needs several I/O accesses from the root node through branch nodes down to the leaf page, so for equality lookups the hash index is much faster than the B+ tree index.
c. Only the MEMORY storage engine supports hash indexes.
3. Fulltext index (now both MyISAM and InnoDB engines support it)
4. R-tree index (used to create spatial indexes on GIS data types)
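The hash-index limitations above can be seen with a MEMORY table (the schema is illustrative):

```sql
CREATE TABLE session_cache (
  session_id VARCHAR(64),
  user_id INT,
  INDEX USING HASH (session_id)
) ENGINE = MEMORY;

-- Can use the hash index: equality lookup
SELECT user_id FROM session_cache WHERE session_id = 'abc123';

-- Cannot use the hash index: range predicate
SELECT user_id FROM session_cache WHERE session_id > 'abc123';
```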
From a physical storage perspective
1. Clustered index
2. Non clustered index
From a logical point of view
1. Primary key index: a primary key index is a special unique index, and null values are not allowed
2. Ordinary index or single column index
3. Multi-column index (composite index): a composite index is an index created on multiple fields. The index can only be used if the first field of the index appears in the query conditions; composite indexes follow the leftmost prefix principle.
4. Unique or non unique index
5. Spatial index: a spatial index is an index built on fields of spatial data types. MySQL has four spatial data types: geometry, point, linestring and polygon.
MySQL extends the regular index-creation syntax with the spatial keyword. Columns in a spatial index must be declared not null. Before MySQL 5.7, spatial indexes could only be created on tables whose storage engine is MyISAM (InnoDB supports them from 5.7).
CREATE TABLE table_name (
  col_name data_type,
  [UNIQUE | FULLTEXT | SPATIAL] [INDEX | KEY] [index_name] (col_name [length]) [ASC | DESC]
);
1. unique | fulltext | spatial are optional parameters, representing a unique index, full-text index and spatial index respectively;
2. index and key are synonyms with the same function: they specify that an index is to be created;
3. col_name is the field to be indexed and must be chosen from the columns defined in the table;
4. index_name specifies the name of the index; it is optional, and if omitted MySQL uses col_name as the index name by default;
5. length is optional and gives the length of the index prefix; only string-typed fields may specify an index length;
6. asc or desc specifies that the index is stored in ascending or descending order.
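A concrete instance of the syntax above (table and index names are illustrative; MyISAM is chosen so all three index kinds are available even on older versions):

```sql
CREATE TABLE articles (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200),
  body TEXT,
  location POINT NOT NULL,             -- spatial columns must be NOT NULL
  UNIQUE INDEX uk_title (title),       -- unique index
  FULLTEXT INDEX ft_body (body),       -- full-text index
  SPATIAL INDEX sp_location (location) -- spatial index
) ENGINE = MyISAM;
```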
7. A MySQL operation is very slow. How to check the cause
Slow response troubleshooting example
Troubleshooting and solution of MySQL database slow
1. Use top to check CPU usage
2. Check the lock status
3. Use show processlist to view the thread states
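The checks above map onto a few standard MySQL statements:

```sql
-- What is every connection doing, and for how long?
SHOW FULL PROCESSLIST;

-- Lock waits, recent deadlocks and other InnoDB internals
SHOW ENGINE INNODB STATUS;

-- Is the slow query log enabled, and where is it written?
SHOW VARIABLES LIKE 'slow_query%';
```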
8. Pessimistic lock, optimistic lock
Brief explanation of pessimistic lock and optimistic lock
Pessimistic lock and optimistic lock in interesting solution database
Optimistic lock and pessimistic lock in database
Summary of pessimistic lock and optimistic lock in database
Always assume the worst case: every time you fetch the data, you assume someone else will modify it, so you lock it on every access. Anyone else who wants the data then blocks until the lock is acquired (a shared resource is used by only one thread at a time; other threads block, and the resource is handed to the next thread when the holder is done). Traditional relational databases use many such locking mechanisms, such as row locks, table locks, read locks and write locks, all acquired before the operation. In Java, exclusive locks such as ReentrantLock are implementations of pessimistic locking.
Always assume the best case: every time you fetch the data, you assume others will not modify it, so no lock is taken. When updating, however, you check whether anyone else updated the data in the meantime, typically with a version-number mechanism or the CAS algorithm. Optimistic locking suits read-heavy applications and can improve throughput; facilities like the write_condition mechanism provided by some databases are optimistic locks. In Java, the atomic variable classes under java.util.concurrent.atomic are an implementation of optimistic locking built on CAS.
Usage scenarios of the two kinds of locks: from the introduction above, each has its own advantages and disadvantages, and neither is simply better than the other. Optimistic locking fits scenarios with few writes (read-heavy), where conflicts rarely occur; it saves the overhead of locking and raises the overall throughput of the system. When writes are frequent, conflicts are common and cause the upper-layer application to retry continually, which hurts performance, so pessimistic locking is generally more appropriate in write-heavy scenarios.
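How both styles look in SQL (the account table and the version value 7 are illustrative):

```sql
-- Optimistic: carry a version column; the UPDATE succeeds only if
-- nobody bumped the version since we read it. If 0 rows are affected,
-- the application re-reads and retries.
SELECT balance, version FROM account WHERE id = 1;  -- suppose version = 7
UPDATE account
SET    balance = balance - 100, version = version + 1
WHERE  id = 1 AND version = 7;

-- Pessimistic: take a row lock up front; other writers block
-- until this transaction commits.
BEGIN;
SELECT balance FROM account WHERE id = 1 FOR UPDATE;
UPDATE account SET balance = balance - 100 WHERE id = 1;
COMMIT;
```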
9. Advantages and disadvantages of index, how to implement it at the bottom?
Advantages of indexes:
1. Creating a unique index guarantees the uniqueness of each row of data in the table.
2. Greatly speeds up data retrieval (the main reason for indexing).
3. Speeds up joins between tables, which is particularly meaningful for enforcing the referential integrity of data.
4. When grouping and sorting clauses are used for retrieval, it can significantly reduce the time spent grouping and sorting in a query.
5. Indexes allow the query optimizer to improve the performance of the system.
Disadvantages of indexes:
1. Creating and maintaining indexes takes time, and the cost grows with the amount of data.
2. Indexes occupy physical space: besides the space taken by the data table itself, every index needs a certain amount of space, and a clustered index needs even more.
3. When data in the table is added, deleted or modified, the indexes must be maintained as well, which slows down data maintenance.
10. What are the MySQL engines and what are the differences
Detailed explanation of five MySQL engines
InnoDB / MyISAM / MEMORY
The default index structure of InnoDB and MyISAM is the B+ tree; the default index of MEMORY is hash.
1. InnoDB supports transactions, foreign keys and row-level locks, and is fast when writing data. Full-text indexing is supported only from MySQL 5.6 onward.
2. MyISAM does not support transactions or foreign keys; it uses table-level locks, supports full-text indexing, and is fast when reading data.
3. MEMORY keeps all data in memory, so no disk I/O is needed and reads are very fast; but once power is lost, the table structure survives while the data is gone. MEMORY tables support hash indexes, so lookups are very fast.
11. Characteristics of transactions
Detailed explanation of transactions and their characteristics and isolation levels
Atomicity: either all of the statements in a transaction execute successfully or none of them take effect; if any statement fails, the whole set fails and the data is rolled back to its state before the transaction began.
Consistency: executing a transaction moves the data from one valid state to another; the integrity of the data as a whole remains stable.
Isolation: when multiple users access the database concurrently, for example operating on the same table, the transaction opened for each user must not be disturbed by the operations of other transactions; concurrent transactions must be isolated from one another.
That is to achieve such an effect: for any two concurrent transactions T1 and T2, in the view of transaction T1, T2 either ends before T1 starts or starts after T1 ends, so that each transaction does not feel that other transactions are executing concurrently.
Durability: once a transaction completes successfully, its changes to the data are permanent.
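Atomicity and durability are easy to see in a transfer sketch (the account table is illustrative):

```sql
BEGIN;
UPDATE account SET balance = balance - 100 WHERE id = 1;
UPDATE account SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- both updates become permanent together

-- Had anything failed in between, ROLLBACK would undo both updates,
-- returning the data to its state before BEGIN.
```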
12. Difference between primary key and index
The difference between primary key and clustered index
A table usually has a column or set of columns that contain values that uniquely identify each row in the table. Such a column or columns are called the primary key (PK) of the table and are used to enforce the integrity of the table. When creating or modifying tables, you can create primary keys by defining PK constraints.
A table can only have one PK constraint, and the columns in the PK constraint cannot accept null values. Because PK constraints can ensure the uniqueness of data, they are often defined for identification columns.
If a PK constraint is specified for a table, the database engine enforces data uniqueness by creating a unique index for the primary key column. This index can also be used for quick access to data when a primary key is used in a query. Therefore, the selected primary key must comply with the rules for creating a unique index.
When a primary key is created, the database engine automatically creates a unique index to enforce the uniqueness requirement of the PK constraint. If no clustered index already exists in the table and a nonclustered index is not explicitly specified, a unique clustered index is created to enforce the PK constraint.
A clustered index determines the sorted order in which data rows are stored in a table. Each table can have only one clustered index, because the data rows themselves can be stored in only one order.
Almost every table defines its clustered index on columns that:
Are used in frequent queries.
Provide a high degree of uniqueness.
13. Why is the InnoDB engine PK set to auto increment?
Why does MySQL's InnoDB storage engine use auto-incrementing primary keys?
Why is the InnoDB engine recommended to use auto-incrementing primary keys?
In InnoDB, the table data file itself is an index structure organized by B + tree. The leaf node data field of the tree saves complete data records. The key of this index is the primary key of the data table, so the InnoDB table data file itself is the primary index.
This index is called the clustered index. Because the InnoDB data file itself is organized by primary key, InnoDB requires that every table have a primary key (MyISAM does not). If one is not explicitly specified, MySQL automatically selects a column that can uniquely identify each record as the primary key; if no such column exists, MySQL automatically generates a hidden field for the InnoDB table to serve as the primary key. This field is 6 bytes long and of long-integer type.
The secondary index data field of InnoDB stores the value of the primary key of the corresponding record instead of the address. Therefore, it is not recommended to use an overly long field as the primary key, because all secondary indexes refer to the primary index. An overly long primary index will make the secondary index too large.
The InnoDB data file itself is a B+ tree. A non-monotonic primary key causes frequent page splits and adjustments of the data file to maintain the B+ tree properties when new records are inserted, which is very inefficient; a monotonically increasing key simply appends each new record at the right edge of the tree.
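What the recommendation looks like in a table definition (column names are illustrative):

```sql
-- Short, monotonically increasing surrogate key as the clustered index;
-- new rows always append at the rightmost leaf page, avoiding splits.
CREATE TABLE user (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  id_card CHAR(18),   -- long natural key demoted to a secondary index
  name VARCHAR(50),
  UNIQUE KEY uk_id_card (id_card)
) ENGINE = InnoDB;
```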
14. Default isolation level
The default isolation level of MySQL's InnoDB storage engine is repeatable read.
15. How many levels of isolation? What does each level mean?
The isolation levels of transactions, from lowest to highest, are:
- Read uncommitted
- Read committed
- Repeatable read
- Serializable
Read uncommitted: the lowest isolation level. At this level dirty reads, non-repeatable reads and phantom reads can all occur.
Read committed: only values committed by other transactions can be read. Non-repeatable reads and phantom reads can still occur, but dirty reads are avoided.
Repeatable read: phantom reads can still occur at this level, but dirty reads and non-repeatable reads are avoided.
Serializable: the strictest isolation level. All transactions execute serially, so dirty reads, non-repeatable reads and phantom reads cannot occur.
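Inspecting and changing the level in a session (the system variable is named transaction_isolation in recent versions, tx_isolation in older ones):

```sql
-- Current level (REPEATABLE-READ by default for InnoDB)
SELECT @@transaction_isolation;

-- Apply to the next transaction only
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Apply to the whole session
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```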