Basic principles of databases (MySQL)

Time: 2020-09-16

1、 Basic definition of MySQL:

A. Introduction to terminology

1. Database
It is a warehouse that organizes, stores and manages data according to the data structure.
Each database has one or more different APIs for creating, accessing, managing, searching, and copying saved data.
2. RDBMS(Relational Database Management System)
Relational database management system. MySQL is an RDBMS, originally developed by the Swedish company MySQL AB and now owned by Oracle.
A relational database is built on the relational model; it processes the data in the database with the help of mathematical concepts and methods such as set algebra.
Characteristics:
The data is presented in tabular form.
Each row is a single record.
Each column is a field of the record.
Rows and columns together form a table.
Several tables make up a database.
RDBMS terminology:
Database: a database is a collection of associated tables.
Data table: a structured grid of rows and columns holding the data.
Column: a column (data element) contains data of the same type.
Row: a row is one set of related data (one record).
Redundancy: storing the same data in more than one place; redundancy costs performance but improves data safety.
Primary key: a primary key's value must be unique, and a data table can have only one primary key.
Foreign key: a foreign key is used to associate two tables.
Composite key: a composite key (compound key) uses multiple columns as one index key; it is generally used for composite indexes.
Index: an index gives fast access to specific information in a database table; it is a structure that sorts the values of one or more columns.
Referential integrity: requires that a relation never reference a nonexistent entity. Referential integrity and entity integrity are the constraints the relational model must satisfy to keep data consistent.
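To make these terms concrete, here is a minimal sketch; the department and employee tables and all their columns are hypothetical, invented for this illustration only.

CREATE TABLE department (
    d_id int PRIMARY KEY,                 # primary key: unique, one per table
    d_name varchar(50)
);
CREATE TABLE employee (
    e_id int,
    d_id int,
    e_name varchar(50),
    PRIMARY KEY (e_id, d_id),             # composite key: multiple columns as one key
    FOREIGN KEY (d_id) REFERENCES department (d_id),   # foreign key: links the two tables
    INDEX idx_name (e_name)               # index: speeds up lookups on e_name
);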

B. Common MySQL commands

1. Manage MySQL commands
To add a MySQL user, simply insert a new row into the user table of the mysql database (USE mysql; INSERT INTO user (…) VALUES (…);).

USE db_name: selects the MySQL database to operate on; after this command, all subsequent MySQL commands apply to that database.
SHOW DATABASES: lists the databases managed by the MySQL server.
SHOW TABLES: lists all tables of the selected database; run the USE command first to select it.
SHOW COLUMNS FROM tb_name: displays the table's column attributes: type, primary-key info, nullability, default value, and so on.
SHOW INDEX FROM tb_name: displays the table's detailed index information, including the primary key.
SHOW TABLE STATUS FROM db_name LIKE 'pattern': reports performance and statistics information for the matching tables.
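A minimal session sketch of these commands, assuming a hypothetical school database that contains a tb_student table:

SHOW DATABASES;                              # list the server's databases
USE school;                                  # operate on the school database from now on
SHOW TABLES;                                 # list its tables
SHOW COLUMNS FROM tb_student;                # column types, keys, NULLs, defaults
SHOW INDEX FROM tb_student;                  # index details, including the primary key
SHOW TABLE STATUS FROM school LIKE 'tb_%';   # statistics for matching tables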
2. Basic commands of MySQL
CREATE DATABASE db_name;: creates a database (or, from the shell with root privileges: mysqladmin -u root -p create db_name).
DROP DATABASE db_name;: deletes a database (or, from the shell with root privileges: mysqladmin -u root -p drop db_name).

SHOW CREATE TABLE tb_name: returns the CREATE TABLE statement, including the original table's structure and indexes.
Create data table
CREATE TABLE tb_name (column_name column_type);
For example:
CREATE TABLE tb_student(
    s_id  int  PRIMARY KEY  AUTO_INCREMENT,
    s_name varchar(20)  NOT NULL,
    s_sex varchar(10),
    s_age int
)ENGINE=InnoDB  DEFAULT CHARSET=utf8;

NOT NULL: inserting NULL into this field raises an error.
AUTO_INCREMENT: marks the column as auto-incrementing, generally used for primary keys; the value increases by 1 automatically.
PRIMARY KEY: defines the column as the primary key; multiple columns can jointly form the primary key, separated by commas, e.g. PRIMARY KEY (s_id, s_sex).
ENGINE: sets the storage engine; if omitted, the default engine is used (InnoDB is MySQL's default storage engine).
CHARSET: sets the character encoding for the table (utf8 in the example above).
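As a hedged sketch of a composite primary key, the hypothetical tb_score table below keys each row by student and course:

CREATE TABLE tb_score (
    s_id int NOT NULL,
    c_id int NOT NULL,
    score int,
    PRIMARY KEY (s_id, c_id)    # composite primary key over two columns
) ENGINE=InnoDB DEFAULT CHARSET=utf8;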
Delete data table
DROP TABLE tb_name;
Delete field in table
ALTER TABLE tb_name DROP column;
Add fields to a table
ALTER TABLE tb_name ADD column type;                    # added as the last column.
ALTER TABLE tb_name ADD column type FIRST;              # added as the first column.
ALTER TABLE tb_name ADD column type AFTER other_column; # added after a given column.
Modify field type and name
ALTER TABLE tb_name MODIFY column type;                      # modify with MODIFY.
ALTER TABLE tb_name CHANGE old_column new_column type;       # modify (and rename) with CHANGE.
ALTER TABLE tb_name MODIFY column type NOT NULL DEFAULT 100; # make the field NOT NULL with default value 100.
Modify field defaults
ALTER TABLE tb_name ALTER column SET DEFAULT 1000;   # modify the field's default value.
ALTER TABLE tb_name ALTER column DROP DEFAULT;       # delete the field's default value.
Modify table name
ALTER TABLE old_tb_name RENAME TO new_tb_name;
Modify storage engine
ALTER TABLE tb_name engine=InnoDB;
Modify foreign key constraints
ALTER TABLE tb_name DROP FOREIGN KEY keyname;
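A hedged sketch of the reverse operation, adding a foreign key; the constraint name fk_score_student and the tb_score table are hypothetical:

ALTER TABLE tb_score
    ADD CONSTRAINT fk_score_student
    FOREIGN KEY (s_id) REFERENCES tb_student (s_id);     # associate tb_score with tb_student
ALTER TABLE tb_score DROP FOREIGN KEY fk_score_student;  # remove it again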
insert data
INSERT INTO table_name (field1, field2, ……, fieldN)  VALUES (value1, value2, ……, valueN);
Update data
UPDATE tb_name SET field1=new-value1, field2=new-value2 WHERE … ;
Any condition can be specified in the where clause.
Delete data
DELETE FROM table_name WHERE … ;
Any condition can be specified in the where clause.
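A minimal sketch of the three statements against the tb_student table created earlier (the values are invented):

INSERT INTO tb_student (s_name, s_sex, s_age) VALUES ('Tom', 'male', 18);
UPDATE tb_student SET s_age = 19 WHERE s_name = 'Tom';
DELETE FROM tb_student WHERE s_name = 'Tom';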
Query data
SELECT column_name1, column_name2 FROM table_name1, table_name2 
WHERE condition1 [AND | OR] condition2 ……;

The query may use one or more tables, separated by commas (,), with the WHERE clause setting the query conditions.
Any condition can be specified in the WHERE clause.
Multiple conditions can be combined with AND or OR.
The LIKE clause can stand in for the equals sign (=); it is usually used with the % wildcard, as in LIKE 'ACC%'.
The WHERE clause is likewise used in SQL's DELETE and UPDATE commands.
The LIMIT clause caps the number of records returned.
OFFSET specifies the data offset at which the SELECT starts reading; the default offset is 0.
The WHERE clause is similar to an if condition in a programming language: it filters rows by the field values in the MySQL table.
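A hedged sketch combining these clauses on the tb_student table (the data is hypothetical):

SELECT s_name, s_age FROM tb_student
WHERE s_age >= 18 AND s_name LIKE 'T%'   # AND plus a LIKE prefix match
LIMIT 10 OFFSET 0;                       # at most 10 rows, starting at offset 0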

Where clause operator:
=: equal to
<>, !=: not equal to
>: greater than
<: less than
>=: greater than or equal to
<=: less than or equal to
Union operator
The UNION operator combines the results of two or more SELECT statements into a single result set.
Duplicate rows across the SELECT statements are removed by default.

SELECT expression1, expression2, ……, expressionN FROM tb_name1 WHERE ……
UNION  [ ALL | DISTINCT ]
SELECT expression1, expression2, ……, expressionN FROM tb_name2 WHERE ……;

ALL: optional; returns all rows, including duplicates.
DISTINCT: optional; removes duplicate rows from the result set (the default behavior).
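A minimal sketch, assuming a second hypothetical table tb_graduate with the same s_name column:

SELECT s_name FROM tb_student WHERE s_age >= 18
UNION                                      # duplicates removed (DISTINCT is the default)
SELECT s_name FROM tb_graduate WHERE s_age >= 18;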
sort
Order by: sort the query results.
SELECT column1, column2 FROM tb_name WHERE …… ORDER BY field1 [ASC/DESC], field2 [ASC/DESC];

ASC/DESC: ascending/descending; ASC is the default.
grouping
Group by: groups the query results.
SELECT column1, column2 FROM tb_name WHERE …… GROUP BY column1, column2;
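GROUP BY is typically paired with aggregate functions; a hedged sketch on tb_student:

SELECT s_sex, COUNT(*) AS cnt, AVG(s_age) AS avg_age
FROM tb_student
GROUP BY s_sex;        # one summary row per distinct s_sex value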
connect
Join: queries data in two or more tables.

Inner join (also called an equijoin): returns the records whose join fields match in both tables.
 An inner join compares the data in the two tables with a comparison operator and lists the rows that satisfy the join condition as new records; only matching records are kept.
 SELECT column1, … FROM tb_name1 INNER JOIN tb_name2 ON tb_name1.id = tb_name2.id;

Left join: returns all records of the left table, even those with no matching record in the right table.
The left table is kept in full and matched against the right table; all rows of the left table are retained, and where the right table has no match the columns are filled with NULL.
SELECT column1, … FROM tb_name1 LEFT JOIN tb_name2 ON tb_name1.id=tb_name2.id;

Right join: returns all records of the right table, even those with no matching record in the left table.
The right table is kept in full and matched against the left table; all rows of the right table are retained, and where the left table has no match the columns are filled with NULL.
SELECT column1, … FROM tb_name1 RIGHT JOIN tb_name2 ON tb_name1.id=tb_name2.id;
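A hedged sketch contrasting the joins, reusing the hypothetical tb_score table: the LEFT JOIN keeps students who have no score row at all.

SELECT st.s_name, sc.score
FROM tb_student st
LEFT JOIN tb_score sc ON st.s_id = sc.s_id;   # unmatched students get score = NULL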
Null value
IS NULL: returns true when the column's value is NULL.
IS NOT NULL: returns true when the column's value is not NULL.
<=>: a comparison operator (unlike =) that returns true when the two compared values are equal or both are NULL.
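A minimal sketch of the three operators (results shown as comments):

SELECT * FROM tb_student WHERE s_sex IS NULL;       # rows whose s_sex is NULL
SELECT * FROM tb_student WHERE s_sex IS NOT NULL;   # rows whose s_sex is set
SELECT NULL = NULL;      # yields NULL, not true
SELECT NULL <=> NULL;    # yields 1 (true)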
3. MySQL data type
Numeric types
MySQL supports all standard SQL numeric data types.
It includes strict numeric data types (integer, smallint, decimal, numeric).
And approximate numerical data types (float, real, double precision).
Date and time type
Represents the date and time type of the time value (datetime, date, timestamp, time, year).
Each time type has a valid-value range and a "zero" value; the zero value is used when you specify a value that MySQL cannot represent.
String type
String type (char, varchar, binary, varbinary, blob, text, enum, set).
4. MySQL index
An index speeds up SQL queries; indexed columns are typically the ones used as conditions in the WHERE clause.

Primary key index: the indexed column's values may not repeat or be NULL; a table can have only one primary key.
Normal index: basic index type, no uniqueness restriction, and null value is allowed.
Unique index: the value of an index column must be unique, but null values are allowed; if it is a composite index, the combination of column values must be unique.
Prefix index: indexes a prefix of the column rather than the whole column; queries such as LIKE 'xxx%' can use it.
Hash index: a hash algorithm converts the key value into a hash; a lookup locates the corresponding position in a single step, so queries are very fast.
Three ways to create index
CREATE INDEX indexName ON tb_name (column(length));   # create the index directly.
ALTER TABLE tb_name ADD INDEX indexName (column);     # add an index while altering the table structure.
CREATE TABLE tb_name (                                # specify the index directly when creating the table.
    id int NOT NULL,
    username VARCHAR(16) NOT NULL,
    INDEX  [indexName] (username(length))
);
Three ways to create a unique index
CREATE UNIQUE INDEX indexName ON tb_name (column(length));   # create the index directly.
ALTER TABLE tb_name ADD UNIQUE indexName (column(length));   # add the index while altering the table structure.
CREATE TABLE tb_name (                                       # add the index when creating the table.
    id int NOT NULL,
    username VARCHAR(16) NOT NULL,
    UNIQUE [indexName] (username(length))
);
Drop index
DROP INDEX [indexName] ON tb_name;
Use the alter command to add and delete indexes
ALTER TABLE tb_name ADD PRIMARY KEY (column);        # add a primary key; index values must be unique and non-NULL.
ALTER TABLE tb_name ADD UNIQUE indexName (column);   # values of this index must be unique (NULLs excepted).
ALTER TABLE tb_name ADD INDEX indexName (column);    # add a normal index; values may repeat.
ALTER TABLE tb_name ADD FULLTEXT indexName (column); # add a FULLTEXT index for full-text search.
ALTER TABLE tb_name DROP INDEX indexName;            # delete an index.
ALTER TABLE tb_name ADD PRIMARY KEY (column);        # add a primary key.
ALTER TABLE tb_name DROP PRIMARY KEY;                # delete the primary key.

SHOW INDEX FROM tb_name;                             # display index information.
explain
View the execution plan: the EXPLAIN keyword simulates the optimizer executing the SQL statement, showing which index columns are used and other details.
explain select * from tb_name;
5. Index usage strategy and optimization
The most common underlying storage structure for an index is the B-tree or B+ tree.
There are two kinds of indexes: single-column and composite.
Single-column index: an index over a single column; a table can have several single-column indexes.
Composite index (also called a compound index): an index over multiple columns.
Leftmost prefix principle
Leftmost matching means leftmost priority: when creating a composite index, put the column most frequently used in WHERE clauses at the far left, according to the business requirements.
A composite index is searched by its first field, then its second, and so on (or by the first field alone); it cannot skip the first field and search directly on the second. This is the leftmost prefix principle.

The first field is ordered.
When the values of the first field are equal, the second field is also ordered.
When the value of the first field is equal and the value of the second field is equal, the third field is also ordered.

For example, create a composite index on fields a, b, and c. The index is sorted first by field a, then by field b, and finally by field c.
The following SQL statements use the index, in the orders (a), (a, b), and (a, b, c).
select * from tb_name where a = 0;
select * from tb_name where a = 0 and b = 1;
select * from tb_name where a = 0 and b = 1 and c = 2;

The following SQL statement uses only the index prefix (a), because b is absent.
select * from tb_name where a = 0 and c = 2;

The following SQL statement does not use the index because it does not follow the leftmost matching principle.
select * from tb_name where b = 1 and c = 2;

Taking MySQL as an example, the following SQL statement can still use the index, because the query optimizer reorders the conditions; relying on this is nevertheless not recommended.
select * from tb_name where b = 1 and c = 2 and a = 0;
Index optimization
1. Index primary keys and foreign keys.
2. Index the columns used in WHERE, ON, GROUP BY, and ORDER BY.
3. Follow the leftmost matching principle.
4. Extend existing indexes where possible; for example, if an index on field a already exists and an index on (a, b) is needed, modify the original index instead of adding a new one.
5. Do not create too many indexes; excess indexes slow down inserts and deletes.
6. Do not put "%" at the front of a LIKE pattern.
7. A WHERE condition whose data type does not match the column's cannot use the index.
8. Use prefix indexes for longer strings.
9. Applying a function to an indexed column invalidates the index.
(Several of these rules are sketched below.)
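Hedged sketches of rules 6, 7, and 9, assuming a hypothetical tb_user table whose name and phone columns are both indexed VARCHARs:

SELECT * FROM tb_user WHERE name LIKE '%abc';       # rule 6: leading % defeats the index
SELECT * FROM tb_user WHERE name LIKE 'abc%';       # a prefix match can still use the index
SELECT * FROM tb_user WHERE phone = 13800000000;    # rule 7: number vs VARCHAR forces a cast, index unused
SELECT * FROM tb_user WHERE LEFT(name, 3) = 'abc';  # rule 9: function on the column, index unused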
6. Clustered index and nonclustered index
Both clustered and nonclustered indexes are B+ tree indexes underneath.
Clustered index
It is also called cluster index. The physical order of data rows is the same as the logical order of column values (primary keys). A table can only have one clustered index.
Nonclustered index (unclustered)
The logical order of the index differs from the physical order of the rows on disk. A table can have multiple nonclustered indexes.
7. MySQL data import and export (users need file permission to import and export files)
Common export data statements
SELECT … INTO OUTFILE
Writes a table's data to a file; the target file must not already exist.
SELECT * FROM tb_name INTO OUTFILE '/tmp/tmp.txt';

LOAD DATA INFILE
Reads a file's data back into a database table.
Common import data statements
MySQL command import
mysql -u username -p password < tmp.sql   # import the database dump (tmp.sql).
Example: mysql -uroot -p123456 < tmp.sql  # imports the entire backed-up database in tmp.sql.

Source command import
CREATE DATABASE abc;        # create the database.
USE abc;                    # use the database just created.
SET NAMES utf8;             # set the encoding.
SOURCE /home/abc/abc.sql;   # import the backup database.

Load data command import
LOAD DATA LOCAL INFILE 'tmp.txt' INTO TABLE mytable
FIELDS TERMINATED BY ':'
LINES TERMINATED BY '\n';   # imports the data in tmp.txt into the mytable table.

If the LOCAL keyword is specified, the file is read by path from the client host.
If it is not specified, the file is read by path on the server.

2、 Database transactions and concurrency:

A. Database transactions

In MySQL, only databases or tables using the InnoDB engine support transactions.
1. Database transaction definition
Transaction processing maintains database integrity by ensuring that a batch of SQL statements either executes completely or not at all.
Transactions are used to manage INSERT, UPDATE, and DELETE statements.
1) Atomicity (or indivisibility)
The operations in a transaction are either all completed or not completed, and will not end in a certain link in the middle.
If an error occurs during the execution of the transaction, it will be rolled back to the state before the transaction started.
2) Consistency
The integrity of the database is not compromised before and after the transaction begins and ends.
This means the data written must fully comply with all predefined rules.
3) Isolation (also known as independence)
The ability of a database to allow multiple concurrent transactions to read, write and modify its data at the same time.
Isolation can prevent data inconsistency due to cross execution of multiple transactions.
4) Durability
After the end of the transaction, the modification of the data is permanent, even if the system fails, it will not be lost.
2. Transaction control statement
MySQL transactions are autocommitted by default: a commit is performed as soon as each SQL statement finishes executing.
Open and commit a transaction explicitly

BEGIN / START TRANSACTION: starts a transaction.
COMMIT / COMMIT WORK: commits the transaction, making all its changes to the database permanent.
ROLLBACK / ROLLBACK WORK: rolls back the transaction, undoing all uncommitted changes in progress.

SAVEPOINT identifier: creates a savepoint within a transaction; a transaction can hold several savepoints.
RELEASE SAVEPOINT identifier: deletes a transaction's savepoint; if the named savepoint does not exist, the statement throws an error.
ROLLBACK TO identifier: rolls the transaction back to the marked point.
A savepoint is a way to implement "subtransactions", also called nested transactions, in database transaction processing.

Transactions can be rolled back to savepoint without affecting the changes before savepoint is created, and the entire transaction does not need to be abandoned.
SAVEPOINT savepoint_name;           # declare a savepoint.
ROLLBACK TO savepoint_name;         # roll back to the savepoint.
RELEASE SAVEPOINT savepoint_name;   # delete the named savepoint.

Set transaction: used to set the isolation level of the transaction.

Two methods of MySQL transaction processing
1) Using BEGIN, ROLLBACK, COMMIT.
    BEGIN starts a transaction.
    ROLLBACK rolls the transaction back.
    COMMIT commits the transaction.

2) Using SET to change MySQL's autocommit mode directly.
SET autocommit = 0: disable autocommit.
SET autocommit = 1: enable autocommit.
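A hedged end-to-end sketch combining both methods on the tb_student table:

SET autocommit = 0;                                 # method 2: turn off autocommit
BEGIN;                                              # method 1: open a transaction
INSERT INTO tb_student (s_name) VALUES ('Tom');
SAVEPOINT sp1;                                      # mark a savepoint
INSERT INTO tb_student (s_name) VALUES ('Jerry');
ROLLBACK TO sp1;                                    # undo only the second insert
COMMIT;                                             # 'Tom' becomes permanent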

B. Database concurrency

Databases differ in the isolation levels they support and in how they lock.
MySQL supports four transaction isolation levels; the default is REPEATABLE READ (RR).
Oracle supports two transaction isolation levels (READ COMMITTED and SERIALIZABLE); the default is READ COMMITTED (RC).
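A minimal sketch of inspecting and changing the level in MySQL (variable names differ across versions; transaction_isolation is the 8.0 name, tx_isolation the older one):

SELECT @@transaction_isolation;                          # current level, e.g. REPEATABLE-READ
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;  # change it for this session only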
1. The concept of reading data
1) Dirty reads
Reading dirty data, i.e. data that another transaction has not yet committed.
2) Non repeatable reads
A transaction reads the same record twice in succession and gets different data each time.
3) Phantom reads
A transaction rereads the previously retrieved data according to the same query condition, but finds that other transactions insert new data satisfying its query criteria.
Example: there are two transactions (T1, T2) running at the same time
T1 reads a field that T2 has modified but not yet committed. T2 is then rolled back for some reason, so what T1 read was temporary and invalid: this is a dirty read.
T1 reads a field, then T2 updates it. When T1 reads the field again it sees T2's new value: this is a non-repeatable read.
T1 reads rows from a table, T2 inserts new rows into that table, and when T1 reads the table again it finds extra rows: this is a phantom read.
2. Database isolation level
1) Read uncommitted
Allows a transaction to read data that another transaction has changed but not yet committed.
Dirty reads, non-repeatable reads, and phantom reads can all occur; this is the lowest isolation level (reads take no locks).
2) Read committed
Allows a transaction to read only data that other transactions have already committed.
Avoids dirty reads, but non-repeatable reads and phantom reads can still occur.
3) Repeatable read
Ensure that transactions can read the same value from a field multiple times.
During the duration of this transaction, other transactions are prohibited from updating this field.
Avoids dirty reads and non-repeatable reads, but phantom reads can still occur; at the RR level the records read are locked (write locks).
4) Serializable
Ensure that transactions can read the same rows from a table, and prohibit other transactions from performing insert, update, and delete operations on the table during the duration of the transaction.
All concurrency problems can be avoided, but the performance is very low.
All plain SELECT statements are implicitly converted to SELECT … LOCK IN SHARE MODE.
In other words, reads and writes block each other; this is the highest isolation level (table-level locks).
3. Storage engine of database
The common database storage engines are: (1) MyISAM (2) InnoDB.
1) MyISAM
Table level locking is supported.
Applicable scenario: read-heavy, write-light workloads on modest hardware.
2) InnoDB (MySQL default storage engine)
Supports table-level and row-level (default) locking.
Applicable scenario: workloads that need transactions and foreign keys, i.e. read-write services.
4. Database lock
Lock is mainly used to ensure database integrity and consistency in multi-user environment.
1) Locks are classified by usage
Optimistic lock
Assumes other sessions will not modify the data, so reads take no lock.
At update time, it checks whether anyone else has changed the data in the meantime.
If a conflict is found, an error is returned to the user, who decides what to do next.
Usually implemented with a version-number mechanism (see the sketch below).
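A hedged sketch of the version-number mechanism, assuming a hypothetical tb_account table with a version column:

SELECT balance, version FROM tb_account WHERE id = 1;   # suppose this returns version = 3
UPDATE tb_account
SET balance = balance - 100, version = version + 1
WHERE id = 1 AND version = 3;   # 0 rows updated means someone else changed the row first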
Pessimistic lock
Assumes other sessions will modify the data, so the data is locked on every read.
2) Locks are classified by level
Shared lock
A shared lock (S lock), also known as a read lock, is used for all read-only data operations.
Shared locks are non-exclusive: multiple concurrent transactions may read the locked resource.

Shared lock properties:
Multiple transactions can hold an S lock on the same page.
No transaction can modify the page while it is S-locked.
The S lock is usually released as soon as the page has been read.
Exclusive lock
Exclusive lock, also called write lock, is used to write data.
If a transaction has an exclusive lock on an object, no other transaction can lock it any more.

Property of exclusive lock
Only one transaction may hold a lock on the page at a time.
Any other transaction must wait until the X lock is released before accessing the page.
The X lock cannot be released until the end of the transaction.
3) Locking by granularity (MySQL)
Table level lock
The scope of lock is the whole table.
Low overhead, fast to acquire, no deadlocks.
Coarse granularity, so the probability of lock conflicts is highest and concurrency is lowest.
Row level lock
The scope of the lock is the row level.
High overhead, slow to acquire, deadlocks can occur.
Finest granularity, so the probability of lock conflicts is lowest and concurrency is highest.
Page level lock
The scope of the lock is the entire page.
Overhead and locking time fall between table locks and row locks; deadlocks can occur. Granularity also falls between the two, and concurrency is moderate.
4) Intention lock
Intention lock is automatically added by InnoDB, without user intervention.
Intention shared lock
Indicates that the transaction intends to place a shared lock on a data row: before a row is S-locked, the table's IS lock must be obtained.
Intention exclusive lock
Indicates that the transaction is ready to add an exclusive lock to a data row, that is, an IX lock of the table must be obtained before an exclusive lock is applied to a data row.
5. Lock implementation mode
When a transaction obtains a write lock on a table, only that transaction can update the table; other transactions must wait to read or write until the lock is released.
When a transaction obtains a read lock on a table, other transactions can also read the table, but none can obtain write access until the lock is released.
1) Implementation of row level lock
A row-level lock locks index entries, not records; row-level locking therefore applies only when data is retrieved through an index condition.
Implicit locking
For update, delete and insert statements, InnoDB will automatically add exclusive locks to the involved data sets.
For normal select statements, InnoDB does not impose any locks.
Explicit locking
SELECT * FROM tb_name WHERE …… LOCK IN SHARE MODE;   # add a shared lock.
SELECT * FROM tb_name WHERE …… FOR UPDATE;           # add an exclusive lock.
2) Table level lock implementation
Implicit locking
Before executing the select query statement, all tables involved are automatically read locked.
Before performing update, delete and insert operations, the involved tables will be automatically locked with write locks.
Explicit locking
LOCK TABLE tb_name WRITE;   # add a write lock.
LOCK TABLE tb_name READ;    # add a read lock.

UNLOCK TABLES;              # release the locks.
6. InnoDB lock mechanism
1) Record Lock
Row lock, a single index record is locked. Record lock locks the index, not the record itself.
2) Gap Lock
Gap lock is used to lock the gap between index records, or lock before or after an index record. It does not include the index record itself.
Gap lock is for transactions with isolation level RR or above.
The only function of gap lock in InnoDB is to prevent the insertion of other transactions, so as to prevent the occurrence of phantom read.
Gap lock is generally used for non unique indexes.
3) Next-key Lock
Next key lock is a combination of record lock and gap lock, which locks the record itself and the gap between indexes.
MySQL's default transaction isolation level is RR; when the innodb_locks_unsafe_for_binlog parameter is 0, next-key locking is used by default.
Next key lock is a combination of row lock and gap lock. When InnoDB scans index records, it will first add record lock to the index record, and then add gap lock to the gap on both sides of the index record.

With a gap lock, other transactions cannot modify or insert records in the gap.
7. Deadlock
Deadlock refers to the phenomenon of two or more processes waiting for each other because of competing for resources in the process of execution. If there is no external force, they will not be able to advance.
1) Causes of deadlock
The resources of the system are insufficient.
Processes advance in an unsuitable order.
Improper allocation of resources.
2) Necessary conditions for deadlock
Mutex condition: a resource can only be used by one process at a time.
Request and hold condition: when a process is blocked by a request for resources, it holds the acquired resources.
Non deprivation condition: the resources acquired by the process can not be forcibly deprived before they are used up.
Cyclic waiting condition: a cyclic waiting resource relationship is formed between several processes.
3) Reduce deadlock
Access objects in the same order.
Avoid user interaction in transactions.
Keep the transaction short and in a batch.
Use a low isolation level.
Use bound connections.
8. MVCC (multi-version concurrency control)
MVCC: multi-version concurrency control.
MVCC is a concurrency-control method that improves database throughput in high-concurrency scenarios.
1) Mvcc principle
MVCC is implemented by saving snapshots of the data as of a point in time.
Each row record carries two hidden columns: one holds the row's creation time, the other its expiration (or deletion) time.
The stored "time" is not an actual timestamp but the system version number.
Each time a new transaction starts, the system version number is incremented.
The system version number at the start of a transaction serves as the transaction's version number and is compared against the version number of every row record it queries.
2) Mvcc features
Each update produces a new version of the row.
A modification copies the current version and edits the copy, so transactions do not interfere with one another.
On save, version numbers are compared: on success the commit overwrites the original record; on failure the copy is discarded (rollback).
3) Mvcc implementation
Each row stores two additional hidden columns: (1) DATA_TRX_ID, (2) DATA_ROLL_PTR.

DATA_TRX_ID
Records the ID of the transaction that last modified (inserted/updated) the row; 6 bytes in size.

DATA_ROLL_PTR
A pointer to the row's undo log record in the rollback segment; 7 bytes in size.
When a row is updated, the undo log record holds the information needed to rebuild the row's contents as they were before the update.
InnoDB follows this pointer to find earlier versions of the row's data.
All old versions of a row are organized as a linked list in the undo log.

If the table has no primary key, there is also a hidden primary-key column, DB_ROW_ID.

DB_ROW_ID
The row ID (a hidden, monotonically auto-incrementing ID), 6 bytes in size. If the table has no primary key, InnoDB generates this hidden primary key automatically.

Example: transactions 1 and 2, with DATA_TRX_ID, DATA_ROLL_PTR, and DB_ROW_ID.
Transaction 1.
Executes a new insert.
At this point, DB_ROW_ID = 1, DATA_TRX_ID = 1 (the system version number), DATA_ROLL_PTR = NULL.

Transaction 2 performs an update.
It takes an exclusive lock on the row with DB_ROW_ID = 1.
The pre-update values are copied into the undo log.
Modifying the row generates a new version number, and DATA_TRX_ID is set to the modifying transaction's ID.
DATA_ROLL_PTR is pointed at the old version just copied into the undo log chain; through DATA_ROLL_PTR the historical versions of the record can be found. Successive updates to the same row chain their undo records into a linked list, and traversing the list reveals the record's change history.
The redo log is written, including the change recorded in the undo log.
4) Under RR isolation level, mvcc specific operation process.
Select: InnoDB only looks for data rows whose version is earlier than the current transaction version. The deleted version of the row is either undefined or greater than the version number of the current transaction. This ensures that the rows read by the transaction are not deleted before the transaction starts.
Insert: InnoDB saves the current system version number as the row version number for each inserted line.
Delete: InnoDB saves the current system version number for each deleted line as the deletion ID, marking it for deletion rather than actual deletion.
Update: InnoDB will copy the original row to the rollback segment, save the current system version number as the row version number, and save the current system version number to the original line as the deletion identifier.

3、 The underlying implementation of InnoDB engine:

InnoDB core: (1) logs, (2) memory (buffer pool), (3) disk (data files).
The memory pools together form one large block of memory.
Background threads chiefly refresh the data in the memory pool and flush modified data to disk.

When a transaction writes, the InnoDB engine first writes the change to the redo log, and the transaction can then be committed.
The data is not written to disk (the data file) at that moment; InnoDB later writes the new transaction's data to the data file asynchronously.


A. Redo log, undo log, and binlog

These log files are used to recover the dirty data blocks belonging to transactions.
1. Roll forward and roll back
Roll forward
Applies to a transaction that committed but was not fully persisted: the COMMIT command has executed,
but only part of the transaction's dirty data blocks have been written to the data files on disk; the rest are still in memory.
If the database instance crashes at this point, a roll forward is needed on recovery to complete the transaction's commit.
Roll back
Applies to an uncommitted transaction, i.e. one for which the COMMIT command has not executed.
2. Redo log
The redo log provides the roll-forward operation.
It is usually a physical log that records the physical modifications to data pages; it is used to recover the physical data pages of committed transactions.
It restores data pages, but only up to the last committed position.
Redo log consists of two parts
The redo log buffer in memory, which is volatile.
Redo log file on disk, which is persistent.
3. Undo log
The rollback log provides the rollback operation.
The undo log is used to roll row records back to a previous version.
It is generally a logical log, recorded per row.
4. Bin log
The binary log records SQL statements that change, or could change, data; it is stored on disk in binary form.
It can be used to view the database change history (specific point in time operations), database incremental backup and recovery.
SHOW VARIABLES LIKE '%log_bin%';

B. InnoDB memory

1. InnoDB thread
Master Thread
The core thread is mainly responsible for flushing the data in the cache pool to the disk asynchronously to ensure the data consistency.
IO Thread
The IO threads are responsible for the large volume of asynchronous I/O, such as handling write I/O requests.
Purge Thread
Purge thread recycles undo pages that have been used and allocated. InnoDB supports multiple purge threads, which can speed up the collection of undo pages.
Page Cleaner Thread
The page cleaner thread moves the dirty-page flushing done by earlier versions into a separate thread, reducing the master thread's workload and the blocking of user query threads.
2. InnoDB memory model
InnoDB engine uses cache pool technology to improve the overall performance of the database.
Buffer pool pages in InnoDB are 16 KB by default.
The famous principle of locality in computer science:
when a piece of data is used, data near it is usually used soon after, and the data a running program needs tends to be concentrated.
Memory and disk
Main memory and disk exchange data on a page by page basis.
When data the program needs is not in main memory, a page-fault exception is triggered.
The system then issues a read to the disk, which locates the start of the data and reads one or more pages into memory.
The exception returns and the program continues to run.
Database, memory, disk
When reading pages in the database, the pages read from the disk are first stored in the cache pool.
The next time you read the same page, first determine whether the page is in the cache pool.
If the page is hit in the cache pool, read the page directly.
Otherwise, read the page on the disk.

For the modification of the page in the database, first modify the page in the cache pool, and then refresh to the disk with a certain frequency.
3. Buffer pool
In order to better manage these cached pages, InnoDB creates some control information (control blocks) for each cached page.
The control information includes the table space number, page number, address of the page in buffer pool, lock information, LSN information, etc.
Each cache page corresponds to a control block. The memory size of each control block is the same. They are all put into the buffer pool, as shown in the figure below.

(Figure: control blocks and their corresponding cached pages in the buffer pool.)

4. Free list
When the MySQL server starts, the buffer pool is initialized and divided into pairs of control blocks and cache pages.
As the program runs, pages from disk are continually cached in the buffer pool; how, then, are the idle cache pages managed?
A free list manages the idle cache pages, as shown in the figure below.

(Figure: the free list pointing at the control blocks of idle cache pages.)

Free list control information: contains the address of the head node, the address of the tail node, and the number of nodes in the current list.
Each free list node records the address of a cache page control block.
Each cache page control block records the corresponding cache page address.

Whenever a page must be loaded from disk into the buffer pool, a free cache page is taken from the free list, the information in its control block is filled in, and the corresponding node is removed from the free list.
5. Buffer pool cleaning mechanism
Cache hit ratio: if N pages are accessed in total, the hit ratio is the number of accesses served from the cache divided by N.

InnoDB buffer pool uses the classic LRU algorithm to eliminate pages to improve the cache hit rate.
LRU (least recently used): evicts the least recently used pages to manage the buffer pool's pages.
If the page is not in the buffer pool, when the page is loaded from the disk to the cache page in the buffer pool, the cache page is packaged as a node and inserted into the head of the linked list.
If the page is in the buffer pool, the LRU linked list node corresponding to the page is directly moved to the head of the linked list.

Shortcoming
A full table scan can flush the hot data out of the buffer pool (buffer pool pollution), sharply reducing the cache hit ratio.
The scan replaces every data page in the pool, so other queries must load their pages from disk all over again.
Midpoint insertion strategy
InnoDB storage engine optimizes the traditional LRU algorithm.
InnoDB adds a midpoint to the LRU list: although a newly read page is the most recently accessed page,
it is not inserted at the head of the LRU list but at the midpoint.
By default the insertion point is at 5/8 of the list length; the midpoint is controlled by the innodb_old_blocks_pct parameter.

The list before midpoint is called the new list, and the list after it is called old list.
You can simply think of the pages in the new list as the most active hot data.

The InnoDB storage engine also introduces innodb_old_blocks_time, which specifies how long a page must wait after being read into the mid position before it may move to the hot end of the LRU list. Setting this parameter helps keep hot data from being flushed out.
6. Flush list
The flush list manages flushing pages back to disk: the buffer pool tracks the pages that must be flushed (dirty pages) through the flush list.
A dirty page here is a page that has been modified since it was first loaded into the buffer pool.
A page is added to the flush list only on its first modification; later modifications find it already there.
7. LRU list, free list and flush list

(Figure: relationship among the LRU list, the free list, and the flush list.)

8. Checkpoint Technology
1) Shorten database recovery time.
The checkpoint position is recorded in the redo log.
Pages before that point have already been flushed to disk, so recovery only needs to replay the redo log after the checkpoint.
2) Refresh dirty pages when cache pool is insufficient.
Following the LRU algorithm, the least recently used pages are evicted.
If the page is dirty, force checkpoint to flush the dirty page back to disk.
3) Refresh dirty pages when redo log is not available.
The redo log is written cyclically, and some of its segments correspond to data not yet flushed to disk.
During recovery, a log segment can be overwritten only if it is no longer needed.
If it is still needed, a checkpoint must be forced to flush buffer pool pages at least up to the current redo log position.
There are two kinds of checkpoints in InnoDB storage engine
Sharp checkpoint (default, innodb_fast_shutdown=1).
Sharp checkpoint occurs when the database is shut down, flushing all dirty pages back to disk.
Disadvantages: not suitable for database runtime refresh.
Fuzzy checkpoint.
Fuzzy checkpoint is suitable for refreshing dirty pages when the database is running, and only some dirty pages are refreshed.
MasterThread Checkpoint
Asynchronous refresh, refresh a certain proportion of pages back to disk from the cache pool dirty page list every second or every 10 seconds.

FLUSH_LRU_LIST Checkpoint
If the buffer pool runs short of space, pages at the tail of the LRU list are evicted per the LRU algorithm.
If any of these pages are dirty, a checkpoint is required (this is the page cleaner thread's job).
The InnoDB storage engine needs to keep roughly 100 free pages available in the LRU list.
innodb_lru_scan_depth: controls the number of available pages in the LRU list; 1024 by default.

Async/Sync Flush Checkpoint
When redo logs are not available, you need to force a refresh page back to disk.
The page is selected from the flush list.
LSN
The sequence number of each record in the transaction log.
InnoDB storage engine marks the version through the LSN (log sequence number). LSN is an 8-byte number.

redo_lsn: the LSN most recently written to the log.
checkpoint_lsn: the LSN of the latest page flushed back to disk.

checkpoint_age = redo_lsn - checkpoint_lsn
async_water_mark = 75% * total_redo_file_size
sync_water_mark = 90% * total_redo_file_size
When checkpoint_age is below async_water_mark, no flushing is needed; between the two marks, an asynchronous flush is triggered; above sync_water_mark, a synchronous flush is triggered.
Dirty Page too much Checkpoint
That is, if there are too many dirty pages, the checkpoint is forced to ensure that there are enough available pages in the cache pool.
Parameter: innodb_max_dirty_pages_pct = 75.
Meaning: when dirty pages reach 75% of the buffer pool, a checkpoint is forced. Since InnoDB 1.0.x, the default is 75.


C. InnoDB key features

1. Insert buffer
With the insert buffer, inserts and updates to nonclustered indexes are not applied directly to the index page every time.
Instead, InnoDB first checks whether the target nonclustered index page is in the buffer pool.
If it is, the change is applied directly; if not, the change is first placed into an insert buffer object.

The entry is not inserted into the leaf node of the nonclustered index (in a B+ tree, only the leaf nodes store data) but stored in another location.
The insert buffer is then merged into the secondary index's leaf pages at a certain frequency and under certain conditions.
Multiple inserts can usually be merged into one operation (because they fall in the same index page), which greatly improves nonclustered index insert performance.
There are two conditions to use insert buffer
1) An index is a secondary index.
2) The index is not unique.
2. Doublewrite
Suppose the database is writing a data page from memory to disk when it crashes.
Only part of the page gets written: a partial page write, which loses data.
The redo log cannot help at this point,
because redo logging records physical modifications to a page; if the page itself is corrupt, the redo log is powerless.

How to solve the above problems?
Doublewrite solves this problem.
Copies of the data pages are kept in the shared tablespace on disk. If the database goes down while a page is being written to the data file,
recovery can fetch the page's copy from the shared tablespace, restore the original page from the copy, and then apply the redo log.

The doublewrite principle:
1) When dirty pages are flushed from the buffer pool, they are not written directly to the data file but first copied to the doublewrite buffer in memory.
2) The doublewrite buffer is then written to the doublewrite area of the shared tablespace on disk, 1 MB at a time.
3) After step 2, the pages in the doublewrite buffer are written to the data files.

Two extra structures are required:
1) The doublewrite buffer in memory, 2 MB in size.
2) 128 consecutive pages in the shared tablespace on disk, also 2 MB in size.

InnoDB enables doublewrite by default; it can be disabled with skip_innodb_doublewrite.
