Redis data migration

Time: 2020-10-17

Advance preparation

Before migrating any data, complete the following groundwork.

  • Environment research: survey the source and target database environments, versions, data sizes, business scenarios, operating system versions, etc.

  • Plan preparation: make the plan as detailed as possible. If the source database goes down because the migration fails, is there a data backup and fallback scheme?

  • Personnel in place: carry out the data migration at night, and ideally have both primary and backup (A/B) role owners participate together.

  • Fully understand the scope of business impact: use the MONITOR command to see which client IPs are operating on the source data (after filtering out the IPs that only issue PING, INFO, and SLOWLOG), and resolve those hosts to domain names; see the sketch after this list.

  • Decide in advance how to handle problems that arise during the shutdown window.
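
As a sketch of the MONITOR audit above: the following Python script (hypothetical, not part of any original toolchain) reads the output of `redis-cli -h <source-host> monitor` from stdin, filters out the PING/INFO/SLOWLOG noise, and counts which client IPs run which commands.

```python
# monitor_audit.py -- a hypothetical helper, sketch only.
# Usage: redis-cli -h <source-host> monitor | python monitor_audit.py
import re
import sys
from collections import Counter

# MONITOR lines look like: 1602902400.123456 [0 10.0.0.12:52341] "GET" "key"
LINE = re.compile(r'\[(?P<db>\d+)\s+(?P<ip>[\d.]+):\d+\]\s+"(?P<cmd>\w+)"')
IGNORED = {"PING", "INFO", "SLOWLOG"}  # monitoring noise, per the checklist

counts = Counter()
for line in sys.stdin:
    m = LINE.search(line)
    if not m:
        continue
    cmd = m.group("cmd").upper()
    if cmd in IGNORED:
        continue
    counts[(m.group("ip"), cmd)] += 1  # which IP runs which command, how often

for (ip, cmd), n in counts.most_common():
    print(f"{ip}\t{cmd}\t{n}")  # resolve these IPs to domain names afterwards
```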

Inspection

  • Compare the source and target environments (INFO, uname -a commands)

  • Understand the scope of business impact (use MONITOR with awk, sort, and other shell commands to see which clients perform CRUD operations on the source)

  • Personnel preparation (development and operations staff)

  • Back up the source data in case the source goes down during the migration

  • Data double write: when writing to the source, also write a copy to the target machine. Enable double write first, then backfill the incremental or full historical data.

The smooth-migration double-write scheme

  • The scheme consists of four steps.

  • Step 1: upgrade the service so that every modification made to the data in the old database (insert, delete, and update) is also applied to the new database. This is called "double write" (a minimal sketch follows this step's notes). The main modification operations are:

    1. Insert into the old database and the new database at the same time

    2. Delete from the old database and the new database at the same time

    3. Update the old database and the new database at the same time

  • Since the new database contains no data at this point, the affected-row counts on the old and new databases may differ, but this has no effect on business functionality: as long as traffic has not been cut over, the old database still provides the business service.

  • The risk of this service upgrade is small:

    1. Only a small number of write interfaces need changing

    2. Whether a write to the new database succeeds has no impact on business functionality
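
A minimal double-write sketch in Python, assuming redis-py and hypothetical hosts old-redis / new-redis (the same shape applies to a SQL database). The old store stays authoritative, and a failed shadow write is only logged, which is exactly why point 2 above holds.

```python
import logging

import redis

log = logging.getLogger("double_write")
old = redis.Redis(host="old-redis", port=6379)  # hypothetical source host
new = redis.Redis(host="new-redis", port=6379)  # hypothetical target host

def _shadow(op, *args):
    """Replay a write on the new store; errors must never break the business path."""
    try:
        getattr(new, op)(*args)
    except redis.RedisError:
        log.warning("shadow write %s%r failed", op, args)

def set_key(key, value):
    old.set(key, value)         # the old store remains authoritative
    _shadow("set", key, value)  # double write: insert/update on the new store

def delete_key(key):
    old.delete(key)             # delete from the old store first
    _shadow("delete", key)      # and mirror the delete on the new store
```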


  • Step 2: develop a data migration tool to migrate data from the old database to the new database (sketched below).

  • This small tool carries little risk:

    1. Throughout the process, the old database continues to provide online service

    2. The tool's complexity is low

    3. If a problem is found at any time, the data in the new database can be deleted and the migration restarted

    4. The migration can be rate-limited and run slowly, so the engineers are under no time pressure
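
A minimal migration-tool sketch under the same assumptions (redis-py, hypothetical old-redis / new-redis hosts). SCAN keeps the old store serving online traffic, DUMP/RESTORE copies each value with its remaining TTL, replace=True allows a clean restart at any time (point 3), and the sleep is the crude speed limit from point 4.

```python
import time

import redis

old = redis.Redis(host="old-redis", port=6379)  # hypothetical source host
new = redis.Redis(host="new-redis", port=6379)  # hypothetical target host

BATCH = 500   # keys per SCAN iteration
PAUSE = 0.05  # seconds between batches: the speed limit

cursor = 0
while True:
    cursor, keys = old.scan(cursor=cursor, count=BATCH)
    for key in keys:
        blob = old.dump(key)
        if blob is None:      # key expired or was deleted after SCAN saw it
            continue
        ttl = old.pttl(key)   # remaining TTL in ms; negative means no expiry
        # replace=True overwrites double-written copies and allows clean restarts
        new.restore(key, ttl if ttl > 0 else 0, blob, replace=True)
    if cursor == 0:           # a full SCAN cycle has completed
        break
    time.sleep(PAUSE)
```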

  • After the data migration is complete, can traffic be switched so the new database provides the service?

    • In principle, yes: because of the double write in the earlier step, the data in the new and old databases should be completely consistent once the migration finishes.

  • However, consider a very extreme situation:

    1. The data migration tool has just read a piece of data x from the old database

    2. Before this x is inserted into the new database, a double-write delete removes x from both the old and new databases

    3. The data migration tool then inserts x into the new database

  • In this case the new database ends up with one more piece of data, x, than the old database.

  • In any case, to guarantee consistency, the data must still be verified before cutting over to the new database.


  • Step 3: after the data migration is complete, use a data verification tool (sketched below) to compare the data in the old and new databases. If they are completely consistent, everything is as expected; if the extreme inconsistency from step 2 has occurred, the data in the old database prevails.

  • The risk of this small tool is also low:

    1. Throughout the process, the old database continues to provide online service

    2. The tool's complexity is low

    3. If a problem is found at any time, the process can restart from step 2

    4. The comparison can be rate-limited and run slowly, so the engineers are under no time pressure
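
A minimal verification sketch under the same assumptions. Comparing DUMP payloads is conservative: values that are logically equal can serialize differently (for example, after a double write rebuilt them), so treat a mismatch as a candidate for a type-aware re-check rather than proof of corruption. The second pass catches keys that exist only in the new store, which is exactly the extra-x race from step 2.

```python
import redis

old = redis.Redis(host="old-redis", port=6379)  # hypothetical source host
new = redis.Redis(host="new-redis", port=6379)  # hypothetical target host

differences = 0

# Every key in the old store should have an identical serialized value in the new one.
for key in old.scan_iter(count=500):
    if old.dump(key) != new.dump(key):
        differences += 1
        print(f"mismatch, old store prevails: {key!r}")

# Keys present only in the new store: the extra-x race described in step 2.
for key in new.scan_iter(count=500):
    if not old.exists(key):
        differences += 1
        print(f"only in new store: {key!r}")

print("consistent" if differences == 0 else f"{differences} differences found")
```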


  • Step 4: after the data is verified to be completely consistent, switch traffic to the new database (a minimal cutover sketch follows) and the smooth data migration is complete.

  • At this point the upgrade is done: throughout the process the system continuously provided online service, with no impact on availability.
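
A minimal cutover sketch: one switch routes all reads and writes to the new store. The in-process flag is purely illustrative; in practice the switch usually lives in a configuration center so it can be flipped, and rolled back, without redeploying.

```python
import redis

old = redis.Redis(host="old-redis", port=6379)  # hypothetical source host
new = redis.Redis(host="new-redis", port=6379)  # hypothetical target host

USE_NEW_STORE = True  # step 4: flip only after the data check passes

def store():
    """Route all traffic through one switch so cutover (and rollback) is atomic."""
    return new if USE_NEW_STORE else old

def get_key(key):
    return store().get(key)

def set_key(key, value):
    store().set(key, value)  # after cutover, double write is no longer needed
```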


Summary

For many Internet business scenarios with large data volumes, high concurrency, and high business complexity, changes such as:

  1. Changes to the underlying table structure

  2. Changes to the number of database shards

  3. Changes to the underlying storage medium

require a data-migration solution that moves the data smoothly, causes no downtime during the migration, and keeps the system continuously in service.

  • The double-write scheme, in four steps:

  1. Upgrade the service to record every "data modification on the old database" and double write it to the new database

  2. Develop a data migration tool to migrate the data

  3. Develop a data comparison tool to verify data consistency

  4. Switch traffic to the new database to complete the smooth migration