A successful case of RAID data recovery and relocation

Time: 2019-10-8

The failure occurred on a RAID 0 array made up of two disks. One of the disks showed a yellow fault light, and after the RAID card kicked it out of the array, the RAID collapsed. The whole process of rescuing the data is described below.

Because the array used two 300 GB SAS disks, we first pulled the disks out of the machine, connected them directly to a Windows environment through a SAS HBA, and marked them as offline in Disk Management. This kept the whole operation read-only and protected the original data.
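As an illustration only, the same protection can be applied from the command line with a diskpart script; the disk numbers below are placeholders and must be checked against list disk before running it:

  rem protect-disks.txt, run with: diskpart /s protect-disks.txt
  rem the disk numbers are examples; confirm them with "list disk" first
  select disk 1
  attributes disk set readonly
  offline disk
  select disk 2
  attributes disk set readonly
  offline disk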

After imaging every sector of both disks, we reconstructed the original RAID environment by analyzing the file system to determine the disk order and stripe size and then virtually reassembling the array in software. The NTFS file system was then parsed and the data finally became visible. At this point a new problem arose: if we simply copied the data out, the original system and applications would have to be reinstalled and redeployed, which was hard to do without support from the software vendor. So we decided to take the reassembled RAID and migrate it to a new RAID environment, so that everything would work exactly as it did before the failure. This saved a great deal of time.
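A rough sketch of the imaging step, and of one possible way to reassemble the RAID 0 virtually on a Linux rescue system, is shown below. The device names, image paths, and the 64 KB chunk size are assumptions for illustration; in the actual case the disk order and stripe size came out of the file-system analysis and the reassembly was done in recovery software:

  # image every sector of both source disks (device names are examples)
  dd if=/dev/sda of=/images/disk0.img bs=1M conv=noerror,sync
  dd if=/dev/sdb of=/images/disk1.img bs=1M conv=noerror,sync

  # one possible way to rebuild the RAID 0 virtually from the two images
  losetup /dev/loop0 /images/disk0.img
  losetup /dev/loop1 /images/disk1.img
  mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/loop0 /dev/loop1
  # the NTFS partition inside /dev/md0 can then be located (e.g. with kpartx)
  # and mounted read-only to verify the data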

Learning from this failure, we decided to build the new environment as a three-disk RAID 5. Even if one disk fails and goes offline, the array only degrades instead of collapsing immediately, which gives the user time to replace the failed disk.

After installing a new RAID card that supports RAID 5 and inserting the new disks, we created a RAID 5 volume. The next step was to work out how to migrate the recovered data onto it.

Because the server's front-panel drive bays are managed by the RAID card, a disk inserted there is not recognized by the operating system directly; it must first be configured as a RAID volume on the card, and the single-disk capacity limit ruled that approach out. So we looked at other options. The server has a DVD drive on the front panel, and the drive is connected to the motherboard over a SATA channel, so we could open the case, borrow that SATA port to attach a SATA hard disk, and copy the data back from a PE or Linux live CD environment, which would be the fastest way. But when we were ready to implement this, we found that the machine does not use a standard-size SATA connector but a mini-SATA one, and with no adapter card on hand this method was not feasible. When the amount of data is small, USB is also an option, but most servers still only offer USB 2.0, which is far too slow for a large data set; the transfer time would be unacceptable.

In the end we adopted a rather novel way to relocate the data: doing it over the network.

At this point you need a Linux live CD; we usually use SystemRescueCd. After Linux boots, the server's IP is configured with ifconfig. We placed the image data on a Windows Server 2008 R2 machine and enabled the NFS service there (it is off by default): Server Manager -> Roles -> Add Roles -> check File Services -> check Services for Network File System, then install. The computer needs to be restarted after the first installation.

After the restart, we configured the folder holding the image data: right-click the folder, open the NFS Sharing tab and enable sharing for this folder, then, importantly, tick the permission that allows root access and set the access type to read/write.
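For reference, both Windows-side steps can also be done from the command line. The role-service name and the nfsshare options below are our best guess for Windows Server 2008 R2, so verify them first (for example with Get-WindowsFeature and nfsshare /?), and the D:\images path is only a placeholder:

  :: install the NFS server role service from PowerShell
  powershell -Command "Import-Module ServerManager; Add-WindowsFeature FS-NFS-Services"

  :: export the folder holding the image, read/write, with root access allowed
  nfsshare data=D:\images -o rw -o root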

With the Windows side ready, we turned to the Linux settings and ran ifconfig to check the current network configuration.

The server needed an IP address, so we configured the network card enp4s0 with the address 10.3.12.3 and the subnet mask 255.0.0.0, using: ifconfig enp4s0 10.3.12.3 netmask 255.0.0.0. Then we ran ifconfig again to confirm the address.

After configuring the IP, check that the Windows machine at 10.1.1.1 is reachable: ping 10.1.1.1

Then check whether the NFS directories exported by the 10.1.1.1 machine are visible: showmount -e 10.1.1.1
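Put together, the Linux-side network setup and the export check look like this (the interface name and the addresses are the ones from this case; adjust them to your own environment):

  # assign an address to the rescue system's NIC and verify connectivity
  ifconfig enp4s0 10.3.12.3 netmask 255.0.0.0 up
  ifconfig enp4s0                # confirm the address took effect
  ping -c 3 10.1.1.1             # the Windows machine holding the image
  showmount -e 10.1.1.1          # list the NFS exports it offers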

The source and target machines can now see each other. Create a mount point in Linux: mkdir /mnt/bysjhf

Once it is created, mount the share holding the image data onto the new directory: mount 10.1.1.1:/data /mnt/bysjhf -o nolock

After mounting, check the mount point information: df -k
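For convenience, here is the whole mount sequence as a single block (the export path /data and the mount point /mnt/bysjhf are the ones used in this case):

  mkdir -p /mnt/bysjhf
  mount 10.1.1.1:/data /mnt/bysjhf -o nolock   # mount the Windows NFS export
  df -k                                        # confirm the share is mounted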

After confirming that it is mounted, go into the folder and check the image files it contains:


  cd /mnt/bysjhf
  ls

View the hard disk and partition information: fdisk -l

After confirming the source and target devices, write the image back to the target disk:


  dd if=/mnt/bysjhf/data.img of=/dev/sda bs=10M
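dd prints nothing while it works. With GNU dd you can either ask for progress output up front (the status=progress option needs a reasonably recent coreutils, so treat it as optional) or poke the running process from another terminal:

  # request progress output while writing the image back
  dd if=/mnt/bysjhf/data.img of=/dev/sda bs=10M status=progress

  # or ask an already running dd to print its statistics
  kill -USR1 $(pgrep -x dd)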

In a gigabit network environment, NFS can run at around 70 MB/s, which is already a respectable speed. After dd completed, we restarted the IBM X3650 server and selected the RAID volume as the boot device. The expected Windows startup screen finally appeared: the hard work had not been in vain, and the data migration was a complete success.

The above is a successful case of RAID data recovery and relocation. I hope it is helpful to you. If you have any questions, please leave a message and I will reply in time. Thank you very much for your support of developpaer.