Details of creating software RAID 10 on CentOS

Time: 2020-2-23

When reinstalling an old server yesterday, I found a problem with its Intel hardware RAID controller card: it could not recognize all of the hard disks, even though all of the disks were recognized during the operating system installation. Another problem was that the operating system installed normally but would not start afterwards; for some reason the BIOS could not boot the system from the hard disk. So the plan is to install the operating system on a USB disk, boot the system from the USB disk, build the server's six hard disks into a software RAID 10 array, and mount it in the system.

Software RAID does not require the hard disks to be exactly identical, but it is strongly recommended to use disks of the same vendor, model, and size. Why RAID 10 rather than RAID 0, RAID 1, or RAID 5? RAID 0 is too dangerous, RAID 1 has somewhat lower performance, and RAID 5 performs poorly under frequent writes. RAID 10 seems to be the best choice for today's disk arrays, especially as local storage for KVM/Xen/VMware virtualization hosts (if SAN and distributed storage are not considered).

There are six identical hard disks in this server. Create a single partition on each disk and set its partition type to Linux RAID autodetect:

# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-91201, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-91201, default 91201):
Using default value 91201

Command (m for help): p

Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c259

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       91201   732572001   83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Following the /dev/sda example above, partition the remaining five hard disks (sdc, sdd, sde, sdf, sdg) and change their partition types the same way:

# fdisk /dev/sdc
...
# fdisk /dev/sdd
...
# fdisk /dev/sde
...
# fdisk /dev/sdf
...
# fdisk /dev/sdg
...
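
To avoid repeating the interactive fdisk session five times, the partition table of /dev/sda can also be copied to each remaining disk with sfdisk. This is just a sketch of an alternative; it assumes all six disks are the same size, so double-check the target device names before running it:

# for d in sdc sdd sde sdf sdg; do sfdisk -d /dev/sda | sfdisk /dev/$d; done

Afterwards, fdisk -l should show a single partition of type fd (Linux raid autodetect) on each disk.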

After partitioning is complete, you can create the RAID. Create a RAID 10 array from the six equal-sized partitions:

# mdadm --create /dev/md0 -v --raid-devices=6 --level=raid10 /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 732440576K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Watch the initialization (build) of the disk array. Depending on the size and speed of the disks, the whole process can take several hours:

# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                                       Tue Feb 11 12:51:25 2014

Personalities : [raid10]
md0 : active raid10 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sda1[0]
      2197321728 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
      [>....................]  resync =  0.2% (5826816/2197321728) finish=278.9min speed=130948K/sec

unused devices: <none>
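
If the resync seems slow, the kernel's md rebuild throttles can be raised temporarily via the standard dev.raid sysctls. The values below are only examples; sensible limits depend on the kernel and the disks:

# sysctl -w dev.raid.speed_limit_min=50000
# sysctl -w dev.raid.speed_limit_max=200000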

Once the array has finished initializing, you can create a partition and a file system on the /dev/md0 device, and then mount it:

# fdisk /dev/md0
# mkfs.ext4 /dev/md0p1

# mkdir /raid10
# mount /dev/md0p1 /raid10
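
As a quick sanity check that the new file system is mounted with the expected capacity (exact numbers will vary):

# df -h /raid10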

Edit the /etc/fstab file so the file system is mounted automatically every time the system starts:

# vi /etc/fstab
...
/dev/md0p1 /raid10 ext4 noatime,rw 0 0

Using the /dev/md0p1 device name in /etc/fstab above is not a good practice: because of udev, device names can change after a reboot. It is better to mount by UUID; use the blkid command to find the UUID of the partition:

# blkid
...
/dev/md0p1: UUID="093e0605-1fa2-4279-99b2-746c70b78f1b" TYPE="ext4"

Then update the corresponding fstab entry to mount by UUID:

# vi /etc/fstab
...
#/dev/md0p1 /raid10 ext4 noatime,rw 0 0
UUID=093e0605-1fa2-4279-99b2-746c70b78f1b /raid10 ext4 noatime,rw 0 0
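
For the same reason, it is also worth recording the array in /etc/mdadm.conf, so that it is assembled under a stable name (such as /dev/md0 rather than something like /dev/md127) at boot. A minimal sketch, assuming mdadm on CentOS reads /etc/mdadm.conf:

# mdadm --detail --scan >> /etc/mdadm.conf

The new fstab entry can then be verified without a reboot:

# umount /raid10
# mount -a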

To view the RAID status:

# mdadm --query --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Feb 11 12:50:38 2014
     Raid Level : raid10
     Array Size : 2197321728 (2095.53 GiB 2250.06 GB)
  Used Dev Size : 732440576 (698.51 GiB 750.02 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Feb 11 18:48:10 2014
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : local:0  (local to host local)
           UUID : e3044b6c:5ab972ea:8e742b70:3f766a11
         Events : 70

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
       5       8       97        5      active sync   /dev/sdg1