How to Rebuild mdadm RAID 1

This procedure assumes the following:

  • /dev/vdW - the working (surviving) device.

  • /dev/vdN - the new (replacement) device.

  1. First, add the new disk to the system and boot.
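
    To confirm that the kernel sees the new disk before continuing, a quick check (assuming the new disk enumerates as /dev/vdN):

    # grep vdN /proc/partitions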

  2. If the RAID array is used as the boot device, install grub on both disks in the RAID so that, in the event of a failure, the system still has one disk with a boot loader in its MBR.

    • RHEL 5/6

      # grub-install /dev/vdW
      # grub-install /dev/vdN
      
    • RHEL 7:

      # grub2-install /dev/vdW
      # grub2-install /dev/vdN
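
    To sanity-check that boot code was written to both disks, file can read the boot sector directly (a sketch; the exact output varies by release):

      # file -s /dev/vdW
      # file -s /dev/vdN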
      
  3. Partition the new disk exactly like the operational disk, with the same starting and ending cylinders or sectors:

    1. Use parted /dev/vdW u s p (unit sectors, print) to display the working disk's partition layout. Example with GPT:

      # parted /dev/vdW u s p
      Model: Virtio Block Device (virtblk)
      Disk /dev/vdW: 10485760s
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt <---
      Disk Flags: 
      
      Number  Start     End        Size      File system  Name  Flags
       1      2048s     194559s    192512s                p1    raid <---
       2      194560s   585727s    391168s                p2    raid <---
       3      585728s   976895s    391168s                p3    raid <---
       4      976896s   1953791s   976896s                p4    raid <---
       5      1953792s  10485726s  8531935s               p5    raid <---
      
    2. Create the matching partitions on the new device (/dev/vdN); an sgdisk shortcut is sketched after these commands:

      # parted /dev/vdN mklabel gpt
      # parted /dev/vdN mkpart p1 2048s 194559s
      # parted /dev/vdN mkpart p2 194560s 585727s
      # parted /dev/vdN mkpart p3 585728s 976895s
      # parted /dev/vdN mkpart p4 976896s 1953791s
      # parted /dev/vdN mkpart p5 1953792s 10485726s
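
      Alternatively, on GPT disks the entire partition table can be cloned in one step with sgdisk from the gdisk package. A sketch: -R replicates the table from /dev/vdW onto /dev/vdN (including the partition type codes), and -G then randomizes the GUIDs so the two disks do not collide:

      # sgdisk -R=/dev/vdN /dev/vdW
      # sgdisk -G /dev/vdN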
      
  4. Set the raid flag on each partition of the new device (/dev/vdN):

    # parted /dev/vdN set 1 raid on
    # parted /dev/vdN set 2 raid on
    # parted /dev/vdN set 3 raid on
    # parted /dev/vdN set 4 raid on
    # parted /dev/vdN set 5 raid on
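
    To verify that the new layout and flags match the working disk, print the new device's table the same way:

    # parted /dev/vdN u s p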
    
  5. Run partprobe so that the kernel detects the newly created partitions:

    # partprobe /dev/vdN
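
    The new partitions should now be visible to the kernel:

    # ls /dev/vdN*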
    
  6. Rebuild each /dev/md device by adding the corresponding partition from the new disk back in:

    1. Use /proc/mdstat output for reference. Example:

      # cat /proc/mdstat 
      Personalities : [raid1] 
      md4 : active raid1 vdW5[1] <---
            4261824 blocks super 1.2 [2/1] [_U]
      [...]
      

      This output shows that /dev/md4 currently holds only the /dev/vdW5 partition from the working device; [2/1] [_U] indicates that one of the two mirror members is missing.
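
      The array state can also be inspected directly; mdadm --detail reports the degraded state and which member slot is empty:

      # mdadm --detail /dev/md4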

    2. Run mdadm to add /dev/vdN5 (the matching partition on the new device) to it:

      # mdadm --manage /dev/md4 --add /dev/vdN5
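
      Repeat the --add for every degraded array listed in /proc/mdstat, pairing each md device with its counterpart partition on the new disk. A hypothetical example, assuming /dev/md0 and /dev/md1 pair with /dev/vdN1 and /dev/vdN2:

      # mdadm --manage /dev/md0 --add /dev/vdN1
      # mdadm --manage /dev/md1 --add /dev/vdN2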
      
  7. Watch the resync progress with the following command:

    # watch -n 2 cat /proc/mdstat
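
    To block until the rebuild completes (useful in scripts), mdadm can also wait on an array; a sketch for one array:

    # mdadm --wait /dev/md4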
    

Root Cause

By default, only one of the disks in a RAID 1 system has a boot loader installed to its MBR at install time.

The second disk lacks boot code, so if /dev/sda fails, the data still exists on the second disk (/dev/sdb, for example), but there is no boot loader in the first 440 bytes of that disk.

In this scenario, the system will fail to start the grub boot loader.
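
One way to confirm this is to compare the first 440 bytes of each disk; a sketch, assuming the disks are /dev/sda and /dev/sdb (a disk that never received a boot loader often reads back as all zeros here):

# dd if=/dev/sda bs=440 count=1 2>/dev/null | md5sum
# dd if=/dev/sdb bs=440 count=1 2>/dev/null | md5sum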

Diagnostic Steps

Boot the machine from the remaining functional disk. From there:

[root@host ~]# cat /proc/mdstat 
[root@host ~]# fdisk -l 
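
To see which arrays a partition belongs to and the state recorded in its RAID superblock, mdadm can examine a member directly; a sketch for one of the surviving partitions:

[root@host ~]# mdadm --examine /dev/vdW5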
