This procedure assumes the following:
/dev/vdW - working device
/dev/vdN - new device
First, add the new disk to the system and boot.
If the RAID array is used as the boot device, install grub on both disks in the RAID so that, in the event of a failure, the system still has one disk with an MBR (see the sketch below).
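As a minimal sketch, assuming a BIOS-booted system where GRUB 2 is installed as grub2-install (on other distributions the command may be grub-install), and using the device names from above:

# grub2-install /dev/vdW
# grub2-install /dev/vdN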
Partition the new disk to be exactly like the disk that is operational, with the same starting and ending cylinders or sectors:
Use parted /dev/vdW u s p to print the working disk partition layout. Example with GPT:

# parted /dev/vdW u s p
Model: Virtio Block Device (virtblk)
Disk /dev/vdW: 10485760s
Sector size (logical/physical): 512B/512B
Partition Table: gpt                                      <---
Disk Flags:

Number  Start     End        Size      File system  Name  Flags
 1      2048s     194559s    192512s                p1    raid  <---
 2      194560s   585727s    391168s                p2    raid  <---
 3      585728s   976895s    391168s                p3    raid  <---
 4      976896s   1953791s   976896s                p4    raid  <---
 5      1953792s  10485726s  8531935s               p5    raid  <---
Create the partition table on the new device (/dev/vdN):

# parted /dev/vdN mklabel gpt
# parted /dev/vdN mkpart p1 2048s 194559s
# parted /dev/vdN mkpart p2 194560s 585727s
# parted /dev/vdN mkpart p3 585728s 976895s
# parted /dev/vdN mkpart p4 976896s 1953791s
# parted /dev/vdN mkpart p5 1953792s 10485726s
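As an alternative sketch (an assumption, not part of the original procedure), on systems where the gdisk package is installed, sgdisk can replicate the GPT layout in one step; -R copies the main device's table onto the second device, and -G then randomizes the GUIDs so the two disks do not clash:

# sgdisk /dev/vdW -R /dev/vdN    # replicate vdW's partition table onto vdN
# sgdisk -G /dev/vdN             # give the copied table new random GUIDs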
Set the raid flag on each partition on the new device (/dev/vdN):

# parted /dev/vdN set 1 raid on
# parted /dev/vdN set 2 raid on
# parted /dev/vdN set 3 raid on
# parted /dev/vdN set 4 raid on
# parted /dev/vdN set 5 raid on
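Since the commands differ only in the partition number, a shell loop does the same thing; a minimal sketch assuming the five-partition layout above:

# for n in 1 2 3 4 5; do parted /dev/vdN set "$n" raid on; done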
Run partprobe to detect the newly created partitions:

# partprobe /dev/vdN
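Before touching the arrays, it is worth confirming that the new layout matches the working disk; comparing the two parted listings is one way to do it (a sketch, assuming the device names above):

# parted /dev/vdN u s p    # Start/End sectors should match the /dev/vdW listing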
Rebuild each /dev/md device by adding the second disk back in:
Use the /proc/mdstat output for reference. Example:

# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 vdW5[1]                                <---
      4261824 blocks super 1.2 [2/1] [_U]
[...]
This output shows that /dev/md4 holds the /dev/vdW5 partition from the working device. Run mdadm to add /dev/vdN5 (the new device) to it:

# mdadm --manage /dev/md4 --add /dev/vdN5
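Repeat this for every degraded array, pairing each md device with the matching partition on the new disk; the mapping below is an assumption based on the example layout, so check /proc/mdstat for the real one:

# mdadm --manage /dev/md0 --add /dev/vdN1
# mdadm --manage /dev/md1 --add /dev/vdN2
# mdadm --manage /dev/md2 --add /dev/vdN3
# mdadm --manage /dev/md3 --add /dev/vdN4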
Watch the resync progress with the following command:
# watch -n 2 cat /proc/mdstat
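Once the resync finishes, mdadm --detail can confirm each array is back to a clean, two-device state; a sketch using /dev/md4 from the example:

# mdadm --detail /dev/md4    # look for "State : clean" and both members shown as "active sync"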
Root Cause
By default, only one of the disks in a RAID1 system has an MBR installed at install time.
The second disk lacks an MBR, so even if /dev/sda fails, the data is still present on the second disk (/dev/sdb, for example), but that disk will not have an MBR in its first 440 bytes.
In this scenario, the system will fail to start the grub boot loader.
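A quick way to see whether a disk carries boot code is to dump those first 440 bytes; on a disk with no boot loader they are typically all zeros (a sketch, assuming the device names used above):

# dd if=/dev/vdN bs=440 count=1 2>/dev/null | hexdump -C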
Diagnostic Steps
Boot the machine from the remaining functional disk. From there:

[root@host ~]# cat /proc/mdstat
[root@host ~]# fdisk -l
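In the /proc/mdstat output, a degraded mirror shows [2/1] and an underscore in place of the missing member, as in the example earlier; a sketch of what to look for:

md4 : active raid1 vdW5[1]
      4261824 blocks super 1.2 [2/1] [_U]    # [_U] = first member missing, second up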