Linux RAID 1: How to make a secondary HD boot

Tags: grub, mdadm, software-raid

I have the following RAID 1 on a CentOS 6.5 server:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[3]
    974713720 blocks super 1.0 [2/1] [_U]
    bitmap: 7/8 pages [28KB], 65536KB chunk

md1 : active raid1 sdb2[3] sda2[2]
    2045944 blocks super 1.1 [2/2] [UU]

unused devices: <none>

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              915G  450G  420G  52% /
tmpfs                 7,8G     0  7,8G   0% /dev/shm

/dev/sda is about to fail. I even marked it as faulty since it was causing read errors.
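
(For the record, what I ran to drop it from md0 was roughly this; /dev/sda2 is still an active member of md1:)

# mdadm --manage /dev/md0 --fail /dev/sda1
# mdadm --manage /dev/md0 --remove /dev/sda1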

I got the new HD today which will replace /dev/sda.
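
Once the machine can boot from /dev/sdb on its own, my plan for the swap is the usual RAID 1 replacement procedure, something like this (assuming the new disk also shows up as /dev/sda):

# mdadm --manage /dev/md1 --fail /dev/sda2
# mdadm --manage /dev/md1 --remove /dev/sda2

then power off, swap the disk, boot from /dev/sdb, and:

# sfdisk -d /dev/sdb | sfdisk /dev/sda
# mdadm --manage /dev/md0 --add /dev/sda1
# mdadm --manage /dev/md1 --add /dev/sda2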

The issue is that when I unplug the current /dev/sda, the server won't boot from /dev/sdb alone. It looks like the PC's BIOS can't find anything bootable on /dev/sdb.

1) How can I detect if grub is installed in /dev/sdb's MBR?

2) Is it safe to run grub-install on /dev/sdb? Is this the correct way of making it bootable?

Best Answer

1) How can I detect if grub is installed in /dev/sdb's MBR?

You can check the first sector of a disk for the GRUB signature; for example, on /dev/sda (which currently boots):

# dd if=/dev/sda bs=512 count=1 | xxd | grep -i grub
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00103986 s, 492 kB/s
0000180: 4752 5542 2000 4765 6f6d 0048 6172 6420  GRUB .Geom.Hard
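
Run the same check against /dev/sdb. If the grep prints nothing, the GRUB stage1 boot code was never written to that disk's MBR, which would explain why the BIOS sees nothing bootable on it:

# dd if=/dev/sdb bs=512 count=1 2>/dev/null | xxd | grep -i grub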

2) Is it safe to run grub-install on /dev/sdb? Is this the correct way of making it bootable?

Yes, it is safe, and it is the correct way: GRUB needs to be installed in the MBR of both disks in the array so that either one can boot the system on its own.
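
On CentOS 6 that means GRUB legacy (0.97). A minimal sketch of installing it into /dev/sdb's MBR from the grub shell, assuming /boot lives on the first partition (sdb1, the md0 member, which matches your layout):

# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Mapping /dev/sdb to (hd0) writes the boot code as if sdb were the first BIOS disk, which is exactly what it will be once the failing /dev/sda is unplugged. Because md0 uses metadata version 1.0 (superblock at the end of the partition), GRUB can read the filesystem on sdb1 directly, so nothing else is needed to boot the degraded array.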
