MDADM – Manually Build RAID Array Every Boot and Cannot Add Third Drive

linux, mdadm, raid, Ubuntu

For a long time I've had a RAID1 array that I have to rebuild manually every time the system boots; I've never had time to figure out why. This is the command I've been using to rebuild it after each boot:
sudo mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1

This works great and doesn't lose any data. I can then manually mount /dev/md0 where I need it (/mnt/plex in this case). However, I just installed a third hard drive in my server and I'd like to upgrade to RAID5. I used cfdisk to create a partition on the new drive.
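
For completeness, the manual mount afterwards is just an ordinary mount of the assembled device (the /mnt/plex path is specific to my setup):

sudo mount /dev/md0 /mnt/plex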

I then upgraded the array to RAID5:
sudo mdadm --grow /dev/md0 -l 5

Then I added the new drive to the array:
sudo mdadm /dev/md0 --add /dev/sda1

Finally, I tried to grow the array to 3 devices:
sudo mdadm /dev/md0 --grow -n 3
At which point I was presented with the following errors:

mdadm: ARRAY line /dev/md0 has no identity information.
mdadm: /dev/md0: cannot get superblock from /dev/sda1

The first error comes up a lot; it's the second error that's causing the issue. Why can't I add /dev/sda1 to the array? While I'm at it, why doesn't the array auto-build when the system boots?

Here are my drives/partitions if it helps:

sda       8:0    0   3.7T  0 disk
+-sda1    8:1    0   3.7T  0 part
  +-md0   9:0    0   3.7T  0 raid5 /mnt/plex
sdb       8:16   0   3.7T  0 disk
+-sdb1    8:17   0   3.7T  0 part
  +-md0   9:0    0   3.7T  0 raid5 /mnt/plex
sdc       8:32   0 931.5G  0 disk
+-md1     9:1    0 931.4G  0 raid1 /mnt/nas
sdd       8:48   0 931.5G  0 disk
+-md1     9:1    0 931.4G  0 raid1 /mnt/nas
sde       8:64   0   3.7T  0 disk
+-sde1    8:65   0   3.7T  0 part
  +-md0   9:0    0   3.7T  0 raid5 /mnt/plex
sdf       8:80   0 149.1G  0 disk
+-sdf1    8:81   0   512M  0 part  /boot/efi
+-sdf2    8:82   0 148.6G  0 part  /

SDB and SDE are the correctly functioning RAID members. Here are the details of the array from mdadm, if it helps:

gradyn@hbi-server:~$ sudo mdadm --detail /dev/md0
mdadm: ARRAY line /dev/md0 has no identity information.
/dev/md0:
           Version :
     Creation Time : Thu Oct 14 22:19:50 2021
        Raid Level : raid5
        Array Size : 3906886464 (3725.90 GiB 4000.65 GB)
     Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
      Raid Devices : 2
     Total Devices : 3

             State : clean
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : resync

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       65        1      active sync   /dev/sde1

       2       8        1        -      spare   /dev/sda1

Best Answer

If you need to issue mdadm --build to assemble the array, it means that you created an "old-style" array with no superblock. In other words, the array geometry (and other metadata) is not stored on the affected disks; instead, the system expects this information to be provided on the command line or to be found in a configuration file called /etc/mdadm.conf.
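
For a superblock-based array, that identity information is what an ARRAY line in the configuration file carries. A rough sketch of such an entry (the UUID below is a placeholder; the real line comes from mdadm --detail --scan on your own system):

# placeholder UUID below; substitute the output of "mdadm --detail --scan"
ARRAY /dev/md0 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd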

Regarding the other issue (the inability to add a third active disk), let's see what the man page says about --build (no-superblock) arrays:

When used with --build, only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.

As you can see, RAID5 is not allowed for legacy arrays. By issuing the first --grow command you forced the system into an unexpected scenario, and the following --add could only attach the new disk as a spare. The second --grow then fails because it cannot find a valid superblock on the member disks.
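
You can check this yourself with --examine; on a member of a no-superblock array it reports that no md superblock was detected (device name taken from the question):

sudo mdadm --examine /dev/sdb1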

I strongly suggest you back up your data and re-create the array as a RAID5 with both a superblock and a write-intent bitmap. To accomplish that, you simply need to use mdadm's default settings. In other words, something like

mdadm --create /dev/md0 -l 5 -n 3 /dev/sda1 /dev/sdb1 /dev/sde1

should be enough. Be aware that the above command will erase all data on the affected disks, so have a confirmed-good backup before issuing it.
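
That also answers the boot question: once the array carries a superblock, you can record it in the configuration file and let the initramfs assemble it automatically at boot. On Ubuntu, the usual steps look roughly like this:

# Debian/Ubuntu keep the file at /etc/mdadm/mdadm.conf; upstream default is /etc/mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u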
