LVM RAID 1 – How to Create with 4 Disks of Different Sizes

linux, lvm, mdadm, raid1, Ubuntu

I have a system with

2 x 1TB (NVMe SSDs)
2 x 2TB (SATA SSDs)

disks and would like to create a RAID 1 system using all available disks (so I have a 3TB RAID1 system at the end). Unfortunately, most examples for how to set this up are for just two disks.

What's the recommended approach here?

  • Create RAID1s for each disk size, i.e. mirror the disks of the same kind and then create one large logical volume on top of that?
  • Or is there some other, smarter approach?

If someone has a step-by-step recipe (or a good link to one), that would be very much appreciated.

Also, do I need a separate system or boot partition?

Thank you.

Best Answer

That depends heavily on what you are aiming at, because NVMe and SATA SSDs differ hugely in speed and latency.

Personally, I would create two separate LVM volume groups (VGs): one for the NVMe drives and one for the SATA SSDs, and assign them to different tasks manually, i.e. NVMe for I/O-heavy work like databases, and the SATA SSDs for more generic storage. Of course you can just combine them into a single VG, but that way you are basically "slowing down" the NVMe drives to SATA speed. Well... not really, but almost.
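A minimal sketch of the two-VG layout, assuming the /dev/md1 (NVMe mirror) and /dev/md2 (SATA mirror) arrays created in the steps below; the VG and LV names are only examples:

    # fast pool on the NVMe mirror, e.g. for databases
    vgcreate vg_nvme /dev/md1
    lvcreate -l 100%FREE -n db vg_nvme

    # generic pool on the SATA mirror
    vgcreate vg_sata /dev/md2
    lvcreate -l 100%FREE -n data vg_sata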

As for booting: if the system runs in EFI mode with a modern bootloader (i.e. GRUB2), you'll need a separate small partition (256-512 MB is fine) of FAT32 type for the EFI files. On the plus side, an EFI system can boot directly from NVMe, and GRUB2 can boot directly from Linux RAID + LVM.
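If you're unsure whether the machine actually boots in EFI mode, a quick check from a running (live or installed) Linux system:

    # this directory only exists when booted via UEFI
    ls /sys/firmware/efi
    # optionally, list the current EFI boot entries
    efibootmgr -v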

  1. Create the 1st partition (i.e. with fdisk) on both NVMe drives, about 256-512 MB in size. Set its type to EFI System.
  2. Create the 2nd partition spanning the remaining space (100% allocation). Set its type to Linux RAID.
  3. Format each EFI partition as FAT32 (i.e. mkfs.vfat -F32 /dev/nvme0n1p1).
  4. You can do the same on the SATA SSDs if you want them to stay bootable in case both NVMe drives fail, or just create a single Linux RAID partition on each for data.
  5. Create the 1st RAID array for the NVMe drives: mdadm --create /dev/md1 -l 1 -n 2 -b internal /dev/nvme0n1p2 /dev/nvme1n1p2.
  6. Create the 2nd RAID array for the SATA SSD members: mdadm --create /dev/md2 -l 1 -n 2 /dev/sda2 /dev/sdb2 (use sda2/sdb2 if you created EFI partitions there, or sda1/sdb1 if not).
  7. Create LVM PVs out of newly created arrays: pvcreate /dev/md1 && pvcreate /dev/md2.
  8. Create VGs & LVs on top of the PVs. If you still want to combine them, add both PVs to the same VG (see the sketch after this list).
  9. Make sure to mount the EFI partitions & install a proper bootloader on each of the drives, like this for the 1st NVMe drive: mount /dev/nvme0n1p1 /boot/efi && grub-install /dev/nvme0n1.
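For step 8, if you prefer to combine both mirrors into a single ~3TB VG, it would look roughly like this (the VG/LV names, the ext4 filesystem and the mount point are only examples):

    vgcreate vg_storage /dev/md1 /dev/md2      # one VG spanning both PVs
    lvcreate -l 100%FREE -n data vg_storage    # one big LV
    mkfs.ext4 /dev/vg_storage/data
    mount /dev/vg_storage/data /mnt/data       # example mount point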

Note that you can't have a RAID array for the EFI partitions. Well... not really; there are some tricks, but I don't think they're worth it, because nothing unrecoverable is stored there. It's just the small binary that the EFI "BIOS" uses to load your bootloader. Even if it fails, you can still boot your system from some sort of live image (like SuperGRUBdisk) and reinstall it with grub-install again.
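If you also want the 2nd NVMe drive's EFI partition to be bootable on its own (so losing one drive still leaves a working bootloader), one simple approach is to install GRUB onto it as well; the mount point and bootloader ID below are only examples:

    mkdir -p /mnt/efi2
    mount /dev/nvme1n1p1 /mnt/efi2
    grub-install --target=x86_64-efi --efi-directory=/mnt/efi2 --bootloader-id=ubuntu2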
