I think you've got it right. Make sure you understand and heed the warnings regarding growing RAID 5 in man 8 mdadm.
Personally, if I were growing an LVM volume, I would not grow an existing RAID array to do it. I'd create another RAID array, create a new physvol from it, and add it to the same volume group. This is a much safer operation (it doesn't involve rewriting the whole RAID5 array across the new set of disks) and keeps the size of your arrays down.
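As a minimal sketch of that approach: the device names (/dev/sdd1, /dev/sde1), the volume group name vg0, the logical volume name data, and the ext4 filesystem are all assumptions for illustration, not taken from the question.

```shell
# Build a new array from the new disks (RAID1 here just as an example)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

# Turn the new array into a physical volume and add it to the existing VG
pvcreate /dev/md1
vgextend vg0 /dev/md1

# Grow the logical volume into the new space, then grow the filesystem
lvextend -l +100%FREE /dev/vg0/data
resize2fs /dev/vg0/data    # for ext4; XFS would use xfs_growfs instead
```

The existing array is never touched, so there is no long reshape and no window where a reshape interruption could cost you the whole array.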
mdadm doesn't interpret partition tables; the Linux kernel does. A software RAID array doesn't need to know or care what type of partition table the disk uses, because it just uses the block devices that the kernel provides for the partitions. I'm using mdadm arrays on GPT disks on several computers and they work fine.
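For illustration (device names and partition numbers are assumptions), building an array from partitions on GPT disks looks exactly like it would on MBR disks; the only GPT-specific step is optionally tagging the partitions with the Linux RAID type code:

```shell
# Mark partition 1 on each disk as "Linux RAID" (fd00) in the GPT
sgdisk --typecode=1:fd00 /dev/sda
sgdisk --typecode=1:fd00 /dev/sdb

# mdadm just sees the kernel's block devices, regardless of table type
mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```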
The partition layout you described doesn't make sense:
/dev/sda
/dev/sda1 <- GPT type partition
/dev/sda1 <- exists within the GPT part, member of md127
/dev/sda2 <- exists within the GPT part, empty
/dev/sdb
/dev/sdb1 <- GPT type partition
/dev/sdb1 <- exists within the GPT part, member of md127
In particular, it looks like you're saying that sda2 is located within sda1. Partitions don't exist within other partitions, and GPT is a characteristic of the whole-disk device, not a partition. I think what you actually mean is:
/dev/sda <- GPT disk
/dev/sda1 <- member of md127
/dev/sda2 <- empty
/dev/sdb <- GPT disk
/dev/sdb1 <- member of md127
However, your blkid output says that /dev/sda1 currently contains an Ext4 filesystem, not a RAID superblock, so it's not a member of md127. It's not clear how that filesystem got there, since you said that you were using it as a RAID component, but since your story is long and full of twists, I suspect there may have been points where things happened that you didn't realize had happened. My suggestion at this point is:
- Assemble the array in degraded mode using just /dev/sdb1. Check that it contains your data; if not, check whether /dev/sda1 somehow contains an intact filesystem with your data; otherwise I hope you have a backup.
- Make a backup of all your data, if you don't have one already.
- Completely wipe /dev/sda: dd if=/dev/zero of=/dev/sda bs=1M. Then use gdisk to recreate the partition(s).
- Create a new degraded array using only a partition on sda. Make a filesystem in it, and copy your data into it.
- Disassemble the array that's using sdb1, and completely wipe /dev/sdb: dd if=/dev/zero of=/dev/sdb bs=1M. Then use gdisk to recreate the partition.
- Add /dev/sdb1 to the new array and let it sync.
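The steps above might be sketched roughly as follows. This is a minimal sketch: the array names md127/md0, a RAID1 layout, ext4, and the mount points are assumptions, and the dd commands are destructive, so double-check every device name before running anything.

```shell
# Step 1: assemble degraded from sdb1 alone and verify the data
mdadm --assemble --run /dev/md127 /dev/sdb1
mount -o ro /dev/md127 /mnt/old    # check that your files are here

# Step 2: wipe sda and recreate its partition table
dd if=/dev/zero of=/dev/sda bs=1M
gdisk /dev/sda                     # recreate /dev/sda1 interactively

# Step 3: new degraded array on sda1 ("missing" reserves the second
# slot), then make a filesystem and copy the data over
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/new
cp -a /mnt/old/. /mnt/new/

# Step 4: stop the old array, wipe sdb, and repartition it
umount /mnt/old
mdadm --stop /dev/md127
dd if=/dev/zero of=/dev/sdb bs=1M
gdisk /dev/sdb                     # recreate /dev/sdb1

# Step 5: add sdb1 to the new array and watch the resync
mdadm --add /dev/md0 /dev/sdb1
cat /proc/mdstat
```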
As for installing GRUB, it depends on whether your machine supports EFI (and whether you're using it for booting). If you're using EFI, you need to make an EFI system partition somewhere; it should be roughly 100MB, formatted FAT32. Then you'd install the EFI version of GRUB. I won't go into too much detail on this; EFI booting is a topic for a separate question.
If you're not using EFI to boot, you need to make a "BIOS Boot" partition somewhere on the disk that you'll be installing GRUB on. (This is partition type code ef02 in gdisk.) The partition can be tiny; 1MB is plenty. GRUB will use this to store the boot code that it would have written to sectors 1 through 62 on an MBR disk. (On an MBR disk, those sectors are typically unallocated since the first partition typically begins at sector 63, but on a GPT disk, the partition table is located in that area.) GRUB should automatically notice that the disk you're installing it to contains a BIOS Boot partition, and put its boot code there instead of in sectors 1-62.
Best Answer
You can do this without any problems...
I'm assuming /dev/sdb is a separate HP Smart Array Logical Drive.
Don't use any partitioning for this setup... Just create the filesystem on the block device:
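For example, assuming the logical drive appears as /dev/sdb and you want XFS (which the xfs_growfs step below implies):

```shell
# Make the filesystem directly on the whole, unpartitioned device
mkfs.xfs /dev/sdb
mount /dev/sdb /mountpoint
```

Skipping the partition table is what makes the later online grow so simple: there is no partition to resize first, so a device rescan plus xfs_growfs is all that's needed.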
When you want to expand at a later date, add disks and expand the HP logical drive using the hpssacli or Smart Storage Administrator tools. You can rescan the device to pick up the new size with:
Confirm the device size change with dmesg | tail. At that point, you can run xfs_growfs /mountpoint (not the device name) and the filesystem will grow online!