Software-RAID with ext4 but without 64bit support

ext4, mdraid, software-raid

Last year I set up a software RAID 5 with 5×3TB disks, yielding 12TB of usable capacity. Just today, needing more storage, I finished growing the RAID by two more 3TB disks:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sdd1[7] sde1[6] sdb1[4] sda1[5] sdc1[2] sdg1[1] sdf1[0]
      17580801024 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/7] [UUUUUUU]

unused devices: <none>
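For anyone wanting to reproduce the grow step, it looked roughly like this (a sketch; the partition names are stand-ins for whatever your two new disks are called):

    # add the two new disks, then reshape the array onto them
    mdadm /dev/md0 --add /dev/sdd1 /dev/sde1
    mdadm --grow /dev/md0 --raid-devices=7
    # the reshape takes many hours; watch progress in /proc/mdstat
    cat /proc/mdstat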

This means I should now have approximately 6×3TB = 18TB available on /dev/md0. However, resize2fs, called without a size parameter, told me that the new size is not possible in 32-bit mode (without the 64bit feature, ext4 can address at most 2^32 blocks, which is 16TiB at the default 4KiB block size). Some research showed that this is a common problem and not easily solvable without heavy tinkering, which I am not willing to do.

tune2fs confirmed that the 64bit flag was indeed missing 🙁 even though auto_64-bit_support = 1 is set in the mke2fs config (/etc/mke2fs.conf), and should also have been in effect when the filesystem was created. But there is no use in whining about something I can't change after the fact.
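For anyone checking their own array, this is how I read the relevant bits back (look for 64bit in the feature list):

    # list the filesystem's feature flags; "64bit" should appear here but doesn't
    tune2fs -l /dev/md0 | grep 'Filesystem features'
    # the mke2fs default that should have enabled it at creation time
    grep auto_64-bit_support /etc/mke2fs.conf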

Sadly, a full backup and restore is not an option (I know, there should be a backup of all the data, but there is only enough money to back up the really important part of it).

I then tried to resize the filesystem to 16TB with resize2fs -S 128 /dev/md0 16T. That seemed to work at first, but it came back with an error telling me that there is not enough space on the device, and advising me to run e2fsck -fy /dev/md0 – a strange thing. My heart pounded like crazy until that check came back okay! Resizing to 15T worked, though.
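For the record, the shrink has to happen offline (resize2fs can only shrink an unmounted ext4), so the sequence that finally worked was roughly this (a sketch of my steps; the mount point is hypothetical):

    umount /dev/md0                    # shrinking requires the fs to be unmounted
    e2fsck -fy /dev/md0                # the check resize2fs asked for – came back clean
    resize2fs -S 128 /dev/md0 15T      # 16T still trips the 32-bit limit; 15T works
    mount /dev/md0 /srv/raid           # hypothetical mount point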

I think we can live with around 15TB for a few more months, but having approximately 3TB sitting around unused is something I really don't like. My question is how I can put these 3TB to use. My directions of research were

  • Converting to btrfs, which seems to support filesystems larger than 16TB and is possible without a backup/restore cycle – but different sources say that this is not reliable and should not be used in production.
  • Partitioning /dev/md0 to create a second filesystem on the remaining 3TB – seems to be impossible (parted reports partition table type loop; see the note after this list)
  • Setting up LVM – is this even possible without reformatting?
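Regarding the second point: parted's loop label is its way of saying that the device holds a bare filesystem with no partition table at all, which is exactly my situation and can be confirmed like this:

    # prints "Partition Table: loop" when a filesystem sits directly on the device
    parted /dev/md0 print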

None of these "solutions" is sufficiently well documented and tested, or they are ruled out as stated above, so I am now stuck with an 18TB /dev/md0 containing a 15TB ext4 filesystem and 3TB of unusable free space. Does anybody have an idea what else I could try/do/consider?

Best Answer

I've run into the exact same SNAFU on a CentOS 6 machine. My kernel has 64-bit support, but the file system wasn't originally formatted with the 64bit flag set. No >16TB support. :-( I can confirm that tune2fs will be no help here – it cannot convert an existing ext2/3/4 file system to 64-bit.
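(One caveat for readers arriving later: e2fsprogs 1.43 and newer can convert an existing ext4 file system to 64-bit offline, but that is far newer than anything CentOS 6 ships, so it doesn't help here. On a modern system it would look like this:)

    # requires e2fsprogs >= 1.43 and an unmounted, freshly checked file system
    e2fsck -f /dev/md0
    resize2fs -b /dev/md0      # turns on the 64bit feature in place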

I'm lucky, though, in that I'm using LVM on top of my MD array, so I've gone about adding space in a somewhat roundabout fashion: create another LV in the spare space of the MD array and format it with the 64-bit option; move some data from the old file system to the new one; shrink the old file system; resize the LVM logical volumes (shrink the old, grow the new); grow the 64-bit file system; and repeat (several times) – sketched below. It's not ideal, but since you don't have LVM underneath, my recommendation is to re-partition the md array and do it that way. It is possible. (GParted will be very helpful here.)
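One iteration of that shuffle looks roughly like this (a sketch with hypothetical VG/LV names vg0, data_old, data_new; the sizes are examples):

    # 1. carve a new LV out of the spare space and format it with 64-bit support
    lvcreate -L 3T -n data_new vg0
    mkfs.ext4 -O 64bit /dev/vg0/data_new
    mount /dev/vg0/data_new /mnt/new
    # 2. move a chunk of data from the old file system to the new one
    rsync -aH /mnt/old/somedir/ /mnt/new/somedir/
    rm -rf /mnt/old/somedir
    # 3. shrink the old file system (offline), then its LV; shrink the fs a bit
    #    below the LV target first so the LV can never cut into live data
    umount /mnt/old
    e2fsck -f /dev/vg0/data_old
    resize2fs /dev/vg0/data_old 11T
    lvreduce -L 12T vg0/data_old
    resize2fs /dev/vg0/data_old        # grow the fs back to fill the 12T LV
    # 4. hand the freed space to the new LV and grow its file system online
    lvextend -l +100%FREE vg0/data_new
    resize2fs /dev/vg0/data_new
    # repeat until all data lives on the 64-bit file system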

To address your directions of research, from my sysadmin experience:

  • Converting to BTRFS is probably not an option. As the docs suggest, BTRFS is still under heavy development (even the current 3.12 release) and shouldn't be used to store critical (and not backed-up) data. While there's nothing to suggest your file system would spontaneously corrupt itself, it's a little riskier than using ext4.

  • Partitioning is the way to go, and is made a lot easier with tools like GParted. It's even easier on top of LVM ...

  • You can install LVM, and you could try a third-party tool called Blocks to convert your md array (a block device) into an LVM physical volume and volume group. That will make re-partitioning and re-sizing a little easier. While you won't need to convert your root file system, that tool may help put LVM into the mix. I would err on the side of not going down this route in case something becomes FUBAR; practice beforehand on a test bench. (A sketch follows below.)
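If you do experiment with Blocks, the conversion itself is a one-liner – but treat the following as from memory of the project's README, and verify against its current docs before touching real data:

    # convert the bare block device in place into an LVM PV/VG/LV
    # DANGEROUS – test on a scratch device first!
    blocks to-lvm /dev/md0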

Perhaps just stick with re-partitioning (plus a manual format of your new partition with mkfs.ext4 -O 64bit ....) using GParted, and move your data across manually. Check to make sure your files are all < 3TB in size; otherwise you'll need external storage in the mix as well.
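A sketch of that manual route, assuming GParted has left you with a new second partition (I'll call it /dev/md0p2, and /srv/raid stands in for wherever the old file system is mounted):

    # format the new ~3TB partition with the 64-bit feature from the start
    mkfs.ext4 -O 64bit /dev/md0p2
    mount /dev/md0p2 /mnt/new
    # flag any file too large for the new partition
    # (find has no T suffix; 3TB is roughly 2794GiB)
    find /srv/raid -xdev -type f -size +2700G
    # then shuttle data across in chunks, freeing space on the old fs as you go
    rsync -aH --remove-source-files /srv/raid/somedir/ /mnt/new/somedir/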