Ubuntu – Can’t resize2fs – combination of flex_bg and !resize_inode

ext4, mdadm, tune2fs, Ubuntu

I recently set up my first software raid with mdadm, and after adding more disks to the raid I am unable to resize the filesystem to the full size of the raid. I created a single (~16 TB) filesystem on /dev/md0 via:

mkfs.ext4 -v -b 4096 -T huge -E stride=128,stripe-width=256 /dev/md0
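(For context, the stride and stripe-width values follow the usual formula below; the 512 KiB chunk and two data disks are just one layout that yields these numbers, shown for illustration rather than as a record of the exact array:)

# stride       = mdadm chunk size / filesystem block size
# stripe-width = stride * number of data disks
# e.g. a 512 KiB chunk with 4 KiB blocks gives 512/4 = 128,
# and two data disks give 128 * 2 = 256
mdadm --detail /dev/md0 | grep -i 'chunk size'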

I then waited painfully for a couple of days while all of the data from the old raid copied over to the new one; I moved the disks over, grew the raid, and then finally ran:

resize2fs -p /dev/md0

which informs me that:

resize2fs 1.42 (29-Nov-2011)
resize2fs: /dev/md0: The combination of flex_bg and !resize_inode features is not supported by resize2fs
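The grow itself was along these lines (a rough sketch rather than the exact commands; the device names and device count are placeholders):

mdadm --manage /dev/md0 --add /dev/sdX1 /dev/sdY1   # add the new disks
mdadm --grow /dev/md0 --raid-devices=8              # reshape onto them (8 is a placeholder)
cat /proc/mdstat                                    # wait for the reshape to finish
resize2fs -p /dev/md0                               # then attempt to grow the filesystem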

I lack any understanding of exactly what these two features are for or why the combination is troublesome, so against my better judgement I tried to add resize_inode:

tune2fs -O +resize_inode /dev/md0

But I got shot down:

Setting filesystem feature 'resize_inode' not supported.
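For reference, this is how I have been checking which features are actually set (the same information appears in the full dump further down):

tune2fs -l /dev/md0 | grep -i 'filesystem features'
dumpe2fs -h /dev/md0 | grep -i features   # same view via dumpe2fs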

And I'm not brave enough to try to remove flex_bg as I really don't want to do anything that might put my data at risk. I'm running Ubuntu 12.04 with the 3.5.1 kernel:

Linux critter 3.5.1-030501-generic #201208091310 SMP Thu Aug 9 17:11:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

I tested resize2fs again with v1.42.5 (the latest available release) to no avail. So, to be clear, my question is: how can I resize this ext4 filesystem to the size of the raid (without recreating it, preferably)?
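(If anyone wants to repeat that 1.42.5 test without touching the Ubuntu packages, building e2fsprogs in a scratch directory and running the resulting binary in place is enough; roughly:)

# download and unpack the e2fsprogs 1.42.5 source tarball, then:
cd e2fsprogs-1.42.5
./configure
make
./resize/resize2fs -p /dev/md0   # run the freshly built resize2fs directly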

Edit: here's some filesystem information that might be helpful.

tune2fs -l /dev/md0
tune2fs 1.42 (29-Nov-2011)
Filesystem volume name:   <none>
Last mounted on:          /media/Bigger
Filesystem UUID:          baecfa03-74c1-42ad-8e19-3b823f05f502
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              274700288
Block count:              4395202560
Reserved block count:     219760128
Free blocks:              247712956
Free inodes:              274636266
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         2048
Inode blocks per group:   128
RAID stride:              128
RAID stripe width:        768
Flex block group size:    16
Filesystem created:       Fri Aug 17 02:54:50 2012
Last mount time:          Mon Aug 20 02:21:51 2012
Last write time:          Mon Aug 20 02:25:07 2012
Mount count:              3
Maximum mount count:      -1
Last checked:             Fri Aug 17 02:54:50 2012
Check interval:           0 (<none>)
Lifetime writes:          16 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      b357ba49-60b1-4c55-837f-a70c8285a8f5
Journal backup:           inode blocks
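One more observation from that dump: the block count is already past the 32-bit limit, so the filesystem is already in 64bit territory (just arithmetic on the numbers above):

echo $((4395202560 * 4096))   # 18002749685760 bytes, ~16.4 TiB of filesystem
echo $((2**32))               # 4294967296, the 32-bit block-number limit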

Best Answer

This might help you -- http://www.spinics.net/lists/linux-ext4/msg27511.html

Take backups before you do anything, since what you are doing seems very risky with ext4.
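If a full copy of ~16 TB of data is not practical, at least capture the filesystem metadata; e2image can do that while the filesystem is unmounted (device and mount point taken from the question):

umount /media/Bigger
e2image -r /dev/md0 - | bzip2 > md0.e2i.bz2   # metadata-only image, compresses well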

See this -- https://ext4.wiki.kernel.org/index.php/Ext4_Howto

WARNING: It is NOT recommended to resize the inodes using resize2fs with 
e2fsprogs 1.41.0 or later, as this is known to corrupt some filesystems. 

Up to 16 TB seems doable with a 64-bit ext4 file system, but the state of the tools seems to be in flux. This is a very good read -- http://blog.ronnyegner-consulting.de/2011/08/18/ext4-and-the-16-tb-limit-now-solved/
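It is also worth confirming how far you actually need to grow before you touch any feature flags (device path from the question):

blockdev --getsize64 /dev/md0                             # size of the grown array, in bytes
dumpe2fs -h /dev/md0 | grep -E 'Block count|Block size'   # current filesystem size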

Unless you hear from an ext4 file system developer here, you may want to ask this question on the ext4 mailing lists.