Linux – Using Linux LVM, can I change the number of stripes and “rebalance” the logical volume

linux, lvm, raid10

I created a RAID10 by adding two RAID1 md devices as physical volumes to a volume group. Unfortunately it looks like I forgot to specify the number of stripes when I created the logical volumes (it was late):

PV         VG     Fmt  Attr PSize   PFree  
/dev/md312 volume lvm2 a-   927.01G 291.01G
/dev/md334 volume lvm2 a-   927.01G 927.01G
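
For reference, specifying the stripe count at creation time would have looked something like this (the LV name and size here are made up):

lvcreate -n data -L 600G -i 2 -I 64 volume    # -i 2 stripes across both PVs, -I 64 sets a 64K stripe size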

I know that I can move all the data of a logical volume from one physical volume to another with pvmove. It also looks like lvextend supports an -i switch to change the number of stripes. Is there any way to combine these two, i.e. change the number of stripes and "rebalance" the data over the stripes based on the allocation policy?
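
Both pieces do exist on their own; roughly (the LV name is hypothetical):

pvmove /dev/md312 /dev/md334              # move all extents from one PV to the other
lvextend -i 2 -L +10G /dev/volume/data    # grow the LV, laying out the new extents in 2 stripes

As far as I can tell, though, the -i only applies to the newly allocated extents, so the existing data is not redistributed.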

According to this mail by Ross Walker from March 2010 it isn't possible, but maybe this has changed since then.

Best Answer

pvmove is very slow. You will probably be faster if you recreate your layout during a short downtime.

If no downtime is possible, I would recreate md334 as a striped mirror with degraded raid1 arrays as the underlying devices (i.e. use md for RAID 10, not LVM). Then pvmove your data onto it, get rid of md312, wipe the md signatures from its disks, and add the two now-free disks to your two degraded raid1 arrays (bringing you back to full redundancy).

I am not sure if you can stack md devices, but I see no reason why that should not be possible. During the pvmove you won't have redundancy.
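
A sketch of the stacking, assuming sdc1 and sdd1 are placeholder names for the two disks currently behind the (empty) md334, which you would first vgreduce out of the volume group and stop:

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 missing    # degraded mirror, second leg missing
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdd1 missing
mdadm --create /dev/md10 --level=0 --chunk=64 --raid-devices=2 /dev/md1 /dev/md3    # the stripe on top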

Update 2011-08-17: I just tested the procedure with CentOS 5.6 - it works. Here are the results:

cat /proc/mdstat

Personalities : [raid1] [raid0]
md10 : active raid0 md3[1] md1[0]
      1792 blocks 64k chunks

md3 : active raid1 loop0[1] loop1[0]
      960 blocks [2/2] [UU]

md1 : active raid1 loop2[1] loop3[0]
      960 blocks [2/2] [UU]

To simulate your setup I first set up /dev/md0 as a mirror consisting of loop0 and loop2. I created a VG with md0 as its only disk. Then I created an LV within that VG, created a filesystem in the LV, mounted it, and wrote some files to it.
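
A rough reconstruction of that setup (the VG/LV names are placeholders; the four loop devices are assumed to be attached already):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop2
pvcreate /dev/md0
vgcreate testvg /dev/md0
lvcreate -n testlv -l 100%FREE testvg
mkfs.ext3 /dev/testvg/testlv
mount /dev/testvg/testlv /mnt
echo hello > /mnt/testfile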

Then I set up /dev/md1 and md3 as degraded raid1 devices, consisting of loop3 and loop1 respectively (see the mdstat output above). After that I created a raid10 device by building a raid0 out of md1 and md3.
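
Approximately (the mirrors are created with a single slot here, matching the resize to two devices further down; creating them with a missing member, as sketched above, would avoid the grow step):

mdadm --create /dev/md1 --level=1 --raid-devices=1 --force /dev/loop3
mdadm --create /dev/md3 --level=1 --raid-devices=1 --force /dev/loop1
mdadm --create /dev/md10 --level=0 --chunk=64 --raid-devices=2 /dev/md1 /dev/md3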

I added md10 to the VG. Then I pvmoved md0 to md10 and removed md0 from the VG. I stopped md0 and wiped the md signatures from loop0 and loop2. I resized the degraded raid1 arrays so they could use two devices, then hot-added loop0 to md3 and loop2 to md1.
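
A rough reconstruction of that sequence, with the same placeholder names:

pvcreate /dev/md10
vgextend testvg /dev/md10
pvmove /dev/md0 /dev/md10                  # online, filesystem stays mounted
vgreduce testvg /dev/md0
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/loop0 /dev/loop2
mdadm --grow /dev/md1 --raid-devices=2     # give the degraded mirrors a second slot
mdadm --grow /dev/md3 --raid-devices=2
mdadm --add /dev/md3 /dev/loop0            # hot-add; the resync restores redundancy
mdadm --add /dev/md1 /dev/loop2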

The filesystem was still mounted throughout the whole process.