pvmove is very slow. You will probably be faster if you recreate your layout during a small downtime.
If no downtime is possible, I would recreate md334 as a striped mirror with degraded raid1 devices as the underlying disks (i.e. use md for RAID 10, not LVM). Then do your pvmove to md334, get rid of md312, wipe the md signatures from its disks, and add the two freed disks to your two degraded raid1s (and come back to full redundancy).
I am not sure whether you can stack md devices, but I see no reason why that should not be possible. During the pvmove you won't have redundancy.
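The stacked layout described above could be built roughly like this (a sketch only; the md numbers and partition names are placeholders, not your actual devices):

```shell
# Two degraded raid1s, each with one real disk and one slot left "missing"
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 missing
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdd1 missing
# Stripe the two mirrors together: md-on-md RAID 10
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md3
```

Until the missing legs are added back, md1 and md3 run without redundancy, which is exactly the window mentioned above.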
Update 2011-08-17: I just tested the procedure with CentOS 5.6 - it works.
Here are the results:
cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md3[1] md1[0]
      1792 blocks 64k chunks
md3 : active raid1 loop0[1] loop1[0]
      960 blocks [2/2] [UU]
md1 : active raid1 loop2[1] loop3[0]
      960 blocks [2/2] [UU]
To simulate your setup I first set up /dev/md0 as a mirror consisting of loop0 and loop2. I set up a VG with md0 as its disk. Then I created an LV within that VG, created a filesystem in the LV, mounted it, and wrote some files to it.
Then I set up /dev/md1 and /dev/md3 as degraded raid1 devices, consisting of loop1 and loop3 respectively.
After that I created a RAID 10 device by building a raid0 out of md1 and md3.
I added md10 to the VG. Then I pvmoved md0 to md10 and removed md0 from the VG. I stopped md0, wiped loop0 and loop2, and resized the degraded raid1s so they could use two devices. Finally I hot-added loop0 to md3 and loop2 to md1.
The filesystem was still mounted throughout the whole process.
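The migration steps just described can be sketched as follows (the VG name vg01 is an assumption; the md and loop devices match the mdstat output above):

```shell
# Bring the new RAID 10 device into the VG and migrate the extents
pvcreate /dev/md10
vgextend vg01 /dev/md10
pvmove /dev/md0 /dev/md10          # move all extents off the old mirror
vgreduce vg01 /dev/md0             # drop the old PV from the VG
pvremove /dev/md0
mdadm --stop /dev/md0
# Reuse the freed disks to complete the degraded mirrors
mdadm --zero-superblock /dev/loop0 /dev/loop2
mdadm --grow /dev/md1 --raid-devices=2
mdadm --grow /dev/md3 --raid-devices=2
mdadm /dev/md3 --add /dev/loop0
mdadm /dev/md1 --add /dev/loop2
```

The filesystem on the LV can stay mounted throughout, since pvmove works online.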
Every time you perform an operation with LVM, by default the previous metadata is archived in /etc/lvm/archive. You can use vgcfgrestore to restore it, or grab the extents by hand (harder, but lvcreate(8) should cover it).
Edit:
And to make it as easy as possible, I should add that you can find the last backup before your destructive operation by looking at the descriptions:
# grep description /etc/lvm/archive/vg01_*
/etc/lvm/archive/vg01_00001.vg:description = "Created before executing 'lvremove -f /dev/vg01/foo'"
/etc/lvm/archive/vg01_00002.vg:description = "Created before executing 'lvremove -f /dev/vg01/bar'"
/etc/lvm/archive/vg01_00003.vg:description = "Created before executing 'lvremove -f /dev/vg01/baz'"
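A restore could then look like this (a sketch; the VG, LV, and archive file names are taken from the example listing above):

```shell
# List the archived metadata versions available for the VG
vgcfgrestore --list vg01
# Restore the archive taken just before the destructive lvremove
vgcfgrestore -f /etc/lvm/archive/vg01_00001.vg vg01
# Reactivate the recovered LV; check the filesystem before mounting it
lvchange -ay /dev/vg01/foo
```

Note this only restores the metadata (the extent mapping) - the data on those extents must not have been overwritten in the meantime.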
Edit:
The normal allocation policy (the default) will allocate a stripe from the first free PE when there is enough room to do so. If you want to confirm where the LV was allocated, you can look in the archive files; they are perfectly human-readable.
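You can also ask LVM directly where an LV's extents live, without reading the archive files (the VG name vg01 is an assumption):

```shell
# Show the PV and physical extent range backing each LV segment
lvs -o lv_name,seg_pe_ranges vg01
# Or the same information from the PV side
pvdisplay --maps /dev/sda1
```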
Best Answer
It is also possible to build a mirror first using another single PV, and split off the striped volume afterwards. This requires a free PV, but then again I assume pvmove needs one as well.
If you have a volume lvsplit using PVs sda1 and sdb1, for example, and sdc1 is a (temporary) PV with enough free extents, you can first create a mirror from your striped volume, using sdc1 to build the mirror leg.
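The mirror-creation step could be done with lvconvert, roughly like this (a sketch; the VG name vg01 is an assumption, lvsplit and the PV names are from the example):

```shell
# Add one mirror leg on sdc1; a core (in-memory) log avoids
# needing a separate mirror log device
lvconvert -m1 --mirrorlog core vg01/lvsplit /dev/sdc1
# Watch the sync progress until the copy percentage reaches 100
lvs -o +copy_percent vg01/lvsplit
```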
...let the mirror build...
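Once the mirror is in sync, converting back might look like this (same assumed names as above):

```shell
# Drop the mirror legs on sda1/sdb1, keeping only the copy on sdc1
lvconvert -m0 vg01/lvsplit /dev/sda1 /dev/sdb1
```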
This will convert the mirror back to a single volume, taking out the extents on /dev/sda1 and /dev/sdb1 and leaving sdc1 as the only PV for your now linear LV. You can then pvmove from sdc1 to another PV, or use the mirror technique instead of pvmove to migrate back to sda1 or sdb1.