Linux – Any way to recover ext4 filesystems from a deleted LVM logical volume

data-recovery, ext4, linux, lvm

The other day I had a proper brain fart moment while expanding a disk on a Linux guest under VMware. I stretched the VMware disk file to the desired size and then did what I usually do on Linux guests without LVM: I deleted the LVM partition and recreated it, starting at the same spot as the old one but extended to the new size of the disk (which would then be followed by fsck and resize2fs).

And then I realized that LVM doesn't behave the same way as ext2/3/4 on raw partitions… After restoring the Linux guest from the most recent backup (taken only five hours earlier, luckily), I'm now curious about how I could have recovered from the following scenario. It is, after all, virtually guaranteed that I'll be a dumb ass in the future as well.

Virtual Linux guest with one disk, partitioned into a 256 MB primary /boot partition (/dev/sda1), with the rest of the disk in a logical partition inside an extended partition (/dev/sda5).

/dev/sda5 is then set up as a physical volume with pvcreate, and one volume group (vgroup00) is created on top of it with the usual vgcreate command. vgroup00 is then split into two logical volumes, root and swap, which are used for / and swap, logically. / is an ext4 file system.
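
For reference, that setup boils down to something like the following (the LV sizes here are just placeholders; only the device and volume names match my actual setup):

# pvcreate /dev/sda5
# vgcreate vgroup00 /dev/sda5
# lvcreate -n root -L 18G vgroup00
# lvcreate -n swap -L 1G vgroup00
# mkfs.ext4 /dev/mapper/vgroup00-root
# mkswap /dev/mapper/vgroup00-swap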

Since I had backups of the broken guest, I was able to recreate the volume group with vgcfgrestore from the LVM metadata backup found under /etc/lvm/backup, with the same UUID for the physical volume and all that. After running this I had two logical volumes of the same size as before, with 4 GB of free space where I had stretched the disk.
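
Roughly the commands involved, for anyone finding this later (the UUID and the exact backup file come from /etc/lvm/backup, so treat this as a sketch rather than a copy-paste recipe):

# pvcreate --uuid "<PV UUID from the backup file>" --restorefile /etc/lvm/backup/vgroup00 /dev/sda5
# vgcfgrestore vgroup00
# vgchange -ay vgroup00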

However, when I tried to run "fsck /dev/mapper/vgroup00-root" it complained about a broken superblock. I tried to locate backup superblocks by running "mke2fs -n /dev/mapper/vgroup00-root", but none of those worked either. Then I tried TestDisk, but when I asked it to find superblocks it only reported that it couldn't open the file system because it was broken.
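
For the record, this is roughly what I tried (32768 is just the usual first backup superblock on a file system with 4 KiB blocks; mke2fs -n only prints where the superblocks would go, it doesn't write anything):

# mke2fs -n /dev/mapper/vgroup00-root
# fsck.ext4 -b 32768 /dev/mapper/vgroup00-root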

So, with the default allocation policy of LVM2 in Ubuntu Server 10.04 64-bit, is it possible that the logical volumes were allocated from the end of the volume group? That would definitely explain why the restored logical volumes didn't contain the expected data. Could I have recovered by recreating /dev/sda5 with exactly the same size and position on the disk as before? Are there any other tools I could have used to find and recover the file system? (And to be clear, the question is not whether I should have done this differently from the start; I know that. This is a question about what to do once the shit has already hit the fan.)
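
In hindsight, I could at least have saved the partition table before touching it; something like this would have let me recreate /dev/sda5 at exactly the same position (the dump file name is just an example):

# sfdisk -d /dev/sda > sda-partitions.dump
# sfdisk /dev/sda < sda-partitions.dump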

Best Answer

Every time you perform an operation with LVM, the previous metadata is, by default, archived in /etc/lvm/archive. You can use vgcfgrestore to restore it, or recreate the extents by hand (harder, but lvcreate(8) should cover it).

Edit:

And to make it as easy as possible, I should add that you can find the last backup taken before your destructive operation by looking at the descriptions:

# grep description /etc/lvm/archive/vg01_*
/etc/lvm/archive/vg01_00001.vg:description = "Created before executing 'lvremove -f /dev/vg01/foo'"
/etc/lvm/archive/vg01_00002.vg:description = "Created before executing 'lvremove -f /dev/vg01/bar'"
/etc/lvm/archive/vg01_00003.vg:description = "Created before executing 'lvremove -f /dev/vg01/baz'"
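
Once you have identified the right archive file, restoring it is straightforward (vg01 being the example volume group from above; run it with --test first if you want a dry run):

# vgcfgrestore --test -f /etc/lvm/archive/vg01_00001.vg vg01
# vgcfgrestore -f /etc/lvm/archive/vg01_00001.vg vg01
# vgchange -ay vg01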

Edit:

The normal (default) allocation policy allocates a stripe from the first free PE whenever there is enough room to do so. If you want to confirm where the LV was allocated, look at the archive files; they are perfectly human-readable.
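
On a live system you can also check where the extents ended up directly, for example (using the PV and VG names from the question):

# pvdisplay --maps /dev/sda5
# lvs -o +seg_pe_ranges vgroup00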