After a power failure, my Ubuntu 10.04 Server hard drive is no longer bootable. I tried using boot-repair, but it couldn't locate an operating system.
I ran gdisk to verify where the LVM partition was and that it was still intact. Here is the output:
GPT fdisk (gdisk) version 0.6.14
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): p
Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 3A0E99EE-74F9-41F5-81A0-7B7D7235DE8E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2157 sectors (1.1 MiB)
Number  Start (sector)  End (sector)  Size        Code  Name
   1              2048          4095  1024.0 KiB  EF02
   2              4096        503807  244.0 MiB   EF00
   3            503808    3907028991  1.8 TiB     8E00
Command (? for help): i
Partition number (1-3): 3
Partition GUID code: E6D6D379-F507-44C2-A23C-238F2A3DF928 (Linux LVM)
Partition unique GUID: 4F35492A-C6DD-4E31-9D53-8C88A74A1B48
First sector: 503808 (at 246.0 MiB)
Last sector: 3907028991 (at 1.8 TiB)
Partition size: 3906525184 sectors (1.8 TiB)
Attribute flags: 0000000000000000
Partition name:
So, it's still there and apparently intact, so I went on to run vgscan:
:/# vgscan
Reading all physical volumes. This may take a while...
Found volume group "ubuntu" using metadata type lvm2
So I ran :/# vgchange -ay ubuntu followed by :/# lvs and got:
LV     VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
root   ubuntu -wi-ao   4.40g
swap_1 ubuntu -wi-a- 260.00m
The thing is, there should be another VG in there, almost 1.8 TB in size, but it isn't showing.
So, is there any way to recover an LV that doesn't show up in lvs? I need to recover one important file in there that was created after the last backup was made.
:/# vgdisplay
--- Volume group ---
VG Name ubuntu
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.82 TiB
PE Size 4.00 MiB
Total PE 476870
Alloc PE / Size 1191 / 4.65 GiB
Free PE / Size 475679 / 1.81 TiB
VG UUID r3Z9Io-bWk7-i7wp-9QGZ-mF3o-ucQs-SdsaGW
Best Answer
I'm unclear whether you're now missing a single 1.8 TB LV, or a 1.8 TB PV + VG + LV. If the LV was located in a different VG, then the best approach is to try to locate the missing disk, starting with pvscan. You might, for example, just have an LVM filter or cache issue. Many distros make you hurt a lot by copying lvm.conf into the initrd but not telling you that you'll need to rebuild the initrd if you change the config later on. You can run pvscan -vvv and see if it says anything about "ignored by filtering".
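To check for a filter problem, something along these lines should work (the grep patterns are illustrative; exact message wording varies by LVM version):

```shell
# Rescan PVs with maximum verbosity and look for devices the filter skipped
pvscan -vvv 2>&1 | grep -i 'filter'

# Show any device filter configured in lvm.conf (path may differ per distro)
grep -E '^[[:space:]]*(filter|global_filter)' /etc/lvm/lvm.conf
```

If a filter line excludes your disk, fix it and remember to rebuild the initrd afterwards.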
If you just lost the LV from the config, then the question is how that happened and whether the data is still readable. A test dd from the disk into /dev/null would be a good starting point, i.e. see if you can read roughly where the old LV was located.
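For example (the skip offset below is purely illustrative; work it out from where the lost LV's extents would have started inside the 8E00 partition):

```shell
# Try reading 1 GiB starting somewhere inside the old LV's region;
# if dd finishes without I/O errors, those sectors are at least readable
dd if=/dev/sdb of=/dev/null bs=1M skip=100000 count=1024
```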
Normally, though, I'd first make the system boot successfully again by commenting out the affected LV in /etc/fstab.
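Assuming the missing LV had an fstab entry like /dev/mapper/ubuntu-data (a made-up device name for illustration), commenting it out could look like:

```shell
# Prefix the broken LV's mount line with '#', keeping a .bak backup of fstab
sed -i.bak 's|^/dev/mapper/ubuntu-data|#&|' /etc/fstab
```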
As a very last resort, you can find the last LVM config backups on the disk itself. You can read them using something like dd + strings + grep -A1000 "LVM". But you're not that lost yet :) Based on that config, one can write an older state of the LVM configuration back to disk. But before doing that, you have to be clear about what exactly is corrupted, and I would test the whole procedure on a copy of the affected disk, not on the original.
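LVM2 stores its metadata as plain text near the start of the PV, so scanning for it might look like this (the VG name "ubuntu" comes from your vgscan output; the 4 MiB read size is an assumption that the metadata area is near the front of the partition):

```shell
# Scan the first few MiB of the PV partition for on-disk metadata text
dd if=/dev/sdb3 bs=1M count=4 2>/dev/null | strings | grep -A1000 'ubuntu {'

# If the root FS is mountable, list the automatic archives instead
vgcfgrestore --list ubuntu
```

vgcfgrestore can then write an archived version back, but again: only after imaging the disk and confirming what is actually corrupted.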