I recently had a problem with an LVM volume because it ran out of CoW space. I added more space to the volume and restarted it, and now it shows the CoW is only 66% full. It seems LVM created a snapshot with the data it couldn't write when the CoW was full, but I don't want to use snapshots, so I tried to merge it back into the origin:
# lvconvert --merge shared/spark03.ofd.com.sel-disk1
Internal error: #internal LVs (5) != #LVs (3) + #snapshots (1) + #internal LVs (2) in VG shared
Unable to merge LV "spark03.ofd.com.sel-disk1" into its origin.
My logical volumes currently look like this:
# lvscan -a
ACTIVE '/dev/shared/kkm03.ofd.com.sel-disk1' [80.00 GiB] inherit
ACTIVE '/dev/shared/mirror01.inf.com.sel-disk1' [150.00 GiB] inherit
ACTIVE Snapshot '/dev/shared/spark03.ofd.com.sel-disk1' [30.10 GiB] inherit
ACTIVE Original '/dev/shared/spark03.ofd.com.sel-disk1_vorigin' [100.00 GiB] inherit
ACTIVE '/dev/system/root' [7.45 GiB] inherit
ACTIVE '/dev/system/tmp' [976.00 MiB] inherit
ACTIVE '/dev/system/swap' [488.00 MiB] inherit
ACTIVE '/dev/system/var' [7.88 GiB] inherit
I was thinking about just removing the snapshot, but I don't see /dev/shared/spark03.ofd.com.sel-disk1_vorigin in the filesystem, so I'm not sure I'd be able to recover it.
How can I fix it?
Thanks
Best Answer
In LVM terms, a merge means you want to revert to your previous logical volume state, losing all changes made since the snapshot was taken. From what I understand, that is not what you want. Rather, you want to delete the old snapshot to recover space. This means you should issue:
lvremove shared/spark03.ofd.com.sel-disk1
Obviously, triple-check each command before issuing it, so you don't run the wrong one! And take regular backups...
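As a sanity check before removing anything, you can list the LVs in the VG together with their origin and CoW usage so you are certain which one is the snapshot. A minimal sketch, assuming the VG name shared from your output:

```shell
# Show each LV in the 'shared' VG with its origin, size,
# attribute flags, and how full the snapshot CoW area is.
# An 's' in the first attr column marks a snapshot LV;
# 'v' marks a virtual (sparse) origin.
lvs -a -o lv_name,origin,lv_size,lv_attr,data_percent shared

# Once you are sure the snapshot is the LV you want gone,
# remove it by its VG/LV path (this discards the snapshot's
# CoW data; the other LVs are untouched):
lvremove shared/spark03.ofd.com.sel-disk1
```

lvremove will ask for confirmation before deleting an active volume; only add `-f` once you are completely sure.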