LVM: Logical volumes mapped size much bigger than filesystem used disk space

filesystems lvm proxmox thin-provisioning

In Proxmox we use LVM thin provisioning to create disks (thin logical volumes) for our VMs.

Recently we found that our volume group is almost full, even though all the VM disks are almost empty.

The problem is that some logical volumes show a much bigger mapped size than the actual amount of data stored on the VM disk, as reported by df.

For example, we have a VM with a 100 GB logical volume. Disk usage inside the VM shows only 3.2 GB in use:

#> df
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/pve-vm--3011--disk--1   99G  3.2G   91G   4% /
...

but the logical volume on the host shows that 39.8 GB is mapped:

#> lvdisplay
--- Logical volume ---
LV Path                /dev/pve/vm-3011-disk-1
LV Name                vm-3011-disk-1
VG Name                pve
LV UUID                oleKd5-O2o4-c4CE-5vzn-bXRC-TXwF-lzApmW
LV Write Access        read/write
LV Creation host, time carol, 2016-08-16 09:03:54 +0200
LV Pool name           vm-hdd
LV Status              available
# open                 1
LV Size                100.00 GiB
Mapped size            39.83%
Current LE             25600
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:28
...
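For thin volumes, `lvs -o lv_name,data_percent` reports the same allocation figure more compactly, which makes it easy to scan a whole pool for over-allocated volumes. A small sketch of that idea, using sample output modeled on the question (the LV names and the 30% threshold are illustrative; on a real host the input would come from `lvs --noheadings --separator ',' -o lv_name,data_percent pve`, which requires root):

```shell
# Sample lvs output: lv_name,data_percent (hardcoded here so the
# filtering step is visible without root access to a real thin pool)
sample='vm-3011-disk-1,39.83
vm-3012-disk-1,2.10
vm-3013-disk-1,85.00'

# Print every thin LV whose mapped percentage exceeds the threshold
threshold=30
echo "$sample" | awk -F',' -v t="$threshold" '$2 + 0 > t { print $1, $2 "%" }'
```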

So in fact more than ten times the needed space is allocated.

Any idea what the reason is? I suspect that LVM keeps every extent ever touched by the filesystem in the VM allocated. Is there any way to prevent this, or to reclaim the unused space?

Best Answer

Figured it out. You have to use fstrim.

I was able to reclaim the unused space by running this inside the VM:

    fstrim -v /

Based on the solution described in the section "Using fstrim to increase free space in a thin pool LV" here: http://man7.org/linux/man-pages/man7/lvmthin.7.html
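One caveat worth adding: the trim only reaches the thin pool if discard requests pass through the virtual disk, which in Proxmox generally means enabling the Discard option on the VM disk (e.g. `discard=on`). To keep the mapped size from creeping up again, trimming can be scheduled inside the guest; many distributions ship a systemd `fstrim.timer` for exactly this, and a cron fragment is an alternative. A hypothetical sketch (the weekly schedule is an assumption, not from the original answer):

```shell
#!/bin/sh
# /etc/cron.weekly/fstrim  (inside the VM) -- hypothetical sketch
# Trim all mounted filesystems that support discard, verbosely.
fstrim -av
```

On guests with systemd, `systemctl enable --now fstrim.timer` achieves the same thing without a custom script.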