Why does iostat show a different utilization for the ‘dm’ device?

iostat, performance, proxmox

I installed an M.2 drive into a Proxmox host this morning and monitored the disks with iostat -x 1. Why is iostat showing dm-7 at 98% utilization, while the M.2 LVM PV (nvme0n1p1) shows only 6.8%?

I would expect that, since dm-7 is essentially a symlink to the host's M.2 LVM slice, I would see the same utilization on both nvme0n1* and dm-7. Why is this not the case?
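For anyone who wants to double-check the stacking, the sysfs slaves directory and lsblk both show what a dm node sits on top of (the device names below are just the ones from this host):

# the slaves/ directory lists the block device(s) a dm node is stacked on
ls /sys/block/dm-7/slaves/
# lsblk shows the whole device stack under the NVMe disk
lsblk /dev/nvme0n1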

UPDATE2:
I'm not sure this is an issue anymore. I'm running bonnie++ to test the M.2 performance, and the iostat output is consistent during the bonnie++ run. Maybe what I saw at first had something to do with RAM/caching, or with the way I/O is otherwise reported/timed by iostat. It seems fine now. I'm still curious why the two devices would be so far apart, though, so I'm leaving this open. Here is iostat during the bonnie++ run:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme0n1           0.00     0.00    0.00  348.00     0.00 89088.00   512.00   202.68  488.70    0.00  488.70   2.85  99.20
sda               0.00    60.00    0.00   12.00     0.00   288.00    48.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00   72.00     0.00   288.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-6              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-7              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-8              0.00     0.00    0.00   41.00     0.00 83968.00  4096.00    26.30  546.54    0.00  546.54  24.39 100.00

And here is the original iostat output showing the discrepancy:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme0n1           0.00   133.00    0.00  292.00     0.00  6272.00    42.96     0.04    5.95    0.00    5.95   0.15   4.40
nvme0n1p1         0.00   133.00    0.00  240.00     0.00  6272.00    52.27     0.70    5.33    0.00    5.33   0.28   6.80
...
dm-6              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-7              0.00     0.00    0.00  367.00     0.00  6272.00    34.18     1.72    4.70    0.00    4.70   2.67  98.00
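Note that iostat's first report is the average since boot, so only the subsequent interval reports reflect current load. To re-check, one can watch just the devices in question with something like:

# per-partition view of the NVMe disk
iostat -x -p nvme0n1 1
# or watch the device-mapper nodes directly
iostat -x dm-7 dm-8 1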

What is dm-7?

[root@myhost ~/bin]
# dir /dev/disk/by-id/ | grep dm-7
...
lrwxrwxrwx 1 root root  10 Sep 17 08:58 dm-name-nvme1-vm--100--disk--0 -> ../../dm-7
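An alternative to grepping /dev/disk/by-id/ is to ask device-mapper or lsblk directly (a generic sketch, not specific to this host):

# list all device-mapper nodes with their major:minor numbers
dmsetup ls
# show what /dev/dm-7 is and what it sits on top of
lsblk -s /dev/dm-7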

What is nvme1-vm-100-disk-0?

[root@myhost ~/bin]
# lvs | grep nvme1
  vm-100-disk-0                          nvme1 -wi-ao---- 20.00g
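The PV backing that LV can also be shown directly with lvs (the VG name here is the one from this host):

# append the devices column to see which PV each LV in VG nvme1 is allocated on
lvs -o +devices nvme1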

Update: adding the output requested by @Arlion:

# lvs; pvs; vgs; ls -al /dev/mapper/; mount
  LV                               VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  bonnie_test                      nvme1 -wi-ao---- 210.00g                                                    
  vm-100-disk-0                    nvme1 -wi-ao----  20.00g                                                    
  data                             pve   twi-aotz--   1.34t             1.32   1.03                            
  root                             pve   -wi-ao----  96.00g                                                    
  snap_vm-100-disk-1_b4_first_boot pve   Vri---tz-k  80.00g data                                               
  swap                             pve   -wi-ao----   8.00g                                                    
  vm-100-disk-1                    pve   Vwi-aotz--  80.00g data        21.43      


  PV             VG    Fmt  Attr PSize   PFree 
  /dev/nvme0n1p1 nvme1 lvm2 a--  238.47g  8.47g
  /dev/sda3      pve   lvm2 a--    1.46t 15.82g



  VG    #PV #LV #SN Attr   VSize   VFree 
  nvme1   1   2   0 wz--n- 238.47g  8.47g
  pve     1   5   0 wz--n-   1.46t 15.82g



total 0
drwxr-xr-x  2 root root     240 Sep 17 12:44 .
drwxr-xr-x 21 root root    4360 Sep 17 12:44 ..
crw-------  1 root root 10, 236 Sep 17 08:36 control
lrwxrwxrwx  1 root root       7 Sep 17 13:07 nvme1-bonnie_test -> ../dm-8
lrwxrwxrwx  1 root root       7 Sep 17 08:58 nvme1-vm--100--disk--0 -> ../dm-7
lrwxrwxrwx  1 root root       7 Sep 17 08:36 pve-data -> ../dm-5
lrwxrwxrwx  1 root root       7 Sep 17 08:36 pve-data_tdata -> ../dm-3
lrwxrwxrwx  1 root root       7 Sep 17 08:36 pve-data_tmeta -> ../dm-2
lrwxrwxrwx  1 root root       7 Sep 17 08:36 pve-data-tpool -> ../dm-4
lrwxrwxrwx  1 root root       7 Sep 17 08:36 pve-root -> ../dm-1
lrwxrwxrwx  1 root root       7 Sep 17 08:36 pve-swap -> ../dm-0
lrwxrwxrwx  1 root root       7 Sep 17 08:36 pve-vm--100--disk--1 -> ../dm-6



sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32811264k,nr_inodes=8202816,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=6566444k,mode=755)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=28136)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/9418 type tmpfs (rw,nosuid,nodev,relatime,size=6566440k,mode=700,uid=9418,gid=56003)
/dev/mapper/nvme1-bonnie_test on /mnt/bonnie type ext4 (rw,relatime,data=ordered)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
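The dm-N numbering in the listings above can also be cross-checked in one shot with dmsetup (purely for reference):

# print the device-mapper dependency tree, mapping each dm node to its underlying block device
dmsetup ls --tree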

Best Answer

 LV                               VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0                    nvme1 -wi-ao----  20.00g                                                                                                  
  vm-100-disk-1                    pve   Vwi-aotz--  80.00g data        21.43  

lrwxrwxrwx  1 root root       7 Sep 17 08:58 nvme1-vm--100--disk--0 -> ../dm-7
lrwxrwxrwx  1 root root       7 Sep 17 08:36 pve-vm--100--disk--1 -> ../dm-6

PV             VG    Fmt  Attr PSize   PFree 
/dev/nvme0n1p1 nvme1 lvm2 a--  238.47g  8.47g
/dev/sda3      pve   lvm2 a--    1.46t 15.82g


  VG    #PV #LV #SN Attr   VSize   VFree 
  nvme1   1   2   0 wz--n- 238.47g  8.47g
  pve     1   5   0 wz--n-   1.46t 15.82g

Your hypervisor is providing two different disks from two different resource pools: dm-7 (vm-100-disk-0) is a fully allocated (thick) LV in the nvme1 volume group backed by /dev/nvme0n1p1, while dm-6 (vm-100-disk-1) is a thinly provisioned LV in the pve volume group's thin pool backed by /dev/sda3.

Running bonnie++ against /mnt/bonnie, you are only testing the nvme1 volume group.
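If you want a like-for-like comparison, you could run bonnie++ once against the nvme1-backed mount and once against a directory on the pve-backed root filesystem. A rough sketch (the second path is just an example, and the test size should exceed RAM so the page cache doesn't dominate):

# nvme1 volume group (already mounted at /mnt/bonnie)
bonnie++ -d /mnt/bonnie -u root
# pve volume group (any directory on the pve-root filesystem will do)
mkdir -p /root/bonnie_pve && bonnie++ -d /root/bonnie_pve -u root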

I would run iotop and install netdata to see whether anything abnormal is causing the 100% disk utilization. It could be as simple as the disk genuinely being saturated and unable to perform any more I/O, or there could be another issue underneath.
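For example (iotop is in the standard Debian/Proxmox repositories):

apt-get install iotop
# show only processes currently doing I/O, with accumulated totals
iotop -o -a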