Debian – slow reads on OpenZFS/Linux via NFSv3 on Debian 10

debian, nfs, read, write, zfs

Because of my unanswered question (qemu snapshot exclude device),
I decided to use NFSv3 for the VM to handle user data.
Because of the slow performance of Btrfs after maintenance tasks, I now use ZFS RAID1 (version 0.8.3-1 from buster-backports) on the Debian host.

When I copy data on the host there is no performance problem.

BUT: performance via NFS is extremely slow; at first between 10 and 40 MB/s for both writes and reads. After some tuning (I think it was the NFS async option) I got writes up to ~80 MB/s. That's enough for me.
Reads, however, are still stuck at ~20 MB/s per device.

Any ideas what to test? I'm new to ZFS and NFS.

Host: Debian 10
VM: Debian 10

NFS:
Host: /exports/ordner 192.168.4.0/24(rw,no_subtree_check)
Client: .....nfs local_lock=all,vers=3,rw,user,intr,retry=1,async,nodev,auto,nosuid,noexec,retrans=1,noatime,nodiratime
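
One quick check that is not in the question but is worth doing here: confirm what the client actually negotiated, especially rsize, since a small read block size alone can cap NFS read throughput. A short sketch (the test file path is a placeholder):

nfsstat -m
dd if=/mnt/share/bigfile of=/dev/null bs=1M status=progress

nfsstat -m lists each NFS mount with its effective options; the dd read gives a raw sequential number to compare against the ~20 MB/s you are seeing.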

ZFS dataset:

Pool created with:
….create -o ashift=12 zfs-pool ….mirror
sync=default

zfs set compression=off zfs-pool
zfs set xattr=sa zfs-pool
zfs set dnodesize=auto zfs-pool/vol
zfs set recordsize=1M zfs-pool/vol
zfs set atime=off zfs-pool/vol
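
Written out in full, the setup presumably looked something like the lines below; the two disk names are hypothetical placeholders for the devices elided in the question, and the final command just reads the properties back to confirm they are in effect (the pool-level settings are inherited by the dataset):

zpool create -o ashift=12 zfs-pool mirror /dev/sdX /dev/sdY
zfs get compression,xattr,dnodesize,recordsize,atime,sync zfs-pool/vol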

ZFS module tuning:

options zfs zfs_prefetch_disable=1
options zfs zfs_vdev_async_read_max_active=1
options zfs zfs_vdev_sync_read_max_active=128 (also tested with 1)
options zfs zfs_vdev_sync_read_min_active=1
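
To verify that these options were actually applied after loading the module (these sysfs paths are standard for ZFS on Linux):

cat /sys/module/zfs/parameters/zfs_prefetch_disable
cat /sys/module/zfs/parameters/zfs_vdev_async_read_max_active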

Can you give me any advice?

Best Answer

You can get better performance if synchronous requests are disabled: zfs set sync=disabled tank/nfs_share.

  • From the zfs manpage: disabled disables synchronous requests. File system transactions are only committed to stable storage periodically. This option will give the highest performance. However, it is very dangerous as ZFS would be ignoring the synchronous transaction demands of applications such as databases or NFS. Administrators should only use this option when the risks are understood.

Bear in mind that disabling sync can lead to data loss or corruption if the host crashes or loses power.
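
Applied to the dataset from the question rather than the example name above, with a read-back to verify and a revert for when you are done testing (standard is the default behaviour, so the last line undoes the change):

zfs set sync=disabled zfs-pool/vol
zfs get sync zfs-pool/vol
zfs set sync=standard zfs-pool/vol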

Another option would be:

  • Use a SLOG [1] (a separate intent log device) on a really fast, separate device, such as an SSD (NVMe is ideal, SATA acceptable); see the sketch just below.
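
For example, adding a mirrored SLOG to the pool from the question (the two NVMe device names are hypothetical):

zpool add zfs-pool log mirror /dev/nvme0n1 /dev/nvme1n1

Note that a SLOG only accelerates synchronous writes; it is the safe alternative to sync=disabled rather than a fix for slow reads.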

In your tests, I noticed that the maximum number of active threads for asynchronous read operations (zfs_vdev_async_read_max_active) was set to 1. That is far too low and could explain the poor read performance.

I need some details about your system (disk information for the ZFS pool, system memory and CPU).

Here is a suggestion that you could use and tune for your system. It works pretty well on my 12-core system (in /etc/modprobe.d/zfs.conf):

options zfs zfs_vdev_async_read_max_active=30
options zfs zfs_vdev_async_read_min_active=10
options zfs zfs_vdev_async_write_min_active=10
options zfs zfs_vdev_async_write_max_active=30
options zfs zfs_vdev_scrub_min_active=10
options zfs zfs_vdev_scrub_max_active=20
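
modprobe.d settings only take effect when the module is (re)loaded, so on a running system you can trial values live through sysfs first (as root, and assuming the parameters are writable on your build):

echo 30 > /sys/module/zfs/parameters/zfs_vdev_async_read_max_active
echo 10 > /sys/module/zfs/parameters/zfs_vdev_async_read_min_active

If the numbers help, persist them in /etc/modprobe.d/zfs.conf as above.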

[1] https://jrs-s.net/2019/05/02/zfs-sync-async-zil-slog/