Linux – Disabling cache on ZFSonLinux

linux zfs

I'm using ZFS to take advantage of some of its available features and to manage volumes, not for RAID. I have a single logical device (HW RAID) added to a zpool.

The ZFS ARC does not seem to perform as well as my HW RAID cache, so I tried disabling it to see if I could produce results similar to the benchmarks run directly against the HW RAID device, but performance suffers on the ZFS volumes.

I tried disabling primarycache and secondarycache, but it actually hurt performance and did not fall back to the HW RAID cache as I expected. So I'm at a loss. Is it impossible to use my HW RAID cache with ZFS? Maybe primarycache and secondarycache aren't the right parameters to be modifying.
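
For reference, caching is a per-dataset property, so disabling it looks something like this (the pool and zvol names here are just placeholders):

# Disable ARC (RAM) and L2ARC caching for the test zvol -- placeholder names
zfs set primarycache=none tank/testvol
zfs set secondarycache=none tank/testvol

# Confirm the current settings
zfs get primarycache,secondarycache tank/testvol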

Configuration
HP P410i RAID10 with Writeback BBU Cache.

Zpool with single logical device from the RAID

Created a sparse test zvol for benchmarking device speeds (/dev/zd0)
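
For completeness, the sparse zvol was created roughly like this (names and size are placeholders, not the exact ones used):

# Create a sparse (-s) 100G zvol for benchmarking; it shows up as /dev/zd0
# (also under /dev/zvol/<pool>/<name>)
zfs create -s -V 100G tank/testvol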

Update to This Question
The lack of performance was caused by ZFS overhead. When ZoL's ARC (primarycache) is disabled, there is currently extreme overhead, especially on random writes. I'm not sure whether this is specific to ZoL or to ZFS in general. I recommend at least leaving primarycache=metadata if you are looking to reduce ARC size but maintain the performance of your disks.
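
If you go that route, it's a single property change per dataset or zvol (placeholder names again):

# Keep only metadata in the ARC for this zvol; data blocks bypass the ARC
zfs set primarycache=metadata tank/testvol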

Best Answer

I use ZFS with hardware RAID, taking advantage of the HW RAID controller's flash-backed write cache (instead of a ZIL device) and leveraging the ZFS ARC cache for reads.

ZFS best practices with hardware RAID

Why do you feel ZFS is not performing well? Can you share your zfs get all pool/filesystem output as well as the benchmarks you speak of? It's likely just a tuning problem.

Edit:

The defaults on ZFS on Linux are not great. You need some tuning.

Please read through the workflow I posted at: Transparent compression filesystem in conjunction with ext4

The key parts are the ashift value and the volblocksize for a zvol.
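
Both are fixed at creation time, so as a rough sketch (the pool name, device, and sizes below are placeholders; pick values that match your controller's sector and stripe layout):

# ashift is set per vdev at pool creation (12 = 4K sectors) and cannot be changed later
zpool create -o ashift=12 vol0 /dev/sda

# volblocksize is set per zvol at creation and cannot be changed later
zfs create -V 100G -o volblocksize=128K vol0/testvol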

Also, you'll need to modify your /etc/modprobe.d/zfs.conf.

Example:

# zfs.conf for an SSD-based pool and 96GB RAM
options zfs zfs_arc_max=45000000000
options zfs zfs_vdev_scrub_min_active=48
options zfs zfs_vdev_scrub_max_active=128
options zfs zfs_vdev_sync_write_min_active=64
options zfs zfs_vdev_sync_write_max_active=128
options zfs zfs_vdev_sync_read_min_active=64
options zfs zfs_vdev_sync_read_max_active=128
options zfs zfs_vdev_async_read_min_active=64
options zfs zfs_vdev_async_read_max_active=128
options zfs zfs_top_maxinflight=160
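
Those options take effect when the zfs module is loaded (e.g. after a reboot). Most of them can also be changed on a running system through sysfs if you want to experiment first, for example:

# Apply a value at runtime without reloading the module (example: the ARC cap)
echo 45000000000 > /sys/module/zfs/parameters/zfs_arc_max

# Verify
cat /sys/module/zfs/parameters/zfs_arc_max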

Edit:

Well, my advice would be: always use lz4 compression, use a volblocksize of 128k, limit the ARC to about 40% of RAM or less, tweak the values in the zfs.conf I posted to taste (probably reduce all the values by 50% if you're using 10k SAS disks), and enable the tuned-adm framework with tuned-adm profile enterprise-storage.
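
As a rough sketch of those steps (pool and zvol names are placeholders; adjust the sizes to your system):

# lz4 compression on the pool, inherited by child datasets and zvols
zfs set compression=lz4 vol0

# volblocksize has to be chosen when the zvol is created
zfs create -V 100G -o volblocksize=128K vol0/testvol

# Cap the ARC at roughly 40% of RAM via /etc/modprobe.d/zfs.conf,
# e.g. for 96GB of RAM:
#   options zfs zfs_arc_max=40000000000

# Apply the enterprise-storage tuned profile
tuned-adm profile enterprise-storage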