Does the max-80%-use target suggested for ZFS for performance reasons apply to SSD-backed pools?

ssd, storage, zfs, zfsonlinux

The Solaris ZFS Best Practices Guide recommends keeping ZFS pool utilization below 80% for best performance:

  • Keep pool space under 80% utilization to maintain pool performance. Currently, pool performance can degrade when a pool is very full and file systems are updated frequently, such as on a busy mail server. Full pools might cause a performance penalty, but no other issues. If the primary workload is immutable files (write once, never remove), then you can keep a pool in the 95-96% utilization range. Keep in mind that even with mostly static content in the 95-96% range, write, read, and resilvering performance might suffer.

A common suggestion for implementing this seems to be to create a file system or volume that stores no data, but which carries a size reservation of about 20% of pool capacity.
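ZFS exposes this directly through dataset reservation properties. A minimal sketch of that approach, assuming a hypothetical pool named tank where 2T works out to roughly 20% of capacity; the pool name and size are illustrative, not from the original guidance:

    # Empty, unmountable dataset whose only job is to hold back space.
    # "tank" and the 2T figure are placeholders for your own pool and size.
    zfs create -o refreservation=2T -o canmount=off tank/reserved

    # If the space is ever genuinely needed, shrink or drop the reservation:
    zfs set refreservation=none tank/reserved

For an empty placeholder dataset with no children, the reservation and refreservation properties behave the same, so either works here.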

I can absolutely see, given ZFS's copy-on-write behavior, how this would help with rotational storage: rotational storage tends to be heavily IOPS-constrained, so giving the file system room to make large contiguous allocations makes a lot of sense (even if they wouldn't be used as such all the time).

However, I'm not sure the 80% target makes as much sense for solid state storage, which, besides being a good bit more expensive per gigabyte, doesn't have anywhere near the IOPS constraints of rotational storage.

Should SSD-backed ZFS pools be restricted to less than about 80% capacity utilization for performance reasons just like HDD-backed pools, or can SSD-backed pools be allowed to fill up more without significant adverse impact on I/O performance?

Best Answer

I'd say yes.

My rule is to stay under 87% on SSD-only pools when using drives that haven't been heavily over-provisioned.
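If you want to hold yourself to a threshold like that mechanically, zpool list reports utilization directly. A minimal sketch, with the pool name and the 87% cutoff as placeholders for your own values:

    #!/bin/sh
    # Warn when pool utilization crosses a chosen threshold.
    # "tank" and 87 are illustrative; substitute your pool and limit.
    POOL=tank
    LIMIT=87
    CAP=$(zpool list -H -o capacity "$POOL" | tr -d '%')
    if [ "$CAP" -ge "$LIMIT" ]; then
        echo "warning: $POOL is at ${CAP}% capacity (limit ${LIMIT}%)" >&2
    fi

Run from cron, something like this tells you about a filling pool before degraded performance does.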

The SSD use case adds drive endurance to the equation, while random write latency is less of an issue than with spinning disks.

Either way, regardless of disk choice, why would you intentionally plan to run your workloads at a high capacity level? Copy-on-write file systems in general advise against it, so I'd still avoid going that high if you can.