NFS IO priority on ZFS/Solaris

Tags: io, nexenta, nfs, solaris, zfs

I've got a Nexenta/ZFS NAS that I'm using as an NFS backing store for a small VMware vSphere farm. At the moment I have nine 1TB disks, all mirrored except the last one, which is configured as a dedicated write log device.

The disk performance is pretty good for my needs over NFS. However, one thing I've noticed is that if I do any IO-heavy operations on the NAS directly, VM performance slows to a crawl. An example would be copying 1TB of data between two different ZFS filesystems within the same zpool.

Is there any good way I can ensure that IO requests performed by the NFS daemon are prioritized over other IO operations on disk? In an ideal world, I'd have my VMs backing onto a completely separate zpool, so that they're unaffected by load on the ZFS filesystems. However, I'm wondering if there's a good way of doing it with a single zpool.
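For reference, that separate-pool setup would be a one-time job. A minimal sketch, where the pool name vmpool and the device names are placeholders, not my actual hardware:

    # Placeholder device names; substitute real ones.
    # Create a separate mirrored pool dedicated to VM storage:
    zpool create vmpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

    # Carve out a filesystem for the datastore and share it over NFS:
    zfs create vmpool/vmware
    zfs set sharenfs=on vmpool/vmware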

Linux has ionice, so if I were using Linux I could prefix a mv command with ionice to move a lot of data at a low IO priority. However, I don't think that's available in the Solaris kernel.
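For the record, on Linux that would look something like this (the paths are placeholders):

    # Run the copy in the "idle" I/O class: it only gets disk time
    # when no other process is waiting on the disk.
    ionice -c 3 mv /tank/source/* /tank/destination/

    # Or use the best-effort class at its lowest priority:
    ionice -c 2 -n 7 mv /tank/source/* /tank/destination/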

Any suggestions?

Best Answer

AFAIK, to ZFS all I/O is I/O. What I mean by that is that it won't differentiate between your local operations and the requests the NFS daemon hands it.

You could play with the Solaris scheduling classes to slow down the userland process that is copying all this data locally, for example with priocntl, as sketched below. Bear in mind this throttles CPU scheduling only, not disk I/O directly, so it helps mainly insofar as a slower copy process issues I/O less aggressively.
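A minimal sketch, assuming the copy is a plain cp and using placeholder paths and PID; FX is the Solaris fixed-priority scheduling class:

    # Launch the copy in the FX class at priority 0, capped there
    # with -m so the dispatcher always favours other work.
    priocntl -e -c FX -m 0 -p 0 cp -rp /tank/fs1/data /tank/fs2/

    # Or demote an already-running process by PID (12345 is a placeholder):
    priocntl -s -c FX -m 0 -p 0 -i pid 12345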

BTW, your dedicated 1TB disk as a write log device won't help you at all unless that specific disk is much faster than the rest (e.g. 7200 rpm SATA vs. 15k rpm SAS). We usually use SSDs for log/cache devices, or nothing at all.
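If you do add flash later, attaching it is a single command per device; the pool name tank and the device names here are placeholders:

    # Placeholder pool/device names; substitute your own.
    # Add an SSD as a dedicated ZIL (synchronous write log):
    zpool add tank log c3t0d0

    # Add another SSD as an L2ARC read cache:
    zpool add tank cache c3t1d0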