NFS – Different IOPS measured on server and storage

Tags: iops, nfs, storage

I am seeing some strange behavior that I cannot explain – hopefully someone here can.

We received some servers (physical hardware) and mounted an NFS share on them. We plan to use these servers as Splunk indexers, but – since Splunk does not recommend NFS as storage – we wanted to run some performance tests first.

So I ran Bonnie++ and got really poor results (around 300 IOPS), but the storage team tells me that on their side they see around 1200 IOPS, which would be fine. How is this possible, and what can I do to get that performance on the server?
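
For context, the benchmark was invoked roughly as follows (the mount point, test size and user are illustrative, not the exact values used):

    # run Bonnie++ against the NFS mount; -s is the test file size and
    # should be at least twice the RAM of the test machine so the results
    # are not just cache hits; -n 0 skips the small-file creation test
    bonnie++ -d /mnt/nfs/bonnie -s 16g -n 0 -u splunk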

Best Answer

http://veerapen.blogspot.com/2011/09/tuning-redhat-enterprise-linux-rhel-54.html

In short:

On systems with hardware RAID, changing the default Linux I/O scheduler from [cfq] to [noop] gives I/O improvements, since the RAID controller already reorders and merges requests itself.
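
A minimal sketch of that change on a RHEL-style system (sda is an assumed device name; repeat for every disk behind the RAID controller, and add elevator=noop to the kernel boot line to make it survive a reboot):

    # the scheduler currently in use is shown in square brackets
    cat /sys/block/sda/queue/scheduler
    # switch this device to the noop scheduler at runtime
    echo noop > /sys/block/sda/queue/scheduler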

Use the nfsstat command to calculate the percentage of reads versus writes, and set the RAID controller cache ratio to match.
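
As a sketch, the read/write split can be read straight from the server-side counters (these are standard nfsstat flags; the ratio of read to write call counts is what the controller's read/write cache ratio should roughly follow):

    # RPC/NFS statistics on the server, including read and write call counts
    nfsstat -s
    # the client-side view of the mount, for comparison
    nfsstat -c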

For heavy workloads you will need to increase the number of NFS server threads.
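
On RHEL the thread count is typically controlled by RPCNFSDCOUNT in /etc/sysconfig/nfs; a sketch, with 64 as an illustrative value:

    # the 'th' line shows how often all threads were busy at once;
    # consistently high values in the last columns suggest adding threads
    cat /proc/net/rpc/nfsd
    # apply a new thread count immediately
    rpc.nfsd 64
    # and make it persistent in /etc/sysconfig/nfs:
    #   RPCNFSDCOUNT=64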

Configure the NFS exports to write to disk without delay using the no_wdelay export option.
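
A sketch of what that looks like in /etc/exports (the export path and client network are placeholders):

    # no_wdelay disables write gathering, so writes are committed to disk
    # immediately instead of being held back briefly to be batched
    /export/splunk 10.0.0.0/24(rw,sync,no_wdelay)

Re-export with exportfs -ra after editing the file.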

Tell the Linux kernel to flush dirty pages as quickly as possible so that writes stay as small as possible. In the Linux kernel, the frequency of dirty page writeback is controlled by two parameters.
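
Assuming the two parameters meant here are vm.dirty_background_ratio and vm.dirty_ratio (a common pair to tune for this; the values below are illustrative, not recommendations):

    # start background writeback once dirty pages reach 5% of RAM
    sysctl -w vm.dirty_background_ratio=5
    # force writers to flush synchronously once dirty pages reach 10% of RAM
    sysctl -w vm.dirty_ratio=10
    # persist the settings in /etc/sysctl.conf:
    #   vm.dirty_background_ratio = 5
    #   vm.dirty_ratio = 10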

For faster disk writes, use the filesystem's data=journal mount option and prevent updates to file access times, which in themselves result in additional data being written to disk. This mode is the fastest when data needs to be read from and written to disk at the same time, where it outperforms all the other journaling modes.
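
A sketch of an /etc/fstab line combining both options on an ext4 filesystem (device, mount point and filesystem type are assumptions):

    # data=journal journals file data as well as metadata; noatime stops
    # access-time updates from generating extra writes on every read
    /dev/sdb1  /srv/splunk  ext4  defaults,noatime,data=journal  0  2

Note that data=journal cannot be enabled with a plain remount; the filesystem has to be unmounted and mounted again.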