Linux – Determining cause of high NFS/IO utilization without iotop

io · linux · nfs · performance · storage

I have a server that is doing an NFSv4 export of users' home directories. There are roughly 25 users (mostly developers/analysts) and about 40 servers mounting the home directory export. Performance is miserable, with users often seeing multi-second lags for simple commands (like ls, or writing a small text file). Sometimes the home directory mount hangs completely for minutes, with users getting "permission denied" errors.

The hardware is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5" 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. The RAID controller is a Dell PERC H700 with 512 MB cache (Linux sees it as an LSI MegaSAS 9260). The OS is CentOS 5.6; the home directory partition is ext3, mounted with options "rw,data=journal,usrquota".
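(The effective mount options can be double-checked from /proc/mounts; the device path below is just a placeholder for whatever the home filesystem actually lives on.)

# confirm data=journal and usrquota are actually in effect on the home-dir filesystem
grep home /proc/mounts
# journal and feature flags as recorded in the superblock (device path assumed)
tune2fs -l /dev/sdb1 | grep -Ei 'features|journal'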

I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for the home directories.

What I find curious, and suspicious, is that the sda device often has very high utilization, even though it only contains the OS. I would expect this virtual drive to be idle almost all the time. The system is not swapping, according to "free" and "vmstat". Why would there be major load on this device?
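(For completeness, this is roughly how I'm ruling out swap activity; the si/so columns should stay at zero if nothing is paging:)

# watch swap-in (si) and swap-out (so) over a few intervals
vmstat 5 5
# swap usage should also be near zero
free -m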

Here is a 30-second snapshot from iostat:

Time: 09:37:28 AM
Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await svctm  %util
sda               0.00    44.09  0.03 107.76     0.13   607.40    11.27     0.89    8.27   7.27  78.35
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00    44.09  0.03 107.76     0.13   607.40    11.27     0.89    8.27   7.27  78.35
sdb               0.00  2616.53  0.67 157.88     2.80 11098.83   140.04     8.57   54.08   4.21  66.68
sdb1              0.00  2616.53  0.67 157.88     2.80 11098.83   140.04     8.57   54.08   4.21  66.68
dm-0              0.00     0.00  0.03 151.82     0.13   607.26     8.00     1.25    8.23   5.16  78.35
dm-1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00  0.67 2774.84     2.80 11099.37     8.00   474.30  170.89   0.24  66.84
dm-3              0.00     0.00  0.67 2774.84     2.80 11099.37     8.00   474.30  170.89   0.24  66.84
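(A snapshot like the one above can be collected with something along the lines of the invocation below; the exact flags may vary by sysstat version, so treat them as a guess.)

# extended per-device stats, kB/s, timestamped, 30-second intervals
iostat -dtkx 30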

iotop looks like the ideal tool for sniffing out these kinds of issues, but I'm on CentOS 5.6, which doesn't have a new enough kernel to support it.
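(From what I've read, iotop needs the kernel's per-task I/O accounting interface; a quick way to check whether a given kernel exposes it:)

# per-task counters iotop reads; missing on kernels built without the accounting support
cat /proc/self/io
# the relevant build option, if a config file is installed for the running kernel
grep -i task_io_accounting /boot/config-$(uname -r)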

I looked at Determining which process is causing heavy disk I/O?, and besides iotop, one of the suggestions was to run echo 1 > /proc/sys/vm/block_dump. I did that (after directing kernel messages to tmpfs). In about 13 minutes I logged about 700k reads or writes, roughly half from kjournald and the other half from nfsd:

# egrep " kernel: .*(READ|WRITE)" messages | wc -l
768439
# egrep " kernel: kjournald.*(READ|WRITE)" messages | wc -l
403615
# egrep " kernel: nfsd.*(READ|WRITE)" messages | wc -l
314028
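(The rough sequence, for anyone wanting to repeat it; the tmpfs/syslog redirection detail will need adapting to your own syslog setup:)

mkdir -p /var/log/blockdump               # hypothetical scratch location
mount -t tmpfs tmpfs /var/log/blockdump   # keep the logging itself off the busy disks
# point kern.* at a file under that mount in /etc/syslog.conf, then: service syslog restart
echo 1 > /proc/sys/vm/block_dump          # log every block-layer READ/WRITE/dirty to the kernel log
sleep 780                                 # collect for roughly 13 minutes
echo 0 > /proc/sys/vm/block_dump          # turn it back off before the log grows without bound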

For what it's worth, for the last hour utilization has been consistently above 90% on the home directory drive. My 30-second iostat keeps showing output like this:

Time: 09:36:30 PM
Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     6.46  0.20 11.33     0.80    71.71    12.58     0.24   20.53  14.37  16.56
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00     6.46  0.20 11.33     0.80    71.71    12.58     0.24   20.53  14.37  16.56
sdb             137.29     7.00 549.92  3.80 22817.19    43.19    82.57     3.02    5.45   1.74  96.32
sdb1            137.29     7.00 549.92  3.80 22817.19    43.19    82.57     3.02    5.45   1.74  96.32
dm-0              0.00     0.00  0.20 17.76     0.80    71.04     8.00     0.38   21.21   9.22  16.57
dm-1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00 687.47 10.80 22817.19    43.19    65.48     4.62    6.61   1.43  99.81
dm-3              0.00     0.00 687.47 10.80 22817.19    43.19    65.48     4.62    6.61   1.43  99.82

Best Answer

The easiest approach is to upgrade your OS packages.
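On EL5 that's normally just a point-release update plus a reboot into the new kernel; something like the following, assuming stock CentOS mirrors (and RPMForge for iotop if you land on 5.7):

yum clean all
yum update                 # brings the box up to the current 5.x point release and kernel
yum install iotop          # from the base repo on 5.8+, or RPMForge before that
shutdown -r now            # reboot into the updated kernel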

CentOS 5.7 can definitely use iotop. Red Hat backported per-process I/O accounting into the 2.6.18-144 kernels, and I began to see iotop working sometime in 2011 via RPMForge packages. Red Hat made iotop part of the standard OS in 2012. On a 5.7 system...

[root@Tantalalicious ~]# cat /etc/issue
CentOS release 5.7 (Final)
Kernel \r on an \m

[root@Tantalalicious ~]# uname -a
Linux Tantalalicious 2.6.18-274.7.1.el5 #1 SMP Thu Oct 20 16:21:01 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

[root@Tantalalicious ~]# iotop
Total DISK READ: 25.54 M/s | Total DISK WRITE: 87.03 K/s
  TID  PRIO USER<     DISK READ  DISK WRITE  SWAPIN      IO    COMMAND                                              
31441 be/4 465         0.00 B/s    0.00 B/s  0.00 %  0.00 % -bash
31540 be/4 465         0.00 B/s    0.00 B/s  0.00 %  0.00 % dbc
22587 be/4 admin       0.00 B/s    0.00 B/s  0.00 %  0.00 % sh
22588 be/4 admin       0.00 B/s    0.00 B/s  0.00 %  0.00 % sh
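When hunting a culprit like this, a couple of iotop options are worth knowing (check iotop --help on your build, but these have been around a long time):

iotop -o                               # only show processes/threads actually doing I/O right now
iotop -oPa                             # per-process view with totals accumulated since start
iotop -b -o -n 10 -d 30 > iotop.log    # batch mode: ten 30-second samples for later review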

Don't think this is a cop-out answer... but at this point, there's no reason to be running the older OS. EL 5.8 has stabilized, fixed a ton of bugs, and gives you access to the profiling tool you need (iotop). I'm assuming you've already adjusted your Linux I/O elevators and tuned the hardware RAID controller by now.
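If the elevator hasn't been touched yet, it's a thirty-second check; deadline (or noop) is the usual suggestion in front of a battery-backed RAID controller. A sketch, with the device names from this question assumed:

cat /sys/block/sdb/queue/scheduler                 # current elevator is shown in brackets
echo deadline > /sys/block/sdb/queue/scheduler     # switch the home-directory VD on the fly
# to make it stick across reboots, add elevator=deadline to the kernel line in /boot/grub/grub.conf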
