Linux – ps returning unreasonably high values for %cpu and cputime

linux, ps

We've had an alert from Nagios on one of our servers that we have a runaway process. Logging in and running top doesn't show anything bad happening, but when I look at the output of ps I see something odd:

oxygen@mail-1:~$ ps -e -o %cpu,comm,cputime --sort %cpu | tail
 0.2 amavisd         00:00:11
 0.2 zmlogger        00:00:54
 0.2 zmstat-allprocs 03:44:19
 0.2 amavisd         00:00:07
 0.2 amavisd         00:00:14
 0.3 amavisd         00:00:08
 0.3 top             00:00:05
 0.5 amavisd         00:00:04
 8.1 mysqld          3-23:07:17
 7413 java            1184016091-02:47:13

%cpu and cputime don't look reasonable. Any ideas as to why this might be the case?

oxygen@mail-1:~$ ps --version
procps version 3.2.8
oxygen@mail-1:~$ uname -a
Linux mail-1 2.6.32-35-server #78-Ubuntu SMP Tue Oct 11 16:26:12 UTC 2011 x86_64 GNU/Linux

EDIT: In response to the comments:

Yes, good guess: this is a Zimbra server.

Load averages are fairly high; this server is disk-bound:

top - 09:55:06 up 71 days,  3:23,  1 user,  load average: 4.03, 3.82, 3.60
Tasks: 301 total,   1 running, 300 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.7%us,  1.7%sy,  0.0%ni, 59.3%id, 27.5%wa,  0.0%hi,  0.7%si,  0.0%st
Mem:   8192360k total,  7867364k used,   324996k free,   171704k buffers
Swap:  1953784k total,   950944k used,  1002840k free,  1619948k cached

pstree output is as follows:

 oxygen@mail-1:~$ pstree
 init─┬─amavisd───10*[amavisd]
      ├─atd
      ├─clamd───{clamd}
      ├─cron
      ├─6*[getty]
      ├─ha_logd───ha_logd
      ├─heartbeat───3*[heartbeat]
      ├─hpasmxld───8*[{hpasmxld}]
      ├─httpd─┬─4*[httpd]
      │       └─sh───rotatelogs
      ├─httpd─┬─6*[httpd]
      │       └─2*[sh───rotatelogs]
      ├─irqbalance
      ├─master─┬─anvil
      │        ├─3*[cleanup]
      │        ├─2*[lmtp]
      │        ├─pickup
      │        ├─2*[proxymap]
      │        ├─qmgr
      │        ├─showq
      │        ├─3*[smtp]
      │        ├─6*[smtpd]
      │        ├─tlsmgr
      │        └─2*[trivial-rewrite]
      ├─miniserv.pl
      ├─mysqld_safe───mysqld───37*[{mysqld}]
      ├─named───10*[{named}]
      ├─nginx───nginx
      ├─nrpe
      ├─ntpd
      ├─nullmailer-send
      ├─openhpid───3*[{openhpid}]
      ├─perl───zmlogger───zmlogger
      ├─rsyslogd───3*[{rsyslogd}]
      ├─saslauthd───4*[saslauthd]
      ├─screen───2*[bash]
      ├─slapd───9*[{slapd}]
      ├─snmpd
      ├─sshd───sshd───sshd───bash───pstree
      ├─swatch───perl
      ├─udevd───2*[udevd]
      ├─upstart-udev-br
      ├─zmconfigdctl─┬─java───19*[{java}]
      │              └─sleep
      ├─zmmailboxdmgr───java───166*[{java}]
      ├─zmstat-allprocs
      ├─zmstat-convertd
      ├─zmstat-cpu
      ├─zmstat-df
      ├─zmstat-fd───zmstat-fd
      ├─2*[zmstat-io───iostat]
      ├─zmstat-mtaqueue
      ├─zmstat-mysql
      ├─zmstat-proc
      └─zmstat-vm───vmstat

For what it's worth, it seems more like an overflow bug within ps than anything else. I can't think of any other way java would manage to consume 3 million years of cputime in 79 days!
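A quick back-of-the-envelope check using the figures from the ps output above shows just how impossible that value is:

# 1184016091 days of reported CPU time, expressed in years:
echo $((1184016091 / 365))     # prints 3243880, i.e. roughly 3.2 million years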

Best Answer

I've just had a similar problem to this but with almost all processes showing extremely large CPU times immediately after a reboot.

You can rule out an overflow inside ps itself by reading the utime and stime fields directly from /proc/$PID/stat (fields 14 and 15 on my kernel; check the proc(5) man page in case yours differ), e.g.:

cut -d' ' -f14,15 /proc/$PID/stat

These values are counts of clock ticks; if they are also extremely large, the problem lies in the kernel's accounting rather than in ps.
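As a rough illustration, here is a minimal sketch (the PID 1234 is just a placeholder) that converts those raw tick counts into seconds so they can be compared with what ps reports:

PID=1234                                  # placeholder: substitute the PID of the suspect process
utime=$(cut -d' ' -f14 /proc/$PID/stat)   # user-mode ticks
stime=$(cut -d' ' -f15 /proc/$PID/stat)   # kernel-mode ticks
HZ=$(getconf CLK_TCK)                     # clock ticks per second, typically 100
echo "user CPU:   $((utime / HZ)) seconds"
echo "system CPU: $((stime / HZ)) seconds"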

I'm running Scientific Linux 6.4 (a RHEL-based distro) and saw the problem with kernel-2.6.32-358.6.2.el6.x86_64.

I fixed it by installing a newer version of the kernel (kernel-2.6.32-431.el6.x86_64) and rebooting.
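For reference, on a RHEL-style system the upgrade looked roughly like this (the package version is from my setup; adjust it, and the package manager, for your distribution):

yum install kernel-2.6.32-431.el6.x86_64   # or simply: yum update kernel
reboot
uname -r                                   # after the reboot, confirm the running kernel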

I see that similar problems have been reported for various distributions.

So upgrading is probably the best way to fix the problem.
