Anyone know how to fix issues with OMSA on Red Hat 5.1 that reports “No controllers found”?

dell-openmanage dell-poweredge rhel5

I've got a 64-bit Red Hat 5.1 server (a Dell PowerEdge 2950 with a PERC 5/i controller) that until recently was working fine.

On it I have an NRPE command, check_openmanage, that started returning errors:

/usr/local/nagios/libexec/check_openmanage
Storage Error! No controllers found
Problem running 'omreport chassis memory': Error: Memory object not found
Problem running 'omreport chassis fans': Error! No fan probes found on this system.
Problem running 'omreport chassis temps': Error! No temperature probes found on this system.
Problem running 'omreport chassis volts': Error! No voltage probes found on this system.

Obviously these components exist, as the system is up and running. I can access the Dell OpenManage web interface and it reports everything as green.

check_openmanage uses the omreport tool, and omreport generates the above error directly:

[root@lynx tmp]# omreport storage controller
No controllers found

I've found a number of threads online about issues with OMSA on 64-bit RHEL 5 and CentOS 5 that suggest running the 32-bit software on 64-bit systems.

However, I'm already running the 32-bit software:

Installed Packages
Name   : srvadmin-storage
Arch   : i386
Version: 6.5.0
Release: 1.201.2.el5
Size   : 8.4 M
Repo   : installed
Summary: Storage Management accessors package, 3.5.0

Moreover, most of these posts seem to relate to the PERC 4, and mine is a PERC 5. This check was stable until recently and has been under production load for a number of months, which makes me hesitant to take those steps. I have not, however, found any good indication of why this behavior changed.

Has anyone experienced this issue with PERC 5?

Does anyone have further thoughts on diagnosis steps or solutions?

Best Answer

I assume you've done the basic troubleshooting steps of restarting OMSA (service dataeng restart) and making sure IPMI is loaded:

service dataeng stop
service dsm_sa_ipmi start
service dataeng start
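
If you want to verify that IPMI is actually available before and after the restart, something along these lines should do it (the ipmi_* module names are the usual ones on RHEL 5, so treat this as a rough sketch rather than gospel):

# check that the IPMI kernel modules are loaded
lsmod | egrep 'ipmi_(si|devintf|msghandler)'
# confirm OMSA itself is responding after the restart
omreport about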

One common non-obvious cause of this problem is system semaphore exhaustion. Check your system logs; if you see something like this:

Server Administrator (Shared Library): Data Engine EventID: 0  A semaphore set has to be created but the system limit for the maximum number of semaphore sets has been exceeded

then you're running out of semaphores.
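
A quick way to check for that message (assuming syslog is writing to the default /var/log/messages):

grep -i semaphore /var/log/messages | tail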

You can run ipcs -s to list all of the semaphore sets currently allocated on your system, and then use ipcrm -s <id> to remove a set (if you're reasonably sure it's no longer needed). You might also want to track down the program that created them (using the information from ipcs -s -i <id>) to make sure it's not leaking semaphores. In my experience, though, most leaks come from programs that were interrupted (by segfaults or similar) before they could run their cleanup code.
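
For example (the semaphore set ID below is made up; substitute one from your own ipcs output):

# list all semaphore sets with their IDs and owners
ipcs -s
# show details (creator PID, last operation, etc.) for one set
ipcs -s -i 98307
# remove it once you're sure it's orphaned
ipcrm -s 98307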

If your system really needs all of the semaphores currently allocated, you can raise the limit instead. Run sysctl -a | grep kernel.sem to see the current settings. The final number (SEMMNI) is the maximum number of semaphore sets the system will allow, normally 128, which is exactly the limit the error message above complains about. Copy that line into /etc/sysctl.conf, change the final number to a larger value, save the file, and run sysctl -p to load the new settings.
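
As a rough example, with the stock RHEL 5 values (pick a limit that suits your own workload):

# current settings, in the order SEMMSL SEMMNS SEMOPM SEMMNI
sysctl -a | grep kernel.sem
kernel.sem = 250 32000 32 128

# raise the last value by adding a line like this to /etc/sysctl.conf
kernel.sem = 250 32000 32 256

# load the new settings
sysctl -p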