Linux Server: Too Many Open Files

kernel · linux · linux-kernel · log-files

I have an application running on a Linux server. The server suddenly stopped responding: it still replied to pings from another machine, but it would not let us log in. I am not sure why.

Scenario:

  • The application and the server were both running fine.
  • At a particular point the server stopped responding: we could no longer log in, although we could still ping it from another machine.
  • Since it would not let us log in, we rebooted the server, after which we were able to log in again.

The problem is that we are not sure why this happened. We looked in /var/log/messages and found the following suspicious message:

kernel: VFS: file-max limit 100000 reached

Could anyone help us check which processes consumed that many file descriptors? Is there any log file where we can find which process had so many descriptors open?

Please let me know if any further information is required.

Thanks

Best Answer

The message indicates that the machine ran out of file handles. Check the current limit and try increasing it.

List the current limit:
cat /proc/sys/fs/file-max
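
To see how close the system is to that limit, /proc/sys/fs/file-nr reports three fields: the number of file handles currently allocated, the number of allocated-but-unused handles, and the maximum (the same value as file-max):

cat /proc/sys/fs/file-nr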

To increase the limit for the whole system, add the following line to /etc/sysctl.conf:

fs.file-max = 131072

Then run the following command to re-read the configuration file:

sysctl -p 
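
If you want to apply a new value immediately without editing the file (note that this does not persist across a reboot), the same setting can also be written at runtime:

sysctl -w fs.file-max=131072

Either way, reading /proc/sys/fs/file-max again should confirm that the new value is in effect.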

The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Attempts to allocate more file descriptors than file-max allows are reported with the error "VFS: file-max limit reached".

I don't think there is a log that records which process had the most file descriptors open, but you can inspect the live system to see how many each process currently holds (see the sketch below).
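
As a rough sketch (plain shell, reading only /proc; run it as root so every process is visible, and note that /proc/<pid>/comm is only present on reasonably recent kernels), the following counts the open descriptors of each process and prints the worst offenders:

for pid in /proc/[0-9]*; do
    # one entry in /proc/<pid>/fd per open descriptor; comm holds the process name
    printf '%s %s %s\n' "$(ls "$pid/fd" 2>/dev/null | wc -l)" "${pid##*/}" "$(cat "$pid/comm" 2>/dev/null)"
done | sort -rn | head

If lsof is installed, lsof | awk '{print $2}' | sort | uniq -c | sort -rn | head gives a similar ranking by PID, although lsof also lists memory-mapped files and therefore over-counts compared with plain descriptors.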