NGINX keeps crashing, seemingly because of too many open files

Tags: debian, files, nginx, open

As stated in the title, my Nginx server seems to be crashing constantly although the reason is unknown.

I do have some hints from my error log that may point to the cause.

I have tried increasing the open file limit, which has some effect, but hasn't fixed the problem.

2015/09/29 17:18:01 [crit] 20560#0: accept4() failed (24: Too many open files)
2015/09/29 17:18:01 [crit] 20560#0: accept4() failed (24: Too many open files)
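
For reference, here is my understanding of the nginx knobs involved (a sketch with placeholder numbers, not my actual config):

# /etc/nginx/nginx.conf -- illustrative values only
worker_processes auto;

# Raises each worker's RLIMIT_NOFILE at startup; must stay at or
# below the kernel's per-process ceiling or setrlimit() fails.
worker_rlimit_nofile 65535;

events {
    # Each connection uses at least one descriptor (two when proxying),
    # so keep this comfortably below worker_rlimit_nofile.
    worker_connections 16384;
}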

When I try to increase the limit, I see this in my error log as well:

2015/09/29 17:18:02 [alert] 20632#0: setrlimit(RLIMIT_NOFILE, 300000000) failed (1: Operation not permitted)
2015/09/29 17:18:02 [alert] 20633#0: setrlimit(RLIMIT_NOFILE, 300000000) failed (1: Operation not permitted)
2015/09/29 17:18:02 [alert] 20560#0: recvmsg() truncated data
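
To see what limit a worker actually ended up with, I checked /proc (the PID is the one from the log above; substitute your own):

# "Max open files" shows the soft and hard limits the worker
# is actually running with.
cat /proc/20560/limits | grep 'open files'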

How do I give permission to increase the file limit?

Also, is this even the reason my server is crashing?

Thank you!

Update: I checked some more data. I have edited my files, but for some reason the hard limit still reports 4096:

root@nalsec:~# sysctl -p
net.ipv4.ip_forward = 1
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.eth0.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.eth0.accept_ra = 0
fs.file-max = 2500000000000000000
root@nalsec:~# ulimit -Hn
4096

That contrasts sharply with my fs.file-max setting.

I tried raising it directly, but it says I don't have permission (even though I am root):

root@nalsec:~# ulimit -Hn 1000000000
-bash: ulimit: open files: cannot modify limit: Operation not permitted
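
My guess (unverified) is that the kernel caps every process's nofile limit at fs.nr_open, which defaults to 1048576, and that even root gets EPERM when asking for more. Something like this should work instead (values illustrative):

# fs.nr_open is the hard ceiling for any per-process nofile limit.
sysctl fs.nr_open              # default: fs.nr_open = 1048576

# Raise the ceiling first, then the hard limit can follow:
sysctl -w fs.nr_open=2000000
ulimit -Hn 2000000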

I have already edited /etc/security/limits.conf (via nano) to no avail:

#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#ftp             -       chroot          /ftp
#@student        -       maxlogins       4

# End of file
nginx       soft    nofile  10240000000000000000000
nginx       hard    nofile  10240000000000000000000
*         hard    nofile      10240000000000000000000000
*         soft    nofile      10240000000000000000000000
root      hard    nofile      10240000000000000000000000
root      soft    nofile      10240000000000000000000000
www-data soft nofile 1024000000000000000
www-data hard nofile 1024000000000000000
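
I've also read (though I haven't verified it) that limits.conf is only applied by pam_limits at login, so a daemon launched from an init script may never see these values; for nginx, worker_rlimit_nofile or a ulimit call in the init script is supposedly the more reliable route. If limits.conf is used, presumably the values need to stay below the fs.nr_open ceiling, e.g.:

# Illustrative values that stay below the default fs.nr_open (1048576)
www-data  soft  nofile  65535
www-data  hard  nofile  65535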

Best Answer

fs.file-max is a system-wide limit on the total number of file descriptors that can be open on the system. It has no impact on the per-process limit.
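
To see how close the system actually is to that global ceiling, /proc/sys/fs/file-nr reports current usage alongside the limit:

# Columns: allocated handles, allocated-but-unused, fs.file-max
cat /proc/sys/fs/file-nr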

To increase the file descriptor limit for individual processes, it's easiest to do it via limits.conf:

# cat /etc/security/limits.d/nofile.conf
* soft nofile 10000
* hard nofile 1000000

That'll give all processes 10,000 file descriptors by default, with the ability to request an upgrade to 1,000,000 (via setrlimit) if they want.
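
For instance, with prlimit from util-linux (if your version ships it), a process's soft limit can be raised up to its hard limit without any special privilege; the PID here is just an example:

# Show the current nofile limits, then raise the soft limit to match
# the hard limit -- allowed without root in this direction.
prlimit --pid 1234 --nofile
prlimit --pid 1234 --nofile=1000000:1000000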