I currently have the following entry in /etc/security/limits.d/90-nproc.conf,
based on MongoDB's recommended ulimit settings.
mongod soft nofile 64000
mongod soft nproc 64000
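As a sanity check on a limits.d entry like this, one way to verify what a process actually receives is to query its limits from inside the process. A minimal sketch using Python's standard resource module (run it as the mongod user to see the limits that entry grants, assuming a Linux system):

```python
import resource

# Print the soft and hard limits the current process actually received.
# RLIMIT_NOFILE is the open-file-descriptor limit ("nofile"),
# RLIMIT_NPROC is the process-count limit ("nproc").
for name, limit in [("nofile", resource.RLIMIT_NOFILE),
                    ("nproc", resource.RLIMIT_NPROC)]:
    soft, hard = resource.getrlimit(limit)
    print(f"{name}: soft={soft} hard={hard}")
```

Equivalently, `ulimit -n -u` in a shell started as that user shows the same values.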
I need to massively increase the number of permitted file descriptors to, say, 999999. In normal situations only a few thousand files are likely to be accessed per day. The large number of files stems from the way the WiredTiger storage engine works: it uses several files per collection, and I have many thousands of collections.
Are there any negative impacts to this situation?
Is it detrimental to system performance to have a huge number of file descriptors open, but largely unused?
Best Answer
nproc is the limit on the number of processes. If you mean files, you want nofile.
It is merely a maximum counter. Increasing the limit doesn't consume any resources until that many things are actually in use on the system.
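That "counter, not allocation" behaviour can be seen directly: an unprivileged process may raise its own soft nofile limit up to the hard limit, and the call is pure bookkeeping. A short sketch (assumes a Unix-like system where the hard limit is finite, as on typical Linux setups):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"before: soft={soft} hard={hard}")

# Raising the soft limit to the hard limit is allowed without privileges
# and commits no kernel memory; resources are only consumed when
# descriptors are actually opened.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print(f"after:  soft={resource.getrlimit(resource.RLIMIT_NOFILE)[0]}")
```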
64k is a lot of processes. Depending on how much memory and CPU you have, performance may suffer: the Linux scheduler's overhead can become significant with that many tasks.
Also adjust file descriptors if necessary, as mentioned in the MongoDB reference on ulimit. File descriptors are relatively lightweight, and you need one for every open file and socket, but 64k is still a lot.
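To see how far below the limit a process typically sits, you can count the descriptors it currently holds open. A minimal sketch, Linux-specific since it reads /proc:

```python
import os

# /proc/self/fd has one entry per descriptor this process holds open
# (including the descriptor listdir itself uses for the scan).
# Only these open descriptors consume kernel resources; the unused
# headroom up to the nofile limit costs nothing.
open_fds = len(os.listdir("/proc/self/fd"))
print(f"open file descriptors: {open_fds}")
```

Comparing this count against the nofile limit shows how much of a huge limit is ever actually in use.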