My team recently had an issue with ulimit being set too low on our Apache servers. We talked about increasing the limit to some arbitrary number, but we couldn't think of any reason not to just globally set it to unlimited.
Are there some good reasons not to do this?
We realise this could use more resources, but it's easier to deal with higher resource usage than with an application crash or an "out of file descriptors" error.
Best Answer
ulimit is an interface for setting all kinds of limits on a Linux system, not just the open file descriptor limit. The reason you would actually set limits on resources is that resources are finite. Setting limits protects your system from becoming unresponsive, at the cost of constraining specific users. Even if you only have a single application to care about, you still need to ensure it does not bring the system to such a halt that you can no longer even log in via SSH to fix matters.
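As a minimal sketch (bash assumed, values hypothetical), this is how you would inspect and raise the open-file limit for the current shell, and what a persistent per-user entry would look like:

```shell
# Inspect and raise the open-file limit for the current shell only.
ulimit -Sn                 # current soft limit, e.g. 1024
ulimit -Hn                 # hard ceiling the soft limit may be raised to
ulimit -n "$(ulimit -Hn)"  # raise the soft limit up to the hard limit

# To make a limit persistent per user, an entry in
# /etc/security/limits.conf would look like (hypothetical values):
#   apache  soft  nofile  8192
#   apache  hard  nofile  16384
```

Note that a non-root process can freely lower its limits or raise the soft limit up to the hard limit, but only root can raise the hard limit itself.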
In the specific case of open file descriptors, the overhead of asynchronous I/O using select() or poll() increases significantly as the file descriptor table grows (which does not happen under normal conditions, but can happen easily if one of your processes is leaking handles). This hurts overall asynchronous I/O performance system-wide.