Judging by the bandwidth utilization, your load is currently fairly low. There are a lot of possible bottlenecks; to name a few:
Network related
As the number of connections grows, you can hit the worker_connections limit of an Nginx worker process. racyclist's description is pretty good; I'll just add a few cents to it. Actually, the more workers you have, the more likely you are to hit the worker_connections limit of one particular worker. The reason is that the Nginx master process cannot guarantee an even distribution of connections between the workers -- some of them can process requests faster than others, so the limit can eventually be exceeded.
My advice is to use as few workers as possible, each with a large number of worker_connections. However, you will have to increase the number of workers if you have IO (see later). Use Nginx's stub_status module to watch the number of sockets it uses.
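As an illustrative sketch (assuming the stub_status module is compiled in; the numbers are placeholders to tune for your hardware):

```nginx
# nginx.conf fragment: few workers, many connections per worker
worker_processes  2;

events {
    worker_connections  8192;
}

http {
    server {
        listen 80;
        # Status page for watching active connections and socket usage
        location /nginx_status {
            stub_status on;
            allow 127.0.0.1;   # restrict to local monitoring
            deny all;
        }
    }
}
```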
You will likely hit the OS (Linux or FreeBSD) limit on the number of open file descriptors per process. Nginx uses descriptors not only for incoming requests but also for outgoing connections to backends. Initially this limit is set to a very low value (e.g. 1024), and Nginx will complain in its error.log when it is reached.
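A sketch of the relevant knob (the value is a placeholder; the OS-level limit, e.g. via ulimit -n or /etc/security/limits.conf, must allow at least as much):

```nginx
# Top-level context of nginx.conf: raise the per-worker descriptor limit
worker_rlimit_nofile  65536;
```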
If you are using iptables and its conntrack module (Linux), you can exceed the size of the conntrack table as well. Watch dmesg or /var/log/messages, and increase this limit as necessary.
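On recent Linux kernels the relevant sysctl keys look roughly like this (on older kernels they live under net.ipv4.netfilter instead; the value below is a placeholder):

```
# Check current usage vs. limit:
#   sysctl net.netfilter.nf_conntrack_count
#   sysctl net.netfilter.nf_conntrack_max
# Raise the limit persistently in /etc/sysctl.conf:
net.netfilter.nf_conntrack_max = 262144
```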
Some very well-optimized applications do utilize 100% of the bandwidth, but my bet is that you will run into the previous problem(s) first.
IO related
In fact, an Nginx worker blocks on IO. Thus, if your site serves static content, you will need to increase the number of Nginx workers to account for IO blocking. It's hard to give recipes here, as they vary a lot depending on the number and size of files, the type of load, available memory, etc.
If you are proxying connections to some backend through Nginx, take into account that it creates temporary files to store the backend's response, and under high traffic this can put substantial load on the filesystem. Watch for messages in Nginx's error.log and tune proxy_buffers (or fastcgi_buffers) accordingly.
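For example, a sketch of buffer tuning for a proxied backend (sizes are placeholders; responses that don't fit in the buffers spill to temporary files on disk):

```nginx
http {
    proxy_buffer_size  64k;     # buffer for the response headers
    proxy_buffers      8 64k;   # buffers for the response body
    # proxy_max_temp_file_size 0;  # optional: forbid temp files entirely
}
```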
If you have some background IO (e.g. MySQL), it will affect static file serving as well. Watch the IO wait% figure (e.g. in top or iostat).
You seem to have a few misconceptions which I feel need to be addressed.
First of all, mod_php is only marginally faster; all my tests have shown that the difference is so minuscule that it's not worth factoring in. I also doubt that the security aspect is relevant to you, since you seem to be looking at a dedicated server, and mod_php really only has an advantage in a shared environment. In fact, in a dedicated environment php-fpm will have the advantage, as PHP and your web server now run as different processes -- and that's not even factoring in php-fpm's awesome logging options, such as the slow log.
If the world were black and white, I'd say go with a pure nginx setup and compile PHP with php-fpm. More realistically, if you already have Apache working, then make nginx a reverse proxy to Apache; you might save a few hours of setup time, and the difference in performance will be tiny.
But let's assume the world is black and white for a second, because that makes for far more awesome setups. You run nginx + php-fpm as your web server. To solve the uploads, you use the upload module and upload progress module for nginx. This means that your web server accepts the upload and passes the file path on to PHP when it's done, so the file doesn't need to be streamed between nginx and PHP via the FastCGI protocol. Sweet. (I have this in a live setup and it's working great, btw!)
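Roughly, a configuration sketch using the third-party upload module (directive names per that module's documentation; paths and the backend script are placeholders):

```nginx
location /upload {
    upload_pass   /handle_upload.php;       # backend that receives the form
    upload_store  /var/spool/nginx_uploads; # nginx writes the file here itself
    # Hand PHP the on-disk path instead of streaming the file body:
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";
}
```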
For user downloads you use nginx's X-Sendfile-like feature, called X-Accel-Redirect: you do your authentication in PHP and set a header, which nginx picks up on to start transferring the file. PHP ends execution and your web server handles the transfer. Sweet! (Again, I have this in a live setup and it's working great.)
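The nginx side of that is just an internal location; PHP authenticates the user and then sends something like header('X-Accel-Redirect: /protected/some-file.zip'). Paths below are placeholders:

```nginx
# Only reachable via the X-Accel-Redirect header set by the backend,
# never directly from the client.
location /protected/ {
    internal;
    alias /var/www/private-files/;
}
```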
For distributing files across servers, or for other long-running operations, we realize that PHP isn't really best suited, so we install Gearman, a job server that can distribute jobs between workers on different servers; these workers can be written in any language. Therefore you can create a distribution worker and spawn 5 of them using a total of 200 KB of memory, instead of the 100 MB PHP would use. Sweet. (I also have this running live, so it's all actually possible.)
In case you haven't picked up on it yet: I think many of your problems aren't related to your web server at all. You just think that way because Apache's structure forces everything to go through the web server. There are often far better tools for the job than PHP, and PHP is a language that knows this and provides excellent options for off-loading work without ever leaving PHP.
I'd highly recommend nginx, but I also think you should look at other options for your other problems; if you have a scaling or performance problem, feel free to write me. I don't know if you can send messages through here, but otherwise write me at martin@bbtn.us, as I don't stalk Server Fault for anything not tagged with nginx. :)
Best Answer
No. Increase the number of connections a worker may handle; 1024 is far below what Nginx can manage.
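For example (4096 is just an illustrative value; make sure the OS file-descriptor limit allows it):

```nginx
events {
    worker_connections  4096;
}
```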
If you need to limit the number of concurrent requests passed to a backend, then you need a 3rd-party module like: https://github.com/ry/nginx-ey-balancer