Identifying the bottleneck – Nginx + PHP-FPM + Ubuntu


I have an Ubuntu server running Nginx and PHP-FPM. I did some stress testing with Load Impact at 250 concurrent users and saw a substantial slow-down. During the test, PHP was not slowing down at all; page rendering time never wavered from 0.04 seconds (I assume because of the APC cache).

The actual transmission of the assets is what was taking so long. I'm not sure whether this is a network limitation or an Nginx issue. I'm assuming it's Nginx, because the server is a Rackspace cloud server and I assume their network is pretty robust (maybe that's a stupid assumption…).

Running "top" from the command line showed only one nginx process active at any given time, and I think that's the bottleneck. The CPU was hardly being used at all. It's worth noting that this is a 512MB RAM cloud server, but RAM usage is very low too, so I'm fairly certain I just don't have Nginx configured well. I've pasted my conf below.

I'm very new to this, so apologies in advance if I didn't provide enough info.

user www-data;
worker_processes  4; #using a quad-core VPS server

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
 include       /etc/nginx/mime.types;
 default_type  application/octet-stream;

 access_log  /var/log/nginx/access.log;

 sendfile        on;
 tcp_nopush     on;

 #keepalive_timeout  0;
 keepalive_timeout  3;
 tcp_nodelay        on;

 gzip  on;
 gzip_comp_level 1;
 gzip_proxied any;
 gzip_types text/plain text/html text/css application/x-javascript text/xml application/x$

 include /etc/nginx/conf.d/*.conf;
 include /etc/nginx/sites-enabled/*;

 server {
     listen          *:80;
     server_name     ***.com;

     location / {
         root   /var/www/nginx-default;
         index  index.php;
         auth_basic "Restricted";
         auth_basic_user_file /etc/nginx/htpass;
     }

     location ~ \.php$ {
         fastcgi_pass    127.0.0.1:9000;
         fastcgi_index   index.php;
         fastcgi_param   SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_na$
         include         fastcgi_params;
     }
 }
}

and this is the virtual host config:

server {

    listen      80;
    server_name dev.***.com;

    access_log /***/access.log;
    error_log  /***/error.log;
    client_max_body_size 4M;

    location / {

        root   /***;
        index  index.php;

        # if the file exists, return it right away
        if (-f $request_filename) {
            break;
        }

        # otherwise rewrite the request to index.php
        if (!-e $request_filename) {
            rewrite ^/(.+)$ /index.php?url=$1 last;
            break;
        }

        location ~ \.php$ {
            fastcgi_pass    127.0.0.1:9000;
            fastcgi_index   index.php;
            fastcgi_param   SCRIPT_FILENAME /***$fastcgi_script_name;
            include         fastcgi_params;
        }

    }

}

Thanks in advance for any suggestions!

Best Answer

How did you arrive at your 0.04s rendering time? You might want to remove PHP from the equation entirely and benchmark against static files only.

There are a few things you can do to improve your configuration, though. First of all, your gzip compression level can be increased quite safely: the CPU impact is minimal, and raising it to 5 yields noticeably smaller responses. I've found that increasing it beyond that rarely brings any further benefit.
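For instance, a possible tweak in the http block might look like the following sketch (treat the value as a starting point rather than a rule, and keep your existing gzip_types list):

# sketch: raise compression from 1 to 5; beyond that the gains are usually marginal
gzip             on;
gzip_comp_level  5;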

You can also quite easily increase your keep-alive timeout; the default value is 75 seconds, and you currently have it set to 3. This probably won't help raw throughput much, but if a client's browser allows it, the site will feel snappier.
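Something along these lines, for instance (the 30-second value is just an illustrative middle ground between your current 3 and the 75 default):

# sketch: a longer keep-alive so browsers can reuse connections for multiple assets
keepalive_timeout  30;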

After that you might want to use an open-file cache. I use the following configuration:

open_file_cache max=5000 inactive=20s;
open_file_cache_valid    30s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;

You can find the documentation on that here: http://wiki.nginx.org/NginxHttpCoreModule#open_file_cache
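These directives are valid in the http, server and location contexts; the max and timeout values above are only a starting point, so adjust them to roughly match how many distinct files your site actually serves.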

Since you have no buffers specified, it might be that the buffers aren't large enough and the response is temporarily stored on disk. You can check whether iostat 1 shows a large iowait time.

The interesting buffers are

  • client_body_buffer_size
  • output_buffers

The client body buffer is documented here: http://wiki.nginx.org/NginxHttpCoreModule#client_body_buffer_size

output_buffers is not documented in the wiki, but it takes two arguments: the first is the number of buffers and the second is the size of each. The output buffers are mainly used when sendfile is off or when you gzip data before sending it to the client, so you'll definitely want to make sure the data fits.
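As a rough sketch of the syntax (the sizes here are illustrative assumptions, not tuned recommendations for your workload):

# hypothetical starting points; verify against iostat and your typical response sizes
client_body_buffer_size  16k;
output_buffers           2 32k;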