Relying on Nginx as the Only Webserver for PHP/MySQL

mysql, nginx, php, web-server

Can you rely on Nginx to be your only web server? I know it works well in terms of performance, but how does it do in terms of security? I know Apache is stable and has ModSecurity; this is not the case for Nginx.

I am going to use Nginx as my only web server, and only for dynamic content. All my static content is delivered by a CDN.

Best Answer

nginx serves dynamic content by 'reverse proxying' to a FastCGI server. The php-cgi package in most distributions includes FastCGI mode, where PHP starts up a small FCGI server that you can connect nginx to.
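Here is a minimal sketch of what that looks like on the nginx side, assuming php-cgi is already listening on 127.0.0.1:9000 and the site lives under /var/www/example (the port and paths are placeholders, adjust to your setup):

server {
    listen      80;
    server_name example.com;                # hypothetical host
    root        /var/www/example;

    # Hand every .php request to the php-cgi FastCGI backend
    location ~ \.php$ {
        include       fastcgi_params;       # the stock CGI-style variables
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  127.0.0.1:9000;
    }
}

Anything that isn't PHP is served by nginx itself (or, in your case, never reaches the box because the CDN handles static content).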

This separation lets you do clever things:

  1. Most dynamic languages (PHP, Perl, Ruby, Python) have a way to run FCGI applications.
  2. You can run dynamic content as different accounts, or even under chrooted paths. On a VPS I manage for a few friends, every user has their own FCGI server running under their own account. If their software is compromised, the attacker can only get as far as that user account.
  3. It encourages an easy scale-out path for most applications. nginx on any given server can probably handle more static load than dynamic load. You can add multiple hosts to an upstream section in nginx, and just keep adding backends as needed (scaling your database and filesystem is left up to you, however). See the sketch after this list.
  4. Using multiple ports in an upstream section, with a single host, lets you restart the web app without incurring any downtime: (1) start php-fcgi on another port, (2) stop php-fcgi on the original port. nginx will automatically redirect requests from one port to the other.
  5. Better memory utilization. With Apache/mod_security/mod_php, each Apache process has all those modules loaded in memory. While there is some copy-on-write memory shared between processes, once a process changes a page, that page is copied. By separating these tasks, nginx can keep a fairly tiny memory footprint, you can set up a dedicated web application firewall (an IPS/IDS device or dedicated reverse proxy server), and you can manage the memory policy of your PHP application, all separately.
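A rough sketch of points 3 and 4 (the upstream name, ports, and the commented-out remote host are all made up for illustration):

upstream php_backends {
    server 127.0.0.1:9000;       # first php-fcgi instance
    server 127.0.0.1:9001;       # second instance; add lines here to scale out
    # server 10.0.0.2:9000;      # backends on other hosts work the same way
}

server {
    server_name example.com;
    root        /var/www/example;
    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  php_backends;  # requests are spread over the group, and a
                                     # backend that stops answering is skipped
    }
}

For the no-downtime restart, bring php-fcgi up on the spare port first, then stop the old instance; nginx keeps routing to whichever backend is answering.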

Updated: Per comments below, here are some links:

fastcgi_pass param - This is how you instruct nginx to pass a request to a FastCGI server. FastCGI works by passing variables (that intentionally look like CGI environment variables), which let you communicate any arbitrary data from the front end to the backend. In the Debian distribution (and in the source distribution too, IIRC) there is a fastcgi.conf file that includes all the default parameters most toolkits need to get off the ground.
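As a sketch of how arbitrary data travels over that channel (APP_ENV is a made-up parameter name, used purely for illustration):

location ~ \.php$ {
    include       fastcgi_params;             # the default CGI-style variables
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param APP_ENV         production; # any extra name/value pair you like
    fastcgi_pass  127.0.0.1:9000;
}

On the PHP side the extra parameter shows up in $_SERVER['APP_ENV'].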

Upstream Module - The upstream module allows you to define multiple upstream servers, which can be other web servers, FastCGI servers, or whatnot. The fastcgi_pass documentation includes a short example that uses upstream. Note that on a single-host system you can even use unix domain sockets and incur no TCP/IP overhead!
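A short sketch of the socket variant (the socket path is an assumption; use whatever path your php-fcgi process binds to):

upstream php_local {
    # single-host case: a unix domain socket instead of a TCP port
    server unix:/var/run/php-fcgi.sock;
}

A fastcgi_pass php_local; inside the PHP location then talks to the socket exactly as it would to a TCP backend.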

PHPFCGI Example - This outlines a complete sample configuration. I am personally a fan of daemontools (or runit, if you're not a djb fan), and have written very simple wrappers to run php-fcgi under process supervision (which restarts it if it terminates abnormally), but the script provided on that page is a SysV-style script you can drop into /etc/init.d/ and link from the appropriate /etc/rcX.d/ directories. In that script there are a few variables you can tweak to adjust the environment your FCGI application runs under.

Virtual Hosting is facilitated with 'server' sections:

server { 
     server_name www.host.com host.com other_aliases;
     ... 
}
server { 
     server_name www.host2.com host2.com other_aliases;
     ...
}
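To tie this back to point 2 above, each server section can hand PHP off to its own backend, for example a php-fcgi instance running as that site's user (the hostnames and socket paths below are invented for the sketch):

server {
    server_name www.host.com host.com;
    root        /home/host/www;
    location ~ \.php$ {
        include       fastcgi.conf;                    # SCRIPT_FILENAME and friends
        fastcgi_pass  unix:/home/host/php-fcgi.sock;   # php-fcgi running as user "host"
    }
}
server {
    server_name www.host2.com host2.com;
    root        /home/host2/www;
    location ~ \.php$ {
        include       fastcgi.conf;
        fastcgi_pass  unix:/home/host2/php-fcgi.sock;  # php-fcgi running as user "host2"
    }
}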

See the section on server_name for additional details; the rest of that page has a lot of information on how the core HTTP module can be configured.

In terms of security, Igor (the lead developer) takes security seriously and frequently participates on the very active mailing list. Here's a list of acknowledged security problems, and here's a link to the mailing list archive.