serving static content -- a huge set of large images. Minimal features needed, just as fast as possible.
nginx is the current favored choice. Lighttpd still works fine, but is less actively developed nowadays. LiteSpeed is also a good choice, and may be the best if you want commercial support or a nicer GUI. All of these are very fast; raw speed will not be a meaningful competitive differentiator within this group of webservers.
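For the static-image case, the nginx side is just a plain server block. A minimal sketch (hostname, docroot, and the cache/expiry numbers are only placeholders to tune for your traffic):

server {
    listen      80;
    server_name images.example.com;            # placeholder hostname
    root        /var/www/images;               # placeholder docroot

    sendfile    on;                            # kernel-side file copies, no userspace buffering
    tcp_nopush  on;                            # send headers and the start of the file in one packet

    open_file_cache max=10000 inactive=60s;    # cache descriptors/metadata for frequently hit files

    location / {
        expires    30d;                        # let browsers/CDNs cache the images
        access_log off;                        # skip per-request logging for static hits
    }
}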
dispatching dynamic content plugins -- think a web server that does on-the-fly watermarking or image transcoding. I'm looking for the fastest, lowest-overhead way of dispatching this.
Hmm, a custom extension module for nginx is the lowest-overhead option, but writing modules in C/C++ is seriously time-consuming. The OP says "any language acceptable"; if that's the case, then nginx with a C extension, or maybe Apache with a C extension to benefit from Apache's richer set of modules and documentation.
But realistically, who writes C code today for non-mass-market products? I would consider Python with Tornado, or a similar event-driven webserver in a high-level language, to be a better match.
nginx runs dynamic content by 'reverse proxying' to a FastCGI server. The php-cgi package in most distributions includes FastCGI mode, where PHP starts up a small FCGI server that you can connect nginx to.
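As a rough sketch of that setup (assuming php-cgi is listening in FastCGI mode on 127.0.0.1:9000; hostname and docroot are placeholders):

server {
    listen      80;
    server_name www.example.com;          # placeholder hostname
    root        /var/www/app;             # placeholder docroot

    location ~ \.php$ {
        include      fastcgi.conf;        # ships with nginx; sets SCRIPT_FILENAME, QUERY_STRING, etc.
        fastcgi_pass 127.0.0.1:9000;      # the php-cgi FastCGI server started above
    }
}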
This separation lets you do clever things:
- Most dynamic languages (php, perl, ruby, python) have a way to run FCGI applications
- You can run dynamic content under different accounts, or even under chrooted paths. On a VPS I manage for a few friends, every user has their own FCGI server running under their own account. If their software is compromised, the attacker can only get as far as that user account.
- It encourages an easy scale-out path for most applications. nginx on any given server can probably handle more static load than dynamic load. You can add multiple hosts to an upstream section on nginx, and just keep adding backends as needed (scaling your database + filesystem is left up to you, however)
- Using multiple ports in an upstream section, even on a single host, lets you restart the web app without incurring any downtime: (1) start php-fcgi on another port, (2) stop php-fcgi on the original port. nginx will automatically redirect requests from one port to the other (see the sketch just after this list)
- Better memory utilization. With Apache/mod_security/mod_php, each Apache process has all those modules loaded in memory. While there is some copy-on-write memory shared between processes, once a process changes a page, that page is copied. By separating these tasks, nginx can keep a fairly tiny memory footprint, you can set up a dedicated web application firewall (IPS/IDS device, dedicated reverse proxy server), and you can manage the memory policy of your PHP application, all separately.
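The restart-without-downtime trick from the list above looks roughly like this (the ports are arbitrary; nginx just needs at least one live backend in the upstream while you cycle the other):

upstream php_backend {
    server 127.0.0.1:9000;            # original php-fcgi instance
    server 127.0.0.1:9001;            # spare port: start the new instance here, then stop the old one
}

server {
    listen      80;
    server_name www.example.com;      # placeholder hostname

    location ~ \.php$ {
        include      fastcgi.conf;
        fastcgi_pass php_backend;     # requests fall back to whichever backend is still up
    }
}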
Updated: Per comments below, here are some links:
fastcgi_pass param - This is how you instruct nginx to pass a request to a FastCGI server. FastCGI works by passing variables (which intentionally look like CGI environment variables), and it lets you communicate arbitrary data from the front end to the backend. In the Debian distribution (and in the source distribution too, iirc) there is a fastcgi.conf file that includes all the default parameters most toolkits need to get off the ground.
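To pass something of your own alongside those stock parameters, you add extra fastcgi_param lines (WATERMARK_TEXT below is a made-up name, just to show the mechanism):

location ~ \.php$ {
    include        fastcgi.conf;                 # the default parameters (SCRIPT_FILENAME, REQUEST_METHOD, ...)
    fastcgi_param  WATERMARK_TEXT "example";     # made-up custom variable, visible to the backend as a FastCGI param
    fastcgi_pass   127.0.0.1:9000;
}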
Upstream Module - The upstream module allows you to define multiple upstream servers, which can be other web servers, FastCGI servers, or whatever else. The fastcgi_pass documentation includes a short example that uses upstream. Note that on a single-host system you can even use unix domain sockets and incur no TCP/IP overhead!
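A single-host setup over a unix socket might look like this (the socket path is a placeholder; use whatever you told php-fcgi to bind to):

upstream php_backend {
    server unix:/var/run/php-fcgi.sock;   # placeholder path; skips the TCP/IP stack entirely
}

location ~ \.php$ {
    include      fastcgi.conf;
    fastcgi_pass php_backend;
}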
PHPFCGI Example - This outlines a complete sample configuration. I personally am a fan of daemontools (or runit, if you're not a djb fan), and have written very simple wrappers to run php-fcgi under process supervision (so it restarts if it terminates abnormally), but the script provided on that page is a SysV-style script you can toss into /etc/init.d/ and link from the appropriate /etc/rcX.d/ directories. In the script on that page there are a few variables you can tweak to adjust the environment your FCGI application runs in.
Virtual Hosting is facilitated with 'server' sections:
server {
    server_name www.host.com host.com other_aliases;
    ...
}
server {
    server_name www.host2.com host2.com other_aliases;
    ...
}
See the section on server_name for additional details; the rest of that page has a lot of information on how the core HTTP module can be configured.
In terms of security, Igor (the lead developer) takes security seriously and frequently participates on the very active mailing list. Here's a list of acknowledged security problems, and here's a link to the mailing list archive.
Best Answer
Apache is a good base with mod_php, and adding APC for byte-code caching and some variable caching will help immensely; in fact, it's the most obvious thing you can do to speed up PHP script run-times (also, use YSlow to speed up the HTML front-end, and make sure the database is optimised).
There are a few suggestions I'd add though, such as avoiding serving the images and other static content from Apache. I've got a separate (sub-)domain with a dedicated image server (I use thttpd, but nginx is also entirely suitable). Serving the images from an entirely separate domain name (or a CDN) would be even better though.
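If you went the nginx route for that image host, the server block on its own (sub-)domain is tiny (names and paths are examples only):

server {
    listen      80;
    server_name img.example.com;    # example image (sub-)domain, kept free of application cookies
    root        /var/www/img;       # example docroot

    expires     max;                # the images never change, so cache them as hard as possible
    access_log  off;
}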
Nginx also has the advantage of being able to act as a proxy: it deals with the inbound connections and then spoon-feeds the results back out, which means the back-end producer processes of Apache2/mod_php can work at full local-network speed rather than having to wait for the web-browser clients to catch up.
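In that arrangement the nginx side is just a proxy_pass block in front of Apache (the backend port is whatever you bind Apache to locally):

server {
    listen      80;
    server_name www.example.com;                      # placeholder hostname

    location / {
        proxy_pass       http://127.0.0.1:8080;       # Apache2/mod_php bound to a local port
        proxy_set_header Host            $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffering  on;                          # nginx buffers the response and feeds slow clients itself
    }
}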
Varnish can perform additional work beyond what Nginx can do, but I don't know it so well; it might be you could use just one or the other, and it's unlikely you'd need both.