On my CentOS system varnishd is /usr/sbin/varnishd. Check that your PATH has /usr/sbin in it:
$ echo $PATH
/usr/local/bin:/usr/bin:/sbin:/bin
$ export PATH=$PATH:/usr/sbin
$ which varnishd
/usr/sbin/varnishd
I suspect your Varnish cache is not caching anywhere near enough of the hits. Here's what I would do in your situation:
Lower PHP max children to 100 or even 50 (if Varnish does its job properly you don't need that many).
Also remove the max requests line, so PHP processes don't respawn too quickly; rapid respawning keeps clearing APC, which is also bad.
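As a sketch, the corresponding php-fpm pool settings might look like this (the pool name and values are illustrative; tune them for your own box):

```ini
; Hypothetical php-fpm pool fragment -- adjust for your setup
[www]
pm = dynamic
pm.max_children = 50       ; lowered, since Varnish should absorb most hits
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
; pm.max_requests deliberately left unset so workers are not
; recycled constantly, which would keep clearing APC
```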
Also, "if" is discouraged by the nginx developers - see http://wiki.nginx.org/IfIsEvil
I would change this line:
if (!-e $request_filename) {
rewrite ^(.+)$ /index.php?q=$1 last;
}
to:
try_files $uri $uri/ /index.php?$args;
Use this if your version of nginx supports it (almost certainly the case for any nginx newer than 0.7.51).
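Put together, the replacement might look like this in your vhost (a sketch; the surrounding location block is an assumption about where the if/rewrite currently lives):

```nginx
# Sketch: replaces the if/rewrite block above
location / {
    try_files $uri $uri/ /index.php?$args;
}
```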
You should also look at inserting the W3TC nginx rules directly into your vhost file to enable proper disk-enhanced page caching (which, with nginx, is faster than APC-based caching).
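For reference, W3TC's generated page-cache rules look roughly like the sketch below (W3TC writes the exact rules to a file in your WordPress root; the cache path assumes the default "page_enhanced" disk layout). Note that these if blocks only set a variable, which is one of the safe uses described on the IfIsEvil page:

```nginx
# Illustrative sketch of W3TC disk-enhanced page cache rules
set $cache_uri $request_uri;

# Bypass the page cache for POST requests and query strings
if ($request_method = POST) {
    set $cache_uri 'null cache';
}
if ($query_string != "") {
    set $cache_uri 'null cache';
}

# Bypass for logged-in users and recent commenters
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
    set $cache_uri 'null cache';
}

location / {
    try_files /wp-content/cache/page_enhanced/$host/$cache_uri/_index.html
              $uri $uri/ /index.php?$args;
}
```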
Take a look at the following Varnish VCL, which I use for my sites. You will need to read through it and edit a few things for your website. It also assumes the server hosts only WordPress sites, and only one site; it can easily be modified for more sites (take a look at the cookie section).
Generic VCL: https://gist.github.com/b7332971a848bcb7ecef
With this config I would argue for removing fastcgi_cache, to avoid building a cache chain in which locating stray stale cache entries becomes difficult.
Also tell W3TC that Varnish is at 127.0.0.1 and it will purge the cache for you ;)
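For the purge to work, the VCL needs to accept PURGE requests from the local W3TC install. A minimal sketch (the ACL name is my choice; check it against the linked gist):

```vcl
# Sketch: allow HTTP PURGE from the local W3TC install only
acl purge {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        # hand off to lookup; the hit/miss subroutines then invalidate the object
        return (lookup);
    }
}
```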
I deployed this (with a few domain-specific modifications) on Wednesday evening to a server that was handling 2,500 active site visitors. It reduced the load to less than 1, and the number of running PHP children hovered around 10-20 (that number depends on how many users are logged in and on other factors, such as cookies). That server did have much more RAM, but the principle is the same: you should easily be able to handle the number of visitors you get at peak.
Best Answer
Yes, it's normal. One process handles normal connections; the other handles the admin interface.
You can check which is which by using netstat (e.g. netstat -lnp | grep varnish): one binds to the admin port (6082) on the loopback interface, while the other binds to the main port (80 in my case, and in most setups).