You really shouldn't put it on NFS. That will make your NFS server a single point of failure.
If your NFS server is unreachable (maybe it crashes, maybe there's a network problem...), then your whole site is offline.
Instead, store your PHP in some central location - anywhere. It doesn't matter where.
Then when you need to push your files to a new server, just copy them to the new server.
You will have to make sure your users do not update the PHP on the web servers directly.
They should update them in the central location.
For example, let's say you have web1 and web2 as your web servers, and your PHP files are kept in /home/foo/php and /home/bar/php. This is currently your authoritative location and probably needs to change. Set up something like /var/code/foo/php and /var/code/bar/php, and tell your users to update there. When they do an update, run a script which copies the files to /home/foo/php and /home/bar/php on each web server.
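Something like this rsync-based push script would do it (a minimal sketch only; the host names and paths are the ones from this example, and it assumes rsync and SSH access to the web servers):

#!/bin/sh
# Push the authoritative copy in /var/code out to each web server.
for host in web1 web2; do
    rsync -az --delete /var/code/foo/php/ "$host":/home/foo/php/
    rsync -az --delete /var/code/bar/php/ "$host":/home/bar/php/
done

The --delete flag keeps the web servers as exact mirrors, so files removed from the central location disappear from the servers too.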
Part of the difficulty you're having now, and will continue to have, is that you're storing data in users' home directories.
You should consider moving to a proper version control system like Mercurial, Git or SVN. Store all of the code in one repository, and then check out a working copy wherever you need it.
An example of this:
Use a server (let's call it dev1) and install SVN. Create a new project in SVN (let's call it php_project1). Tell your users to check out svn://dev1/php_project1/userN, and to edit and commit the files to SVN.
Then on your web servers, you can do something like:
cd /home/user1/php
svn checkout svn://dev1/php_project1/user1 .   # the trailing dot checks out into this directory itself
And any time you need to update your web servers, you just run svn update in that directory.
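If you have more than a couple of web servers, a small loop saves you from logging in to each one (a sketch, assuming SSH access and the host names from before):

for host in web1 web2; do
    ssh "$host" 'cd /home/user1/php && svn update'
done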
There's a lot more I could go into. Welcome to managing a web farm.
Reverse proxying is slower and generally worse. However, some reasons to use it are to maintain (some) compatibility with .htaccess files (which you would otherwise have to rewrite as nginx rules, and that isn't always practical) or if you require specific apache modules. (Some may argue that if you have these requirements, it is easier to just use apache.)
- PHP-FPM with nginx would be the preferred solution - you get the fast static file serving of nginx and good PHP performance without adding additional overhead, proxying, or the (typically) significant memory usage of apache.
- nginx+PHP-FPM is (typically) faster and uses less memory. Nginx + Apache + FastCGI/FPM will still serve static files fast, but will have additional overhead on the dynamic files (not as bad as mod_php, but worse than if you eliminate apache).
- You will need a bit of both - nginx needs to know how to deal with the paths (e.g. serving static files, denying access to .htaccess) and apache needs to know how to handle the PHP. In some cases, if your .htaccess rules don't pertain to static files (so every request that needs rewriting goes to apache anyway), it may be acceptable to simply deny access to certain locations in nginx and have apache do the rest via .htaccess - it isn't ideal, costs a bit in performance, and its reliability is questionable - but it can work on a simple setup (see the sketch after this list).
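As a rough illustration of that last arrangement, the nginx side could look something like this (a sketch only; the docroot, the file extensions and apache listening on port 8080 are assumptions you'd adjust):

server {
    listen 80;
    root /var/www/example;   # must match apache's docroot

    # never serve .htaccess (or other dotfiles) directly
    location ~ /\.ht {
        deny all;
    }

    # static files are served straight from nginx
    location ~* \.(css|js|gif|jpe?g|png|ico)$ {
        expires 30d;
    }

    # everything else (PHP, rewrites) is proxied to apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}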
If you can, go with the straight nginx+PHP-FPM setup. If you can't, while there may be some merits to reverse-proxying, think through the repercussions, especially if you are dependent on .htaccess files.
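For reference, a minimal nginx+PHP-FPM server block looks something like this (paths and the FPM listen address are assumptions; the try_files line covers the common "route everything to index.php" rewrite that many .htaccess files implement):

server {
    listen 80;
    root /var/www/example;   # hypothetical docroot
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;   # PHP-FPM listening on TCP
    }
}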
Your main Nginx should act as a reverse proxy and forward HTTP requests to the respective web server of each app. If the main reverse proxy had file-level access to the apps' jails, you could use UNIX sockets to communicate with their web servers, but in your case you have no choice but to use TCP.
When using TCP, make sure to set the keepalive parameter so that a number of connections stays open at all times and you don't pay the cost of opening and closing a connection on each request. The parameter's argument is the number of idle connections to keep open; something like 10 seems enough. Inside your jails, the web server should use UNIX sockets to communicate with its PHP-FPM for better performance (TCP has more overhead than a UNIX socket, so use the latter wherever possible).
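In the main reverse proxy's configuration, that looks roughly like this (the jail address is an assumption; note that upstream keepalive also requires HTTP/1.1 and a cleared Connection header on the proxied requests):

upstream jail_app1 {
    server 192.168.0.101:80;   # the in-jail web server
    keepalive 10;              # keep up to 10 idle connections open
}

server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://jail_app1;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # don't forward "Connection: close"
    }
}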
Finally, I see no major security issue in having the main reverse proxy communicate directly with the in-jail PHP-FPMs, but that would mean configuring the main reverse proxy according to each in-jail PHP-FPM. That's something I'd rather avoid: I would prefer the jails to be self-contained, exposing a single HTTP endpoint on a default port, with the in-jail Nginx handling all the PHP-FPM stuff. If there's something you need to change in regards to PHP-FPM, you just do it in the jail without touching your main Nginx reverse proxy.
Also, I suggest you try an even lighter web server for the jails, like Lighttpd, since you really don't need many features in there, and even though Lighty's configuration syntax is absolutely horrible, it shouldn't be a problem.
About your last comment
The keepalive parameter I mentioned should be set in the upstream block of the main Nginx reverse proxy. It only affects the reverse-proxy <-> in-jail server communication and has nothing to do with HTTP keep-alive between clients and your server. Keep-alive between browsers and servers should be handled at the last endpoint on your side, which is the reverse proxy. Cache-control headers, on the other hand, are app-dependent (different apps may need different settings) and should be set individually in each app's jail. Try to put as many settings as possible in the jails, and only touch the reverse proxy's configuration for connection-level concerns (HTTP keep-alive, TLS, etc.).
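Inside a jail, that could look something like this (a sketch; the docroot, socket path and cache lifetime are assumptions):

server {
    listen 80;
    root /var/www/app;   # hypothetical in-jail docroot
    index index.php;

    # app-specific cache-control lives here, not in the reverse proxy
    location ~* \.(css|js|png|jpe?g|gif)$ {
        expires 7d;   # emits the matching Cache-Control: max-age header
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;   # UNIX socket to the in-jail PHP-FPM
    }
}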