You really shouldn't put it on NFS. That will make your NFS server a single point of failure.
If your NFS server is unreachable (maybe it crashes, maybe there's a network problem, and so on), then your whole site is offline.
Instead, store your PHP in one central location (it doesn't matter where). Then, when you need to push your files to a new server, just copy them over.
You will have to make sure your users do not update the PHP on the web servers directly.
They should make their updates in the central location instead.
For example, let's say you have web1 and web2 as your web servers, and your PHP files are kept in /home/foo/php and /home/bar/php. Those directories are currently your authoritative location, and that probably needs to change. Set up something like /var/code/foo/php and /var/code/bar/php, and tell your users to update there. When they do an update, run a script which copies the files to /home/foo/php and /home/bar/php on each web server, as sketched below.
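A minimal sketch of such a push script, assuming passwordless SSH from the central host to the web servers (the hostnames web1 and web2 and the paths are just the ones from this example):

#!/bin/sh
# Push the authoritative copy of each user's code out to every web server.
for host in web1 web2; do
  for user in foo bar; do
    rsync -az --delete "/var/code/$user/php/" "$host:/home/$user/php/"
  done
done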
Part of the difficulty you're having now, and will continue to have, is that you're storing data in users' home directories.
You should consider moving to a proper version control system like Mercurial, Git or SVN. Store all of the data in one place, and then check out the code wherever you need it.
An example of this:
Use a server (let's call it dev1) and install SVN. Create a new project in SVN (let's call it php_project1). Tell your users to check out svn://dev1/php_project1/userN, and to edit and commit the files there. (The server-side setup is sketched below.)
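On dev1 itself, the setup might look something like this (a sketch; the /var/svn path and the use of svnserve rather than Apache are my assumptions):

# Create the repository and serve it over svn://
svnadmin create /var/svn/php_project1
svnserve -d -r /var/svn
# Give each user a directory inside the project
svn mkdir svn://dev1/php_project1/user1 -m "Add user1's directory"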
Then on your web servers, you can do something like:
cd /home/user1/php
svn checkout svn://dev1/php_project1/user1 .  # note the "." so the checkout lands in the current directory
And any time you need to update your web servers, you just run svn update in that directory.
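If you'd rather not do that by hand, a cron entry on each web server can keep the checkout fresh (a sketch; the five-minute interval and the /etc/cron.d location are assumptions):

# /etc/cron.d/php-update: refresh user1's working copy every five minutes
*/5 * * * * user1 svn update -q /home/user1/php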
There's a lot more I could go into. Welcome to managing a web farm.
Can VMware ESXi handle multiple vCPUs better than VMware Server 1 could over three years ago?
Yes, they're totally separate product lines, and ESXi handles multiple vCPUs very well. I'm running a five-node VMware farm right now with a mix of machines on each box, each with 1, 2, or in some cases 4 vCPUs as needed, and it works very well indeed.
Which will perform better, a single large webserver VM with 2 - 4 vCPUs, or 4 VMs with a single vCPU, in a load-balanced configuration?
That's less of a VMware question and more of an application/website testing question. VMware (or any other modern bare-metal hypervisor) can handle either approach. There may be a cost to attaching many vCPUs to one guest, but that may be offset by the resource overhead of running multiple guests, and by performance issues (or improvements, for that matter) your web application might realise from a more distributed approach.
On that last point, it's really a matter of testing and architectural preference rather than one true way, but my general feeling is that if you can distribute the application over more than one virtual machine easily, it's better to build a few smaller virtual machines than one big one. That gives you more flexibility and helps performance in the long run, since you can move things around if you have to. In other words, scale outwards rather than upwards.
With physical machines, we tend to think of scaling upwards, putting as much as possible into one OS instance to get the most out of the cost of the licence and the hardware. With virtual machines those equations change a bit: if you can get more than one virtual OS instance for the price of a licence, that might change the way your server infrastructure is designed.
Edit: http://blog.peacon.co.uk/understanding-the-vcpu/ is an article Iain mentioned while discussing this question in chat; it explains in some detail why it's important to think about vCPU allocation when creating your VMs.
Best Answer
I'll outline a couple of options, to show alternatives to the live code via NFS approach.
The "best" strategy depends on your chosen code deployment strategy, possibly one of these:
The "best" strategy also depends on the availability requirements for the application:
Out of the above options, I would personally choose the package deployment approach. However, for web requests, sharing a small number of files over NFS can work well: The latency introduced by NFS is small compared to the Internet. But before you do this, consider the disadvantages:
Because of those possible snags (an IO-bound fileserver, and the fileserver as a single point of failure), I'd also suggest periodically syncing the webservers with the application. Pushing changes to several machines should only take a couple of seconds. If required, you could set up some cutover logic along the lines of: if (time() > 23:59:00) { use software in dir B } else { use software in dir A }. This is useful when all machines must run the same software version, e.g. if you've just changed a database schema. (One way to do that is sketched below.)
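One way to realise that cutover, rather than checking the clock inside the application, is to point a symlink at the active release and flip it with cron at the agreed moment (a sketch; the paths and the 23:59 time are assumptions):

# On every webserver, /var/www/current points at the live release.
# At 23:59 the symlink is flipped to release B:
59 23 * * * root ln -sfn /var/www/release-B /var/www/current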
In any case, a couple of seconds' delay during deployment really isn't too bad. A developer working on a live system would certainly notice the delay, but then developers shouldn't be editing live systems anyway.
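For completeness, the package deployment approach I mentioned can be as simple as wrapping the application directory in a .deb and installing it on each webserver, for example with the third-party fpm tool (a sketch; the package name, version, and paths are all assumptions):

# Build a .deb whose contents land under /var/www/app
fpm -s dir -t deb -n myapp -v 1.0 --prefix /var/www/app .
# Push it out and install it on each webserver
for host in web1 web2; do
  scp myapp_*.deb "$host:/tmp/" && ssh "$host" "sudo dpkg -i /tmp/myapp_*.deb"
done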