PHP – Share common web application code across many web servers

load-balancing, PHP, web-server

What is the best way to share common web application code across many web servers?

For example, I have a directory containing a web application written in PHP.
Now I have multiple machines that will serve the same content; each runs a web server, load balanced using a TCP/IP load balancer.

I haven't tried NFS yet for this scenario. But according to Slow NFS transfer performance of small files, transfer performance will be slow because of the many small files involved.

EDIT: I want to store all files centrally in a single location, so that updates to the code are picked up by the webservers instantly.

Best Answer

I'll outline a couple of options to show alternatives to the live-code-over-NFS approach.

The "best" strategy depends on your chosen code deployment strategy, possibly one of these:

  • Editing on live system - developer changes/updates application in one central place, live machines then pick up the change (which sounds like your scenario)
  • Code exported from source control - developer makes changes to a source control system, then runs a program to export the code to all machines in the load balancing pool (or waits for the machines to perform the update themselves); see the push sketch after this list
  • Code deployed via package system - developer makes changes to a source control system and creates software packages (e.g. via APT or YUM) stored in a repository. Developer runs a program to force the load balanced machines to install the new software from the repository.
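
As a rough illustration of the "export the code to all machines" step, here is a minimal push sketch in PHP. It assumes rsync over SSH, and the hostnames and paths (web1.example.com, /var/deploy/app/, /var/www/app/) are placeholders; it is not a complete deployment tool.

```php
<?php
// deploy.php - push a central copy of the code to every web server in the pool.
// Hostnames and paths below are placeholders; rsync/SSH access is assumed.

$source  = '/var/deploy/app/';   // central export of the code (e.g. from source control)
$target  = '/var/www/app/';      // document root on each web server
$servers = ['web1.example.com', 'web2.example.com', 'web3.example.com'];

foreach ($servers as $server) {
    // -a preserves permissions/timestamps, -z compresses, --delete keeps
    // each server identical to the central copy.
    $cmd = sprintf(
        'rsync -az --delete %s %s',
        escapeshellarg($source),
        escapeshellarg($server . ':' . $target)
    );
    passthru($cmd, $exitCode);
    if ($exitCode !== 0) {
        fwrite(STDERR, "Push to $server failed (exit $exitCode)\n");
        exit(1);
    }
}

echo 'Pushed to ' . count($servers) . " servers\n";
```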

The "best" strategy also depends on the availability requirements for the application:

  • Theoretical 100% uptime - the application is still available while software updates are made (works with your suggested approach, but watch for atomicity problems: during an update, your PHP application may include_once() a mixture of old and new files; see the symlink sketch after this list)
  • Scheduled maintenance accepted - the application is briefly taken offline while software updates are made (no atomicity problems)
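
One common way to avoid serving a mixture of old and new files during an update is to deploy each release into its own directory and switch a symlink atomically. The sketch below assumes the web server's document root points at /var/www/app/current; the paths are placeholders, and PHP's realpath/opcode caches may delay the switch by a few seconds.

```php
<?php
// switch_release.php - atomically repoint the "current" symlink at a new release,
// so a request sees either the old tree or the new tree, never a mixture.
// Paths are placeholders; the web server's document root is the "current" symlink.

$releasesDir = '/var/www/app/releases';
$currentLink = '/var/www/app/current';

// 1. Each deployment is copied into its own timestamped directory beforehand.
$newRelease = $releasesDir . '/' . date('YmdHis');
// ... copy or rsync the new code into $newRelease here ...

// 2. Create a temporary symlink, then rename() it over the live one.
//    rename() is atomic on POSIX filesystems.
$tmpLink = $currentLink . '.tmp';
if (is_link($tmpLink) || file_exists($tmpLink)) {
    unlink($tmpLink);
}
symlink($newRelease, $tmpLink);
rename($tmpLink, $currentLink);

echo "Now serving $newRelease\n";
```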

Out of the above options, I would personally choose the package deployment approach. However, for web requests, sharing a small number of files over NFS can work well: the latency introduced by NFS is small compared to the latency of the Internet. But before you do this, consider the disadvantages:

  • Consider which system resources are being balanced: If you are using multiple machines to balance CPU usage, then the setup may work well. However, if you're using multiple machines to balance IO, then shifting that IO onto a single fileserver via NFS (or otherwise) may not be sensible.
  • Consider failure of the fileserver: If the machine holding the files dies, it may take your whole application offline (as the webservers are unable to read the files). In this case, the webservers would tend to lock up waiting for NFS to recover.

Because of these possible snags (IO-bound fileserver, failure of the fileserver), I'd also suggest periodically syncing the application code to the webservers. Pushing changes to several machines should only take a couple of seconds. If required, you could set up some logic like: if (time() > 23:59:00) {use software in dir B} else {use software in dir A}. This could be useful if all machines must run the same software version, e.g. if you've just changed a database schema.
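
In PHP, that cutover logic could look something like the front controller below; the cutover time and directory names are placeholders, and the only point is that every machine switches at the same agreed moment even if the files were pushed a few seconds apart.

```php
<?php
// index.php - pick the application directory based on an agreed cutover time,
// so all load-balanced machines switch to the new version simultaneously.
// The timestamp and directory names are placeholders.

$cutover = strtotime('today 23:59:00');   // same agreed time on every machine

if (time() >= $cutover) {
    $appDir = '/var/www/app/dir_B';       // new version (e.g. matches the new DB schema)
} else {
    $appDir = '/var/www/app/dir_A';       // old version
}

require $appDir . '/bootstrap.php';
```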

A couple of seconds' delay during deployment really isn't too bad. A developer working on a live system would certainly notice the delay, but then developers shouldn't be editing live systems anyway.