I hate to say it, but you appear to be asking the wrong question.
It's not about stopping Apache from bringing down your server; it's about having your webserver serve more queries per second - enough that you don't have a problem. Part of the answer to the reframed question is then limiting Apache so that it does not crash at high loads.
For the second part of that, Apache has some limits you can set - MaxClients being an important one. It limits how many child processes Apache is allowed to run. If you can take long-running work off Apache (large files being downloaded, for example), each one frees up a slot that can serve PHP. If the file downloads have to be authorised by the PHP layer, it can still do that, then hand the transfer back out to a webserver that is more optimised for static content, such as with Nginx's sendfile support.
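As an illustration, prefork MPM limits like these cap how many children Apache will fork - the values here are placeholders, so size them to your RAM and per-process memory footprint:

```apache
# Illustrative prefork limits -- tune to your own memory budget
<IfModule mpm_prefork_module>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    ServerLimit         150
    MaxClients          150
    MaxRequestsPerChild 4000
</IfModule>
```

A rough sizing rule: MaxClients times the resident size of one PHP-loaded child should stay comfortably below the RAM you can spare, or the box will swap under load - which is exactly the "bringing down your server" failure mode.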
Meanwhile, forking a new process on every single request - running PHP as a CGI, the slowest way to run it, whatever Apache MPM you may be using - has the machine spending large amounts of time not running your code. mod_php is significantly more optimised.
PHP can do huge amounts of traffic when Apache and the PHP layer are appropriately optimised. Yesterday, 11th Dec 2010, for example, the pair of PHP servers that I run did almost 19 Million hits in the 24hr period, and most of that in the 7am-8pm time-period.
There are plenty of other questions here, and articles elsewhere, about optimising Apache and PHP. I think you need to read them first, before blaming Linux, Apache & PHP.
To begin with, the "canonical" reference for KVM hypervisor tuning is still IBM's excellent Best Practices for KVM which I suggest you go through point-by-point.
Some things you will almost certainly want to do, after carefully testing with your intended workload:
Use virtio drivers in your Windows guests. You should already be doing this; if you aren't, this will give you a very noticeable speedup. Linux guests should automatically use virtio from installation, though if you are virtualizing very old Linux systems, double check them.
Dump BFS. It was designed for low-latency desktop loads on low-end hardware and its author admits that it will "not scale to massive hardware". Doesn't inspire confidence.
Dump BFQ/CFQ. Virtually everyone gets the highest performance with the deadline I/O scheduler, and while you should test, you likely won't be an exception.
Make sure kernel samepage merging is running, and tune it appropriately. This can significantly reduce memory requirements on your hypervisor, especially when multiple guests run the same OS.
When using local storage, use raw block devices, such as LVM logical volumes, rather than image files. This removes a layer of abstraction from disk I/O.
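A minimal sketch of what the virtio and raw-block-device points look like in a libvirt domain definition - the volume group and LV names are made up for illustration:

```xml
<!-- Illustrative disk stanza: raw LVM logical volume on the virtio bus -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg_guests/guest1-root'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The `bus='virtio'` target gives the guest the paravirtualised block driver, and `type='raw'` on a bare logical volume skips the image-file layer entirely.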
There are many other things covered in IBM's guide I referred to earlier, but these should give you the most bang for your buck.
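On RHEL-family hypervisors the KSM tuning mentioned above is usually driven by the `ksmtuned` daemon. A hedged sketch of the relevant knobs - parameter names as shipped in EL6's `/etc/ksmtuned.conf`, values purely illustrative:

```shell
# Fragment of /etc/ksmtuned.conf (shell-style VAR=value syntax)
KSM_MONITOR_INTERVAL=60   # seconds between ksmtuned adjustments
KSM_SLEEP_MSEC=10         # how long ksmd sleeps between scans
KSM_NPAGES_MIN=64         # lower bound on pages scanned per cycle
KSM_NPAGES_MAX=1250       # upper bound on pages scanned per cycle
KSM_THRES_COEF=20         # start merging when free memory drops below this %
```

Scanning more pages per cycle merges memory faster at the cost of CPU on the hypervisor, so test against your real guest mix.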
I would really like to see you use RHEL/CentOS 6.3. Version 6.2 of EL was short-lived, and many of the bugfixes and enhancements were targeted for the release of newer point-release kernels. Red Hat/CentOS make this extremely clear, since there are NO updates for 6.2, and the packages are only available in the vault archive.
Either way, the tool you should make use of is the tuned and tuned-adm framework. Some of this is detailed here, in this question, with a more storage-focused answer here.
`tuned-adm` allows you to apply profiles to the system on-the-fly. Enabling a profile with `tuned-adm profile enterprise-storage`, for instance, will apply the changes in the last column in the graph below, including remounting the filesystems with `nobarrier` and changing the I/O scheduler to deadline across the available block devices. Unfortunately, the `virtual-guest` profile only comes in EL 6.3 or newer... :( Another reason to upgrade...

In the end, `tuned` is a daemon, so it can be stopped/restarted on the fly. Just reload/reapply the service when a new FS is mounted; it will take care of the rest. You can also create your own profile with `sysctl.conf` and other performance settings...
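To sketch the custom-profile part: EL6 keeps each profile as a directory under `/etc/tune-profiles/`. The profile name, the sysctl value, and the `PROFILES` override below are my own assumptions for illustration - on a real box you would work directly in `/etc/tune-profiles` as root:

```shell
# Each tuned profile is a directory; clone an existing one as a starting point.
# PROFILES defaults to a scratch path here so the sketch is safe to run.
PROFILES=${PROFILES:-/tmp/tune-profiles}
mkdir -p "$PROFILES/my-virt-host"
# Seed it from enterprise-storage if that profile is installed
cp -r /etc/tune-profiles/enterprise-storage/. "$PROFILES/my-virt-host/" 2>/dev/null || true
# Add your own overrides to the profile's sysctl file
echo 'vm.swappiness = 10' >> "$PROFILES/my-virt-host/sysctl.ktune"
# Then, as root, activate it:
#   tuned-adm profile my-virt-host
```

Activating the profile restarts the tuned/ktune machinery, so the settings are reapplied on the fly without a reboot.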