Tracking down WHAT is causing the problem can be a pain in the ass. The first thing I'd do with a problem like that is reduce MaxRequestsPerChild
to an aggressively low number (~100-200) and see if that makes a difference. If it does, then you probably have code that is leaking memory in a loop somewhere, and you'll want to run a code audit.
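For reference, a minimal sketch of that tweak in httpd.conf - the 150 is just an illustration, and note that on Apache 2.4 the directive was renamed MaxConnectionsPerChild:

    # httpd.conf -- recycle each prefork child aggressively so a
    # per-request leak can't accumulate for long
    <IfModule prefork.c>
        MaxRequestsPerChild 150
    </IfModule>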
Another thing to look at is Apache's full server status (mod_status, via apachectl fullstatus); see if you can work out which particular request is causing the memory leak. Get the PIDs of your suspect processes and attach strace to them.
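Something along these lines, assuming mod_status is enabled and that 12345 stands in for one of your suspect PIDs (both are assumptions on my part):

    apachectl fullstatus                 # needs mod_status and a text browser (lynx/links)
    strace -f -p 12345 -o leak.trace     # attach to the suspect child; 12345 is illustrative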
IIS6 CGI is typically one request -> one process, so 15 concurrent PHP-CGI processes most likely means 15 concurrent PHP-CGI requests. Either that, or you've got a high hang rate in the PHP processes and they're just not exiting properly.
On Windows, process startup isn't as cheap as it is on *nix (I'm told): Windows threads are lightweight and can simply be spun up within an existing process, but starting a whole new process is expensive.
Starting a new process for each incoming request can range from "expensive" to "disastrous". It could simply be that your load is elevated whenever you see 200 concurrent processes - i.e. you have 200 outstanding requests "in flight". At some point, performance drops to where new work arrives faster than the old work can possibly be completed, and if you're restarting the server to cope with that, you're just punishing the users - who may well retry the request straight away and add to the pile.
If your processes are loitering, your app might have a hanging bug too. But that's by the by.
Anyhoo, this is all a long-winded way of getting to: Have you tried FastCGI?
http://learn.iis.net/page.aspx/247/using-fastcgi-to-host-php-applications-on-iis-60/
FastCGI on IIS lets you reuse an existing pool of long-lived processes: instead of each request starting a new process, being processed, and having that process exit, requests are dispatched to a pool of worker processes running (in this case) PHP-CGI.
Each PHP-CGI instance is kept alive while 1000+ requests are pumped through it, and is then allowed to quit, with a new one started in its place. From memory, there's a group of these processes handling simultaneous requests concurrently (4, 5 or 10 by default - it's configurable), and performance should be (much) better.
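For a sense of where those knobs live, here's a hedged sketch of fcgiext.ini on IIS6 - the PHP path and the numbers are illustrative, so check the article above for the real defaults:

    ; fcgiext.ini -- map .php onto a pooled FastCGI handler
    [Types]
    php=PHP

    [PHP]
    ExePath=C:\PHP\php-cgi.exe
    InstanceMaxRequests=1000    ; recycle each php-cgi after ~1000 requests
    MaxInstances=4              ; size of the concurrent worker pool
    ; keep PHP's own recycle limit in sync with the one above
    EnvironmentVars=PHP_FCGI_MAX_REQUESTS:1000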
Best Answer
If you can get to the server console while this is happening, you'll often be able to tell from Task Manager.
True story! But there's one tweak: turn on the Command Line column (View > Select Columns in Task Manager). That shows the arguments passed to each process, from which you can typically infer the site/page/consumer.
Grabbing a process dump (or a series of dumps) from any errant high-CPU processes should also capture the command-line parameters passed to them (visible in the debugger).
If you need a snapshot of the processes and their parameters in flight from the command line,
wmic process
looks like it gets it - WMIC has shipped with Windows since XP/Server 2003, so it should be there on 2008.
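For example, to narrow that down to just the PID and command line of the PHP workers (the process name is an assumption - substitute whatever shows up hot in Task Manager):

    wmic process where "name='php-cgi.exe'" get ProcessId,CommandLine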