This server is being used in SharePoint (WSS 3.0) development. It is running IIS 7 set up with at least one Application Pool per developer. Each developer has one or more web applications, each set up in an AppPool assigned to that developer's credentials. We're running Visual Studio 2008 SP1 and SQL Server 2008. The SQL Server data is on a separate virtual disk from the OS.

I've seen up to 8 developers on the system at once. The server is configured with 2GB of RAM, and more RAM is not trivial at this time, due to limitations of the host computer. I hope that this will get corrected if I can present enough reason to correct it, in the form of: this is how much time was wasted.
Without even looking at the hardware requirements of SharePoint 2007, having up to 8 developers each with their own app pool is a lot for this box. I assume that SQL Server is on another VM, but if it's on the same Win2k8 machine, there's no question what the issue is.
AppPools for Visual Studio development (versions 2005-2008) can easily grow to 150MB-250MB per AppPool depending on numerous factors. 250MB x 8 devs = 2GB used. Let's not forget the memory the OS needs for itself, and possibly SQL Server. In a nutshell: you simply don't have enough RAM. Load up on RAM, as much as you can get. I wouldn't be surprised if it solved a majority of the "slowness" of the server, at least from the background perspective and possibly the foreground.
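The arithmetic above can be made explicit with a quick back-of-envelope sketch. The per-AppPool figure comes from the 150MB-250MB range mentioned; the OS and SQL Server figures are rough assumptions for illustration only.

```python
# Back-of-envelope RAM budget for the server described above.
# Per-AppPool worst case is from the observed 150MB-250MB range;
# OS and SQL Server figures are assumed round numbers.

MB = 1
GB = 1024 * MB

app_pool_worst_case = 250 * MB   # upper end of the observed range
developers = 8

app_pools_total = app_pool_worst_case * developers   # 2000 MB just for AppPools
os_overhead = 512 * MB    # assumed: Win2k8 base footprint
sql_server = 1 * GB       # assumed: a modest SQL Server working set

demand = app_pools_total + os_overhead + sql_server
installed = 2 * GB

print(f"Estimated demand: {demand} MB vs installed: {installed} MB")
print(f"Shortfall: {demand - installed} MB")
```

Even before SQL Server and the OS are counted, the AppPools alone can consume the entire 2GB, which is why paging dominates.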
FYI: Microsoft recommends a 4GB minimum for SharePoint 2007 application servers (link), but in reality anything from Microsoft eats up as many resources as it can. The same URL mentions a 2GB minimum for standalone SharePoint servers, but if developers are working directly on that server with their own AppPools, it's clear that IIS will eat up all available memory for the AppPools/sites. Don't go with the minimum recommended RAM. Try to double, triple, or quadruple the RAM requirement if possible (budget permitting).
Edit: You mention that the SQL Server data is on a separate virtual disk, but is SQL Server installed on the same SharePoint server? And how much free storage is available on the SharePoint server? These additional factors can easily consume server resources and should not be neglected.
Edited again: Since you mentioned (and I failed to read) that "RAM is not trivial at this time, due to limitations of the host computer", there's only one real solution: get a new host computer. 2GB is not enough to run SharePoint/SQL/IIS/whatever, period. Sorry to say, but IMHO a machine running this workload should have a minimum of 8GB of RAM.
Edited after OP edit:
But my question is more like: are there any performance counters or tools that can tell me the amount of time spent waiting for page reads and writes? Any that can tell me the amount of time spent waiting on queued disk I/O?
I haven't run into your exact situation, but there's a good article (link) from Microsoft TechNet on the basics of these monitoring metrics. I'm not sure that time spent waiting on queued disk I/O is the best way to get what you want (I'm assuming you want management backing).
It's possible to get performance counters like "Pages input per second", but it's harder to say what value of that counter is "too much". It would be better for my purpose if there were "counters" that could say how much time is being spent because of the pages input per second.
An article from Windows Networking (link) on Server 2003 explains those counters better than I could. FTA:
The Memory\Pages/sec counter indicates the number of paging operations to disk during the measuring interval, and this is the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system.

Another key counter to watch here is Memory\Available Bytes, and if this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM and don't need to worry.

You should do two things with the Memory\Available Bytes counter: create a performance log for this counter and monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM. If a downward trend does develop, you can monitor Process(instance)\Working Set for each process instance to determine which process is consuming larger and larger amounts of RAM.

Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault. A related counter is Memory\Cache Bytes, which measures the working set for the system, i.e. the number of allocated pages kernel threads can address without generating a page fault.

Finally, another corroborating indicator of insufficient RAM is Memory\Transition Faults/sec, which measures how often recently trimmed pages on the standby list are re-referenced. If this counter slowly starts to rise over time, it could also indicate that you're reaching a point where you no longer have enough RAM for your server to function well.
So I'd say this explanation really addresses your metrics and what those metrics actually mean to you. Performance Monitor is a bit tricky, especially if you don't fully understand what the counters mean in the grand scheme of things. I usually find myself re-reading about counters, as it's easy to forget their "real world" meaning.
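To make the article's rules of thumb concrete, here is a small sketch of the thresholds it describes. In practice the samples would come from perfmon (or `typeperf` on the command line); the values fed in below are hypothetical.

```python
# Sketch of the quoted article's rules of thumb for the memory counters.
# Sample values are hypothetical; real ones would come from perfmon/typeperf.

def pages_per_sec_alert(pages_per_sec, paging_disks=1, threshold_per_disk=50):
    """Alert when Memory\\Pages/sec exceeds 50 per paging disk."""
    return pages_per_sec > threshold_per_disk * paging_disks

def available_bytes_status(available_bytes, installed_ram_bytes):
    """'ok' above 10% of installed RAM, 'alert' below 2%, else 'watch'."""
    ratio = available_bytes / installed_ram_bytes
    if ratio > 0.10:
        return "ok"
    if ratio < 0.02:
        return "alert"
    return "watch"

def transition_faults_rising(samples):
    """Crude trend check: is the average of the later samples higher
    than the average of the earlier ones?"""
    mid = len(samples) // 2
    first, second = samples[:mid], samples[mid:]
    return sum(second) / len(second) > sum(first) / len(first)

ram = 2 * 1024**3  # the 2GB server from the question

print(pages_per_sec_alert(120, paging_disks=1))           # well over 50/disk
print(available_bytes_status(30 * 1024**2, ram))          # ~1.5% of 2GB
print(transition_faults_rising([10, 12, 11, 25, 30, 40])) # upward trend
```

On a 2GB box, only ~40MB of Available Bytes separates "watch" from "alert", which is another way of seeing how little headroom this server has.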
Best Answer
Each process will get a 4GB address space (somewhat less in practice, but close enough). Scaling 32-bit applications by running multiple processes on a 64-bit platform is a perfectly viable scaling strategy. As long as you can run multiple app pools out of process, you'll get the benefits.