The key issue is "how much data is being compressed?".
If you are running a massive DB query that takes a noticeable number of seconds to run, and the resulting page is a few tens of KB long, then the expense of compressing the data will be completely dwarfed by the expense of the SQL work, to the point where it is not worth thinking about. A modern CPU will compress tens or hundreds of KB practically instantly compared to any chunky DB query.
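To put rough numbers on that, here is a minimal Python sketch (figures are illustrative and will vary with hardware and compression level) timing zlib compression of roughly 100 KB of repetitive markup:

```python
# A rough sense of scale, not a benchmark: compressing ~100 KB of
# HTML-like text with zlib at the usual web-server level (6).
import time
import zlib

# ~110 KB of repetitive markup, similar in spirit to a rendered table.
payload = b"<tr><td>row data, reasonably repetitive markup</td></tr>\n" * 2000

start = time.perf_counter()
compressed = zlib.compress(payload, 6)  # level 6: the common default
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"original:   {len(payload):>7} bytes")
print(f"compressed: {len(compressed):>7} bytes")
print(f"time:       {elapsed_ms:.3f} ms")
```

On typical modern hardware this finishes in well under a millisecond, while a "noticeable number of seconds" of SQL work is thousands of times longer.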
Another factor in favour of compression is that, correctly configured, static pages are not re-compressed on every request, and objects that won't benefit (image files and others that are pre-compressed) are not compressed by the web server at all. Only dynamic, likely-to-be-compressible content needs to be gzipped on each request.
Generally speaking, unless you have a specific reason not to compress, I recommend doing so. The CPU cost is usually small unless you are running the web server on a low-power device (a domestic router, for instance). One reason not to compress is scripts that use "long poll" techniques to emulate server push efficiently, or scripts that drip-feed content to the browser for progress indication: the buffering implied by dynamic compression can cause such requests to time out on the client side, but with careful configuration you can add them to the "don't compress" list while still compressing everything else. Another reason to consider not using dynamic compression is that it adds a little latency to each dynamic request, though for most web applications this is completely negligible compared to the bandwidth savings.
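To make the selective-compression idea concrete, here is a minimal WSGI middleware sketch in Python. The content-type whitelist and the /poll exemption are illustrative assumptions, not anyone's real config; in practice web servers express the same rules in configuration (e.g. Apache's mod_deflate or nginx's gzip module):

```python
# A sketch of the "compress only what benefits" logic as a tiny WSGI
# middleware. The content-type whitelist and the /poll exemption are
# illustrative assumptions; real deployments express the same rules in
# server configuration (e.g. Apache mod_deflate, nginx's gzip module).
import gzip

COMPRESSIBLE = {"text/html", "text/css", "application/javascript", "application/json"}
NO_COMPRESS_PATHS = {"/poll"}  # hypothetical long-poll endpoint: buffering would stall it

class SelectiveGzip:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        captured = {}

        def capture(status, headers, exc_info=None):
            captured["status"], captured["headers"] = status, headers
            return lambda data: None  # sketch: ignore the legacy write() callable

        body = b"".join(self.app(environ, capture))
        headers = dict(captured["headers"])
        ctype = headers.get("Content-Type", "").split(";")[0].strip()

        client_ok = "gzip" in environ.get("HTTP_ACCEPT_ENCODING", "")
        exempt = environ.get("PATH_INFO", "") in NO_COMPRESS_PATHS

        if client_ok and not exempt and ctype in COMPRESSIBLE:
            body = gzip.compress(body)
            headers["Content-Encoding"] = "gzip"
            headers["Content-Length"] = str(len(body))

        start_response(captured["status"], list(headers.items()))
        return [body]
```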
A side note on CPU load due to SQL queries: this implies that your working data set for these queries is small enough to fit in RAM (otherwise your performance would be I/O bound rather than CPU bound), which is a GoodThing(tm). The high CPU load could just be due to the sheer number of concurrent queries, as you suspect, but it could also be that some of them are table-scanning objects held in SQL's allocated RAM and/or the OS's cache (or are otherwise doing their work the long way around), so it is worth logging long-running queries and checking whether there are any indexing improvements or other optimisations you can use to reduce the working set they operate over.
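As a self-contained illustration of the indexing point (using SQLite purely because it needs no setup; any RDBMS exposes the same information through its EXPLAIN facility), the sketch below shows an index turning a full table scan into an index lookup:

```python
# A minimal illustration of how an index shrinks the working set a
# query touches: SQLite's EXPLAIN QUERY PLAN reports a full table scan
# before the index exists and an index search afterwards.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 100}", i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer = ?"

# Before indexing: the plan is a full table scan.
for row in conn.execute(f"EXPLAIN QUERY PLAN {query}", ("cust42",)):
    print("before:", row)

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# After indexing: the plan uses the index instead of scanning every row.
for row in conn.execute(f"EXPLAIN QUERY PLAN {query}", ("cust42",)):
    print("after: ", row)
```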
The answer would be the typical one: FusionReactor is your friend.
The last time I had a problem like this with JRun, caused by an endless loop in a buggy piece of code, FR saved me.
How many app pools do you have? You can start by moving your websites into separate app pools, then use Task Manager together with the command-line iisapp script to match each worker process to its app pool. That will help you identify which website is the culprit to start with.
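If you have many pools, matching PIDs to pool names by hand gets tedious; here is a hedged Python sketch that wraps IIS 6's iisapp.vbs. It assumes the script's usual location and its typical "W3WP.exe PID: <pid> AppPoolId: <name>" output format, so adjust both if your system differs:

```python
# A convenience sketch: run IIS 6's iisapp.vbs and build a PID -> app
# pool map to compare against Task Manager. ASSUMPTIONS: the script
# lives in the usual C:\Windows\System32 location and prints lines in
# the typical "W3WP.exe PID: <pid>   AppPoolId: <name>" format; adjust
# the path and the regex if your system differs.
import re
import subprocess

result = subprocess.run(
    ["cscript", "//nologo", r"C:\Windows\System32\iisapp.vbs"],
    capture_output=True, text=True, check=True,
)

pid_to_pool = {}
for line in result.stdout.splitlines():
    match = re.search(r"PID:\s*(\d+)\s+AppPoolId:\s*(\S+)", line)
    if match:
        pid_to_pool[int(match.group(1))] = match.group(2)

for pid, pool in sorted(pid_to_pool.items()):
    print(f"PID {pid:>6} -> app pool {pool}")
```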
Once you have that identified, grab the Microsoft IIS Debug Diagnostics Tool: http://www.microsoft.com/downloads/details.aspx?FamilyID=28bd5941-c458-46f1-b24d-f60151d875a3&DisplayLang=en
Then:
Using IIS Debug Diagnostics to troubleshoot worker process CPU usage in IIS 6
Failed request tracing in IIS 7 can help track down many performance issues with websites, but we still have a broad customer base on IIS 6. Troubleshooting performance issues in IIS 6 was quite difficult until Microsoft released a set of tools that give greater insight through stack-trace analysis.
The IIS Debug Diagnostics Tool can help track down CPU and memory issues from a worker process. Microsoft has a nice KB article that covers the basics as well: http://support.microsoft.com/kb/919791.
1. Install the IIS Debug Diagnostics Tool locally on the system.
2. Open the Debug Diagnostics Tool under Start > Programs > IIS Diagnostics > Debug Diagnostics Tool > Debug Diagnostics Tool.
3. Click Tools > Options And Settings > Performance Log tab. Select the Enable Performance Counter Data Logging option. Click OK.
4. Use Task Manager to find the PID of the worker process.
5. Select the Processes tab and find the process in the list.
6. Right-click on the process and select Create Full Userdump. This will take a few minutes, and a box will pop up giving you the path to the dump file.
7. Select the Advanced Analysis tab and click the Add Data Files button. Browse to the dump file that was just created and click OK.
8. Select Crash/Hang Analyzers from the Available Analysis Scripts box for CPU performance and crash analysis. Click Start Analysis.
After a few minutes, a report should be generated containing stack trace information as well as details about any requests executing for longer than 90 seconds. Note that the memory dump will use a few hundred megabytes of space, so be sure to install the tool on a drive with sufficient free space. Also, if the box is under heavy load, you can create the user dump on the system, copy the file to your workstation, and perform the analysis locally.