Your initial line of attack with PowerShell is to use WMI. Unfortunately, the root\MicrosoftIisV2 namespace is set up with non-standard security settings which you can't change in PS (not in V1, at least; V2 may be different).
I would suggest looking at the IIS ADSI provider, specifically the IIsCompressionScheme object - http://msdn.microsoft.com/en-us/library/ms524574.aspx
You may be able to work with this in PowerShell by manipulating:
$obj = [ADSI]"IIS://MachineName/W3SVC/Filters/Compression/Scheme"
However, ADSI is pretty evil, so you'll have a fairly steep learning curve.
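If you do go the ADSI route, here is a minimal sketch of what reading and changing a compression scheme might look like. It assumes the IIS 6 metabase layout, where the gzip and deflate schemes live under Filters/Compression and expose the HcDoStaticCompression and HcDoDynamicCompression properties; check the property names against the IIsCompressionScheme documentation for your IIS version before relying on them.

```powershell
# Bind to the gzip compression scheme in the IIS 6 metabase
# (path and property names assume the default metabase layout)
$gzip = [ADSI]"IIS://localhost/W3SVC/Filters/Compression/gzip"

# Inspect the current settings
$gzip.HcDoStaticCompression
$gzip.HcDoDynamicCompression

# Enable dynamic compression for this scheme and write it back
$gzip.HcDoDynamicCompression = $true
$gzip.SetInfo()
```

You'd typically repeat the same steps for the deflate scheme, since IIS keeps the two schemes configured independently.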
The key issue is "how much data is being compressed?".
If you are running a massive DB query that takes a noticeable number of seconds to run, and the resulting page is a few tens of Kb long, then the expense of compressing the data will be completely dwarfed by the expense of the SQL work, to the point where it isn't worth worrying about. A modern CPU will compress tens or hundreds of Kb pretty much instantly compared to any chunky DB query.
Another factor in favour of compression is that, correctly configured, static pages are not re-compressed on every request and objects that won't benefit (image files and others that are pre-compressed) are not compressed by the web server at all. Only dynamic likely-to-be-compressible content need be gzipped on each request.
Generally speaking, unless you have a specific reason not to compress, I recommend doing so. The CPU cost is generally small unless you are running the web server on a low-power device (like a domestic router, for instance) for some reason. One reason not to compress is scripts that use "long poll" techniques to emulate server push efficiently, or scripts that drip-feed content to the browser for progress indication - the buffering implied by dynamic compression can cause such requests to time out on the client side. With careful configuration, though, you can add these to the "don't compress" list while still compressing everything else. Another reason to consider not using dynamic compression is that it does add a little latency to each dynamic request, though for most web applications this difference is completely negligible compared to the bandwidth savings.
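On IIS 6, for example, the "don't compress" behaviour falls out of the extension lists on each scheme: dynamic compression is only applied to extensions listed in HcScriptFileExtensions, so anything served under an extension not in that list is left uncompressed. A hedged sketch (the .poll extension for long-poll handlers is purely illustrative, and the property name assumes the standard metabase layout):

```powershell
# Bind to the gzip scheme in the IIS 6 metabase
$gzip = [ADSI]"IIS://localhost/W3SVC/Filters/Compression/gzip"

# Only these extensions get dynamic compression; anything else -
# e.g. a hypothetical .poll extension used by long-poll handlers -
# is served uncompressed
$gzip.HcScriptFileExtensions = @("asp", "aspx", "exe")
$gzip.SetInfo()
```

The granularity here is per-extension rather than per-URL, so it helps to give your long-poll or streaming endpoints their own extension.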
A side note on CPU load due to SQL queries: this implies that your working data-set for these queries is small enough to fit in RAM (otherwise your performance would be I/O bound rather than CPU bound), which is a GoodThing(tm). The high CPU load could just be due to the sheer number of concurrent queries, as you suspect, but it could also be that some of them are table-scanning objects that are in SQL's allocated RAM and/or the OS's cache (or they are otherwise doing their work the long way around). It might be worth logging long-running queries and checking whether there are any indexing improvements or other optimisations you can use to reduce the working set they operate over.
Best Answer
You have to enable the GzipFilter to make Jetty return compressed content. Have a look here on how to do that: http://blog.max.berger.name/2010/01/jetty-7-gzip-filter.html
You can also use the gzip init parameter to make Jetty search for pre-compressed content. That means if the file file.txt is requested, Jetty will look for a file named file.txt.gz and return that instead.
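Putting both together, a web.xml sketch might look like the following. The filter class and parameter names are as used in Jetty 7 (org.eclipse.jetty.servlets.GzipFilter, the minGzipSize init-param, and the DefaultServlet gzip init-param); verify them against the documentation for your Jetty version, and the 2 KB threshold is just an illustrative value.

```xml
<!-- Compress dynamic responses on the fly -->
<filter>
  <filter-name>GzipFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.GzipFilter</filter-class>
  <init-param>
    <!-- skip responses smaller than 2 KB (illustrative threshold) -->
    <param-name>minGzipSize</param-name>
    <param-value>2048</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>GzipFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

<!-- Serve pre-compressed static files: file.txt.gz in place of file.txt -->
<servlet>
  <servlet-name>default</servlet-name>
  <servlet-class>org.eclipse.jetty.servlet.DefaultServlet</servlet-class>
  <init-param>
    <param-name>gzip</param-name>
    <param-value>true</param-value>
  </init-param>
</servlet>
```

With this arrangement static files can be gzipped once at deploy time, while only dynamic responses pay the per-request compression cost.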