Tomcat 7 supports serving static files from outside of the WAR. See here (point 4). Have you tried it?
Can you confirm that it's Tomcat that generates such a delay (for instance using Tomcat's access log)? It shouldn't be hard to correlate a slowly downloaded file with its entry in the access log on the server side.
What are the HTTP threads doing during the test? It would be very helpful to dump stack traces when this phenomenon happens. If you see that some resource takes an excessively long time to download, take a thread dump and find the guilty thread (with a tuned access log you can print the thread name).
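One way to get the thread name (and per-request timing) into Tomcat's access log is an AccessLogValve pattern along these lines - a sketch only, where `%D` logs the time taken in milliseconds and `%I` the name of the request-processing thread, which you can then match against the stack traces:

```xml
<!-- server.xml sketch: log time taken (%D, ms) and thread name (%I) per request -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="access_log" suffix=".txt"
       pattern="%h %t &quot;%r&quot; %s %b %D %I" />
```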
If for whatever reason Tomcat does not handle your AJAX requests fast, this reduces the number of requests that your Apache can handle. Tomcat is configured to handle 400 requests in parallel, and there is also a default acceptCount
of 100. So your Tomcat is able to eat up 500 requests - at least: depending on JVM and platform, there may be even more connection requests queued.
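Those 400 + 100 slots correspond to the `maxThreads` and `acceptCount` attributes of the AJP connector; a server.xml fragment like the following would produce them (port and protocol here are the usual AJP defaults, not taken from your configuration):

```xml
<!-- server.xml sketch: 400 worker threads, plus up to 100 queued connections -->
<Connector port="8009" protocol="AJP/1.3"
           maxThreads="400" acceptCount="100" />
```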
worker.worker1.reply_timeout=120000
worker.worker1.socket_timeout=150000
...tells mod_jk to wait about 1.7 days (socket_timeout is in seconds) for socket operations and 2 minutes for single network packets from Tomcat. You should adjust these values to let mod_jk return an error as early as possible if Tomcat is slow.
Let's assume your AJAX requests are typically processed within a second, with outliers up to two seconds, and that after being processed the response is sent back at once. Then one might set worker.worker1.reply_timeout=2500, just half a second more. socket_timeout
may even be omitted, as it is just a rough value. socket_connect_timeout
, which defines how long it may take to connect from Apache/mod_jk to Tomcat, should be added to worker.properties and set to a very low value, e.g. 100 (it is in milliseconds), as both sit on the same server. See The Apache Tomcat Connector - Reference Guide for more details.
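Putting those suggestions together, a worker.properties fragment might look like this (the timeout values are illustrative and assume the roughly-two-second request profile described above; host/port are placeholders for a local Tomcat):

```properties
# worker.properties sketch - assumes typical responses within ~2s
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009
# fail fast if Tomcat does not answer within 2.5s
worker.worker1.reply_timeout=2500
# connecting to a Tomcat on the same machine should be near-instant (ms)
worker.worker1.socket_connect_timeout=100
```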
Every request that goes from Apache to Tomcat counts against what you configured with MaxClients
in httpd.conf. The more requests are stuck in Tomcat, the fewer can be processed by Apache for static content. If you shut down Tomcat in that situation, static content is delivered fast again, as this frees up resources for request processing in mod_jk and Apache.
You have configured both prefork.c
and worker.c
in httpd.conf at the same time. I guess prefork.c
is the active one, as MaxClients
is set to 512, and this would match your observations and my interpretation.. ;-)
Telling mod_jk to give up earlier on long-running requests to Tomcat might help a lot, but you should also think about adjusting the number of client requests handled by Apache (MaxClients
) and the number of requests that Tomcat processes in parallel (<Connector maxThreads=...>
). These numbers have to be balanced against what happens during normal operations. Some tracing of page loads may be helpful to see in what proportion these values should stand. The absolute values depend on your server's specs, network situation, number of clients etc.
If the absolute number of possible parallel requests is too low, users will complain about slow page loads, while you won't see your server used to capacity. If it's far too high, it will use more memory than really needed, may even slow down, and will not recover quickly from problems with subsystems - e.g. the database. If Apache sends out far more requests to Tomcat than it can process in time, you will see some of them timing out while others are processed in acceptable time. Starting out with similar values in Apache and Tomcat is not a bad idea, as long as the timeout settings ensure that a slow or unresponsive Tomcat is not a millstone around Apache's neck.
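As a sketch of such a balanced setup on the Apache side (the number is only a starting point taken from your current configuration; tune it against Tomcat's maxThreads based on measurements of your own load):

```apache
# httpd.conf sketch (prefork MPM) - keep Apache's parallelism
# in the same ballpark as what Tomcat can actually absorb
<IfModule prefork.c>
    MaxClients 512
</IfModule>
```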
Best Answer
Class loading and user state creation are the two most obvious likely causes.