Java – Too many parking to wait threads

java multithreading

I am analyzing an application hang, and in the thread dumps I can see that roughly 90% of the worker threads are in this state:

"pool-3-thread-352" #13082 prio=5 os_prio=0 tid=0x00007ff6407fc800
nid=0x1e94 waiting on condition [0x00007ff5a53b4000]
java.lang.Thread.State: TIMED_WAITING (parking) at
sun.misc.Unsafe.park(Native Method)
– parking to wait for <0x000000044af6bcd0> (a java.util.concurrent.SynchronousQueue$TransferStack) at
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

"pool-21-thread-214" #13081 prio=5 os_prio=0 tid=0x0000000002e6a800
nid=0x1e92 waiting on condition [0x00007ff5a54b5000]
java.lang.Thread.State: TIMED_WAITING (parking) at
sun.misc.Unsafe.park(Native Method)
– parking to wait for <0x00000004ad95fba8> (a java.util.concurrent.SynchronousQueue$TransferStack) at
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

As per my understanding, these are basically request worker threads on a Tomcat server, waiting on a blocking queue until a request comes in. When a request arrives, one thread gets a permit and runs to execute the request.

So if no tasks are available, these threads wait (park) on the queue. When a task becomes available, one worker thread gets a permit, becomes a running thread, and executes the task.

But these threads can still cause problems if too many of them are created in the thread pool, since each idle thread keeps consuming resources.
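For reference, this parked state is exactly what a cached thread pool produces: Executors.newCachedThreadPool() is backed by a SynchronousQueue, and its idle workers sit in SynchronousQueue.poll() until their keep-alive expires. A minimal sketch that reproduces the thread state from the dump (the burst size and sleep durations are made up for illustration):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class IdleWorkerDemo {
        public static void main(String[] args) throws InterruptedException {
            // newCachedThreadPool() is equivalent to new ThreadPoolExecutor(
            //     0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>())
            ExecutorService pool = Executors.newCachedThreadPool();

            // A burst of short tasks makes the pool spawn one worker per task,
            // because a SynchronousQueue cannot buffer submissions.
            for (int i = 0; i < 200; i++) {
                pool.submit(() -> {
                    try { Thread.sleep(100); } catch (InterruptedException ignored) { }
                });
            }

            // After the burst, a thread dump taken here shows the idle workers as
            // TIMED_WAITING (parking) in SynchronousQueue.poll(), just like above,
            // until the 60-second keep-alive expires and they are reclaimed.
            Thread.sleep(5_000);
            pool.shutdown();
        }
    }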

Zero deadlocks were found, but the application still hangs, with exceptions of this type almost everywhere:

javax.ws.rs.ProcessingException: RESTEASY004655: Unable to invoke request
    at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(ApacheHttpClient4Engine.java:287)
    at com.agfa.orbis.core.client.service.rest.ClientHttpEngineWrapper.invoke(ClientHttpEngineWrapper.java:59)
    at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(ClientInvocation.java:436)
    at org.jboss.resteasy.client.jaxrs.internal.ClientInvocationBuilder.get(ClientInvocationBuilder.java:159)
    at com.agfa.hap.crs.commons.client.rest.RestClient.getResponse(RestClient.java:238)
    at com.agfa.hap.crs.commons.client.rest.RestClient.get(RestClient.java:70)
    at com.agfa.hap.crs.alertsystem.client.orbis.ForwardedUserAlertsMonitor.getSharedAlertState(ForwardedUserAlertsMonitor.java:88)
    at com.agfa.hap.crs.alertsystem.client.orbis.ForwardedUserAlertsMonitor.getCurrentAlertState(ForwardedUserAlertsMonitor.java:79)
    at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor.requestMonitorUpdate(AbstractAlertMonitor.java:275)
    at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor$10.execute(AbstractAlertMonitor.java:823)
    at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor$Task.call(AbstractAlertMonitor.java:952)
    at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor$Task.call(AbstractAlertMonitor.java:942)
    at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor$TaskWrapper.call(AbstractAlertMonitor.java:925)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:992)
    at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
    at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:535)
    at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:403)
    at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
    at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
    at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(ApacheHttpClient4Engine.java:283)
    ... 16 more
Caused by: java.io.EOFException: SSL peer shut down incorrectly
    at sun.security.ssl.InputRecord.read(InputRecord.java:505)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
    ... 29 more

I am trying to correlate these exceptions with the thread activity.
Any idea why the connection is being closed incorrectly?

Best Answer

These threads are waiting for something to happen. As you wrote:

these are basically request worker threads on a Tomcat server, waiting on a blocking queue until a request comes in

As far as I understand, this happens under low load, so an oversized thread pool should not be a problem. If you're really worried about it, you can configure maxIdleTime for the thread pool; Tomcat will then kill old idle threads until the pool shrinks back to minSpareThreads.
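Note that maxIdleTime and minSpareThreads are attributes of Tomcat's shared Executor, while the threads in your dump come from plain java.util.concurrent pools (pool-3, pool-21). For those, the analogous knobs are keepAliveTime and allowCoreThreadTimeOut. A rough sketch, with placeholder pool sizes and timeout values:

    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BoundedIdlePool {
        public static ThreadPoolExecutor newPool() {
            // keepAliveTime plays the role of Tomcat's maxIdleTime: workers idle
            // longer than this are reclaimed. corePoolSize is the floor the pool
            // normally shrinks to, roughly like minSpareThreads.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    10,                      // corePoolSize (placeholder)
                    200,                     // maximumPoolSize (placeholder)
                    30L, TimeUnit.SECONDS,   // keepAliveTime, i.e. max idle time
                    new SynchronousQueue<Runnable>());
            // Optionally let core threads time out as well; the pool can then
            // shrink all the way to zero when idle.
            pool.allowCoreThreadTimeOut(true);
            return pool;
        }
    }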

This is the thread pool documentation for Tomcat 8.

This is the thread pool documentation for Tomcat 7.

This is the thread pool documentation for Tomcat 6.
