Does a WebSocket block a thread while keeping the connection open, as a regular HTTP connection does?

Tags: multithreading, spring, tomcat, websockets

Let's take Spring WebSocket (with Tomcat) as an example.

Does a WebSocket block a thread for the whole time the connection between server and client stays open? (A connection can last 2-3 hours, for example.)

(In other words, does a WebSocket use a thread the same way a regular HTTP request does, i.e., blocking/owning a thread while the request is being served?)

Let's say the server is configured with 200 threads in its thread pool and uses blocking I/O.

Does that mean that if we have 200 open, long-lived WebSocket connections, the server cannot handle any other regular HTTP requests or WebSocket connections until those 200 close?
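The situation the question describes can be reproduced as a toy sketch (this is not Tomcat's code; the class `PoolSaturation` and method `requestStarves` are invented for illustration, with a 2-thread pool standing in for Tomcat's 200):

```java
import java.util.concurrent.*;

public class PoolSaturation {
    /** Returns true if a request submitted to a saturated pool times out. */
    static boolean requestStarves(int poolSize) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(poolSize);
        CountDownLatch release = new CountDownLatch(1);
        // "Long-lived connections" occupy every worker thread with a blocking wait.
        for (int i = 0; i < poolSize; i++) {
            workers.submit(() -> { release.await(); return null; });
        }
        // An ordinary request can only be queued; no thread is free to run it.
        Future<String> regular = workers.submit(() -> "handled");
        boolean starved;
        try {
            regular.get(200, TimeUnit.MILLISECONDS);
            starved = false;
        } catch (TimeoutException e) {
            starved = true;
        }
        release.countDown();   // the long-lived connections close...
        regular.get();         // ...and the queued request finally runs
        workers.shutdown();
        return starved;
    }

    public static void main(String[] args) throws Exception {
        // Miniature of a blocking-I/O worker pool: 2 threads instead of 200.
        System.out.println(requestStarves(2)
                ? "request starved while all workers were held by open connections"
                : "request handled");
    }
}
```

With blocking I/O and a thread-per-connection model, the new request is not rejected; it just waits in the queue until one of the long-lived connections gives up its thread.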

From the Tomcat docs (https://tomcat.apache.org/tomcat-8.5-doc/config/http.html):

maxThreads

The maximum number of request processing threads to be created by this
Connector, which therefore determines the maximum number of
simultaneous requests that can be handled. If not specified, this
attribute is set to 200. If an executor is associated with this
connector, this attribute is ignored as the connector will execute
tasks using the executor rather than an internal thread pool. Note
that if an executor is configured any value set for this attribute
will be recorded correctly but it will be reported (e.g. via JMX) as
-1 to make clear that it is not used.
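For reference, a connector with these attributes set explicitly might look like this in conf/server.xml (the attribute names are real Tomcat 8.5 attributes; the values are just examples). Note that with the NIO connector, maxConnections (default 10000) can be far larger than maxThreads, because idle connections are parked on a poller thread rather than holding a worker thread:

```xml
<!-- Example only: a Tomcat 8.5 HTTP connector with an explicit worker-thread cap. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           maxConnections="10000"
           connectionTimeout="20000" />
```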

So does this mean that if we have 200 long-lived WebSockets, the server will not be able to accept any requests anymore?

Then, if some web site has a huge number of users and needs to serve at least 10,000 of them (each holding an open WebSocket) simultaneously, does this mean it needs 50 servers just for those 10,000 users?

And what about non-blocking I/O (Netty, Akka HTTP)?
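With non-blocking I/O, one thread can multiplex many idle connections and only does work when data actually arrives; this is the model behind Netty's event loop. A minimal, self-contained sketch using plain java.nio (the class name and port are illustrative, and the single-byte echo stands in for real protocol handling):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SingleThreadEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 9090));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // One thread services every connection: thousands of idle sockets
        // cost memory, not threads.
        while (true) {
            selector.select();   // blocks until at least one socket is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) {   // peer closed
                        client.close();
                    } else {
                        buf.flip();
                        client.write(buf);          // echo the bytes back
                    }
                }
            }
        }
    }
}
```

This is why an event-loop server can hold 10,000 mostly idle WebSockets without 10,000 (or even 200) threads: the per-connection cost while idle is a selector registration and a buffer, not a parked thread.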

Best Answer

I once built a server with a capped maximum number of threads. The solution is to put a limit on the lifetime of each open connection and/or on the number of requests a connection may serve before the server closes it. The client then simply gets back in line to request another connection. This can only work if your client requests are independent and do not require a long-lived connection (which should be the case). I used blocking I/O, but had a timeout on receiving request data.

In my case, I allowed an HTTP connection (handled by a newly spawned thread) to process up to 10 requests and live up to 2 seconds (final request runs to completion, of course), then the thread finishes. This ensures fairness. I used a counting semaphore to limit the number of open connections / threads. I also provided a means for multiple server processes so that in case a server process crashed (which didn't happen), requests would simply go to another process until the failed one had restarted. I could update the software live that way, sending a hangup signal to tell the server to restart.
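The scheme described above, a counting semaphore bounding open connections plus per-connection request and lifetime caps, might be sketched like this (the class name, the limits, and the one-byte echo "request handler" are all invented for illustration; a real server would parse actual requests):

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Semaphore;

public class CappedServer {
    // At most 200 simultaneous connections/threads, enforced by a counting semaphore.
    private static final Semaphore slots = new Semaphore(200);
    private static final int MAX_REQUESTS_PER_CONN = 10;
    private static final long MAX_CONN_LIFETIME_MS = 2_000;

    public static void serve(ServerSocket listener) throws Exception {
        while (true) {
            slots.acquire();                 // block until a connection slot frees up
            Socket conn = listener.accept();
            new Thread(() -> {
                try {
                    long deadline = System.currentTimeMillis() + MAX_CONN_LIFETIME_MS;
                    int served = 0;
                    // Serve up to 10 requests or 2 seconds, whichever comes first;
                    // the client re-queues for a fresh connection afterwards.
                    while (served < MAX_REQUESTS_PER_CONN
                            && System.currentTimeMillis() < deadline) {
                        if (!handleOneRequest(conn)) break;   // client closed
                        served++;
                    }
                } catch (Exception ignored) {
                } finally {
                    try { conn.close(); } catch (Exception ignored) {}
                    slots.release();         // free the slot for the next client
                }
            }).start();
        }
    }

    /** Placeholder handler: echo a single byte, with a receive timeout. */
    private static boolean handleOneRequest(Socket conn) throws Exception {
        conn.setSoTimeout(1_000);            // don't block forever on a silent client
        int b = conn.getInputStream().read();
        if (b < 0) return false;             // client closed the connection
        conn.getOutputStream().write(b);
        return true;
    }
}
```

Bounding both lifetime and request count keeps any one client from monopolizing a thread, which is what makes this fair under a fixed-size pool.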

I had a way of monitoring connection status across all of the servers and it worked very smoothly and well. Unix did all the heavy lifting, I just had to learn about and take advantage of what it provided. This was back in 1999.
