Linux – Will TCP automatically close the socket in these cases

linux, sockets, tcp

We are developing a client/server network application on Linux. Recently we found that writes and reads on some open, connected sockets can fail when a large amount of UDP and TCP traffic is flowing over the link; it looks as if the sockets are being closed for some unknown reason.

Here are my questions: please tell me whether TCP will automatically close the sockets in the following cases.

  1. Suppose there are a sender and a receiver, and the sender sends a lot of data to the receiver over a non-blocking TCP socket. Also suppose the link carries heavy traffic from this application and from others (a minimal sketch of this sender side follows after the list).
    If the link is so saturated that the sender never gets a chance to send out its data, will TCP automatically close the socket after some time? If yes, how long is that timeout?

  2. Suppose the link from question 1 is not fully saturated, so the sender can deliver data to the receiver successfully. But if the receiver never reads the data, its receive buffer will eventually fill up. Will TCP then automatically close the socket after some hours?
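
For reference, the sender side in question 1 might look roughly like the sketch below (this is only an illustration, assuming fd is an already connected, non-blocking TCP socket; the helper name try_send is made up):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Returns 0 when bytes were queued, 1 when the caller must retry later,
     * -1 on a real connection error. */
    static int try_send(int fd, const char *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
        if (n >= 0)
            return 0;                    /* some or all bytes queued by the kernel */

        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 1;                    /* send buffer full (congested link or
                                            stalled receiver); socket still open */

        /* A genuine failure such as ECONNRESET or EPIPE: the peer (or
         * something in between) tore the connection down. */
        fprintf(stderr, "send failed: %s\n", strerror(errno));
        return -1;
    }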

Any help would be greatly appreciated!!

Best Answer

I have never heard of a TCP socket closing itself automatically, so I doubt that is what is happening. If the sender can't send out any data, it just waits and tries again. The only way the socket would close is if the sender attempts to send data several times, can't, and then explicitly closes the socket itself. If the sender has enough bandwidth to send the data but the receiver can't keep up, the protocol itself retransmits the data and makes sure it arrives properly (see Wikipedia).
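
To illustrate that wait-and-try-again behaviour: with a non-blocking socket the usual pattern is to poll() for writability and then retry the send. This is only a sketch, assuming fd is the connected socket from the question; the function name is mine:

    #include <poll.h>

    /* Wait up to timeout_ms for the socket to become writable again.
     * Returns 0 when the caller should retry its send(), -1 on timeout
     * or when the connection itself has reported an error/hang-up. */
    static int wait_writable(int fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };

        int rc = poll(&pfd, 1, timeout_ms);
        if (rc <= 0)
            return -1;          /* timeout (rc == 0) or poll error */
        if (pfd.revents & (POLLERR | POLLHUP))
            return -1;          /* the connection failed; not TCP "closing itself" */
        return 0;               /* writable again: retry the send now */
    }

Even a -1 from this helper only tells the application that something went wrong; it is still the application's own close() that actually releases the socket.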

As for #2, also from Wikipedia (as ott mentioned):

When a receiver advertises a window size of 0, the sender stops sending data and starts the persist timer. The persist timer is used to protect TCP from a deadlock situation that could arise if a subsequent window size update from the receiver is lost, and the sender cannot send more data until receiving a new window size update from the receiver. When the persist timer expires, the TCP sender attempts recovery by sending a small packet so that the receiver responds by sending another acknowledgement containing the new window size.

Because TCP has this mechanism in place for deciding how much data to send, I assume the receiver can always accept one of these small probe packets and answer it with an acknowledgement carrying its current window size. As long as the buffer is still full, the receiver just keeps advertising a window size of 0. Unless some piece of software explicitly closes the socket after X repeated zero-window replies, there is no reason for the socket ever to close.
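
In other words, any "give up" policy lives in the application, not in TCP. A hypothetical sketch of such a policy (the threshold and variable names are invented for illustration only):

    #include <time.h>
    #include <unistd.h>

    #define STALL_LIMIT_SECS 300        /* hypothetical give-up threshold */

    static time_t last_progress;        /* updated whenever send() queues bytes */

    /* TCP itself keeps probing the zero window indefinitely; closing a
     * connection that has made no progress for a while is purely an
     * application-level decision. */
    static void close_if_stalled(int fd)
    {
        if (time(NULL) - last_progress > STALL_LIMIT_SECS)
            close(fd);                  /* explicit close by the software, not by TCP */
    }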