Is it dangerous to change the value of /proc/sys/net/ipv4/tcp_tw_reuse

linux-networking, socket, virtual-machines

We have a couple of production systems that were recently converted into virtual machines.
There is an application of ours that frequently accesses a MySQL database, and for each query it opens a new connection, runs the query, and then closes that connection.

It is not the appropriate way to query (I know), but we have constraints that we can't seem to get around. Anyway, the issue is this: while the machine was a physical host, the program ran fine. Once converted to a virtual machine, we noticed intermittent connection issues to the database.
There were, at one point, 24000+ socket connections in TIME_WAIT (on the physical host, the most I saw was 17000 – not good, but not causing problems).
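
For reference, these TIME_WAIT counts can be reproduced with a quick script; here is a rough Python sketch that reads /proc/net/tcp directly (the fourth field of each line is the socket state, and 06 means TIME_WAIT):

    # Rough sketch: count sockets currently in TIME_WAIT by reading procfs.
    def count_time_wait():
        total = 0
        for path in ("/proc/net/tcp", "/proc/net/tcp6"):
            try:
                with open(path) as f:
                    next(f)  # skip the header line
                    for line in f:
                        fields = line.split()
                        if len(fields) > 3 and fields[3] == "06":  # 06 = TIME_WAIT
                            total += 1
            except FileNotFoundError:
                pass  # tcp6 may be absent if IPv6 is disabled
        return total

    print(count_time_wait())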

I would like these connections to be reused so that we don't keep running into that connection problem, and so:

Questions:

Is it ok to set the value of tcp_tw_reuse to 1? What are the obvious dangers? Is there any reason I should never do it?

Also, is there any other way to get the system (RHEL/CentOS) to prevent so many connections from ending up in TIME_WAIT, or to get them reused?

Lastly, what would changing tcp_tw_recycle do, and would that help me?

Thanks in advance!

Best Answer

You can safely reduce the timeout, but you may run into issues with improperly closed connections on networks with packet loss or jitter. I wouldn't start tuning at 1 second; start at 15-30 seconds and work your way down.
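
If you do end up enabling tcp_tw_reuse, it is just a procfs/sysctl value; here is a minimal sketch in Python (run as root; equivalent to sysctl -w net.ipv4.tcp_tw_reuse=1, and note the change does not survive a reboot unless you also add it to /etc/sysctl.conf):

    # Minimal sketch: read and (optionally) enable tcp_tw_reuse via procfs.
    TW_REUSE = "/proc/sys/net/ipv4/tcp_tw_reuse"

    def get_tw_reuse():
        with open(TW_REUSE) as f:
            return f.read().strip()

    def enable_tw_reuse():
        # Requires root; same effect as `sysctl -w net.ipv4.tcp_tw_reuse=1`.
        with open(TW_REUSE, "w") as f:
            f.write("1\n")

    print("tcp_tw_reuse is currently:", get_tw_reuse())
    # enable_tw_reuse()  # uncomment to actually flip it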

Also, you really need to fix your application.
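
If the hard constraint is only that each query call looks like connect/query/disconnect, a connection pool keeps that calling pattern while holding the underlying TCP connections open, which is what actually stops the TIME_WAIT pile-up. A rough sketch, assuming (purely for illustration) a Python application using the mysql-connector-python driver; adapt it to whatever language and driver you really have:

    # Rough sketch of connection pooling with mysql-connector-python.
    # conn.close() on a pooled connection returns it to the pool instead of
    # tearing down the TCP socket, so repeated queries reuse a handful of
    # long-lived connections rather than creating one socket per query.
    import mysql.connector.pooling

    pool = mysql.connector.pooling.MySQLConnectionPool(
        pool_name="app_pool",
        pool_size=5,               # tune to your concurrency
        host="db.example.com",     # placeholder connection details
        user="appuser",
        password="secret",
        database="appdb",
    )

    def run_query(sql, params=None):
        conn = pool.get_connection()   # borrow an already-open connection
        try:
            cur = conn.cursor()
            cur.execute(sql, params or ())
            rows = cur.fetchall()
            cur.close()
            return rows
        finally:
            conn.close()               # hands the connection back to the pool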

RFC 1185 has a good explanation in section 3.2:

When a TCP connection is closed, a delay of 2*MSL in TIME-WAIT state ties up the socket pair for 4 minutes (see Section 3.5 of [Postel81]). Applications built upon TCP that close one connection and open a new one (e.g., an FTP data transfer connection using Stream mode) must choose a new socket pair each time. This delay serves two different purposes:

 (a)  Implement the full-duplex reliable close handshake of TCP. 

      The proper time to delay the final close step is not really 
      related to the MSL; it depends instead upon the RTO for the 
      FIN segments and therefore upon the RTT of the path.* 
      Although there is no formal upper-bound on RTT, common 
      network engineering practice makes an RTT greater than 1 
      minute very unlikely.  Thus, the 4 minute delay in TIME-WAIT 
      state works satisfactorily to provide a reliable full-duplex 
      TCP close.  Note again that this is independent of MSL 
      enforcement and network speed. 

      The TIME-WAIT state could cause an indirect performance 
      problem if an application needed to repeatedly close one 
      connection and open another at a very high frequency, since 
      the number of available TCP ports on a host is less than 
      2**16.  However, high network speeds are not the major 
      contributor to this problem; the RTT is the limiting factor 
      in how quickly connections can be opened and closed. 
      Therefore, this problem will be no worse at high transfer 
      speeds. 

 (b)  Allow old duplicate segments to expire. 

      Suppose that a host keeps a cache of the last timestamp 
      received from each remote host.  This can be used to reject 
      old duplicate segments from earlier incarnations of the 
      connection, if the timestamp clock can be guaranteed to have 
      ticked at least once since the old connection was open. 
      This requires that the TIME-WAIT delay plus the RTT together 
      must be at least one tick of the sender's timestamp clock. 

      Note that this is a variant on the mechanism proposed by 
      Garlick, Rom, and Postel (see the appendix), which required 
      each host to maintain connection records containing the 
      highest sequence numbers on every connection.  Using 
      timestamps instead, it is only necessary to keep one quantity 
      per remote host, regardless of the number of simultaneous 
      connections to that host.

*Note: It could be argued that the side that is sending a FIN knows what degree of reliability it needs, and therefore it should be able to determine the length of the TIME-WAIT delay for the FIN's recipient. This could be accomplished with an appropriate TCP option in FIN segments.