Tuning Linux TCP for a large number of TIME_WAIT connections

Tags: performance, performance-tuning, tcp

It looks like we have a bunch of TCP connections hanging around on a busy web server. This is the output from ss -s:

Total: 366 (kernel 1037)
TCP:   72108 (estab 130, closed 71964, orphaned 0, synrecv 0, timewait 71962/0), ports 46158

Transport Total     IP        IPv6
*         1037      -         -
RAW       0         0         0
UDP       12        8         4
TCP       144       111       33
INET      156       119       37
FRAG      0         0         0

How do I best tune the TCP settings on this server to prevent problems and maximize performance? I have just recently increased net/ipv4/ip_local_port_range from the default to "1024 65000".
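
For reference, this is roughly how that change can be checked and applied with sysctl (a minimal sketch assuming root access; "1024 65000" is simply the range quoted above):

sysctl net.ipv4.ip_local_port_range                     # show the current range
sysctl -w net.ipv4.ip_local_port_range="1024 65000"     # apply to the running kernel
# equivalent /proc write:
echo "1024 65000" > /proc/sys/net/ipv4/ip_local_port_range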

Best Answer

The TCP/IP stack on Linux is already well tuned, and typically nothing needs to be changed. For instance, widening the local port range to get a few extra ports is almost certainly unnecessary.

As for TIME_WAIT being bad: it is just a normal part of using TCP. If you really want fewer sockets in that state, you can change tcp_fin_timeout or the tcp_keepalive values, although you shouldn't touch them unless you actually need to for some reason.
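
If you do decide to adjust them, this is a sketch of inspecting and lowering those sysctls (the values 30 and 300 are purely illustrative, not recommendations):

sysctl net.ipv4.tcp_fin_timeout net.ipv4.tcp_keepalive_time   # current values, in seconds
sysctl -w net.ipv4.tcp_fin_timeout=30        # example: shorten the FIN-WAIT-2 timeout
sysctl -w net.ipv4.tcp_keepalive_time=300    # example: start keepalive probes on idle connections sooner
# to persist across reboots, put the same keys in /etc/sysctl.conf and run: sysctl -p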

As for running out of ports: each connection is keyed on the source address/port and destination address/port, so you are unlikely to run out of source/destination combinations unless you are doing something like NAT.
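
One way to sanity-check that (a rough sketch; the position of the peer-address field can vary between iproute2 versions) is to count TIME_WAIT sockets per remote peer and see whether a single destination dominates:

ss -tan state time-wait | awk 'NR>1 {print $NF}' | sort | uniq -c | sort -rn | head

If most of the 71962 TIME_WAIT sockets point at one address:port, that is the pair whose roughly 64k local ports you could eventually exhaust.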


In response to your comment about connections being dropped when using memcached: you can increase the number of worker threads and the backlog queue length. The problem is more likely to be with memcached itself than with the number of available ports.
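
For example, a sketch of what that might look like (flag values are illustrative; adapt the user, memory size, and startup mechanism to your distribution):

# -t  number of worker threads (default 4)
# -b  listen backlog queue limit (default 1024)
# -c  maximum simultaneous connections (default 1024)
memcached -u memcache -m 1024 -t 8 -b 4096 -c 4096

# the kernel also caps the effective listen backlog:
sysctl -w net.core.somaxconn=4096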
