TCP vs UDP – Packet Forwarding Differences Explained

Tags: layer4, routing, switching, tcp, udp

Recently I had a discussion about the differences between TCP and UDP, and the other person insisted that there's a difference in packet forwarding: the packets in a TCP connection follow an established path, so that in a diamond configuration, where a host has two paths to another host, a connection will use only one of them. Moreover, if a path is broken, the TCP connections on it will not fail over to the other path; they will time out instead. UDP traffic is not affected in any way.

This goes against what I've learned about packet forwarding in general, but I haven't been able to confirm or deny it. Is it true? Why would switches give TCP connections this special treatment? Doesn't it make the network less reliable in general?

Best Answer

Actually, there is some truth in what the other person was saying, though their claim is largely false.

Is it true (that packets in a TCP connection follow an established path)?

Yes, in general all packets in a TCP stream will follow the same path through the network - even in a "diamond" topology, all packets in the same stream will be routed down the same side of the diamond. However, it is not true that the TCP stream will time out if that side of the diamond goes down: in that case the stream will be rerouted and its path will change.

Why would switches give TCP connections this special treatment?

Here I'll answer why switches and routers will always try to send packets in a stream along the same side of a "diamond", hopefully giving some insight into where your interlocutor got their (flawed) understanding of the situation. The brief answer is "performance".

Background: While it's true that TCP can handle reordered packets, in practice reordering tends to cause significant performance problems. For instance, if the receiver sees packets in the order "1 2 4 3", it notices on receiving packet 4 that it has not yet seen packet 3 and signals this back to the sender (in TCP, by sending duplicate acknowledgements); if enough of these arrive, the sender retransmits packet 3 - and the receiver then gets packet 3 twice (the late original plus the retransmission). This wastes bandwidth (packets are sent twice unnecessarily) and also reduces connection throughput, because the sender assumes the "packet loss" it thinks it has seen is due to congestion and slows down its sending.
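A minimal sketch (plain Python, with made-up packet numbers rather than real TCP byte sequence numbers) of how a cumulative-ACK receiver reacts to the "1 2 4 3" arrival order described above - the repeated ACK is what eventually triggers the unnecessary retransmission:

    # Sketch: cumulative-ACK receiver seeing the arrival order "1 2 4 3".
    # Packet numbers are illustrative, not real TCP sequence numbers.

    def acks_for(arrival_order):
        """Return the ACK emitted after each arriving packet (next packet expected)."""
        received = set()
        acks = []
        next_expected = 1
        for pkt in arrival_order:
            received.add(pkt)
            # Advance the cumulative ACK point past every contiguous packet we hold.
            while next_expected in received:
                next_expected += 1
            acks.append(next_expected)
        return acks

    print(acks_for([1, 2, 4, 3]))  # [2, 3, 3, 5] -> the repeated "3" is a duplicate ACK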

Therefore most quality switches and routers go to quite some lengths to avoid packet reordering, and as part of this they try to send all packets from the same TCP stream along the same path (the paths may have different latencies, so splitting a stream across both would reorder its packets).

Contrary to what others have said, this happens even on core routers. Where multiple links are being load-balanced over, the router will try to send packets from the same TCP stream over the same link. Although this might seem to require tracking a huge amount of state, in fact it can be done without tracking the streams at all: the router takes the identifiers of the stream (source/destination IP address, and sometimes source/destination port) and hashes them into a single number, which is then used to select the link to send the packet on. (For an example of one vendor's implementation of this feature, search for ip cef load-sharing algorithm.)
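To make the idea concrete, here is a small illustrative sketch of hash-based link selection. The link names, flow tuple and hash function are all assumptions for illustration only - real routers use vendor-specific (usually hardware) hashes:

    import hashlib

    # Sketch of hash-based load sharing: pick an outgoing link from the flow's
    # identifiers so that every packet of one stream uses the same link.

    LINKS = ["link-A", "link-B"]  # the two sides of the "diamond"

    def pick_link(src_ip, dst_ip, proto, src_port, dst_port, links=LINKS):
        key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        digest = hashlib.sha256(key).digest()
        bucket = int.from_bytes(digest[:4], "big") % len(links)
        return links[bucket]

    # Every packet of this TCP stream hashes to the same bucket, hence the same link:
    print(pick_link("192.0.2.10", "198.51.100.7", "tcp", 51515, 443))

Because the selection depends only on fields that are constant for the life of the stream, no per-flow state needs to be stored.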

Doesn't it make the network less reliable in general?

Yes, of course it would - if networks did exactly what your interlocutor described. So they don't do that. The behaviour the other person described, where a path goes down and the TCP connections using it time out, is not typical: while it may be possible to configure some products to behave like this, in general both TCP and UDP traffic will start to use the alternative route straight away.

When a link goes down, the hash buckets are redistributed across the remaining links. This causes some TCP streams to change paths - which may result in some reordering - but that is generally acceptable, because it happens only when a link fails rather than on every packet.
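A quick sketch of that redistribution, using made-up flows and a simple modulo-over-remaining-links scheme (real implementations differ, but the effect is the same: a one-off reshuffle, not per-packet path changes):

    # Sketch: when one link fails, rehashing over the remaining links moves
    # some flows to a new path - once, at failure time.
    import zlib

    flows = [("10.0.0.1", f"10.0.1.{i}", 5000 + i, 80) for i in range(8)]  # made-up flows

    def bucket(flow, links):
        return links[zlib.crc32(repr(flow).encode()) % len(links)]

    before = {f: bucket(f, ["link-A", "link-B", "link-C"]) for f in flows}
    after  = {f: bucket(f, ["link-A", "link-B"]) for f in flows}  # link-C has failed
    moved = sum(1 for f in flows if before[f] != after[f])
    print(f"{moved} of {len(flows)} flows changed path after the failure")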

Other relevant information

The discussion was originally about the difference between handling of TCP and UDP packets.

In fact, all of the above handling is done for UDP packets too: that is, UDP packets from the same "connection" (flow) will also typically follow the same path. This is desirable because UDP is often used for realtime media streams such as phone calls - if many of your voice packets arrive out of order, the result is terrible audio quality (large buffering is highly undesirable for a phone call, so the receiver will typically drop packets that arrive out of sequence).
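As a rough illustration of why reordering hurts realtime UDP, here is a sketch of a receiver with essentially no jitter buffer: it plays packets as they arrive and simply discards anything that turns up after its play-out time (the packet numbers and behaviour are assumed for illustration):

    # Sketch: a realtime receiver plays packets as they arrive and drops late ones.
    def play(arrival_order):
        next_to_play, dropped = 1, []
        for pkt in arrival_order:
            if pkt < next_to_play:
                dropped.append(pkt)    # arrived after its play-out time: discarded
            else:
                next_to_play = pkt + 1  # play it; the gap is heard as a glitch
        return dropped

    print(play([1, 2, 4, 3]))  # [3] -> packet 3 becomes lost audio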