TCP – the relationship between throughput and latency

Tags: latency, tcp, throughput

Looking at the structure of a TCP packet, as discussed in "What is the internal structure of a mobile phone call packet/datagram?", that question suggests a TCP packet is 192 bytes excluding data.

From an answer on Stack Overflow (https://stackoverflow.com/questions/2613734/maximum-packet-size-for-a-tcp-connection), I see that the size of the data is variable, with a hard maximum of 64 KB, but in practice it can't go beyond about 1500 bytes (according to one of the answers) – which I assume applies to a reasonably good network.

Then on Wikipedia (https://en.wikipedia.org/wiki/Maximum_segment_size) I see that, for practical reasons, TCP segments are limited to smaller sizes to avoid IP fragmentation – which I guess is desirable on poorer-quality networks.

If I had a mobile device on a network with high latency (400 ms), is there a way to estimate the number of TCP packets that will be required to transmit an HTTP payload of 1 MB?
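A back-of-the-envelope sketch of that packet count (just an illustration; the 1,000,000-byte payload size and the 1460-byte MSS below are assumptions on my part, the latter being typical for a 1500-byte Ethernet MTU):

```python
import math

payload_bytes = 1_000_000  # assumed: "1 MB" read as 10^6 bytes
mss_bytes = 1460           # assumed: 1500-byte Ethernet MTU minus 20-byte IP and 20-byte TCP headers

segments = math.ceil(payload_bytes / mss_bytes)
print(segments)  # ~685 segments for the payload alone, ignoring handshake, ACKs and retransmissions
```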

My understanding is that you could then calculate the time it takes for all the packets to be delivered over such a connection.

i.e. 1000 packets * 400ms latency (i.e. 400ms RTT – I understand latency to usually refer to RTT) = at least 400 seconds to deliver the HTTP payload.

But this doesn't take throughput into account. If that calculation is (at least in principle) correct, how would increasing throughput (i.e. upgrading a connection from 1 Mbps to 10 Mbps) affect the delivery of a 1 MB HTTP payload?

Would it allow for increasing the TCP segment size? And, if so, is this done automatically via the TCP application?

Best Answer

> i.e. 1000 packets * 400ms latency (i.e. 400ms RTT - I understand latency to usually refer to RTT) = at least 400 seconds to deliver the HTTP payload.

Well, that would only be true if every single TCP segment had to be ACK'd before the next one could be sent, i.e. a pure stop-and-wait exchange. (Anyone remember TFTP? Across a 64k WAN link? ... those were the days of patience...)

The key concepts here are the bandwidth * delay product ("BDP", sometimes also referred to as "bytes in flight") and the TCP window size (and the scaling thereof). TCP speakers should take both into account.
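As a rough illustration of that arithmetic (a minimal sketch; the bandwidth and RTT values are simply the ones used in the examples below):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the number of bytes that must be 'in flight' to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8

print(bdp_bytes(1_000_000, 0.4))   # ~50,000 bytes  -> fits within an unscaled 64 kByte window
print(bdp_bytes(10_000_000, 0.4))  # ~500,000 bytes -> far exceeds an unscaled window
```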

https://www.switch.ch/network/tools/tcp_throughput/ is one of many web-based tools for playing around with bandwidth, delay and loss factors.

Here's a screenshot of the results with 1 Mbit/s, 400 ms RTT and a reduced MSS (maximum TCP payload size) of 1300 bytes. The calculation yields a BDP of ~50 kBytes.

[Screenshot: switch.ch throughput calculator, 400 ms RTT, 1 Mbit/s]

Here's the same with 10 Mbit/s; the BDP is ~500 kBytes:

[Screenshot: switch.ch throughput calculator, 400 ms RTT, 10 Mbit/s]

400 ms makes for a pretty "long" pipe, and by playing with the numbers you'll see that the unscaled TCP window size of 64 kBytes is nowhere near enough to reach 10 Mbit/s of throughput.

To answer your question: at 400 ms of RTT, TCP window scaling (assuming the default maximum of 64 kBytes when not scaled) is not quite needed to "fill a 1 Mbit/s pipe".

For a 10 Mbit/s pipe at 400 ms RTT, the TCP speakers MUST implement TCP window scaling.
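A quick sanity check of those two statements (a sketch assuming the classic 65,535-byte maximum unscaled window):

```python
def window_limited_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on throughput when the receive window, not the link, is the bottleneck."""
    return window_bytes * 8 / rtt_s

print(window_limited_throughput_bps(65_535, 0.4))  # ~1.31 Mbit/s: enough for a 1 Mbit/s pipe, nowhere near 10 Mbit/s
```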

I may not be a programmer, but the contemporary TCP stacks and applications I have come across in recent years (by virtue of observing their behaviour in countless Wireshark traces) all do TCP window scaling by default.