TCP – How to determine the throughput of a TCP connection

APP 1 sends data to APP 2 over a TCP connection through an IP network. The one-way delay is 10 ms and the TCP connection bandwidth is 100 Mbps. The connection starts with a window of 4,000 bytes; thereafter, APP 2 advertises a window of 4,000 bytes to APP 1 along with each acknowledgment. APP 2 maintains a buffer of 16,000 bytes and can consume the received data at a rate of 200,000 bytes/sec. How do I calculate the throughput?

Max TCP throughput = receive buffer size / RTT

RTT = 2 × 10 ms = 20 × 10^-3 s

The WINDOW size can be increased to 16,000 bytes. So 16,000 bytes is the RCV buffer size?

Therefore, throughput = 16,000 / (20 × 10^-3) = 800,000 bytes/sec (i.e. 6.4 Mbps).
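
As a quick sanity check, here is the same arithmetic in Python (the constants are just the figures from the question; the variable names are my own):

```python
# Window-limited TCP throughput, using the figures from the question.
ONE_WAY_DELAY_S = 0.010        # 10 ms one-way delay
RTT_S = 2 * ONE_WAY_DELAY_S    # 20 ms round-trip time
WINDOW_BYTES = 16_000          # receive buffer = maximum advertised window

# At most one full window can be in flight per round trip.
window_limited_bytes_per_s = WINDOW_BYTES / RTT_S
print(f"{window_limited_bytes_per_s:,.0f} bytes/sec")  # 800,000 bytes/sec
```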

However, this result is larger than the consumption rate of APP 2, which seems odd. If 16,000 is replaced by 4,000, the answer is 200,000 bytes/sec. Is this normal?

Best Answer

Yes, this is completely normal, and you use the minimum of the values you calculated above. Think about how long it would take to move a box of paper down the hall, then imagine you had to do it one page at a time. However, increasing the receive window is not always the answer. Sometimes the client makes small layer-7 requests (SMB does this a lot), so a 64 KB TCP window won't help if the client only asks for a 4 KB block, waits for it to arrive, and then asks for the next one. You may also need to consider TCP slow-start in the equation.

OK, now imagine you can carry multiple pages down the hall at a time, but you don't want to bring too many at once. So you start with 3; if you don't drop any, next time you bring 5, then 8, then 13... until you drop one, then you reduce by 50% and start ramping up again. That is TCP slow-start, and the count is called the CWND.
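
If it helps to see that ramp-up-and-back-off idea concretely, here is a toy sketch in Python. It models the page-carrying analogy only (the growth steps and the drop check are made up for illustration); it is not the actual TCP slow-start algorithm:

```python
import random

def simulate_trips(trips: int = 15, drop_probability: float = 0.2) -> None:
    """Toy model of the analogy above: carry more pages each trip, halve on a drop."""
    pages = 3  # start small, like the 3 pages in the analogy (a stand-in for CWND)
    for trip in range(trips):
        dropped = random.random() < drop_probability
        print(f"trip {trip:2d}: carried {pages:3d} pages, dropped={dropped}")
        if dropped:
            pages = max(1, pages // 2)   # back off by 50% after a drop
        else:
            pages += max(1, pages // 2)  # keep ramping up while nothing is lost

simulate_trips()
```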

But none of that makes any difference here, because the window/RTT bandwidth is 800,000 bytes/sec while APP 2's consumption rate is only 200,000 bytes/sec, so that is the limiting factor.
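
In other words, with the numbers from the question, the effective rate is simply the minimum of the two limits (same assumed names as in the sketch above):

```python
WINDOW_BYTES = 16_000
RTT_S = 0.020                            # 20 ms round-trip time
APP2_CONSUMPTION_BYTES_PER_S = 200_000   # rate at which APP 2 drains its buffer

window_limited = WINDOW_BYTES / RTT_S    # 800,000 bytes/sec
effective = min(window_limited, APP2_CONSUMPTION_BYTES_PER_S)
print(f"{effective:,.0f} bytes/sec")     # 200,000 bytes/sec
```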
