How bandwidth is allocated to concurrent users

bandwidth, networking

Let's say a server is connected to the internet through a link with limited bandwidth, and more than one user tries to download a file from that server simultaneously.

If we ignore any bandwidth limitation on the user side, how will the server-side bandwidth be allocated to the different users? If there are 2 concurrent users downloading the same file, will the bandwidth be divided evenly, so that each user gets half of it?

I've tried the following setup:

I connected 2 client PCs running Windows XP to a switch. From the switch, I connected to a server PC through a link with a fixed bandwidth of 2 Mbps. Then I ran iperf on all 3 PCs at the same time: the client PCs ran iperf in client mode, and the server PC ran iperf in server mode.

Both client PCs send data to the server PC at the same time.
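For reference, a run like this typically looks something like the following (a sketch assuming iperf 2.x; the server address, duration, and report interval below are placeholders rather than the exact values from the test):

    # On the server PC: listen for incoming TCP test streams
    iperf -s

    # On each client PC: send TCP traffic to the server for 60 seconds,
    # reporting the measured throughput every 10 seconds
    iperf -c 192.168.1.10 -t 60 -i 10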

I found that the server PC receives ~500 kbps from client PC1 and ~1450 kbps from client PC2.

Both client PCs are connected to the switch over 1 Gbps Ethernet, using the same type of cable and the same OS. The iperf settings are also identical.

I don't understand why there is such a big difference between the bandwidth allocated to client PC1 and to client PC2. I would like to know how bandwidth is allocated among concurrent users who access the server at the same time.

Thanks.

Best Answer

There is no one answer. For the simplest TCP services, each client will attempt to grab data as fast as it can, and the server will shovel it to the clients as fast as it is able. Given two clients whose combined bandwidth exceeds the server's bandwidth, both clients will probably download at roughly half the server's bandwidth.

There are a LOT of variables that make this not quite true in real life. If the clients' TCP/IP stacks differ in how well they handle fast streaming connections, that alone can affect bandwidth even if the server has effectively unlimited bandwidth. Different operating systems and server programs handle the speed ramp-up differently. Latency also affects throughput: high-latency connections can be significantly slower than low-latency connections, even though both links can carry the same amount of data in absolute terms.
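To put rough numbers on the latency point (an illustration, not figures from this answer): a single TCP stream can carry at most about one receive window of data per round trip, so with a common 64 KB window a 10 ms round-trip time allows up to roughly 52 Mb/s, while a 200 ms round-trip time caps the same stream near 2.6 Mb/s, no matter how much raw bandwidth the link has.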

A case in point: downloading kernel source archives. I've got very fast bandwidth at work; in fact, it exceeds my LAN speed, so I can saturate my local 100 Mb connection if I hit the right server. Watching my network-utilization chart while downloading large files, I can see some servers start small (around 100 Kb/s), slowly ramp up to high rates (around 7 Mb/s), then something happens and the ramp-up starts all over again. Other servers give me full speed immediately when I start downloading.

Anyway, items that can cause actual bandwidth allocation to differ from absolute equality:

  • TCP/IP capabilities of the client and the server, and how they interact
  • TCP tuning parameters on either side, not just capabilities (see the iperf example after this list)
  • Latency on the line
  • The application-level transfer protocols being used
  • The existence of hardware specifically designed for load balancing
  • Congestion between clients and the server itself
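As a concrete illustration of the tuning item above, iperf itself exposes a couple of these knobs per test (a sketch assuming iperf 2.x; the address and values are placeholders):

    # Request a larger TCP window (socket buffer) for the test stream
    iperf -c 192.168.1.10 -w 256K -t 60

    # Use four parallel TCP streams and report the aggregate throughput
    iperf -c 192.168.1.10 -P 4 -t 60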

Regarding your test case, what likely happened is that one client was able to establish a higher data-stream rate than the other, perhaps simply by getting there first. When the second stream started, it was not allocated enough resources to reach full speed parity; the first stream had already claimed most of them. If the first stream ended, the second would likely pick up speed. In this case, the speed experienced by the clients was determined by the server OS, the application doing the streaming, the server's TCP/IP stack, and, if present and enabled, the network card's TCP Offload Engine.
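One way to check this in the setup from the question (a sketch assuming iperf 2.x; the address and durations are placeholders) is to give the two clients different run lengths and watch the per-interval reports: if the first stream really did claim most of the resources, the second stream's rate should jump once the first one finishes.

    # Client PC1: 30-second test, reporting throughput every second
    iperf -c 192.168.1.10 -t 30 -i 1

    # Client PC2: 60-second test, reporting every second; watch whether its
    # rate rises after PC1's test ends around the 30-second mark
    iperf -c 192.168.1.10 -t 60 -i 1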

As I said, there are a lot of variables that go into it.

Slow-ramp bandwidth usage: [chart omitted]
