Bandwidth Throttling – How It Works

bandwidth, peering, routing

I'm trying to understand the basic mechanics of how throttling works. Here are two scenarios:

Scenario 1: A hotel offers its guests a choice between a fast and a slow connection. Say the slow plan is 500 Kbps, but the hotel's ISP connection is 1 Gbps. If only one guest is on the network and that guest is on the slow plan, his data is presumably reaching the hotel router at a faster rate than the LAN is allowed to convey it to the guest's device. To me this implies the packets are queuing up, in a sense, or being cached somewhere. Is this accurate? If so, where does this queuing occur? Does it happen on the hotel network, or is it more likely the ISP does the throttling on the hotel's behalf?

Scenario 2: If an ISP decides to throttle its users, I would expect similar queuing to occur at the edge of its network if the sum total of the permitted throughput on the ISP's own network is lower than the rate at which packets arrive from backbone providers or networks it is peered with.

For both scenarios I'm trying to wrap my head around how the queuing or caching (I'm not sure which is the correct term) of packets works. I'm assuming the packets are forced to sit in memory on networking equipment. But that must carry a cost and overhead that I've never heard mentioned when people talk about bandwidth throttling. Furthermore, it looks to me like an external cost that the throttled network imposes on the adjacent, unthrottled networks.

I'm not a network engineer so doubtless I am not grasping some concepts about how throttling works. I would be very grateful if someone would enlighten me.

Best Answer

Mostly, the packets get dropped. Some packets may get buffered in queues, but the queues aren't very big. The hotel probably uses policing, and policing simply drops traffic in excess of the allowed bandwidth. With TCP (HTTP runs on TCP), TCP reacts to the lost packets and slows down, shrinking its window size.
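A policer is typically implemented as a token bucket: tokens accumulate at the permitted rate, each forwarded packet spends tokens, and packets arriving with no tokens left are dropped rather than queued. Here is a minimal sketch of that idea (the class name, the 500 Kbps rate, and the 1500-byte burst size are illustrative assumptions, not any vendor's implementation; real policers run in hardware and often have separate committed/excess buckets):

```python
class TokenBucketPolicer:
    """Sketch of a single-rate policer: drop, don't queue, excess traffic."""

    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes          # bucket depth (max burst in bytes)
        self.tokens = float(burst_bytes)  # start with a full bucket
        self.last = 0.0                   # timestamp of the previous packet

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # within the allowed rate: forward the packet
        return False      # over the rate: police (drop) the packet


# 500 Kbps plan: a back-to-back second full-size packet is dropped,
# but after ~24 ms the bucket has refilled enough to admit another.
policer = TokenBucketPolicer(rate_bps=500_000, burst_bytes=1500)
print(policer.allow(1500, now=0.0))    # first packet fits the burst
print(policer.allow(1500, now=0.0))    # immediate second packet is dropped
print(policer.allow(1500, now=0.024))  # 1500 B / 62500 B/s = 24 ms to refill
```

Note that nothing is stored when a packet is refused; the dropped packet's sender finds out only through TCP's loss detection, which is exactly what triggers the slowdown described above.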

Even with queuing, you will probably run into some variant of RED (Random Early Detection), which randomly drops queued packets in order to keep the queues from filling, triggering TCP to slow down. If packets are only dropped once the buffers are completely full, different TCP flows can become synchronized (not good).
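The classic RED behavior is easy to see in its drop-probability curve: below a minimum queue-depth threshold nothing is dropped, above a maximum threshold everything is, and in between the drop probability rises linearly. A sketch under that classic formulation (the threshold and max-probability values below are illustrative; real implementations also use an exponentially weighted average of the queue depth rather than the instantaneous one):

```python
def red_drop_probability(avg_queue: float, min_th: float,
                         max_th: float, max_p: float) -> float:
    """Classic RED: probability of dropping an arriving packet,
    as a function of the (averaged) queue depth in packets."""
    if avg_queue < min_th:
        return 0.0    # queue is short: never drop early
    if avg_queue >= max_th:
        return 1.0    # queue is at its limit: drop everything
    # Linear ramp between the two thresholds.
    return max_p * (avg_queue - min_th) / (max_th - min_th)


# Thresholds of 10 and 30 packets, max early-drop probability 10%:
print(red_drop_probability(5, 10, 30, 0.1))   # below min_th -> 0.0
print(red_drop_probability(20, 10, 30, 0.1))  # halfway up the ramp -> 0.05
print(red_drop_probability(30, 10, 30, 0.1))  # at max_th -> 1.0
```

Because the early drops hit random packets from random flows, each TCP sender backs off at a different moment, which is what avoids the synchronized sawtooth you get with plain tail drop.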

The ISP probably just limits the actual link speed (DSL and cable can limit the number of channels available to a customer). This is the easiest thing to do.
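On the memory-cost concern from the question: where queuing (shaping) is used instead of policing, the buffer is sized to hold only a small, bounded amount of delay worth of traffic, so the cost is tiny. A back-of-the-envelope sketch (the 100 ms delay budget is an illustrative assumption, not a standard):

```python
def queue_memory_bytes(link_rate_bps: float, max_delay_s: float) -> float:
    """Memory needed to buffer max_delay_s worth of traffic at a given rate
    (the bandwidth-delay product, converted from bits to bytes)."""
    return link_rate_bps * max_delay_s / 8


# A 500 Kbps plan with up to 100 ms of queueing delay:
# 500_000 * 0.1 / 8 = 6250 bytes, roughly four 1500-byte packets.
print(queue_memory_bytes(500_000, 0.1))
```

Anything beyond that small buffer is simply dropped, which is why buffering imposes no meaningful cost on the adjacent networks: the "excess" traffic is discarded at the bottleneck, and the senders' TCP stacks throttle themselves in response.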