Router – Configure a Linux router for fair bandwidth allocation without unnecessarily limiting it

bandwidth, bandwidth-control, linux-networking, openwrt, router

I run a Linux router (more precisely, OpenWRT) on an internet connection with very limited bandwidth: around 1 MBit/s downstream and a few dozen kBit/s upstream.

There are several machines on the network that do low-bandwidth things, like playing web radio or sending measurement data. Other machines occasionally start ordinary downloads, e.g. for software updates.

Whenever a machine starts a download, the low-bandwidth streams get choppy. I suppose the bandwidth of the stream gets reduced simply because there is another connection on the router, even though both would "fit" nicely into the WAN bandwidth. This is against my intuition, and I would like to configure the router to allocate bandwidth more fairly.

By "fairly" I mean:

Suppose there is 1 MBit/s of downstream bandwidth and 64 kBit/s of it is in use. The next client that accesses the WAN should get at most (1 MBit – 64 kBit)/s. If and only if the downstream bandwidth is completely used up should the individual connections' bandwidth be lowered, and each connection should be throttled in proportion to its size (the smaller the connection, the less it is throttled).

First of all, is my understanding of the problem correct? If so, what can I do to influence the router's bandwidth allocation? Note: I do not want to do what is usually recommended in the literature, namely limit each client's bandwidth to a fraction of the total available. There is just too little WAN speed at my site for that.

Best Answer

Your understanding of the problem is generally correct, but the kind of solution you propose is VERY complicated to implement. The questions of "What is a client?" and "What is a connection?" come up, and can be difficult to answer well.

The more typical bandwidth-limiting strategy is something like this:

  • Define the limit of the upstream bandwidth (say 1 Mbit/s)
  • Optionally reserve some amount of that bandwidth for "system administration" (say 64 kbit/s)
  • Optionally "guarantee" a certain amount of bandwidth to a specific purpose (say 192 kbit/s for VoIP)
  • Allow everyone to use the remaining bandwidth (768 kbit/s).

VoIP may use more than 192 kbit/s, so we may let it borrow from the "everyone" pool (or vice versa). When the "everyone" pool is saturated, start dropping packets just as we would if the upstream link were REALLY saturated (using, e.g., Random Early Detection to pick the drop victims).
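
For concreteness, here is a minimal sketch of that strategy with Linux tc and HTB, assuming the WAN-facing interface is eth0 (on OpenWRT it may well be called something else) and using the figures from the list above. The DSCP match for VoIP is only an illustration; adapt every number to your own link:

    # Root HTB qdisc on the WAN-facing interface; unclassified traffic goes to class 1:30.
    tc qdisc add dev eth0 root handle 1: htb default 30

    # Total upstream capacity.
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit

    # "System administration" reservation: guaranteed 64 kbit/s, may borrow up to the full link.
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 64kbit ceil 1mbit prio 0

    # VoIP guarantee: 192 kbit/s, may also borrow.
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 192kbit ceil 1mbit prio 1

    # Everyone else: the remaining 768 kbit/s, may borrow when the other classes are idle.
    tc class add dev eth0 parent 1:1 classid 1:30 htb rate 768kbit ceil 1mbit prio 2

    # Random Early Detection on the bulk class, so drops start early and "randomly" as it fills.
    tc qdisc add dev eth0 parent 1:30 handle 30: red \
        limit 60000 min 15000 max 45000 avpkt 1000 burst 25 probability 0.02 bandwidth 768kbit

    # Example classifier: packets marked with DSCP EF (TOS byte 0xb8) go to the VoIP class.
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip tos 0xb8 0xff flowid 1:20

The ceil values are what implement the "borrowing": each class may exceed its guaranteed rate as long as the others are idle.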

Typically this is done on UPSTREAM traffic. Downstream traffic can be limited the same way, but you can't avoid link saturation from downstream traffic: the packets still have to reach your firewall before any drop decision can be made, so they have already travelled down your wire. The result is a surge of traffic followed by a natural decay as the remote side's TCP stack senses the "link saturation" congestion and backs off its send rate until the packet loss stops.
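
If you do want to act on the downstream direction anyway, one rough but common approach is to police inbound WAN traffic slightly below the real line rate, so that the senders' TCP back-off kicks in at your router rather than in the ISP's queue. Again only a sketch, assuming eth0 is the WAN interface and a nominal 1 Mbit/s line:

    # Police inbound WAN traffic to a bit below the real downstream rate;
    # anything above 900 kbit/s is dropped, prompting remote senders to slow down.
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
        match u32 0 0 \
        police rate 900kbit burst 50k drop flowid :1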

Also note that this doesn't guarantee "fairness" to the client machines, except to the extent that Random Early Detection will drop packets "randomly" (randomly enough that it won't always be client A that gets packets dropped when the link is saturated). What you're counting on is that the "random" drops will naturally shape traffic to an extent that you don't have to worry about one client being starved out while another hogs all the bandwidth.


An out-of-the-box solution in your particular case might be to limit the bandwidth available for updates (presumably these come from known subnets, so limit those), but this is still subject to the caveats I mentioned above.
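
As a sketch of what that could look like: shape the traffic leaving the router towards the LAN and put everything coming from the update servers' subnet into a capped class. Here br-lan stands for the LAN-facing bridge and 198.51.100.0/24 is a placeholder for the update servers' addresses; both are assumptions you would have to replace:

    # Shape router-to-LAN traffic; anything unmatched lands in class 1:20.
    tc qdisc add dev br-lan root handle 1: htb default 20
    tc class add dev br-lan parent 1: classid 1:1 htb rate 1mbit ceil 1mbit

    # Update downloads: guaranteed 128 kbit/s, capped at 512 kbit/s.
    tc class add dev br-lan parent 1:1 classid 1:10 htb rate 128kbit ceil 512kbit

    # Everything else: guaranteed the rest, may use the whole link when updates are idle.
    tc class add dev br-lan parent 1:1 classid 1:20 htb rate 512kbit ceil 1mbit

    # Classify by the update servers' source subnet (placeholder address range).
    tc filter add dev br-lan parent 1: protocol ip prio 1 u32 \
        match ip src 198.51.100.0/24 flowid 1:10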

Alternatively, if you have hardware available, you can distribute your updates from a local server (WSUS, a local apt mirror, etc.). This would let you schedule the updates to be pulled locally during off-hours when nobody is using your network, and it ultimately saves a lot of bandwidth compared with transferring each update separately for every machine.
Since the updates are already local, it doesn't matter when the individual client machines pick them up: they're not going out to the internet, so as long as you aren't saturating your local network (pretty tough!) you won't suffer significant performance issues. The downside, of course, is that you need to invest time and hardware in setting up the update server.
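
For the scheduling part, assuming a Debian-style mirror host running apt-mirror (the tool, install path, and log location here are assumptions, not details from your setup), a single crontab line is enough to pull the repositories at night:

    # Run the mirror sync at 03:30 every night, when the WAN link is idle.
    # apt-mirror reads its repository list from /etc/apt/mirror.list.
    30 3 * * * /usr/bin/apt-mirror >> /var/log/apt-mirror.log 2>&1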