Implementing HTB, NetEM, and TBF traffic control simultaneously

htb, tc, traffic-shaping

I am working on a bash utility that uses several aspects of the tc Linux command-line utility to emulate various network conditions. I have successfully constructed several qdisc hierarchies: one each for HTB bandwidth control, NetEM delay and packet manipulation, and TBF rate control, as well as combined handlers for HTB-NetEM and TBF-NetEM. Where I am struggling is in combining the three into a single structure, for cases in which I need to control all three factors on a single connection. This is what I have so far:

  sudo tc qdisc add dev $interface root handle 1:0 htb

  sudo tc class add dev $interface parent 1:0 classid 1:1 htb  #htb args

  sudo tc qdisc add dev $interface parent 1:1 handle 10:0 tbf  #tbf args

  sudo tc qdisc add dev $interface parent 10:1 handle 101:0 netem  #netem args

Because my smaller-scope cases work, I know the problem does not lie in the syntax of my inputs, but likely in the structure of my tc qdiscs and classes. When I run these commands together with rate and bandwidth shaping arguments (10 and 15 Mbit/s respectively) on both Ethernet ports of my bridge, an iperf test shows no change in bandwidth, in either TCP or UDP. Any advice would be appreciated.
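One way to see whether packets are actually traversing the classes, rather than bypassing them, is to watch the statistics counters (with HTB, traffic only enters class 1:1 if a filter or the `default` parameter directs it there). A quick sanity check, assuming the hierarchy above is installed on `$interface`:

```shell
# Dump the installed hierarchy with statistics; the byte/packet counters
# on class 1:1 should grow during an iperf run if shaping is in effect.
tc -s qdisc show dev "$interface"
tc -s class show dev "$interface"
```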

Here are my other working compound structures, in case they might help:

HTB and NetEM:

  sudo tc qdisc add dev $interface root handle 1: htb

  sudo tc class add dev $interface parent 1:0 classid 1:1 htb  #htb args

  sudo tc qdisc add dev $interface parent 1:1 handle 10:0 netem  #netem args

TBF and NetEM:

  sudo tc qdisc add dev $interface root handle 1:0 tbf  #tbf args

  sudo tc qdisc add dev $interface parent 1:1 handle 10:0 netem  #netem args

Best Answer

What you want is not HTB/TBF but HFSC.

http://man7.org/linux/man-pages/man7/tc-hfsc.7.html

You can attach netem to the leaf classes.

Here is a sample script to get you started:

#!/bin/bash
# Root HFSC qdisc; unclassified traffic falls into class 1:11.
tc qdisc add dev veth1 root handle 1: hfsc default 11
# Parent class: 100mbit guaranteed, capped at 100mbit.
tc class add dev veth1 parent 1: classid 1:1 hfsc sc rate 100mbit ul rate 100mbit
# Default leaf: 50mbit guaranteed, may borrow up to the parent's 100mbit.
tc class add dev veth1 parent 1:1 classid 1:11 hfsc sc rate 50mbit
# Realtime leaf: a 1500-byte packet must leave within 50ms; hard 10mbit ceiling.
tc class add dev veth1 parent 1:1 classid 1:12 hfsc sc umax 1500 dmax 50ms rate 10mbit ul rate 10mbit
# Attach netem to the realtime leaf to add 150ms of delay.
tc qdisc add dev veth1 parent 1:12 handle 12: netem delay 150ms
# Steer all traffic with source port 22 (ssh) into the realtime class.
tc filter add dev veth1 parent 1: protocol ip u32 match ip sport 22 0xffff flowid 1:12

This creates a 100mbit class. 50mbit of it is guaranteed to the default class (which can burst up to 100mbit), while the other class carries a realtime requirement: 1500-byte packets must leave the queue within 50ms, and the class's rate is capped at 10mbit at all times.

Finally, we add a leaf qdisc onto that class which delays packets leaving the queue by 150ms.
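To confirm the hierarchy took effect (a sketch, assuming the script above has been applied to veth1):

```shell
# The netem qdisc line should report "delay 150ms"; per-class counters
# show which class traffic is actually hitting.
tc -s qdisc show dev veth1
# To start over, delete the root qdisc; this removes the whole hierarchy.
tc qdisc del dev veth1 root
```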

Traffic is selected into the realtime class by matching on source port 22 (so all ssh traffic).
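The u32 match can be adapted to other selectors. For example (with a hypothetical destination address), steering by destination IP instead of source port:

```shell
# Send everything destined to 192.0.2.10 into the realtime class as well.
tc filter add dev veth1 parent 1: protocol ip u32 \
    match ip dst 192.0.2.10/32 flowid 1:12
```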
