Best way to limit outgoing bandwidth in apache server (mirror)

apache-2.4, apache2

I will be setting up an Apache Foundation download mirror, open to both public and private access. I would like to limit external access to about 650 Mbit/s, but place no limits (preferably a higher priority) on internal access. I would also like to serve clients as fast as possible when there is enough capacity to do so, but divide up the bandwidth when many clients are connected (evenly if possible, but that is not required). On the side, the server will also be used for Ubuntu and Debian package mirroring. The webserver will only be serving static content and must use Apache.

Current configuration:

Apache version: Apache 2.4.18
OS: Ubuntu 16.04 LTS

CPU & RAM: preferably 2 core 4GB for now, but can expand to 4 core 32GB if needed
Apache Module: default

Options available ranked by ease of access:
– apache modules
– root access to the server running the apache server (open to software traffic shaping / rate limiting)
– very powerful and 99% idling Juniper EX4xxx series switch

Best Answer

Here is a working example based on tc and iptables.

Step 1:
Replace the default pfifo_fast qdisc with a PRIO qdisc.
PRIO is a classful qdisc, which will allow us to attach filters later on to classify different types of traffic.

Check the existing qdisc:

tc -s qdisc ls dev eth0

Replace it with PRIO. This creates three bands by default.

tc qdisc add dev eth0 root handle 1: prio 

Which can be visualized as below

      1:   root qdisc
     / | \ 
   /   |   \
   /   |   \
 1:1  1:2  1:3    classes

And now let's attach leaf qdiscs (sfq and tbf, which are themselves classless) to the classes 1:1, 1:2 and 1:3:

tc qdisc add dev eth0 parent 1:1 handle 10: sfq
tc qdisc add dev eth0 parent 1:2 handle 20: tbf rate 20kbit buffer 1600 limit 3000
tc qdisc add dev eth0 parent 1:3 handle 30: sfq 

Which can be visualized as below

      1:   root qdisc
     / | \ 
   /   |   \
   /   |   \
 1:1  1:2  1:3    classes
  |    |    |
 10:  20:  30:    qdiscs
 sfq  tbf  sfq
  0    1    2     bands
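At this point the hierarchy above can be sanity-checked from the command line (requires root, and assumes the interface is eth0 as in the commands above):

```shell
# Show the qdisc hierarchy and per-qdisc statistics: you should see the
# prio root 1: plus the sfq (10:), tbf (20:) and sfq (30:) leaves.
tc -s qdisc show dev eth0

# Show the three classes 1:1, 1:2 and 1:3 with their byte/packet counters.
tc -s class show dev eth0
```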

Based on the packet's TOS field, traffic would by default go to:

  • 1:1 - interactive traffic (band 0)
  • 1:2 - best-effort traffic (band 1)
  • 1:3 - bulk traffic (band 2)

But this is not exactly what we want, considering that only a few applications will actually set TOS values.
We want to classify traffic originating from port 80 (sport=80) into 1:2 (which we shape and rate-limit) and the rest of the traffic into 1:1.
This way the rest of the traffic has priority and won't have to wait behind HTTP traffic. Otherwise the slow HTTP traffic would block other non-interactive traffic.

So how do we do this?
We will mark packets originating from source port 80 with mark 2, and all other traffic with mark 1, via iptables:

iptables -t mangle -A OUTPUT -m tcp -p tcp --sport 80 -j MARK --set-mark 2          
iptables -t mangle -A OUTPUT -m tcp -p tcp ! --sport 80 -j MARK --set-mark 1        
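To check that the mangle rules actually match, the per-rule packet counters can be watched while traffic flows (a sanity check, not part of the setup):

```shell
# List the OUTPUT chain of the mangle table with packet/byte counters;
# the counters on the two MARK rules should grow as HTTP and non-HTTP
# traffic leaves the box.
iptables -t mangle -L OUTPUT -n -v
```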

And we will use tc filters, which route packets carrying a given mark to a particular band:

tc filter add dev eth0 protocol ip parent 1:0 prio 1 handle 2 fw flowid 1:2    ### Send traffic from source port 80 to tbf queue
tc filter add dev eth0 protocol ip parent 1:0 prio 2 handle 1 fw flowid 1:1    ### Send all other traffic to sfq queue 1:1
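The attached filters can be listed to confirm the mark-to-band mapping:

```shell
# Show the fw filters attached to the root qdisc on eth0; mark 2 should
# map to flowid 1:2 and mark 1 to flowid 1:1.
tc filter show dev eth0 parent 1:0
```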

And now we are ready to test. I initiated a download of a CentOS ISO from the HTTP server and, at the same time, an sftp transfer of the same image from the same server. While the sftp transfer runs at about 13 MB/s, the HTTP transfer is limited to 20 kbit/s.


With this example band 1:3 is not even used.
In hindsight it might have been better to put tbf on band 1:3 and leave 1:1 and 1:2 as sfq with the default priorities. However, this was just a quick test and should hopefully clarify the somewhat convoluted tc documentation.
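For the 650 Mbit/s limit from the original question, only the tbf line needs to change. One thing to watch at that rate is the burst size: as a rule of thumb from the LARTC howto, the tbf burst must be at least rate/HZ bytes or tbf cannot sustain the configured rate. A hedged sketch, assuming HZ=250 (check your kernel's timer frequency) and eth0:

```shell
# Minimum tbf burst in bytes is roughly rate_in_bits / 8 / HZ
# (LARTC rule of thumb; HZ=250 is an assumption, not measured here).
RATE_BITS=650000000   # 650 Mbit/s
HZ=250
BURST=$((RATE_BITS / 8 / HZ))
echo "$BURST"         # minimum burst in bytes

# Hypothetical replacement for the 20kbit tbf from the example above:
# tc qdisc replace dev eth0 parent 1:2 handle 20: tbf rate 650mbit burst ${BURST} latency 50ms
```

A larger burst than the minimum is fine; too small a burst shows up as the measured rate falling well below the configured one.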
