Cisco – QoS shaper, shapes traffic without dropping packets

Tags: cisco, cisco-ios, congestion, drops, qos

I have a problem with my QoS policy, or rather, I have a problem understanding how my policy manages to achieve what I ask of it without dropping a single packet!

Here is the scenario:

We have a GRE point-to-point tunnel, protected with an IPsec crypto map, and we shape traffic going out of it to 10 Mbps. We use HQF, so I have a parent shaping policy applied outbound on the tunnel interface and a child policy with 5 classes (default included). The path towards the other end of the tunnel can handle 100 Mbps all the way, and the CPUs on the edge routers are monitored and within normal operating levels, so there is nothing other than my shaper limiting traffic to 10 Mbps. Policy classes “A”, “C”, “D” and “Default” are set with “queue-limit 234” (class “B” gets the default 64 packets), so the sum of the maximum queues of all 5 classes is less than 1000 packets, the default “max-queue” value on the physical interface that sources the tunnel. Ping shows the RTT through the tunnel is around 12 ms.
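For scale, here is a quick back-of-the-envelope calculation (mine, not from any router output; it assumes full-size 1400-byte packets to match the tunnel's “ip mtu 1400”) of how much buffering one 234-packet class queue represents at the shaped rate:

```python
PACKET_BYTES = 1400          # assumption: full-size packets at the tunnel's ip mtu 1400
QUEUE_LIMIT = 234            # packets, from the child policy's queue-limit
SHAPE_BPS = 10_000_000       # parent policy: shape average 10000000

queue_bits = QUEUE_LIMIT * PACKET_BYTES * 8
drain_seconds = queue_bits / SHAPE_BPS
print(f"Full class queue = {drain_seconds * 1000:.0f} ms of buffering at the shaped rate")
```

So each class can absorb roughly a quarter of a second of line-rate traffic before tail drop, which is enormous compared to the 12 ms base RTT.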

This is how I test my policy:

I FTP a large ISO file to a host on the other side of the tunnel. This first FTP matches class “C”, for example, and exceeds the class's assigned “weight”. I notice that outgoing tunnel traffic gets shaped to 10 Mbps. “show policy-map” shows a few packets queued but no drops… With the tunnel still full (10 Mbps), I start a second FTP that matches class “D”, a large ISO file again, and after a while “show policy-map” shows that the two classes carry traffic in proportion to the “weights” assigned to them with the bandwidth command. Tunnel traffic remains 10 Mbps, still a few packets in queue but no drops… What's more, “normal” network traffic matching the other classes (not exceeding their assigned “weights”) still goes through without a problem.

The question:

How does my policy manage to shape traffic to the requested size without dropping packets?

My (obviously wrong) theory:

What I expected was that when TCP exceeded the available bandwidth of a class, packets would pile up in the class queue until it filled and started tail-dropping. TCP would then eventually have to retransmit the unacknowledged data and shrink its window, try to grow it again after a while, hit tail drop like before, and so on, with the net result of shaping traffic to 10 Mbps.

But where are my tail-drops??? What am I missing here?
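A toy steady-state model (my own simplification, not IOS internals, assuming a single ack-clocked TCP flow with an unscaled ~64 KB window and 1400-byte packets) suggests why the queue would sit at “a few packets” instead of overflowing:

```python
# Toy model: a 10 Mbps shaper draining a FIFO fed by a window-limited,
# ack-clocked TCP sender. In steady state the queue holds roughly
# (window - bandwidth*RTT) packets and simply stays there; tail drop
# needs the window to exceed BDP + queue-limit.
def steady_state_queue(window_pkts, rtt_s, rate_pps, queue_limit):
    bdp_pkts = rate_pps * rtt_s                  # packets kept "on the wire"
    standing_queue = max(0.0, window_pkts - bdp_pkts)
    dropped = standing_queue > queue_limit       # tail-drop threshold
    return standing_queue, dropped

rate_pps = 10_000_000 / (1400 * 8)               # packets/s at 10 Mbps
q, dropped = steady_state_queue(window_pkts=46,  # assumption: ~64 KB window / 1400 B
                                rtt_s=0.012, rate_pps=rate_pps, queue_limit=234)
print(f"standing queue = {q:.0f} packets, tail drops: {dropped}")
```

Under those assumptions the standing queue is a few dozen packets, nowhere near the 234-packet limit.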

The router is a Cisco 2821 with IOS c2800nm-advsecurityk9-mz.151-4.M12a.
Here is a piece of my config:

class-map match-any A
 match access-group name ACL_A
class-map match-any B
 match access-group name ACL_B
class-map match-any C
 match access-group name ACL_C
class-map match-any D
 match access-group name ACL_D
!
!
policy-map CHILD
 class A
  bandwidth percent 5
  queue-limit 234 packets
 class B
  bandwidth percent 5
 class C
  bandwidth percent 40
  queue-limit 234 packets
 class D
  bandwidth percent 40
  queue-limit 234 packets
 class class-default
  queue-limit 234 packets
  bandwidth percent 10
!
policy-map PARENT
 class class-default
  shape average 10000000
  service-policy CHILD
!
!
interface Tunnel1
 bandwidth 10000
 ip address x.x.x.x 255.255.255.252
 ip mtu 1400
 ip flow ingress
 ip flow egress
 load-interval 30
 qos pre-classify
 keepalive 3 3
 tunnel source GigabitEthernet0/1.4042
 tunnel destination z.z.z.z
 tunnel path-mtu-discovery
 service-policy output PARENT
!
!    
interface GigabitEthernet0/1
 no ip address
 load-interval 30
 duplex auto
 speed auto
!
!
interface GigabitEthernet0/1.4042
 bandwidth 10000
 encapsulation dot1Q 4042
 ip address w.w.w.w 255.255.255.252
 crypto map XXXZZZ
!

Best Answer

How does my policy manage to shape traffic to the requested size without dropping packets? ... Tunnel traffic remains 10Mbps and still a few packets in queue but no drops…

Your C-class FTP and D-class FTPs are getting handled exactly as you asked for in your well-written CBWFQ policy.

The fact that you see some packets in the queues means that the policy is doing what you asked for; if you saw packet drops in this scenario, Cisco IOS would be at fault.

Since you're essentially delaying some TCP packets, the OS kernel's TCP RTT estimate gets a little higher when you have competing traffic for that 10Mbps PARENT class; because a window-limited TCP flow's throughput is roughly window/RTT, that added delay alone is enough to slow the senders, but that's all that will happen in this scenario.

CBWFQ only goes active when there are packets in the queue; you asked for 40% of your queue bandwidth to be allocated to "C FTP" and 40% of the queue bandwidth to be allocated to "D FTP". Essentially you are slowing down the FTP sessions when there is competition for that same 10Mbps in the PARENT class shaper.
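A hedged sketch of that proportional split (my arithmetic; real CBWFQ guarantees per-class minimums and redistributes unused bandwidth among backlogged classes, but the simple ratio matches what "show policy-map" reported):

```python
# Sketch of CBWFQ's work-conserving split (my simplification, not the IOS
# scheduler): bandwidth left over by quiet classes is shared among the
# backlogged classes in proportion to their configured weights.
weights = {"A": 5, "B": 5, "C": 40, "D": 40, "default": 10}
SHAPE_BPS = 10_000_000

def active_share(active_classes, cls):
    return SHAPE_BPS * weights[cls] / sum(weights[c] for c in active_classes)

print(active_share({"C", "D"}, "C") / 1e6)   # 5.0 Mbps each while both FTPs run
print(active_share({"C"}, "C") / 1e6)        # 10.0 Mbps when class C is alone
```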

But where are my tail-drops??? What am I missing here?

You aren't sending enough traffic yet to exceed those 234-packet Class C and Class D queue-limits. Send a lot more TCP sessions and you'll start busting those queues and dropping packets.
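Rough arithmetic on that (my assumptions: unscaled 64 KB receive windows and an MSS of about 1360 bytes to fit under the 1400-byte tunnel MTU; with window scaling, a single bulk flow could overflow the queue on its own):

```python
import math

WINDOW_BYTES = 65_535   # assumption: classic receive window, no window scaling
MSS = 1360              # assumption: ~1400-byte ip mtu minus TCP/IP headers
QUEUE_LIMIT = 234       # per-class queue-limit from the child policy

pkts_per_flow = math.ceil(WINDOW_BYTES / MSS)          # max packets in flight per flow
flows_to_overflow = math.ceil((QUEUE_LIMIT + 1) / pkts_per_flow)
print(f"~{pkts_per_flow} packets in flight per flow; "
      f"~{flows_to_overflow} such flows in one class could overflow its queue")
```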

[in comments] How is TCP slowing down if packets aren't being dropped?

To know the exact reason why you're slowing traffic down, please post Wireshark packet captures; the easiest place to get captures is on your Ethernet switches. This avoids distortions caused by TCP Segmentation Offload, which is turned on by default in your NIC driver.