Cisco IOS – QoS Between Two Sites


I have the following topology, where the Internet service providers for both sites provide a 10 Mbps connection:

[Topology diagram: QoS for GRE tunnel keepalives]

There is a GRE tunnel between the sites, and I would like to ensure that this tunnel stays up even if the connection is congested. The first measure is that the LAN-facing interfaces have CAR set to 9.5 Mbps in both directions. However, I would also like to ensure that GRE keepalive messages are handled as fast as possible once they reach the Fa0/0 interface Rx ring buffer. What are the methods to achieve this? Should I use some Rx queuing strategy other than FIFO on the Fa0/0 ports?
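For reference, the legacy CAR setup described above would typically look something like this on a LAN-facing interface (the interface name and burst values here are assumptions for illustration, not from the actual config; bursts follow Cisco's common rate/8 × 1.5 s guideline):

interface Fa0/1
 rate-limit input 9500000 1781250 3562500 conform-action transmit exceed-action drop
 rate-limit output 9500000 1781250 3562500 conform-action transmit exceed-action drop
!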

Best Answer

I'd recommend migrating from old-school CAR to a modern CBWFQ configuration. Once you do that, you can nest QoS policies together to accomplish more complicated QoS scenarios. Per this Cisco doc, GRE keepalives are marked with CS6. The keepalive is also part of the GRE packet itself, not one of the inner IP packets being tunneled (source).

class-map CM-CS6
 match dscp cs6
!
policy-map PM-FA0/0-QUEUE-OUT
 class CM-CS6
  priority percent 5
!
policy-map PM-FA0/0-SHAPER-OUT
 class class-default
  shape average 9500000
  service-policy PM-FA0/0-QUEUE-OUT
!
interface tunnel 100
 qos pre-classify
!
interface Fa0/0
 service-policy output PM-FA0/0-SHAPER-OUT
!

First, we create a class-map to match our GRE keepalives. Note that this will actually over-match and include all CS6 traffic, which likely includes other routing/bridging protocols. That is not necessarily a bad thing, since the failure of the routing control-plane would bring the path down completely. You might also consider prioritizing other high-priority classes, such as additional control-plane or voice traffic.
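If you do want to match more than just CS6, a match-any class-map can combine several DSCP values in one class; for example (the class name below is made up for illustration):

class-map match-any CM-CONTROL
 match dscp cs6
 match dscp cs7
!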

Next, we create a queuing policy-map for Fa0/0. This will control how each class of traffic is treated during congestion. In this case, the only class we define a policy for is CS6 (GRE keepalives and other control-plane traffic). We say that CS6 traffic is allocated 5% of the available bandwidth in a priority queue. That means that if there is CS6 traffic queued, it will always be sent on the link first (up to 5% of the link bandwidth). Any traffic above that 5% will be sent best-effort with the rest of the traffic. (See this doc for more information on priority vs bandwidth allocation.)
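To illustrate the distinction: "priority" gives a class strict low-latency servicing (and is policed to its allocation during congestion), while "bandwidth" only guarantees a minimum share without reordering ahead of other classes. A sketch, with hypothetical voice and bulk classes:

policy-map PM-EXAMPLE
 class CM-VOICE
  priority percent 10
 class CM-BULK
  bandwidth percent 30
!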

Now it's time to replace your ancient 1962 Pinto CAR! Create a policy-map for shaping on Fa0/0. We want this shaper to apply to all traffic, so we only define a policy for class-default. Shape to 9.5 Mbps, then nest our queuing policy. This means that traffic will be sent FIFO as long as bandwidth is available. As soon as traffic reaches 9.5 Mbps, the router will start queuing packets and will dequeue them according to PM-FA0/0-QUEUE-OUT.
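Once the shaper is attached, you can check both the parent shaper and the nested queuing policy with:

show policy-map interface Fa0/0 output

The output lists per-class packet counters, so you can confirm that CS6 traffic is actually hitting the priority queue during congestion.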

Next, we do a little future-proofing. By default, the service-policy on the interface will inspect already-encapsulated traffic. The router copies the DSCP marking from the inner IP packet into the outer IP header, so you'll still be able to match on DSCP. But if you ever implement more complicated class-maps (using access lists or NBAR), the router will be blind to the internal contents of the GRE packets. "qos pre-classify" makes a copy of the packet before encapsulation and uses that for QoS inspection rather than the GRE-encapsulated packet (more reading).
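As an example of the kind of class-map that needs pre-classification, consider matching on an inner-packet port (the ACL and class name below are hypothetical):

ip access-list extended ACL-SIP
 permit udp any any eq 5060
!
class-map CM-SIP
 match access-group name ACL-SIP
!

Without "qos pre-classify", this match would be evaluated against the GRE-encapsulated packet and would never hit.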

Finally, your complete QoS policy is applied to the physical interface (not the tunnel!). Applying the shaper ensures your traffic is shaped to 9.5 Mbps, and that any queued traffic is handled according to the queuing policy you set up.

Side note: there is no benefit in rate-limiting your traffic inbound. You are only attached to one other endpoint, and you know for a fact that it's shaping to 9.5 Mbps. Assuming there's not more to your topology, it's safer not to try to control the inbound traffic.

Hope this helps!
