Quality of Service: Questions about input and output service policies prioritizing the same traffic

qos

This question/lab was created with Cisco networking hardware. I have asked this question elsewhere with no responses.

I am new to QoS, but I have spent the last few days researching and I've learned a ton. However, I am still having issues accomplishing what I want. I am trying to configure QoS to prioritize two classes of traffic over all other types of traffic (the first class will have more priority than the second class). To test this and my configuration, I've set up two routers, one switch, and two PCs. Here is a picture of the topology I am using… (I am using real hardware; Packet Tracer is only used to illustrate the topology)

[Topology diagram: the PCs connect to a Catalyst 2960 switch, which connects to R1; R1 connects to R2 over the GigabitEthernet0/1 link.]
My goal is to have all three PCs ping R2 at the same time, and for PC0 to have the easiest/fastest transmission. I set up a choke point between R1 and R2 so I could test and make sure my PC0 packets were the least likely to be dropped. I configured the R1-R2 link with a speed of 10 Mbps, an MTU of 500, and a hold-queue of 10 packets for both 'in' and 'out'. I then had R1 ping R2 with large ICMP packets and R2 ping R1, also with large ICMP packets. The idea is that R1's pings, R2's pings, and PC2's pings are all routine traffic and should be the traffic that is dropped if the queue is full. PC0's pings should never/rarely be dropped.

Here's an example of my configuration…

R1 Configuration

ip access-list extended class1-out_acl
  permit ip 192.168.1.0 0.0.0.255 any
  deny ip any any
ip access-list extended class2-out_acl
  permit ip 192.168.2.0 0.0.0.255 any
  deny ip any any
ip access-list extended class1-in_acl
  permit ip any 192.168.1.0 0.0.0.255
  deny ip any any
ip access-list extended class2-in_acl
  permit ip any 192.168.2.0 0.0.0.255
  deny ip any any

class-map match-all class1-out
  match access-group name class1-out_acl
class-map match-all class2-out
  match access-group name class2-out_acl
class-map match-all class1-in
  match access-group name class1-in_acl
class-map match-all class2-in
  match access-group name class2-in_acl

policy-map QOS-OUT
  class class1-out
    priority percent 20
    set precedence 5
  class class2-out
    priority percent 20
    set precedence 3
  class class-default
    fair-queue
    random-detect
    set precedence 0

policy-map QOS-IN
  class class1-in
    police 2000000 400000 400000 conform-action transmit exceed-action drop violate-action drop
  class class2-in
    police 2000000 400000 400000 conform-action transmit exceed-action drop violate-action drop
  class class-default
    police 5000000 1000000 1000000 conform-action transmit exceed-action drop violate-action drop

control-plane
  service-policy input QOS-IN

interface GigabitEthernet0/1
  mtu 500
  speed 10
  hold-queue 10 in
  hold-queue 10 out
  service-policy output QOS-OUT

R2's configuration is exactly the same EXCEPT for the ACLs. The ACL for class1-out becomes the ACL for class1-in and vice versa. The same is true for the class2-in and class2-out ACLs. This is done so that class1 traffic is given 'priority' round trip.
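To spell that out, the class1 ACLs on R2 are simply the mirror image of R1's (class2 is mirrored the same way):

ip access-list extended class1-out_acl
  permit ip any 192.168.1.0 0.0.0.255
  deny ip any any
ip access-list extended class1-in_acl
  permit ip 192.168.1.0 0.0.0.255 any
  deny ip any any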

I set PC0 and PC3 to both ping R2 with 5000-byte pings, 300 times each (I didn't use PC1 during this test). I set R1 to ping R2 with 5000-byte pings about 15000 times, and the same for R2 pinging R1.

During the test, both routers and both PCs were dropping packets (as I expected, though I was slightly surprised PC0 was dropping). When my test was complete, I found that PC2 had the best results, dropping only 5% with a 9 ms average, while PC0 dropped 10% with an 11 ms average.

I ran show policy-map interface and saw that my class1 packets were being picked out of the flood, but it doesn't seem that they have any priority. Using this command, I also noticed ALL of the packets were being classified under IP Precedence 0 (routine), which is strange, because I set class1 to 5.

As I mentioned earlier, I am new to QoS, so I'm sure I have made a mistake somewhere. Also, could someone please explain a good method/policy to use when setting the 'police' values? I picked these numbers somewhat arbitrarily.

Once I have sorted out any configuration errors, my underlying question is… is there a standard or policy I can follow to 'match' my input and output service policies so that they provide the same level of QoS to the traffic classes? Referencing my example above, I'm trying to use a priority percent of 20% for class1 on the exit interface while also policing the input interface to 2 Mbps (20% of 10 Mbps) for class1. This is my attempt at 'aligning' the link so it uses the same prioritization of traffic for both input and output. I wish I could just use a percentage for both input and output service policies, but my research makes me think this is not possible.

Best Answer

TL;DR:

A. Don't use ping, use UDP.

B. Do classification and marking on the ingress side (matching ACLs, protocols, whatever).

C. Do queuing/scheduling on the egress side, matching just the QoS bits in the IP header's ToS field.

A bit more verbose:

A) Use UDP, not Ping

To start with: ICMP, or ping respectively, is a particularly unsuitable tool for testing the queuing/scheduling part of a QoS setup, for two reasons:

  • Ping generates the same data stream in the reverse direction, and you have almost no way to tell whether a "ping loss" occurred on the request or on the reply packet.
  • When looking for effects of queuing, ping's reported RTTs are just as difficult to interpret - if there is increased latency because of buffering, you can't tell whether it happened to the request or to the reply.

ICMP may do a decent job when testing the classification/marking part of a QoS setup.

Therefore, I suggest picking a tool that generates unidirectional UDP streams, such as iPerf (there are certainly others), and running the test cases for each direction individually. You may want to add a PC to the LAN behind R2 to serve as the receiving system (with iPerf over UDP, the interesting results are shown on the receiver side, whereas with TCP, either side will show meaningful results).

UDP also has the advantage that you can easily work with ports, which makes the ACLs for the class-maps a LOT easier to write (e.g. udp/5000 is the default class, port 5005 gets precedence 5, port 5003 gets precedence 3, et cetera).
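As an example, the test traffic for one direction could be generated like this (a sketch only - the flags are from the iperf3 version I remember, and 192.168.3.10 is just a placeholder address for a receiver PC on the LAN behind R2):

    # on the receiving PC behind R2 (placeholder address 192.168.3.10)
    iperf3 -s -p 5005

    # on PC0: unidirectional UDP stream to port 5005, 2 Mbit/s for 60 seconds
    iperf3 -c 192.168.3.10 -u -p 5005 -b 2M -t 60

Run it once per direction and per class, so you always know which direction you are actually measuring.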

I'm not quite sure what you intended with your configuration - especially with regard to the role of control-plane policing, which is a completely different ballpark. I would advise leaving that aside; CoPP can cause great havoc if badly implemented on a production network.

There are three very important things to remember with QoS:

  • QoS is a unidirectional thing, and when measuring QoS, always be aware of the direction of the "interesting" traffic. Configurations may be symmetrical and even share the same class-maps/policy-maps and ACLs, but the way QoS operates on a router is completely independent for each direction. For that reason, you want test scenarios of a unidirectional character.

  • QoS Queuing/Scheduling isn't really happening until the interface (or the parent shaper) gets saturated.

  • QoS is a system of managed unfairness (not my quote, but I can't remember where I got it from). Once a link gets saturated, EVERY traffic class WILL suffer to some extent. QoS just sets the rules of unfairness.

I'll give you some hints based on my experience with QoS setups, picking up on the bits I gathered from your post.

Usually, you want to perform classification/marking on the ingress interface, possibly together with some policing. Classification/marking with a policy-map bound to an egress interface is sometimes not even possible, or only possible in a limited fashion.

B) Ingress QoS

Let's start with the ingress side. (Please be aware that I'm composing this freehandedly, not from an actual device. There might be slight errors in syntax or feature support on the given platform)

  • define class-maps with their ACLs to identify traffic (1):

    class-map match-any CMAP_QOS_CLASS1
      match access-group name ACL_CLASS1-TRAFFIC
    class-map match-any CMAP_QOS_CLASS2
      match access-group name ACL_CLASS2-TRAFFIC

  • define a policy-map called PMAP_QOS_INGRESS, containing the above class-maps

  • in that policy-map, just set the precedence or DSCP value, optionally police incoming traffic.

    policy-map PMAP_QOS_INGRESS
      class CMAP_QOS_CLASS1
        set precedence 5
      class CMAP_QOS_CLASS2
        set precedence 3
      class class-default
        set precedence 0

  • bind that policy-map to the interface where to-be-QoS'd traffic is coming from (2)
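    For example (GigabitEthernet0/0 is just an assumed name for R1's LAN-facing interface - use whichever interface the PCs' traffic actually arrives on):

    interface GigabitEthernet0/0
      service-policy input PMAP_QOS_INGRESS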

Now packets entering that router get their intended marking in the ToS byte.

C) Egress QoS

Now for the egress side.

  • define a set of class maps that have matching criteria for ToS byte markings (DSCP or precedence). Do not match on IP address (with an ACL), protocol or port, here.

    class-map match-any CMAP_QUE_CLASS1
      match precedence 5
    class-map match-any CMAP_QUE_CLASS2
      match precedence 3

  • define a policy map that decides on what has to happen to each class (3):

    policy-map PMAP_QUE_EGRESS
      class CMAP_QUE_CLASS1
        priority level 1
        police <some upper limit you want to give to that class>
      class CMAP_QUE_CLASS2
        priority level 2
        police <some upper limit you want to give to that class>
      class class-default
        random-detect precedence-based

  • bind that policy map to the interface where the to-be-qos'd traffic is leaving R1.
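    On R1 that would be the Gig0/1 choke point from your own configuration, roughly:

    interface GigabitEthernet0/1
      service-policy output PMAP_QUE_EGRESS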

Then, start some traffic (not yet saturating the link or the upper limits of policing) from all three PCs. Use show policy-map interface gig0/1 out on R1 to see whether you get nonzero counters for the different classes.

Then start to increase the bandwidth of the UDP streams until they saturate your choke point. Look at the egress policy-map again (this time looking for drop counters per class) and look at the stream receiver's stats for loss and jitter. Be sure to compare the jitter of the priority classes vs. the jitter of the default class.

CAVEAT: You seem to have a Catalyst 2960 in there between your PCs and R1. Even if it's irrelevant in the current scenario (classification/marking happening on R1), I strongly recommend that you ...

  • (either) make absolutely sure that the Catalyst has mls qos disabled
  • (or, if mls qos is enabled) make sure that all switch ports accepting traffic with QoS markings (i.e. precedence or DSCP values in the ToS byte) are configured with mls qos trust

Otherwise, the Catalyst will nullify the ToS byte of all IP packets (yes, even if it's only acting as a Layer 2 switch), and you might end up with unexpected effects.
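For reference, the second option would look something along these lines on the 2960 - the interface range is just a placeholder for the ports facing the PCs and R1, and whether you trust dscp or cos depends on which marking you want the switch to honour:

    mls qos
    interface range FastEthernet0/1 - 4
      mls qos trust dscp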

(1) On Cisco Nexus platforms, class-maps and policy-maps used for (ingress) classification and marking are of type qos, while the ones for (egress) queuing and scheduling are of type queuing. I happen to like this because it gives clarity, and therefore I chose to reflect each class/policy map's intended use in its name, even on IOS. Your class/policy-map naming strategy may of course be different.

(2) On a campus network, classification and marking would be done on the access switch port the PCs are connected to. For the sake of the exercise, you may want to do this on the router interface towards the (source) PCs.

(3) I'll forgo the example of a hierarchical policy-map with a parent and child policy-map. If needed, I'll edit the answer.
