Cisco – How to reduce latency on ASA 5510 WAN link

Tags: cisco, cisco-asa, h-qos, ipv4, latency

I have a Cisco ASA 5510 connecting our office to the Internet via a 2 Mbit/s link. When the link fills up, latency to a well-connected host on the Internet goes from 5-8 ms to 500-800 ms. With ping, it may look something like this (TCP performance seems to match):

64 bytes from X: icmp_req=242 ttl=59 time=450 ms
64 bytes from X: icmp_req=243 ttl=59 time=458 ms
64 bytes from X: icmp_req=244 ttl=59 time=495 ms
64 bytes from X: icmp_req=245 ttl=59 time=186 ms
64 bytes from X: icmp_req=246 ttl=59 time=103 ms
64 bytes from X: icmp_req=247 ttl=59 time=5.18 ms
64 bytes from X: icmp_req=248 ttl=59 time=4.94 ms
64 bytes from X: icmp_req=249 ttl=59 time=4.65 ms
64 bytes from X: icmp_req=250 ttl=59 time=4.85 ms

I assume this is because of large send queues on the 2 Mbit link. Since this is mainly office IT traffic (from < 20 people), latency is more important than throughput.

How can I measure how this latency is divided between the outbound and inbound legs? If it turns out to be generated equally, or mostly by inbound traffic, can I affect that (i.e. with ingress throttling), or do I have to contact my ISP? And how is this best achieved on ASA 8.2?

UPDATE: Short topology:

Ping node -[gbit]->
  ProCurve dist -[gbit]->
    ProCurve core -[gbit]->
      ASA -[100 mbit]->
        DSL -[copper]->
          upstream

ASA interface upstream:

asa# show interface Ethernet 0/3
Interface Ethernet0/3 "outside", is up, line protocol is up
Hardware is i82546GB rev03, BW 100 Mbps, DLY 100 usec
    Auto-Duplex(Full-duplex), Auto-Speed(100 Mbps)
    Input flow control is unsupported, output flow control is off
    Description: ### Upstream ###
    MAC address X, MTU 1500
    IP address X, subnet mask 255.255.255.252
    3480433364 packets input, 2625479988848 bytes, 0 no buffer
    Received 1010728 broadcasts, 0 runts, 11 giants
    11 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
    0 pause input, 0 resume input
    0 L2 decode drops
    3160567056 packets output, 1537515646562 bytes, 0 underruns
    0 pause output, 0 resume output
    0 output errors, 0 collisions, 15 interface resets
    0 late collisions, 0 deferred
    6 input reset drops, 0 output reset drops, 0 tx hangs
    input queue (blocks free curr/low): hardware (255/240)
    output queue (blocks free curr/low): hardware (255/109)
Traffic Statistics for "outside":
    3487933998 packets input, 2561681053821 bytes
    3160334264 packets output, 1477359288037 bytes
    57746233 packets dropped
  1 minute input rate 193 pkts/sec,  209755 bytes/sec
  1 minute output rate 122 pkts/sec,  12734 bytes/sec
  1 minute drop rate, 1 pkts/sec
  5 minute input rate 177 pkts/sec,  174578 bytes/sec
  5 minute output rate 131 pkts/sec,  17600 bytes/sec
  5 minute drop rate, 2 pkts/sec

Best Answer

How can I measure how this latency is divided between outbound and inbound leg?

You can find which direction is congested with hping on Linux / Cygwin.
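For example, a minimal hping3 invocation (the target host and port are placeholders; pick a well-connected server you are allowed to probe):

```shell
# Send 20 TCP SYN probes to port 80, one per second, and note the RTTs.
# Run this once while saturating only the upload (e.g. a large outbound
# transfer) and once while saturating only the download; whichever run
# shows the RTT spike tells you which leg's queue is adding the delay.
hping3 --syn --destport 80 --count 20 example.com
```

Unlike ICMP ping, TCP SYN probes are also less likely to be deprioritized by intermediate routers, so the measured RTTs track real TCP behavior more closely.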

If it turns out to be generated equally or mostly on inbound traffic, can I affect that (i.e. ingress throttling) or do I have to contact my ISP?

You can do it either way, but the ISP method is better, since it controls the traffic before it ever crosses the DSL link. However, there is nothing fundamentally wrong with controlling both directions on the ASA (as long as you implement it correctly). I agree with you that Linux is not a good enterprise QoS solution, since there are non-trivial supportability issues for anyone who has to maintain the iptables policies.

How best to achieve this on ASA 8.2?

First, be sure you know how much bandwidth you have in the Tx / Rx directions. Note that DSL uses ATM, which can be a little tricky due to the ATM cell tax: every packet is padded out to a whole number of 53-byte cells, of which only 48 bytes carry payload.
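The cell tax can be estimated with a short calculation (a sketch; the per-packet encapsulation overhead varies with your ISP's encapsulation, so the 10-byte figure below is an assumption):

```python
import math

def atm_wire_bytes(ip_packet_len, encap_overhead=10):
    """Bytes actually sent on the ATM wire for one IP packet.

    encap_overhead (assumed 10 here) covers PPP/LLC-style headers, which
    differ by ISP. AAL5 adds an 8-byte trailer, and the result is padded
    to fill 48-byte cell payloads, each carried in a 53-byte cell.
    """
    aal5_len = ip_packet_len + encap_overhead + 8   # payload + AAL5 trailer
    cells = math.ceil(aal5_len / 48)                # whole cells, padded
    return cells * 53

# A full-size 1500-byte packet needs 32 cells: 1696 bytes on the wire,
# i.e. roughly 13% overhead; small packets fare much worse.
print(atm_wire_bytes(1500))
```

This is why the shaper rates below should sit a few percent under the nominal line rate: shaping at exactly 2 Mbit/s of IP traffic would still overrun a 2 Mbit/s ATM circuit.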

Then, use hierarchical priority queueing (aka hierarchical QoS, or HQoS) on the ASA; this is a sample policy:

class-map CLASS_VOICE
 match dscp ef
 exit
class-map CLASS_VOICE_SIGNAL
 match dscp af31
 exit
!
policy-map POLICY_PRIORITIZE_VOICE
 ! Give VOICE and VOICE SIGNAL priority
 class CLASS_VOICE
  priority
 class CLASS_VOICE_SIGNAL
  priority
 class class-default
policy-map POLICY_TRAFFIC_SHAPE_INSIDE
 ! Shape all traffic to slightly less than the DSL modem's ingress bandwidth
 ! I assume you have 2Mbps here, but please measure what you have
 class class-default
  shape average 2000000 16000
  service-policy POLICY_PRIORITIZE_VOICE
!
policy-map POLICY_TRAFFIC_SHAPE_OUTSIDE
 ! Shape all traffic to slightly less than the DSL modem's egress bandwidth
 ! I assume you have 512Kbps here, but please measure what you have
 class class-default
  shape average 512000
  service-policy POLICY_PRIORITIZE_VOICE
!
service-policy POLICY_TRAFFIC_SHAPE_INSIDE interface INSIDE
service-policy POLICY_TRAFFIC_SHAPE_OUTSIDE interface OUTSIDE

This example assumes you implement both Tx and Rx HQoS on the ASA (and that you only use two interfaces on your ASA). It also assumes you have already marked your traffic correctly. However, by the time you finish trying to mark traffic on your ProCurves, you might decide it's easier to put a real Cisco router inline to do the marking for you. If you put a router inline, it's usually better to do the QoS on the router as well.
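If you do add a router inline, the marking side might look like this minimal IOS sketch (the ACLs, DSCP values, and interface name are assumptions; match your actual voice subnets or ports, and apply it inbound on the LAN-facing interface):

```
! Assumed RTP media ports (UDP 16384-32767) and SIP signaling (UDP 5060)
access-list 100 permit udp any any range 16384 32767
access-list 101 permit udp any any eq 5060
!
class-map CLASS_MARK_VOICE
 match access-group 100
class-map CLASS_MARK_VOICE_SIGNAL
 match access-group 101
!
policy-map POLICY_MARK
 class CLASS_MARK_VOICE
  set dscp ef
 class CLASS_MARK_VOICE_SIGNAL
  set dscp af31
!
interface GigabitEthernet0/0
 ! Assumed LAN-facing interface
 service-policy input POLICY_MARK
```

The DSCP values match the class-maps in the ASA policy above (ef for media, af31 for signaling), so marked traffic lands in the priority classes automatically.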