How can I measure how this latency is divided between the outbound and inbound legs?
You can find which direction is congested with hping on Linux / Cygwin.
If it turns out to be generated equally or mostly on inbound traffic, can I affect that (i.e. ingress throttling) or do I have to contact my ISP?
You can do it either way, but having the ISP shape is better, since the traffic is then controlled before it ever crosses the DSL link. However, there is nothing fundamentally wrong with controlling both directions on the ASA (as long as you implement it correctly). I agree with you that Linux is not a good enterprise QoS solution, since there are non-trivial supportability issues for anyone who has to maintain the iptables policies.
How best to achieve this on ASA 8.2?
First, be sure you know how much bandwidth you have in the Tx and Rx directions. Note that DSL uses ATM, which can be a little tricky due to the ATM cell tax.
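To see why the cell tax matters when sizing your shapers, here is a quick illustration (a sketch, not part of the original answer): ATM carries 48 payload bytes per 53-byte cell, and AAL5 adds an 8-byte trailer plus padding to a multiple of 48 bytes. This ignores any PPP/Ethernet encapsulation overhead, which varies by DSL setup.

```python
import math

def atm_goodput(line_rate_bps, payload_bytes=1500):
    """Approximate usable IP throughput on an ATM link (AAL5 framing)."""
    aal5_bytes = payload_bytes + 8          # AAL5 trailer
    cells = math.ceil(aal5_bytes / 48)      # 48 payload bytes per cell
    wire_bytes = cells * 53                 # 53 bytes per cell on the wire
    return line_rate_bps * payload_bytes / wire_bytes

# A 1500-byte packet needs 32 cells (1696 bytes on the wire),
# so roughly 11-12% of the line rate is lost to the cell tax.
print(round(atm_goodput(2_000_000)))
```

So if your modem syncs at 2Mbps, shaping to 2000000 bps is actually too generous; shaving off the cell tax keeps the queue on the ASA instead of in the modem.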
Then use hierarchical priority queueing (also known as hierarchical QoS, or HQoS) on the ASA. Here is a sample policy:
class-map CLASS_VOICE
  match dscp ef
exit
class-map CLASS_VOICE_SIGNAL
  match dscp af31
exit
!
policy-map POLICY_PRIORITIZE_VOICE
  ! Give VOICE and VOICE_SIGNAL traffic priority
  class CLASS_VOICE
    priority
  class CLASS_VOICE_SIGNAL
    priority
  class class-default
!
policy-map POLICY_TRAFFIC_SHAPE_INSIDE
  ! Shape all traffic to slightly less than the DSL modem's ingress bandwidth
  ! I assume you have 2Mbps here, but please measure what you have
  class class-default
    shape average 2000000 16000
    service-policy POLICY_PRIORITIZE_VOICE
!
policy-map POLICY_TRAFFIC_SHAPE_OUTSIDE
  ! Shape all traffic to slightly less than the DSL modem's egress bandwidth
  ! I assume you have 512Kbps here, but please measure what you have
  class class-default
    shape average 512000
    service-policy POLICY_PRIORITIZE_VOICE
!
service-policy POLICY_TRAFFIC_SHAPE_INSIDE interface INSIDE
service-policy POLICY_TRAFFIC_SHAPE_OUTSIDE interface OUTSIDE
This example assumes you implement both Tx and Rx HQoS on the ASA (and that you only use two interfaces on your ASA). It also assumes you have already marked your traffic correctly. However, by the time you finish trying to mark traffic on your PowerConnects, you might decide it's easier to put a real Cisco router inline to do the marking for you. If you put a router inline, it's usually better to do the QoS on the router.
Does increasing the bandwidth on a link from, let's say, 1Mbps to 30Mbps reduce the RTT?
In short, yes: you are reducing serialization delay, and at 1Mbps the serialization delay is non-trivial.
Compare the serialization delay for a 1500 Byte packet at 1Mbps and 30Mbps:
1500 Bytes * 8 bits/Byte / 1,000,000 bits/second = 12 milliseconds (at 1Mbps)
1500 Bytes * 8 bits/Byte / 30,000,000 bits/second = 0.4 milliseconds (at 30Mbps)
Remember also that those are unidirectional numbers; double them when considering RTT. Whether you care about an 11.6 millisecond difference in each direction at 1500 bytes is another question, but strictly speaking you can influence RTT with link speed.
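The arithmetic above generalizes to any packet size and link rate; here is the same calculation as a small function (an illustration, not part of the original answer):

```python
def serialization_delay_ms(packet_bytes, link_bps):
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

for bps in (1_000_000, 30_000_000):
    one_way = serialization_delay_ms(1500, bps)
    # Doubling approximates the contribution to RTT (both directions).
    print(f"{bps / 1e6:g} Mbps: {one_way:.1f} ms one-way, {2 * one_way:.1f} ms RTT")
# Prints:
# 1 Mbps: 12.0 ms one-way, 24.0 ms RTT
# 30 Mbps: 0.4 ms one-way, 0.8 ms RTT
```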
Best Answer
First, the linked article contains several poor approximations and is not, IMHO, a good source.
Back to the question: insufficient bandwidth leads to link congestion, which means the equipment's buffers fill up. Packets are then delayed, waiting their turn to be transmitted from those buffers, and this increases latency.
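The queueing delay works the same way as serialization delay, just applied to everything buffered ahead of your packet. As a rough sketch (the packet counts and link rate here are assumptions for illustration):

```python
def queueing_delay_ms(queued_bytes, link_bps):
    """Time for a FIFO backlog of queued_bytes to drain at the link rate."""
    return queued_bytes * 8 / link_bps * 1000

# e.g. ten 1500-byte packets already buffered on a 512kbps uplink
# must drain before your packet is sent:
print(f"{queueing_delay_ms(10 * 1500, 512_000):.0f} ms")  # Prints: 234 ms
```

This is why a congested slow uplink can add hundreds of milliseconds of latency even though the wire itself is only microseconds long.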