Could you confirm that in an infrastructure configured with "per packet" load balancing on the routers (same route, same metric), traceroute becomes unusable? In that case, shouldn't the infrastructure deliver seemingly random traceroute results for the same destination host?
Traceroute – Inside Per Packet Load Balancing Infrastructure
cisco, load-balancing, routing, switching, traceroute
Related Solutions
... The load balancing works when I replace the MPLS by ATM switches.
This doesn't sound like a parallel comparison, because routers don't care whether you're forwarding through an ATM PVC or an MPLS LSP, they will load-balance in the same way, assuming the routers have the same configuration. Perhaps you've done something unusual with your ATM VCs, but that isn't the focus of this question.
However, with the MPLS routers, all the TCP and UDP traffic goes through R3 and R5, even when there is congestion, and as a consequence R4 and R6 are never used. The weirdest thing is that when I traceroute R7 from R1, the packets go through R4 and R6.
This actually is not weird; this is how routing works. Routing happens on a per-hop basis; routers have no forward-looking wisdom to predict congestion along subsequent hops of a particular path. Routing protocols don't carry load information in their metric calculations (except EIGRP, if you play games with the K values, but that can cause path instability).
That said, you can override the default algorithm and force load-balancing in this case: configure ip load-sharing per-packet
on R2 (Serial2/1 and Serial2/2) as well as R7 (Serial2/0 and Serial2/1):
interface Serial2/1
ip address 192.168.0.5 255.255.255.252
no shutdown
mpls label protocol ldp
mpls ip
mpls mtu 1512
ip route-cache cef
ip load-sharing per-packet
!
interface Serial2/2
ip address 192.168.0.9 255.255.255.252
no shutdown
mpls label protocol ldp
mpls ip
mpls mtu 1512
ip route-cache cef
ip load-sharing per-packet
Per-packet load-balancing forces the router to round-robin traffic across the equal-cost paths. You should never use this in a production network without considering the consequences (most often packet reordering, which makes VoIP / video, and sometimes TCP, unhappy).
Per-packet load-balancing is great in the lab, and usually unwise in production, unless you're in a very specific situation and know what you're doing. Per-packet load-balancing also has very spotty platform support... some Cisco routers (such as the Catalyst 6500 Sup720) simply won't use per-packet load-balancing.
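To see why reordering happens, here is a minimal sketch (not router code; the link names reuse the interfaces above, and the delay values are made up) of round-robin forwarding across two links with different latencies:

```python
from itertools import cycle

# Hypothetical per-packet round-robin across two serial links with
# different one-way delays (name, delay in ms). The delays are invented
# purely to illustrate the reordering effect.
links = cycle([("Serial2/1", 10), ("Serial2/2", 30)])

# Send 6 packets of one flow, 1 ms apart; each takes the next link in turn.
in_flight = []
for seq in range(6):
    link, delay = next(links)
    in_flight.append((seq, link, seq + delay))  # arrival = send time + delay

# The receiver sees packets in arrival order, not sequence order.
arrival_order = [seq for seq, _, arrival in sorted(in_flight, key=lambda p: p[2])]
print(arrival_order)  # → [0, 2, 4, 1, 3, 5]
```

Every packet that took the slower link arrives after packets sent later on the faster link, which is exactly the reordering that upsets VoIP and TCP.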
How do I get the MPLS backbone to load-balance the traffic when there is a risk of congestion? I have read that OSPF is supposed to do this by default, but in my case it does not.
As I mentioned above, you're misunderstanding how routing works... this is a huge topic, but I'll just summarize the most relevant info here...
How Cisco IOS load-balancing works by default
By default, Cisco routers use per-source / per-destination hashes of the IP addresses at the ingress Label Edge Router (LER) to decide which path to take. You can see this in action by running show ip cef exact-route [src-ip] [dst-ip]
... notice the label [FOO] TAG adj
output below, which indicates the MPLS label allocated by LDP.
PE4#sh ip cef exact-route 192.168.4.1 192.168.0.5
192.168.4.1 -> 192.168.0.5 => label 154 TAG adj out of GigabitEthernet1/0/0,
addr 10.10.1.189
If I choose a different source or destination address, it's possible to see a different path chosen...
PE4#sh ip cef exact-route 192.168.4.1 192.168.0.12
192.168.4.1 -> 192.168.0.12 => label 152 TAG adj out of GigabitEthernet1/0/1,
addr 10.10.1.193
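The mechanism behind exact-route can be sketched in a few lines. This is only a stand-in model (zlib.crc32 is not Cisco's hash; the real algorithm is platform-specific and seeded), but it shows why the same (src, dst) pair is always pinned to one path while a different destination may land on another:

```python
import zlib

# The two equal-cost egress interfaces from the exact-route output above.
paths = ["GigabitEthernet1/0/0", "GigabitEthernet1/0/1"]

def exact_route(src: str, dst: str) -> str:
    """Deterministically map a (src, dst) pair onto one of the paths.

    zlib.crc32 is a hypothetical stand-in for the platform's CEF hash.
    """
    h = zlib.crc32(f"{src}->{dst}".encode())
    return paths[h % len(paths)]

# Identical pairs always hash to the same path (no reordering within a pair)...
assert exact_route("192.168.4.1", "192.168.0.5") == exact_route("192.168.4.1", "192.168.0.5")

# ...while changing the destination may select the other interface.
print(exact_route("192.168.4.1", "192.168.0.5"))
print(exact_route("192.168.4.1", "192.168.0.12"))
```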
If you want to see the general load-balancing configuration on the router, use show ip cef [dst] internal
(output borrowed from this blog)... also note the chosen load-balancing algorithm (per-destination sharing
).
R2#show ip cef 10.200.4.1 internal
10.200.4.1/32, epoch 0, flags rib only nolabel, rib defined all labels, RIB[B], refcount 5,
per-destination sharing
sources: RIB
feature space:
IPRM: 0x00018000
ifnums:
FastEthernet1/1(5): 192.168.23.3
FastEthernet2/0(6): 172.16.23.3
path 6896722C, path list 68991DF0, share 1/1, type recursive, for IPv4
recursive via 10.4.4.4[IPv4:Default], fib 6894F9B0, 1 terminal fib, v4:Default:10.4.4.4/32
path 689672A4, path list 68991E3C, share 0/1, type attached nexthop, for IPv4
MPLS short path extensions: MOI flags = 0x0 label 19
nexthop 172.16.23.3 FastEthernet2/0 label 19, adjacency IP adj out of FastEthernet2/0,
addr 172.16.23.3 686FC0A0
path 6896731C, path list 68991E3C, share 1/1, type attached nexthop, for IPv4
MPLS short path extensions: MOI flags = 0x0 label 19
nexthop 192.168.23.3 FastEthernet1/1 label 19, adjacency IP adj out of FastEthernet1/1,
addr 192.168.23.3 686FC360
output chain:
loadinfo 689C4DA0, per-session, 2 choices, flags 0003, 6 locks
flags: Per-session, for-rx-IPv4
16 hash buckets
< 0 > label 19 TAG adj out of FastEthernet2/0, addr 172.16.23.3 686FBF40
< 1 > label 19 TAG adj out of FastEthernet1/1, addr 192.168.23.3 686FC200
< 2 > label 19 TAG adj out of FastEthernet2/0, addr 172.16.23.3 686FBF40
< 3 > label 19 TAG adj out of FastEthernet1/1, addr 192.168.23.3 686FC200
< 4 > label 19 TAG adj out of FastEthernet2/0, addr 172.16.23.3 686FBF40
< 5 > label 19 TAG adj out of FastEthernet1/1, addr 192.168.23.3 686FC200
< 6 > label 19 TAG adj out of FastEthernet2/0, addr 172.16.23.3 686FBF40
< 7 > label 19 TAG adj out of FastEthernet1/1, addr 192.168.23.3 686FC200
< 8 > label 19 TAG adj out of FastEthernet2/0, addr 172.16.23.3 686FBF40
< 9 > label 19 TAG adj out of FastEthernet1/1, addr 192.168.23.3 686FC200
<10 > label 19 TAG adj out of FastEthernet2/0, addr 172.16.23.3 686FBF40
<11 > label 19 TAG adj out of FastEthernet1/1, addr 192.168.23.3 686FC200
<12 > label 19 TAG adj out of FastEthernet2/0, addr 172.16.23.3 686FBF40
<13 > label 19 TAG adj out of FastEthernet1/1, addr 192.168.23.3 686FC200
<14 > label 19 TAG adj out of FastEthernet2/0, addr 172.16.23.3 686FBF40
<15 > label 19 TAG adj out of FastEthernet1/1, addr 192.168.23.3 686FC200
Subblocks:
None
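The 16 hash buckets in that output are the heart of the mechanism: the two equal-cost adjacencies are interleaved across the bucket table, and the flow hash simply indexes into it. A minimal sketch of that table (again with a stand-in hash, not Cisco's actual algorithm):

```python
import zlib

# The two adjacencies from the loadinfo output above.
adjacencies = [
    "FastEthernet2/0 via 172.16.23.3",
    "FastEthernet1/1 via 192.168.23.3",
]

# Build the 16-bucket table exactly as shown: even buckets take the first
# adjacency, odd buckets the second.
buckets = [adjacencies[i % 2] for i in range(16)]

def pick_adjacency(src: str, dst: str) -> str:
    """Hash the pair into one of the 16 buckets (stand-in hash function)."""
    bucket = zlib.crc32(f"{src}|{dst}".encode()) % 16
    return buckets[bucket]
```

With two paths of equal share, the interleaving gives each path 8 of the 16 buckets, i.e. a 50/50 split over many flows; unequal shares would simply claim different numbers of buckets.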
Cisco IOS with CEF Load-balancing, hashed on L4 ports...
In IOS 12.4, Cisco introduced another algorithm that load-balances traffic based on L4 ports... use ip cef load-sharing algorithm include-ports source destination
at the global configuration level.
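The difference this makes can be sketched as follows (hypothetical hash, invented addresses and ports): once the ports join the key, two flows between the same pair of hosts can diverge onto different paths, which a pure src/dst-IP hash can never do:

```python
import zlib

paths = ["path-A", "path-B"]

def pick_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """include-ports style hashing: L4 ports are part of the key.

    zlib.crc32 is a stand-in for the platform's real algorithm.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return paths[zlib.crc32(key) % len(paths)]

# Same host pair, different source ports: the two flows hash independently,
# so they may (and statistically often will) take different paths.
print(pick_path("192.168.4.1", "192.168.0.5", 40001, 443))
print(pick_path("192.168.4.1", "192.168.0.5", 40002, 443))
```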
Load-balancing with MPLS TE
MPLS TE introduces yet another twist into load-balancing, including the potential for offline optimization of paths; typically you buy an application to perform offline load optimization among various MPLS TE tunnels in the network. This too, is a very deep subject... I mention it only because you're asking about load-balancing with MPLS "automatically".
Best Answer
There is no way to trace the route of a packet directly in a typical IP infrastructure. There was an experimental extension for it, but it was never widely implemented.
So what traceroute actually does is send out probes with increasing TTLs and listen for the ICMP time-exceeded messages that come back.
So in a per-packet load-balanced infrastructure, traceroute can tell you how far along the path(s) each router it finds is, but it cannot tell you whether the router(s) it finds at hop n and the router(s) it finds at hop n+1 are on the same path or on different paths.
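A small simulation makes this concrete. Using the R1..R7 topology from the related answer above (purely illustrative), each probe is forwarded independently, and the router where its TTL expires sends the reply; successive probes at the same TTL can therefore name routers from different parallel paths:

```python
from itertools import cycle

# Two equal-cost paths from R1 to R7; per-packet balancing alternates
# between them for every forwarded probe.
paths = cycle([["R2", "R3", "R5", "R7"],
               ["R2", "R4", "R6", "R7"]])

def probe(ttl: int) -> str:
    """Return the router on which this probe's TTL expires."""
    path = next(paths)                    # per-packet: each probe may switch paths
    return path[min(ttl, len(path)) - 1]  # hop where TTL reaches zero

# Three probes per TTL, as classic traceroute sends them.
results = {ttl: [probe(ttl) for _ in range(3)] for ttl in range(1, 4)}
for ttl, routers in results.items():
    print(ttl, routers)
# Hop 2 mixes R3 and R4, hop 3 mixes R5 and R6 -- nothing in the output says
# whether R4 forwards to R5 or to R6.
```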
Per-packet load balancing is rarely used, though; most load balancing is done on a per-flow basis, because reordering packets within a flow often leads to poor performance. A flow is characterised by some combination of header fields, often the source/destination IPs and the source/destination ports.
In a per-flow load-balanced environment it is possible to get useful traceroutes, but you have to be careful which traceroute implementation you use. To get a self-consistent trace, the implementation must keep the relevant header fields constant so that all probes are seen as part of the same flow.
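The contrast can be sketched as follows (hypothetical per-flow hash and invented addresses/ports): a classic UDP traceroute increments the destination port on every probe, so the flow key keeps changing, while a flow-consistent implementation holds the 5-tuple fixed:

```python
import zlib

paths = ["upper path", "lower path"]

def flow_path(src_port: int, dst_port: int) -> str:
    """Per-flow hashing over the ports (stand-in hash, fixed host pair)."""
    key = f"10.0.0.1:{src_port}->10.0.0.2:{dst_port}".encode()
    return paths[zlib.crc32(key) % len(paths)]

# Classic traceroute: dst port changes per probe, so the flow key changes
# and the probes may be spread across both paths mid-trace.
classic = {flow_path(40000, 33434 + i) for i in range(16)}

# Flow-consistent traceroute: the 5-tuple never changes, so neither can
# the chosen path.
consistent = {flow_path(40000, 33434) for _ in range(16)}

print(len(classic), len(consistent))  # the consistent trace uses exactly one path
```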
I recall a presentation (I think it was a UKNOF video) that mentioned a traceroute monitoring tool able to trace with consistent source ports across multiple traces and report which paths had problems. It looks like the video was https://www.youtube.com/watch?v=jqaiXtBF4ug and the tool was "fbtracert".