Networking – How Does Layer 3 LACP Destination Address Hashing Work?

Tags: hp-procurve, lacp, networking

Based on an earlier question from over a year ago (Multiplexed 1 Gbps Ethernet?), I went off and set up a new rack with a new ISP, with LACP links all over the place. We need this because we have individual servers (one application, one IP) serving thousands of client computers all over the Internet, with cumulative traffic in excess of 1 Gbps.

This LACP idea is supposed to let us break the 1 Gbps barrier without spending a fortune on 10GbE switches and NICs. Unfortunately, I've run into some problems with outbound traffic distribution. (This despite Kevin Kuphal's warning in the question linked above.)

The ISP's router is a Cisco of some sort. (I deduced that from the MAC address.) My switch is an HP ProCurve 2510G-24, and the servers are HP DL380 G5s running Debian Lenny. One server is a hot standby. Our application cannot be clustered. Here is a simplified network diagram that includes all relevant network nodes with IPs, MACs, and interfaces.

[Network diagram: all nodes with IPs, MACs, and interfaces]

While it has all the detail, it is a bit hard to work with when describing my problem. So, for simplicity's sake, here is the network diagram reduced to just the nodes and physical links.

[Simplified network diagram: nodes and physical links only]

So I went off and installed my kit at the new rack and connected my ISP's cabling from their router. Both servers have an LACP link to my switch, and the switch has an LACP link to the ISP router. Right from the start I realized that my LACP configuration was not correct: testing showed all traffic to and from each server was going over one physical GbE link exclusively, both server-to-switch and switch-to-router.

[Diagram: all traffic using a single physical link on each LACP trunk]

With some Google searches and lots of RTFM time regarding Linux NIC bonding, I discovered that I could control the NIC bonding by modifying /etc/modules:

# /etc/modules: kernel modules to load at boot time.
# mode=4 is for lacp
# xmit_hash_policy=1 means to use layer3+4(TCP/IP src/dst) & not default layer2 
bonding mode=4 miimon=100 max_bonds=2 xmit_hash_policy=1

loop
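
For reference, the bonding documentation describes the layer3+4 policy as hashing on the TCP/IP source/destination fields. Here is a rough Python sketch of that formula; the addresses and port numbers below are made-up examples, and the real driver works on packed header fields rather than plain integers:

# Sketch of the Linux bonding layer3+4 transmit hash (xmit_hash_policy=1),
# per Documentation/networking/bonding.txt. Illustration only.
def layer3_4_hash(src_ip, dst_ip, src_port, dst_port, num_slaves):
    # ((source port XOR dest port) XOR ((source IP XOR dest IP) AND 0xffff)) mod slave count
    return ((src_port ^ dst_port) ^ ((src_ip ^ dst_ip) & 0xFFFF)) % num_slaves

def ip(addr):
    # Dotted-quad string to a 32-bit integer.
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# Different client flows can hash to different NICs of a 2-port bond:
print(layer3_4_hash(ip("192.0.2.10"), ip("203.0.113.7"), 80, 51000, 2))
print(layer3_4_hash(ip("192.0.2.10"), ip("203.0.113.8"), 80, 51000, 2))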

This got traffic leaving my server over both NICs as expected. But traffic was still moving from the switch to the router over only one physical link.

[Diagram: traffic split across both server NICs, but only one switch-to-router link in use]

We need that traffic going over both physical links. After reading and rereading the 2510G-24's Management and Configuration Guide, I find:

[LACP uses] source-destination address pairs (SA/DA) for distributing outbound traffic over trunked links. SA/DA (source address/destination address) causes the switch to distribute outbound traffic to the links within the trunk group on the basis of source/destination address pairs. That is, the switch sends traffic from the same source address to the same destination address through the same trunked link, and sends traffic from the same source address to a different destination address through a different link, depending on the rotation of path assignments among the links in the trunk.

It seems that a bonded link presents only one MAC address, and therefore my server-to-router traffic is always going to take a single switch-to-router link, because the switch sees only one MAC (and not two, one from each port) for both LACP'd links.
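
To see why in miniature, here is a toy sketch (my own illustration, not the ProCurve's actual algorithm; the MAC values are made up): whatever function the switch applies to the source/destination MAC pair, a fixed pair always yields the same trunk member.

# Toy layer-2 SA/DA hash: with the bond presenting one MAC on each end,
# the (source MAC, destination MAC) pair never varies, so neither does
# the chosen link. The XOR-and-modulo below is an assumed stand-in.
def l2_sa_da_hash(src_mac, dst_mac, num_links):
    return (src_mac ^ dst_mac) % num_links

SERVER_BOND_MAC = 0x001CC4000001   # made-up example values
ROUTER_MAC = 0x001B54000002

for _ in range(5):
    # Same answer every time, regardless of which client the traffic is for.
    print(l2_sa_da_hash(SERVER_BOND_MAC, ROUTER_MAC, 2))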

Got it. But this is what I want:

[Diagram: desired traffic distribution across both physical links of each trunk]

A more expensive HP ProCurve switch, the 2910al, uses layer 3 source and destination addresses in its hash. From the "Outbound Traffic Distribution Across Trunked Links" section of the ProCurve 2910al's Management and Configuration Guide:

The actual distribution of the traffic through a trunk depends on a calculation using bits from the Source Address and Destination address. When an IP address is available, the calculation includes the last five bits of the IP source address and IP destination address, otherwise the MAC addresses are used.
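
The guide does not publish the exact calculation, but a hedged sketch of what "the last five bits of the IP source address and IP destination address" could mean looks like this. The combining step (XOR, then modulo the number of trunk links) is my guess for illustration, not HP's documented function:

# Hypothetical 2910al-style layer-3 distribution using the last five bits
# of each IP address. The combining function is an assumption.
def procurve_l3_hash(src_ip_last_octet, dst_ip_last_octet, num_links):
    src_bits = src_ip_last_octet & 0x1F   # last five bits of the source IP
    dst_bits = dst_ip_last_octet & 0x1F   # last five bits of the destination IP
    return (src_bits ^ dst_bits) % num_links

# With a fixed server source address, only the destination changes the result:
print(procurve_l3_hash(5, 113, 2))   # one client
print(procurve_l3_hash(5, 114, 2))   # a different client may take the other link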

OK. So, for this to work the way I want it to, the destination address is the key, since my source address is fixed. This leads to my question:

How exactly & specifically does layer 3 LACP hashing work?

I need to know which destination address is used:

  • the client's IP, the end destination?
  • Or the router's IP, the destination of the next physical hop?

We've not gone off and bought a replacement switch yet. Please help me understand exactly whether layer 3 LACP destination address hashing is or is not what I need. Buying another useless switch is not an option.

Best Answer

What you're looking for is commonly called a "transmit hash policy" or "transmit hash algorithm". It controls the selection of a port from a group of aggregate ports with which to transmit a frame.

Getting my hands on the 802.3ad standard has proven difficult because I'm not willing to spend money on it. Having said that, I've been able to glean some information from a semi-official source that sheds some light on what you're looking for. Per this presentation from the 2007 Ottawa, ON, CA IEEE High Speed Study Group meeting, the 802.3ad standard does not mandate particular algorithms for the "frame distributor":

This standard does not mandate any particular distribution algorithm(s); however, any distribution algorithm shall ensure that, when frames are received by a Frame Collector as specified in 43.2.3, the algorithm shall not cause a) Mis-ordering of frames that are part of any given conversation, or b) Duplication of frames. The above requirement to maintain frame ordering is met by ensuring that all frames that compose a given conversation are transmitted on a single link in the order that they are generated by the MAC Client; hence, this requirement does not involve the addition (or modification) of any information to the MAC frame, nor any buffering or processing on the part of the corresponding Frame Collector in order to re-order frames.

So, whatever algorithm a switch / NIC driver uses to distribute transmitted frames must adhere to the requirements as stated in that presentation (which, presumably, was quoting from the standard). There is no particular algorithm specified, only a compliant behavior defined.

Even though there's no algorithm specified, we can look at a particular implementation to get a feel for how such an algorithm might work. The Linux kernel "bonding" driver, for example, has an 802.3ad-compliant transmit hash policy that applies the following function (see bonding.txt in the Documentation/networking directory of the kernel source):

Transmit Port = (((<source IP> XOR <dest IP>) AND 0xFFFF)
    XOR (<source MAC> XOR <destination MAC>)) MOD <ports in aggregate group>

This causes both the source and destination IP addresses, as well as the source and destination MAC addresses, to influence the port selection.
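
To make that concrete, here is a small Python rendering of the quoted formula; the MAC and IP values are made-up examples, and real implementations operate on raw header bytes rather than plain integers:

# The bonding driver's layer2+3-style hash quoted above, sketched in Python.
def xmit_hash(src_ip, dst_ip, src_mac, dst_mac, num_ports):
    return (((src_ip ^ dst_ip) & 0xFFFF) ^ (src_mac ^ dst_mac)) % num_ports

def ip(addr):
    # Dotted-quad string to a 32-bit integer.
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# Fixed server IP/MAC and router MAC, varying Internet client IPs:
SERVER_IP, SERVER_MAC, ROUTER_MAC = ip("192.0.2.10"), 0x001CC4000001, 0x001B54000002
for client in ("203.0.113.7", "203.0.113.8", "198.51.100.23", "198.51.100.24"):
    print(client, "->", xmit_hash(SERVER_IP, ip(client), SERVER_MAC, ROUTER_MAC, 2))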

The destination IP address used in this type of hashing would be the address that's present in the frame. Take a second to think about that. The router's IP address isn't encapsulated anywhere in an Ethernet frame heading away from your server toward the Internet. The router's MAC address is present in the header of such a frame, but the router's IP address isn't. The destination IP address encapsulated in the frame's payload will be the address of the Internet client making the request to your server.
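
If it helps to see that in packet form, here is a hypothetical frame built with the scapy library (all addresses made up): the router appears only as the layer-2 destination, while the layer-3 destination is the client.

from scapy.all import Ether, IP, TCP

# A frame leaving the server toward an Internet client: the destination MAC
# is the router's, but the destination IP is still the client's. The router's
# own IP address appears nowhere in the frame.
frame = (
    Ether(src="00:1c:c4:00:00:01", dst="00:1b:54:00:00:02")   # server NIC -> router MAC
    / IP(src="192.0.2.10", dst="203.0.113.7")                 # server IP  -> client IP
    / TCP(sport=80, dport=51000)
)
frame.show()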

A transmit hash policy that takes into account both source and destination IP addresses, assuming you have a widely varied pool of clients, should do pretty well for you. In general, more widely varied source and/or destination IP addresses in the traffic flowing across such an aggregated infrastructure will result in more efficient aggregation when a layer 3-based transmit hash policy is used.
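
As a rough sanity check of that claim, a toy simulation (using only the layer-3 portion of the hash above and randomly generated client addresses) spreads a large client population close to evenly over two links:

import random

# Toy model: hash 10,000 random "client" IPs against one fixed server IP and
# count how many land on each of two aggregated ports.
def l3_hash(src_ip, dst_ip, num_ports):
    return ((src_ip ^ dst_ip) & 0xFFFF) % num_ports

random.seed(1)
server_ip = (192 << 24) | (0 << 16) | (2 << 8) | 10
counts = [0, 0]
for _ in range(10000):
    counts[l3_hash(server_ip, random.getrandbits(32), 2)] += 1
print(counts)   # expect something close to a 50/50 split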

Your diagrams show requests coming directly to the servers from the Internet, but it's worth pointing out what a proxy might do to the situation. If you're proxying client requests to your servers then, as chris speaks about in his answer, you may cause bottlenecks. If that proxy is making its requests from its own source IP address, instead of from the Internet client's IP address, you'll have fewer possible "flows" in a strictly layer 3-based transmit hash policy.

A transmit hash policy could take layer 4 information (TCP / UDP port numbers) into account, too, so long as it keeps within the requirements of the 802.3ad standard. Such an algorithm is in the Linux kernel, as you reference in your question. Beware that the documentation for that algorithm warns that, due to fragmentation, traffic may not necessarily flow along the same path and, as such, the algorithm isn't strictly 802.3ad-compliant.