Bonding Switch Uplinks for Double Bandwidth – How to

Tags: ieee-802.1ax, switch

Traditional 802.3ad link aggregation only works when all of the links in the group run between the same pair of devices. So you couldn't have a system with one half of a bonded link going into switch A and the other half going into switch B and expect LACP to work. I suppose STP (if enabled) would block one of them to prevent a loop. Is that correct?

I realise that bonding/trunking can only provide double bandwidth in specific circumstances, e.g. when there are separate communication flows. It wouldn't give double the bandwidth if both source and destination are the same.

What I'm looking for is a way of connecting 2 switches together with multiple links to provide N times the bandwidth between them.
I guess with LACP on the uplink ports between the 2 switches the bandwidth would still be limited to each discrete traffic flow:

  • so traffic flows from switch-port A-3 to B-9 and from B-4 to A-8 would each be able to hit close to 1Gbps (supposing there are 2 links in the LACP group)
  • but A-6 to B-3 would not be able to exceed 1Gbps
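The per-flow limit described above follows from how LACP distributes traffic: the switch hashes each frame's addresses to pick one member link, so every frame of a given flow always lands on the same link. Here is a minimal sketch of that idea; the hash function and MAC addresses are purely illustrative, not any vendor's actual algorithm.

```python
# Sketch of how a switch's LACP hash policy pins each flow to one
# member link (illustrative only, not a real vendor implementation).
import zlib

def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pick a member link index from a layer-2 hash of the frame."""
    key = f"{src_mac}-{dst_mac}".encode()
    return zlib.crc32(key) % num_links

# Every frame of a single flow hashes to the same link index,
# so that flow can never exceed one link's bandwidth.
flow_a3_b9 = select_link("aa:bb:cc:00:00:03", "aa:bb:cc:00:00:09", 2)
flow_b4_a8 = select_link("aa:bb:cc:00:00:04", "aa:bb:cc:00:00:08", 2)
```

Two different flows may hash to different links (giving ~2Gbps aggregate), but nothing guarantees it; they can just as easily collide on the same link.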

    1. Is my understanding above all correct?

    2. Are there any vendor-specific implementations/extensions that can allow a single physical server to run LACP across 2 switches?
      I suppose this is where stackable switches come in? Multiple physical switches configured as a single logical switch?

    3. Are there any vendor-specific implementations/extensions that can increase the bandwidth of a single traffic flow by simultaneously using multiple links?
      EDIT: On further thought this would be useless, as the rate of data going into the switch would still only be the bandwidth of a single port. Unless your server was connected to a 1Gbps port but the switches were connected together using a pair of 100Mbps ports.

Best Answer

  1. Yes, your understanding is correct.
  2. Yes, there are implementations that will allow you to do link aggregation between a host and two switches. Switch stacking allows a stack of individual switches to be managed as one device; typically one of the switches in the stack becomes the master, which lets it manage link aggregation across members of the stack. A second option is virtual switching, which provides the same functionality across multiple switches even if they are not stacked, though it typically requires higher-end hardware and specific software versions. Examples are Virtual Switching System (VSS) with Multichassis EtherChannel (MEC) and virtual PortChannel (vPC) from Cisco, or Virtual Chassis from Juniper.
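From the host side, the bond looks the same regardless of whether its partner is one switch, a stack, or an MLAG/vPC pair, because those features present a single LACP system ID. A hedged sketch of a Linux host's side of such a bond follows; interface names and the address are examples, and this only works across two physical switches if they run one of the multi-chassis features above.

```shell
# Create an LACP (802.3ad) bond on a Linux host. eth0 and eth1 may go
# to two different switches ONLY if those switches are stacked or
# running an MLAG/vPC/VSS-style feature; otherwise LACP will not form
# a single aggregation group. Names and addresses are illustrative.
ip link add bond0 type bond mode 802.3ad lacp_rate fast xmit_hash_policy layer3+4
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0

# Verify both links joined the same aggregator:
cat /proc/net/bonding/bond0
```

The `xmit_hash_policy` option only controls how the *host* spreads its outbound flows across members; each individual flow is still pinned to one link.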
  3. No. One of the hard invariants (i.e. absolute requirements) of L2 networking is in-order delivery of frames. In link aggregation this is enforced by requiring each flow to traverse only one link in the group; if that link experiences a delay, all of the flow's frames are delayed together and the invariant still holds. If a flow were spread across two links and one of them experienced a delay (even a very short one), frames could be delivered out of order, violating the invariant.
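The reordering problem in point 3 can be shown with a toy simulation: if frames of one flow alternate round-robin across two links and one link is briefly slower, frames on the fast link overtake earlier frames on the slow one. The delay values below are made-up numbers purely for illustration.

```python
# Toy illustration of why splitting one flow across two links breaks
# in-order delivery. Delays are arbitrary example values (ms).
link_delay = [1.0, 5.0]  # link 1 is briefly congested

# Frames of a single flow, sent round-robin across both links:
# (sequence number, link index), one frame every 0.1 ms.
frames = [(seq, seq % 2) for seq in range(6)]

# Arrival time = send time + per-link delay; sort by arrival.
arrivals = sorted(frames, key=lambda f: f[0] * 0.1 + link_delay[f[1]])
received_order = [seq for seq, _ in arrivals]

# Frames 0, 2, 4 (fast link) arrive before frames 1, 3 (slow link),
# even though 1 and 3 were sent earlier than 2 and 4.
print(received_order)
```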

Ultimately, if you are running into a need to exceed the speed of a link for a single flow, you would need to upgrade your interfaces to the next available speed technology (e.g. 1G to 10G, 10G to 40G, etc). Cisco is also spearheading a push for "multigigabit" Ethernet, providing speeds of 2.5G or 5G over Cat5e/6 cabling at distances up to 100 meters.