Cisco – How to anticipate spanning-tree changes to prevent packet loss during convergence

Tags: cisco, packet-loss, spanning-tree

We have a LAN with Cisco switches, redundant cabling and spanning tree.
If I understand it correctly, when I pull out a redundant cable that the spanning tree is currently using, it takes several seconds for the spanning tree to converge in reaction. How can I prevent this packet loss (assuming, of course, that I know beforehand that the cable will be pulled)? That is, how can I make the spanning tree adapt "proactively"?

I would have guessed that an interface shutdown plus waiting a couple of seconds should suffice, but I did not dare to try that out yet. Actually, I am afraid an interface shutdown would cause the same interruption during convergence, because I suffered from such an interruption yesterday when making a supposedly harmless configuration change on some interfaces. (Edit: I just confirmed this experimentally; as expected, there was some 20 seconds of interruption after an interface shutdown. Note that I am looking for a "lossless" solution, not just "less loss".)

Best Answer

It sounds like you're using classic STP instead of Rapid STP. Two options will speed up the convergence time significantly.

interface *server interface*
spanning-tree portfast

This should be applied to server-facing interfaces. It tells STP that there is no switch on the other side of the port, so it is safe to skip the normal listening and learning states used to prevent loops; the port moves straight to forwarding.
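
A fuller version of that snippet, as it might look on a server-facing access port (the interface name and VLAN below are placeholders, and the BPDU guard line is an optional safeguard, not something from the original answer):

interface GigabitEthernet1/0/10
 description server-facing access port (example name)
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
 spanning-tree bpduguard enable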

spanning-tree mode rapid-pvst

Entered in global configuration mode, this enables the newer Rapid Per-VLAN Spanning Tree (Rapid PVST+) protocol, which uses an explicit proposal/agreement handshake between switches to re-converge within a couple of seconds rather than the 30-45 seconds typical of classic STP.
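
For example (a minimal sketch; the mode should be changed on every switch in the domain, since ports facing a legacy 802.1D switch fall back to classic STP timings):

configure terminal
 spanning-tree mode rapid-pvst
end
! verify the running mode afterwards
show spanning-tree summary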

You might also try setting up a port-channel between your switches instead of redundant single links. STP treats the bundle as one logical link, so if one member port is lost, traffic fails over to the remaining port(s) almost immediately without a spanning-tree topology change.
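
A sketch of what that could look like with LACP, assuming two hypothetical uplinks (Gi1/0/23-24) between the switches; the interface numbers and channel-group number are examples, not taken from the question:

interface range GigabitEthernet1/0/23 - 24
 description uplink to peer switch (example)
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk

With "mode active" configured on both switches, LACP negotiates the bundle, and losing one member link simply shifts traffic onto the remaining member without STP becoming involved.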