When an isolated port transmits data, that data is mapped into its secondary (isolated) VLAN. Data in the secondary VLAN is mapped to the primary VLAN -only- for delivery to promiscuous ports. Promiscuous ports, in turn, transmit into the primary VLAN, and all ports can receive traffic in the primary VLAN.
A port placed in a community VLAN instead transmits into that community (secondary) VLAN; its traffic is delivered both to the other members of the community and, via the primary VLAN, to promiscuous ports. Community ports receive data from both the primary and their community VLAN.
A given pair of ports will have bidirectional communication under either of the following conditions:
- One or both are promiscuous, or...
- Both are in the same PVLAN community.
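As a concrete illustration, a minimal PVLAN configuration might look like the following on Catalyst IOS. The VLAN and interface numbers here are hypothetical, and exact syntax varies by platform:
! Sketch only - VLAN/interface numbers are examples
vlan 100
 private-vlan primary
 private-vlan association 101,102
vlan 101
 private-vlan isolated
vlan 102
 private-vlan community
!
interface GigabitEthernet0/1
 description promiscuous port (e.g. toward the default gateway)
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101,102
!
interface GigabitEthernet0/2
 description isolated host port
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
!
interface GigabitEthernet0/3
 description community host port
 switchport mode private-vlan host
 switchport private-vlan host-association 100 102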
VACLs are a completely different mechanism: they provide some measure of per-packet (and usually protocol-based) control of traffic bridged within a given VLAN. You might, for instance, block traffic on TCP/80 between all hosts within the VLAN while allowing all other traffic to pass.
It's possible to approximate the effects of PVLANs with a VACL, but this tends to be fragile and difficult to manage, and there are often inherent hardware limitations to contend with (highly dependent on platform).
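The TCP/80 example above could be sketched roughly like this on a Catalyst switch (the ACL name, map name, and VLAN number are hypothetical; traffic matched by the ACL is dropped, everything else is forwarded):
! Sketch only - names and VLAN number are examples
ip access-list extended BLOCK-HTTP
 permit tcp any any eq 80
!
vlan access-map NO-INTRAVLAN-HTTP 10
 match ip address BLOCK-HTTP
 action drop
vlan access-map NO-INTRAVLAN-HTTP 20
 action forward
!
vlan filter NO-INTRAVLAN-HTTP vlan-list 10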
First, as others have mentioned, you have no bridging loop here because you are running a port channel. That said, running STP is still fine. Let me clear up some confusion about how these commands work on Cisco switches.
spanning-tree portfast trunk
This command is intended for trunk ports toward non-bridging devices, such as a server carrying multiple VLANs or a router. It should not be run on trunks toward other switches, because the port will bypass the listening and learning states, which could create a bridging loop.
If you have an interface configured like this:
interface x/x
spanning-tree portfast
spanning-tree bpdufilter enable
spanning-tree bpduguard enable
BPDU guard will never kick in, because BPDU filter is filtering both outgoing and incoming BPDUs. This also means the port can never lose its Portfast status, which it normally would if a BPDU were received inbound. If you remove the filter, BPDU guard will kick in and shut down the port when a BPDU is received. This happens before the port can lose its Portfast operational state, so the port will always operate in Portfast mode.
If you apply the commands globally instead:
spanning-tree portfast default
spanning-tree portfast bpdufilter default
spanning-tree portfast bpduguard default
The first command enables Portfast on all access ports.
When BPDU filter is applied globally, the difference is that the port sends out 11 BPDUs before going silent. Normally one BPDU is sent every 2 seconds and the default MaxAge is 20 seconds, so if there is a device at the other end that can process BPDUs, at least one BPDU will be received before the old BPDU (if there was one) expires.
If a BPDU is received inbound when BPDU filter is applied globally then the port stops filtering and it will lose its Portfast status.
The BPDU guard default command will only apply to ports that are in a Portfast operational state.
If you combine these three commands, then when a BPDU is received the port loses its BPDU filter, and BPDU guard can then kick in. The port never loses its Portfast operational state, because it is shut down before that can happen.
So when applied at the interface level, BPDU guard can never kick in, but when applied globally it can.
If you run just Portfast globally and BPDU filter globally, then when a BPDU comes in, the port loses the filter, loses its Portfast operational state, and operates as a normal port.
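One practical note: a port that BPDU guard shuts down goes into err-disabled state and stays there until someone bounces it, unless automatic recovery is configured. A sketch (the interval value is just an example):
errdisable recovery cause bpduguard
errdisable recovery interval 300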
Cisco has developed two ways to manage a group of access switches in a somewhat consolidated fashion.
The first on the scene was switch clustering. This provided a means to manage a group of switches using only one management IP address. It did not provide any fault tolerance or centralized configuration (you had to "hop" from the master switch to the others and still configure them individually). Most network engineers felt the benefits of this approach did not outweigh the negatives, and it was seldom deployed outside environments where CiscoWorks/LMS was also in use.
The second approach was switch stacking. This requires switches that can be connected with a "stacking cable", providing a shared control plane and possibly a shared data plane. It provides centralized management of all switches (i.e., you can configure them all from one switch) as well as fault tolerance, since any stack member is a potential master and can take over if the master fails.
Since you mention 2960G switches, AFAIK none of them had the option to utilize switch stacking so you are only left with clustering. I would recommend not clustering them unless you really have a need and the ability to properly manage them. It probably won't provide you any real benefit.
The link you provide doesn't cover either of these two technologies, so not sure how that comes into play at all.
Just to note it in case anyone comes across the terminology and is confused: Cisco also produced a product called GigaStack. This GBIC-based solution has nothing to do with stacking or clustering; it only provides interconnectivity between switches.