WiFi Performance – How Listener-less Multicast Packets Affect It

multicast, networking, wifi

I've got a program that sends out an IPv6 multicast packet (to ff12::2:0:8afb:382b:c053:85f%en1) every 50 ms. I've got it running on a very simple single-computer LAN (Mac mini <-wifi-> Linksys wifi router <-cat5-> DSL modem <-> Internet). In my test, no computers are joined to this multicast group (i.e., nobody is listening for these packets).
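A sketch of that kind of sender, with an assumed UDP port and payload (the group and interface are the ones from my setup), looks roughly like this:

```python
import socket
import time

GROUP = "ff12::2:0:8afb:382b:c053:85f"   # the group from the setup above
PORT = 12345                             # assumed port, for illustration only
IFACE = "en1"                            # the Mac's WiFi interface

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)

# Pin outgoing multicasts to the WiFi interface and keep the hop limit at 1.
ifindex = socket.if_nametoindex(IFACE)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, ifindex)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, 1)

while True:
    # One small datagram every 50 ms.
    sock.sendto(b"heartbeat", (GROUP, PORT, 0, ifindex))
    time.sleep(0.05)
```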

My problem is that when this program is running, the Mac's WiFi performance drops by over 50%. Presumably the problem is that all of these multicast packets are eating up a lot of my WiFi bandwidth and causing congestion… but I don't understand why the packets are being transmitted at all. It was my understanding that multicast uses a spanning tree algorithm to ensure that multicast packets are only routed to hosts that are actually interested in receiving them. If that's true, and given that there are no other computers on my LAN joined to this multicast address, shouldn't my Mac realize that and not actually send out any packets unless/until some other host joins the multicast group? Or is the spanning tree culling only implemented at the switch, and not by hosts themselves?

Best Answer

Multicast is a tricky thing. Routers are the ones that arbitrate multicast forwarding, and smart switches (ones that do MLD/IGMP snooping) can sometimes keep the packets off segments where nobody has joined the group. However, if there is no router between the multicaster and your client stations (and there isn't, if I'm reading your setup right), then multicast behaves exactly like broadcast on that subnet.
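That pruning is driven by group membership: a host has to join the group, which makes its OS send an MLD membership report, before a router or an MLD-snooping switch has any reason to forward the traffic toward it. A receiver that joins would look roughly like this sketch (the group, port, and interface names are assumptions chosen to match the question):

```python
import socket
import struct

GROUP = "ff12::2:0:8afb:382b:c053:85f"
PORT = 12345
IFACE = "en1"

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("::", PORT))

# Join the multicast group on the chosen interface; the OS emits the
# MLD membership report on our behalf.
ifindex = socket.if_nametoindex(IFACE)
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", ifindex)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, sender = sock.recvfrom(1500)
    print(f"{len(data)} bytes from {sender[0]}")
```

On your wireless-only LAN there is no multicast router and (in all likelihood) no snooping on the Linksys, so nothing prunes the traffic, and every one of those 50 ms packets goes out over the air regardless of whether anyone has joined.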