Linux Kernel – Multicast UDP Packets Not Passing Through

Tags: multicast, udp

Recently I set up a new Ubuntu Server 10.04 machine and noticed my UDP server is no longer able
to see any multicast data sent to the interface, even after joining the multicast group. I have the exact same setup on two other Ubuntu 8.04.4 LTS machines, and there is no problem receiving data after joining the same multicast group.

The Ethernet card is a Broadcom NetXtreme II BCM5709, and the driver in use is:

b $ ethtool -i eth1
driver: bnx2
version: 2.0.2
firmware-version: 5.0.11 NCSI 2.0.5
bus-info: 0000:01:00.1

I'm using smcroute to manage my multicast registrations.

b$ smcroute -d
b$ smcroute -j eth1 233.37.54.71

After joining the group, ip maddr shows the newly added registration.

b$ ip maddr

    1:  lo
        inet  224.0.0.1
        inet6 ff02::1
    2:  eth0
        link  33:33:ff:40:c6:ad
        link  01:00:5e:00:00:01
        link  33:33:00:00:00:01
        inet  224.0.0.1
        inet6 ff02::1:ff40:c6ad
        inet6 ff02::1
    3:  eth1
        link  01:00:5e:25:36:47
        link  01:00:5e:25:36:3e
        link  01:00:5e:25:36:3d
        link  33:33:ff:40:c6:af
        link  01:00:5e:00:00:01
        link  33:33:00:00:00:01
        inet  233.37.54.71 <------- McastGroup.
        inet  224.0.0.1
        inet6 ff02::1:ff40:c6af
        inet6 ff02::1
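
Incidentally, the link entry 01:00:5e:25:36:47 on eth1 is just the Ethernet mapping of 233.37.54.71: per RFC 1112, the MAC is 01:00:5e followed by the low-order 23 bits of the group address. A quick sketch to verify the mapping (illustrative only):

require 'ipaddr'

# Ethernet MAC for an IPv4 multicast group: 01:00:5e followed by the
# low-order 23 bits of the group address (RFC 1112).
octets = IPAddr.new('233.37.54.71').hton.unpack('C4')
mac = [0x01, 0x00, 0x5e, octets[1] & 0x7f, octets[2], octets[3]]
puts mac.map { |b| '%02x' % b }.join(':')   # => 01:00:5e:25:36:47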

So far so good; I can see that I'm receiving data for this multicast group.

b$ sudo tcpdump -i eth1 -s 65534 host 233.37.54.71
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65534 bytes
09:30:09.924337 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
09:30:09.947547 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
09:30:10.108378 IP 192.164.1.120.58866 > 233.37.54.71.15574: UDP, length 268
09:30:10.196841 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
...

I can also confirm that the interface is receiving multicast packets.

b $ ethtool -S eth1 | grep mcast_pack
rx_mcast_packets: 103998
tx_mcast_packets: 33

Now here's the problem. When I try to read the traffic with a simple Ruby UDP server, I receive zero data! Here's a simple server that reads data sent to port 15572 and prints
the first two characters. It works on the two 8.04.4 Ubuntu servers, but not on the 10.04 server.

require 'socket'

s = UDPSocket.new
s.bind("0.0.0.0", 15572)        # listen on all interfaces, port 15572
5.times do
  text, sender = s.recvfrom(2)  # read the first two bytes of each datagram
  puts text
end
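
Note that this server never joins the multicast group itself; it relies on smcroute having done the join. For completeness, here is a minimal sketch of a receiver that performs the IP_ADD_MEMBERSHIP join in-process instead (group and port are the ones from above; joining via INADDR_ANY, i.e. letting the kernel pick the interface, is an assumption, so substitute eth1's address if the join must happen on that NIC):

require 'socket'
require 'ipaddr'

GROUP = '233.37.54.71'
PORT  = 15572

sock = UDPSocket.new
# Build the ip_mreq structure: group address followed by the local
# interface address, both in network byte order.
mreq = IPAddr.new(GROUP).hton + IPAddr.new('0.0.0.0').hton
sock.setsockopt(Socket::IPPROTO_IP, Socket::IP_ADD_MEMBERSHIP, mreq)
sock.bind('0.0.0.0', PORT)

5.times do
  text, _sender = sock.recvfrom(2)
  puts text
end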

If I send a UDP packet crafted in Ruby to localhost, the server receives it and prints the first two characters, so I know the server above is working correctly.

irb(main):001:0> require 'socket'
=> true
irb(main):002:0> s = UDPSocket.new
=> #<UDPSocket:0x7f3ccd6615f0>
irb(main):003:0> s.send("I2 XXX", 0, 'localhost', 15572)

When I check the protocol statistics, I see that InMcastPkts is barely increasing (a couple of packets in 10 seconds), while the other 8.04 servers on the same network receive a few thousand packets in the same interval.

b $ netstat -sgu ; sleep 10 ; netstat -sgu
IcmpMsg:
    InType3: 11
    OutType3: 11
Udp:
    446 packets received
    4 packets to unknown port received.
    0 packet receive errors
    461 packets sent
UdpLite:
IpExt:
    InMcastPkts: 4654 <--------- Barely changes (see below)
    OutMcastPkts: 3426
    InBcastPkts: 9854
    InOctets: -1691733021
    OutOctets: 51187936
    InMcastOctets: 145207
    OutMcastOctets: 109680
    InBcastOctets: 1246341
IcmpMsg:
    InType3: 11
    OutType3: 11
Udp:
    446 packets received
    4 packets to unknown port received.
    0 packet receive errors
    461 packets sent
UdpLite:
IpExt:
    InMcastPkts: 4656  <-------------- Barely changes (see above)
    OutMcastPkts: 3427
    InBcastPkts: 9854
    InOctets: -1690886265
    OutOctets: 51188788
    InMcastOctets: 145267
    OutMcastOctets: 109712
    InBcastOctets: 1246341

If I try forcing the interface into promiscuous mode, nothing changes.

At this point I'm stuck. I've confirmed the kernel config has multicast enabled. Perhaps there are other config options I should be checking?

b $ grep CONFIG_IP_MULTICAST /boot/config-2.6.32-23-server
CONFIG_IP_MULTICAST=y
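
For reference, a couple of runtime knobs that can silently drop multicast traffic even when CONFIG_IP_MULTICAST is set; whether they matter here is only a guess:

b $ grep . /proc/sys/net/ipv4/conf/{all,eth1}/rp_filter
b $ cat /proc/sys/net/ipv4/conf/eth1/force_igmp_version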

Any thoughts on where to go from here?

Best Answer

In our instance, the problem was also solved by a sysctl parameter, though a different one than in Maciej's answer.

Please note that I do not speak for the OP (buecking); I came to this post because the problem shared the same basic symptom (no multicast traffic reaching userland).

We have an application that reads data sent to four multicast addresses, and a unique port per multicast address, from an appliance that is (usually) connected directly to an interface on the receiving server.

We were attempting to deploy this software at a customer site when it mysteriously failed for no apparent reason. Debugging came down to inspecting every system call, and ultimately they all told us the same thing:

Our software asks for data, and the OS never provides any.
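
For anyone retracing this kind of debugging: strace is enough to watch those calls. A sketch, with the server's PID as a placeholder:

strace -f -e trace=network -p <pid>

The telltale sign is recvfrom() blocking forever while the interface's packet counters keep climbing.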

The multicast packet counter incremented, and tcpdump showed the traffic reaching the box on the specific interface, yet we couldn't do anything with it. SELinux was disabled; iptables was running but had no rules in any of its tables.

Stumped, we were.

Poking around somewhat at random, we started going through the kernel parameters that sysctl handles, but none of the documented ones seemed particularly relevant, and those that did concern multicast traffic were already enabled. ifconfig also listed MULTICAST in the flags line (UP, BROADCAST, RUNNING, MULTICAST). Out of curiosity we looked at /etc/sysctl.conf, and lo and behold, this customer's base image had a couple of extra lines added at the bottom.

In our case, the customer had set net.ipv4.conf.all.rp_filter = 1. rp_filter is the reverse path filter, which (as I understand it) rejects traffic whose source address could not plausibly have reached the box via the interface it arrived on, e.g. traffic hopping across subnets, the thought being that the source IP is spoofed.

Well, this server was on a 192.168.1.0/24 subnet, and the appliance's source IP address for the multicast traffic was somewhere in the 10.* network. Thus, the filter was preventing the server from doing anything meaningful with the traffic.

A couple of tweaks approved by the customer, net.ipv4.conf.eth0.rp_filter = 1 and net.ipv4.conf.eth1.rp_filter = 0, and we were running happily.
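
For the record, a sketch of how such a change is applied at runtime and persisted across reboots (the values are the ones from our fix):

b $ sudo sysctl -w net.ipv4.conf.eth1.rp_filter=0
b $ echo 'net.ipv4.conf.eth1.rp_filter = 0' | sudo tee -a /etc/sysctl.conf

One caveat worth knowing: per the kernel's ip-sysctl documentation, the effective setting is the maximum of conf/all/rp_filter and conf/<interface>/rp_filter, so on kernels that behave that way conf.all.rp_filter must also be lowered for the per-interface 0 to take effect.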