Linux – Higher rmem_max value leading to more packet loss

Tags: linux, linux-networking, packet-loss, sysctl, udp

The rmem_max Linux setting caps the size of the socket buffer that receives UDP packets.
When traffic becomes too heavy for that buffer, packet loss starts occurring.
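For reference, the cap can be inspected and changed with sysctl; the value in the last command below is only an example, not a recommendation:

    # Current hard cap (in bytes) on the receive buffer a socket may use
    sysctl net.core.rmem_max

    # Buffer size applied when an application does not request one explicitly
    sysctl net.core.rmem_default

    # Raise the cap; only sockets created afterwards benefit from it
    sudo sysctl -w net.core.rmem_max=262144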

I made a graph showing how packet loss increases with the incoming bandwidth.
(I used iperf to generate UDP traffic between two VM instances.)
The different colors correspond to different rmem_max values:

[Graph: packet loss vs. incoming bandwidth, one curve per rmem_max value]

As you can see, setting rmem_max to 26214400 (dark blue) results in packet loss earlier than the smaller values do. Linux's default value of 131071 (dark green) looks reasonable.

Under these conditions, why does the JBoss documentation recommend setting rmem_max to 26214400?
Is it because UDP traffic is expected to exceed 350 MBytes/second? I don't think anything would work with more than 1% packet loss anyway…

What am I missing?

Details: I ran sysctl -w net.core.rmem_max=131071 (for example) on both nodes, then started iperf -s -u -P 0 -i 1 -p 5001 -f M on one node as the server and iperf -c 172.29.157.3 -u -P 1 -i 1 -p 5001 -f M -b 300M -t 5 -d -L 5001 -T 1 on the other as the client.
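To confirm whether the loss reported by iperf really happens in the receiver's UDP buffer, the kernel's UDP counters can be watched on the server node during a run; a rough sketch (counter labels vary a little between net-tools versions):

    # System-wide UDP statistics; watch "packet receive errors" and
    # "receive buffer errors" while the test is running
    netstat -su

    # The raw counters behind them; RcvbufErrors counts drops caused by
    # a full socket receive buffer
    cat /proc/net/snmp | grep Udp: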

Best Answer

More buffer doesn't necessarily imply more speed. More buffer simply implies more buffer. Below a certain value you'll see overflow, because applications can't always service received data quickly enough. That's bad, but once there is enough buffer for the app to keep up at a reasonable rate, even during the occasional traffic spike, anything beyond that is likely wasted.
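One way to check whether the receiving application is the bottleneck is to watch the socket's receive queue while traffic is flowing; if Recv-Q keeps climbing toward the buffer size, the consumer isn't keeping up. A rough sketch, looking at the UDP socket iperf binds on port 5001:

    # List UDP sockets with receive-queue depth (Recv-Q), socket memory
    # and owning process; refresh every second
    watch -n 1 'ss -u -a -n -m -p'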

If you go too large, you place a much larger burden on the kernel to find and allocate memory, which, ironically, can lead to packet loss. My hunch is that this is what you're seeing, but some other metrics would be needed to confirm it.
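If you want to confirm it, the system-wide drop counters show where packets are disappearing, e.g. at the NIC/driver versus inside the kernel; a hedged example of what to look at:

    # Per-interface RX drop counters (NIC/driver level)
    cat /proc/net/dev

    # Per-CPU softnet statistics; the second column counts packets dropped
    # because the kernel's input backlog queue was full
    cat /proc/net/softnet_stat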

It's likely that the 25M number comes from recommendations for rmem and wmem values for TCP, where the relationship between window sizing and buffer settings can have significant effects under certain circumstances. That said, TCP != UDP, but some folks assume that if it helps TCP it will also help UDP. You've got the right empirical information. If I were you, I'd stick with the 256K value and call it even.
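For what it's worth, the TCP-side tuning that recommendation probably comes from lives in its own sysctls, which makes the TCP/UDP distinction easy to see on a running system:

    # TCP receive buffers are governed by a min/default/max triplet
    # and autotuned within that range
    sysctl net.ipv4.tcp_rmem

    # UDP sockets are simply capped by the global maximum
    sysctl net.core.rmem_max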
