In our case, the problem was also solved by sysctl parameters, though a different one than in Maciej's answer.
Please note that I do not speak for the OP (buecking); I landed on this post because my problem shared the same basic symptom (no multicast traffic reaching userland).
We have an application that reads data sent to four multicast addresses, and a unique port per multicast address, from an appliance that is (usually) connected directly to an interface on the receiving server.
We were attempting to deploy this software at a customer site when it mysteriously failed for no apparent reason. Debugging it came down to inspecting every system call, and ultimately they all told us the same thing:
Our software asks for data, and the OS never provides any.
The multicast packet counter incremented, tcpdump showed the traffic reaching the box/specific interface, yet we couldn't do anything with it. SELinux was disabled, iptables was running but had no rules in any of the tables.
Stumped, we were.
In randomly poking around, we started thinking about the kernel parameters that sysctl handles, but none of the documented parameters seemed particularly relevant, and the ones that did relate to multicast traffic were already enabled. Oh, and ifconfig did list "MULTICAST" in the feature line (UP, BROADCAST, RUNNING, MULTICAST). Out of curiosity we looked at /etc/sysctl.conf. Lo and behold, this customer's base image had a couple of extra lines added at the bottom.
In our case, the customer had set net.ipv4.conf.all.rp_filter = 1. rp_filter is the reverse path filter, which (as I understand it) rejects all traffic from source addresses that could not plausibly have reached this box via the interface it arrived on, the thought being that the source IP is being spoofed.
Well, this server was on a 192.168.1/24 subnet and the appliance's source IP address for the multicast traffic was somewhere in the 10.* network. Thus, the filter was preventing the server from doing anything meaningful with the traffic.
A couple of tweaks approved by the customer:
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 0
and we were running happily.
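If you hit something similar, it can help to dump the rp_filter setting for every interface before touching anything. A minimal Python sketch of that audit, reading the same /proc files the sysctl keys map to (the directory is parameterised only so the function can be tested against a fake tree):

```python
import os

def read_rp_filter(conf_dir="/proc/sys/net/ipv4/conf"):
    """Return {interface: rp_filter value} for each entry under conf_dir.

    On Linux, /proc/sys/net/ipv4/conf/<iface>/rp_filter holds
    0 (off), 1 (strict), or 2 (loose).
    """
    settings = {}
    for iface in sorted(os.listdir(conf_dir)):
        path = os.path.join(conf_dir, iface, "rp_filter")
        if os.path.isfile(path):
            with open(path) as f:
                settings[iface] = int(f.read().strip())
    return settings

if os.path.isdir("/proc/sys/net/ipv4/conf"):
    for iface, value in read_rp_filter().items():
        print(f"{iface}: rp_filter={value}")
```

Remember that the kernel applies the stricter of the "all" and per-interface values, so net.ipv4.conf.all.rp_filter = 1 can override a per-interface 0.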
On the Dell R710 (and many other makes/models) you can monitor the power usage yourself with this command:
# ipmitool sdr list | grep Watts
System Level | 84 Watts | ok
That's the Linux version, but there are Windows equivalents. Graphing that in your favorite tool is left as an exercise for the reader. Note that this gives the power draw into the motherboard. Power supplies are never 100% efficient, so add roughly 15% to that number to estimate the draw at the power supply's input, or connect the server to a watt meter and measure the efficiency yourself. PSUs are most efficient in the middle of their rated range, at roughly 50-60% of rated capacity.
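If you do want to script that exercise, the first step is parsing the sensor lines and applying the overhead rule of thumb. A minimal Python sketch, assuming output in the `Sensor | N Watts | status` shape shown above (the 15% figure is just the rough estimate from the text, not a measured value):

```python
import re

def parse_watts(sdr_output):
    """Extract (sensor, watts) pairs from `ipmitool sdr list` output
    lines of the form 'System Level | 84 Watts | ok'."""
    readings = []
    for line in sdr_output.splitlines():
        m = re.match(r"\s*(.+?)\s*\|\s*(\d+(?:\.\d+)?)\s*Watts\s*\|", line)
        if m:
            readings.append((m.group(1), float(m.group(2))))
    return readings

def estimated_wall_draw(board_watts, overhead=0.15):
    """Rough wall-side draw: board draw plus ~15% PSU inefficiency."""
    return board_watts * (1 + overhead)
```

Poll `ipmitool sdr list` on a timer, run each sample through parse_watts, and feed the result to whatever graphing stack you already run.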
If you are concerned about power usage you might consider using an L-series processor.
What happens when you draw more? That depends on the provider. You'll likely just get a warning (if they even notice at all). And that's also the scary part: what if everyone draws just over and the circuit breaker trips? How closely do they monitor those circuits? Is it active monitoring or passive monitoring (is there a meter on the circuit, or do the building engineers do spot checks with a clamp-on meter)? If there is a meter, is it per power port or per circuit?
Overall, it's just best to monitor the draw yourself.
How do you know before you order the server? Well, that's a guessing game. Unless you're really cranking on the HW you won't get near the peak.
Best Answer
You can't rely on UDP to deliver packets in order because the specification doesn't provide that guarantee. Even in the most ideal situation, a single piece of ethernet cable between two hosts, there is still the matter of the OS, the network stack, the NIC driver, and the libc implementation that you're writing against.
At every step in that chain, the authors of that code will have chosen NOT to preserve UDP packet ordering, even when packets arrive in order, for the simple reason that they don't have to.
One contrived example is the data structure that incoming packets are read into, which might be a ring buffer. Packets arriving in order will be placed into the ring buffer in order, but it may be simpler for the driver writer to hand them to the upper layers of the networking stack in memory order, effectively randomising their ordering.
Given your situation, a virtual machine on shared infrastructure that is run for volume rather than performance, the probability of predicting the order in which UDP packets will be received is low.
In short: if the spec says you can't rely on UDP packet ordering, then you can't rely on it, and no amount of tweaking the environment will give you a stronger guarantee than the spec ever promised.
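Because the spec offers no ordering guarantee, applications that care about order typically embed their own sequence numbers in the payload and do the bookkeeping at the receiver. A minimal Python sketch of that receiver-side check (the consecutive-numbering scheme is my assumption, not anything from the question):

```python
def analyze_sequence(received):
    """Given sequence numbers in arrival order, return how many
    datagrams arrived out of order and which numbers never arrived.

    Assumes the sender numbered its datagrams 0..max consecutively.
    """
    # Count each adjacent pair where a datagram arrived before
    # an earlier-numbered one that followed it on the wire.
    out_of_order = sum(
        1 for prev, cur in zip(received, received[1:]) if cur < prev
    )
    expected = set(range(max(received) + 1)) if received else set()
    lost = sorted(expected - set(received))
    return out_of_order, lost
```

With this kind of bookkeeping the receiver can reorder within a small window, or simply measure how bad the reordering and loss actually are on a given path before deciding whether UDP is workable at all.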