Implementing jumbo frames on one interface and not the other

Tags: jumbo-frames, networking

I have a distributed application that passes a whole lot of traffic among a number of machines. Currently, those machines share a gigabit network with the rest of the machines in the rack, and I'm starting to see problems (packet collisions). Casting about for solutions, I ran across discussions of jumbo frames, which would definitely solve my problem if they work as advertised. But …

The servers that I use for the distributed application (it's a type of web crawler) also need access to the Internet, and everything I've read about jumbo frames cautions that for them to work well, every device attached to the network has to support them. My router might very well handle fragmenting the jumbo packets before transmitting, but doing so would slow things down incredibly.

My servers all have two network cards. Can I set up a private network for the distributed application, making sure that all the machines' first network cards are set for jumbo frames, and use the second card on those machines (with jumbo frames turned off) to connect to the rest of my network and to the outside world? My thinking here is that the heavy traffic internal to the crawler subsystem would then be isolated from the rest of the network, including the Internet traffic, and the use of jumbo frames would improve communication speeds.

The machines are all Dell PowerEdge 1950 servers running Windows Server 2008. I know that the PE servers' Broadcom GigE network adapters support jumbo frames, but can I configure one with jumbo frames and one without?
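For reference, MTU is a per-interface setting on Windows, so the two NICs can differ. A sketch of what that might look like (the adapter names below are hypothetical placeholders; on Broadcom adapters the jumbo frame size typically also has to be enabled in the driver's Advanced properties, e.g. "Jumbo Packet"):

```shell
:: List current interfaces and their MTUs first:
::   netsh interface ipv4 show subinterfaces
:: Set the crawler-facing NIC to a 9000-byte MTU (hypothetical name)
netsh interface ipv4 set subinterface "Crawler NIC" mtu=9000 store=persistent
:: Leave the Internet-facing NIC at the standard 1500 bytes (hypothetical name)
netsh interface ipv4 set subinterface "Internet NIC" mtu=1500 store=persistent
```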

Finally, how do I make sure I get a switch that supports jumbo frames? We're using TP-Link switches that seem to work well currently, but I can't find any information on whether they support jumbo frames.
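One way to verify jumbo frame support end to end, regardless of what a switch's datasheet says, is a don't-fragment ping with a jumbo-sized payload from one server to another through the switch (the target address below is a hypothetical host on the jumbo-frame subnet):

```shell
:: 8972-byte payload + 20-byte IP header + 8-byte ICMP header = 9000 bytes on the wire
:: -f sets "don't fragment"; if any device in the path can't pass jumbo frames,
:: the ping fails or reports that the packet needs to be fragmented
ping -f -l 8972 192.168.10.2
```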

I know I have a lot of questions. Does what I'm considering sound reasonable?

Best Answer

Make sure that your NICs are in separate netblocks when doing this. On Linux, packets are routed out via the first NIC in the system whose netblock matches the destination, so even though eth1 has an MTU of 9000, those packets could end up being routed through eth0.

We set up a separate VLAN to our storage network and had to set up a separate netblock on eth1 to avoid that behavior. Increasing the MTU to 9000 easily increased throughput as that particular system deals with streaming a number of rather large files.
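A minimal sketch of that layout with iproute2, using hypothetical addresses (the jumbo-frame subnet on eth1, the general-purpose subnet on eth0):

```shell
# Crawler/storage NIC: its own netblock with a 9000-byte MTU
ip link set dev eth1 mtu 9000
ip addr add 192.168.10.1/24 dev eth1   # hypothetical private jumbo-frame subnet

# General-purpose NIC keeps the default 1500-byte MTU
ip addr add 10.0.0.5/24 dev eth0       # hypothetical routable address

# Confirm traffic for the jumbo subnet actually leaves via eth1
ip route get 192.168.10.2
```

Because the two subnets don't overlap, the kernel's routing table sends crawler traffic out eth1 and everything else out eth0, which is what avoids the wrong-NIC behavior described above.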