Oracle RAC interconnect in a Dell M1000e Blade Enclosure

blade-server networking oracle oracle-rac

We are looking at a Dell M1000e enclosure and appropriate blades with 4 NICs each. We plan to run Linux/Oracle 11g RAC on two blades; storage will be handled by an iSCSI SAN, to which two NICs (via passthrough) will be connected, leaving us with two NICs (via the blade-centre switches).

We would like to have an interconnect (obviously), an external IP, and an internal IP.

Would best practice be to:

  • bond the remaining two interfaces and VLAN as appropriate to provide three virtual interfaces?
  • run the interconnect on one interface and VLAN the external/internal interfaces?
  • purchase a blade with more NICs as the above is a terrible idea?
  • Another option?

Please feel free to point out the blindingly obvious or point me to relevant documentation on support.oracle.

I am specifically interested in supported configurations and best practices.

Thanks!

Best Answer

To your question: I am a fan of separate interfaces for the OCFS2 interconnect, but given your setup, the best solution looks to be bonding the two interfaces and using VLANs to break out the virtual IPs for the external and internal (public and private) hostnames. The separation of interconnect traffic is about latency, not bandwidth. If you are confident that latency will not affect your interconnect, there should be no problem with the bond + VLAN trunk solution above.
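
For what it's worth, here is a minimal sketch of what that bond + VLAN trunk layout might look like on RHEL/OEL 5 (the typical 11g-era platform). The device names, VLAN IDs, and addresses are placeholders, the bonding mode should match what your blade-chassis switches support, and the chassis switch ports would need to be configured as trunks carrying all three VLANs:

```
# /etc/modprobe.conf -- load the bonding driver (RHEL 5-era syntax)
# active-backup needs no switch support; use mode=802.3ad only if the
# chassis switches are configured for LACP
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-eth2 -- first slave, no IP of its own
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth3 -- second slave
DEVICE=eth3
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond itself carries no IP;
# all addressing lives on the tagged VLAN subinterfaces below
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.10 -- public/external VLAN (ID 10, placeholder)
DEVICE=bond0.10
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.11
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-bond0.20 -- internal VLAN (ID 20, placeholder)
DEVICE=bond0.20
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.20.11
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-bond0.30 -- RAC interconnect VLAN (ID 30, placeholder)
DEVICE=bond0.30
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.30.11
NETMASK=255.255.255.0
```

After a `service network restart`, `cat /proc/net/bonding/bond0` will show the slave states, which is a quick way to verify failover behaviour (pull a cable and watch the active slave change) before putting the interconnect on it.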