HP ProLiant BL460c Gen9 – NICs

networking

Is there a way to get more than 4 physical NICs (4 active/4 standby) on an HP BL460c Gen9 server WITHOUT using mezzanine cards and extra VCMs?

EDIT for more information:

I have two c7000 enclosures. In each, I have two Flex-10 VCMs set up as active/standby. The BL460c Gen8 and Gen9 blades in each c7000 have the 10Gb LOM modules and run ESXi v6. In the current configuration, the systems look as follows and everything works fine:

[for clarity, Blade NICs 5, 6, 7 and 8 are the redundant side of NICs 1, 2, 3 and 4]

Uplink x1 = 10Gb trunk (all VLANs except SAN)
Uplink x2 = 10Gb SAN VLAN only

Blade NIC 1 (Uplink x1) – 100Mb VMware Management
Blade NIC 2 (Uplink x1) – 1Gb vMotion
Blade NIC 3 (Uplink x2) – 5Gb iSCSI SAN
Blade NIC 4 (Uplink x1) – 3.9Gb all data networks (separated by VLANs)

Our Storage team has purchased a new SAN and configured this new SAN to use two Fault Domains, requiring two physical connections to the SAN from each blade.

Now, I suppose I could run VMware management and vMotion on the same NIC, but that is not a VMware best practice. It would, however, free up a NIC to serve as the second connection for the new SAN fault domain.

My initial question was to see if there is any way to carve out another NIC on each blade so that I do not have to go against VMware best practice and, more importantly, do not have to purchase four new mezzanine cards and four more VCMs. I will mention that I have a third c7000 configured the same way as the two listed above, and this third c7000 will join the other two in function, so the purchase would actually be six mezzanine cards and six VCMs.

You guys asking me for more details has forced me to actually document our configuration, something I have been meaning to do for years, so thanks for that.

Best Answer

You're making life very hard for yourself. I've been dealing with Virtual Connect since pretty much day one and use it extensively now. What I think you should do is this:

Create two VC 'Ethernet Networks', both pretty much identical: set preferred speed to 10Gbps, enable VLAN passthrough and SmartLink, point one at VC 1 port 1 (or whatever is your main trunk on physical switch 1) and the second at VC 2 port 1 (or whatever does the same for physical switch 2), then trunk all appropriate VLANs up both links.

Then create a Server Profile that has either 2 x 10Gbps vNICs, one per 'Ethernet Network' (you'll have to use the software iSCSI initiator), OR 4 x vNICs with 2 for iSCSI and 2 for everything else if you want to use the hardware initiator; you decide the speed split.
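
If you go the 2 x 10Gbps route, the software initiator side on each ESXi host is only a handful of esxcli calls. A minimal sketch follows; the adapter name (vmhba33), the vmkernel ports (vmk1/vmk2) and the discovery address are just placeholders, yours will differ:

    # Enable the software iSCSI initiator on the host
    esxcli iscsi software set --enabled=true

    # Find the software adapter's name (often vmhba3x; varies per host)
    esxcli iscsi adapter list

    # Bind the iSCSI vmkernel ports to the software adapter (port binding)
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2

    # Point the adapter at the SAN's discovery address (example IP)
    esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.50.10:3260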

Then all you need to do is set up your vSS or vDS with Port Groups for Management, vMotion, iSCSI and your VM traffic, and use teaming so that by default all Management, vMotion and iSCSI traffic goes over the second (or third+fourth) vNICs (and therefore to physical switch 2) while the VM traffic goes over the first vNIC (and therefore to physical switch 1).
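
On a standard vSwitch the teaming part is just per-port-group failover order. A rough sketch for the two-vNIC case, assuming vmnic0 is the vNIC facing VC 1 / switch 1, vmnic1 faces VC 2 / switch 2, and the port group names are examples:

    # VM traffic prefers vmnic0 (switch 1) and fails over to vmnic1
    esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic0 -s vmnic1

    # Management and vMotion prefer vmnic1 (switch 2) and fail over to vmnic0
    esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic1 -s vmnic0
    esxcli network vswitch standard portgroup policy failover set -p vMotion -a vmnic1 -s vmnic0

    # With software iSCSI port binding, pin each iSCSI vmkernel port group to a single
    # active uplink (unlisted uplinks become unused), one per SAN fault domain
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-A -a vmnic0
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-B -a vmnic1

The same active/standby idea carries over to a vDS; you just set the failover order on each distributed port group instead.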

This way, with both links up, your VMs get either 10Gbps to themselves or whatever speed split you choose, while Management, vMotion and iSCSI get their own 10Gbps (or again whatever you speed-split them to). Obviously if you lose a link everything ends up on the same 'side', but that's true of any setup like this.

We have literally hundreds of hosts set up similarly to this (2 x 10Gbps, as we use FC rather than iSCSI) and it works a treat; it's quite easy to manage too.

Either way, the one lesson I've learned with VC is that just because you CAN split your pNICs into multiple vNICs doesn't mean it makes sense to; keep things as simple as you can.