VMware vSphere 5.0 – can’t configure host on multiple subnets

networking, subnet, vmware-vsphere

Is there an easy way to allow a single vSphere 5.0 host to communicate on multiple subnets, and to allow VMs hosted there to communicate on those subnets as well?

I've got a small farm of vSphere 5.0 hosts that are managed by a vCenter instance. For this farm, I've been allocated an internal /26 subnet (let's say it's 10.13.111.0/26).

Each vSphere host has two NICs that are connected to a Cisco 3750 switch as a "Route based on IP hash" team to give 2 Gbit of bandwidth between each host and the switch. This link is also an 802.1Q trunk. On the hosts themselves, I've got two virtual machine port groups, one on VLAN 118 called "Public" and the other on VLAN 999 called "Private". The private VLAN is addressed in the 192.168.100.0/24 space and is used for inter-VM traffic. I have two VMkernel ports, both attached to the same VLAN 118 and addressed on the 10.13.111.0/26 network.

Here's what the configuration of one of the hosts looks like:

[screenshot: host networking configuration]

The problem is that I've run out of IP addresses in this subnet and have been allocated another one – 10.13.110.0/26. However, if I add another VMkernel port on VLAN 118 to this host and assign it 10.13.110.1, that IP address remains unreachable. I'm not sure whether this is an issue on the switch or on the VMware vSphere host itself.
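For reference, a VMkernel port like this can also be created from the ESXi shell with esxcli. This is only a sketch – the port-group name `VMkernel-110`, the vSwitch name `vSwitch0`, and the interface name `vmk2` are examples; check what actually exists on your host first with `esxcli network ip interface list`:

```
# Create a port group on the existing vSwitch and tag it for VLAN 118
esxcli network vswitch standard portgroup add --portgroup-name=VMkernel-110 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=VMkernel-110 --vlan-id=118

# Attach a new VMkernel interface and give it the 10.13.110.1/26 address
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VMkernel-110
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.13.110.1 --netmask=255.255.255.192 --type=static
```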

How can I configure this subnet to be usable by the host?

EDIT

Just to add, the 3750 has VLAN118 defined as:

interface Vlan118
 ip address 10.13.111.62 255.255.255.192
end

Can I just expand the definition to include the new subnet and have it all magically work?

EDIT 2

This is looking like it's a switch config issue.

If I reconfigure the VLAN118 on the 3750 as follows:

interface Vlan118
 ip address 10.13.111.62 255.255.255.192
 ip address 10.13.110.5 255.255.255.192 secondary
end

then the switch can ping the VMkernel device on the new .110 network. However, nothing past the switch in the wider network can ping the device.

The gateway for the .111 network is 10.13.111.3, and for the .110 network it's .110.3 – both of these IP addresses are hosted by a router that is connected to the core switch (or possibly by the core switch itself). I've got an uplink from my 3750 to that core switch.

I guess that any traffic from .110 to .111 needs to flow from vSphere through my 3750 out to the core to .110.3, hop over to .111.3 and then back into my 3750 and into vSphere. Does that sound about right?
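That hairpin is exactly what happens whenever the two endpoints land in different networks. A quick sanity check of the subnet math (the two addresses below are just examples – a new-subnet VMkernel address and an arbitrary host on the old subnet):

```shell
# Mask both addresses with /26 and compare the resulting network numbers.
# If they differ, traffic between them has to go via the gateways.
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

mask=$(( (0xFFFFFFFF << (32 - 26)) & 0xFFFFFFFF ))   # 255.255.255.192
net_new=$(( $(ip2int 10.13.110.1) & mask ))
net_old=$(( $(ip2int 10.13.111.5) & mask ))

if [ "$net_new" -ne "$net_old" ]; then
  echo "different /26 networks: traffic between them must be routed"
fi
```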

Best Answer

You stated that you cannot communicate to the newly assigned address - where are you trying to communicate from?

For non-routed traffic (within the same subnet), your newly assigned VMkernel addresses should already be working just fine. You should be able to verify by communicating with them from another device on the new 10.13.110.0/26 network.

Routed traffic is another story. Two issues with routed traffic:

  • Something needs to handle inter-VLAN routing - if it's that 3750 (I'm guessing not - I'd expect the gateway to be .1 or .63, not .62), then it doesn't have an address on the new 10.13.110.0/26 network.
  • Only one gateway is permitted for all VMkernel traffic - setting up the gateway on the new subnet will drop the config for the old subnet's first hop.
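You can see (and change) that single VMkernel gateway from the ESXi shell. A sketch, using the new subnet's gateway address from the question:

```
# List the current VMkernel default gateway
esxcfg-route -l

# Point the single VMkernel default gateway at the new subnet's router
esxcfg-route 10.13.110.3
```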

Since your subnets are sliced into very small chunks (what's up with that, anyway? Does someone in your organization need reminding that 10.0.0.0/8 contains 16.8 million addresses?), I'd bet most of these hosts' communication is routed.
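The arithmetic behind that aside, for anyone counting along:

```shell
# A /26 leaves room for 2^(32-26) = 64 addresses (62 usable hosts),
# while the whole 10.0.0.0/8 holds 2^(32-8) addresses.
slice=$(( 1 << (32 - 26) ))
whole=$(( 1 << (32 - 8) ))
echo "/26 = $slice addresses, 10.0.0.0/8 = $whole addresses"
```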

The tricky step of this migration for you will be the change to the hosts' gateways, as everything that's connecting to them from outside the subnet will need to be switched over to the new address.