Linux – Bridging Virtual Networking into a Real LAN on an OpenNebula Cluster

cloud, linux, networking, opennebula, virtualization

I'm running OpenNebula with one cluster controller and three nodes.

I registered the nodes at the front-end controller and I can start an Ubuntu virtual machine on one of the nodes.

However, I cannot ping the virtual machine from my network, and I am not quite sure whether I have set it up correctly.

The nodes all have a br0 interface which is bridged with eth0, with IP addresses in the 192.168.1.x range.

The template file I used for the vmnet is:

NAME = "VM LAN"
TYPE = RANGED

BRIDGE = br0 # Replace br0 with the bridge interface from the cluster nodes

NETWORK_ADDRESS = 192.168.1.128 # Replace with corresponding IP address
NETWORK_SIZE    = 126
NETMASK         = 255.255.255.0
GATEWAY         = 192.168.1.1
NS              = 192.168.1.1
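
(For reference, a vnet template like this gets registered on the front-end with onevnet create, and the VM template then refers to it by name; the file name below is just an example.)

# register the virtual network on the front-end
onevnet create vm_lan.net

# in the VM template, attach a NIC to that network by name
NIC = [ NETWORK = "VM LAN" ]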

However, I cannot reach any of the virtual machines, even though both Sunstone and onevm list report the VM as running.

It might be helpful to know that we are using KVM as the hypervisor, and I am not sure whether the virbr0 interface that was created automatically when KVM was installed could be part of the problem.
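
(For completeness, the bridge setup on each node looks roughly like the stanza below, in Ubuntu /etc/network/interfaces style with the concrete address abbreviated; brctl show lists both br0 and the libvirt-created virbr0.)

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.1.x     # the node's address in the 192.168.1.x range
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0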

Best Answer

I ran into this same problem, and the cause was the VM image I had set up with Ubuntu's vmbuilder.

Check out this intro, which includes a complete VM; I'm confident it will work for you.

Amongst other things, they provide an init script that sets up networking parameters at boot, so this is likely at the heart of the issue. They explain it more fully here, in the section on contextualization.
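
I don't have their exact script in front of me, but conceptually that init script boils down to something like this (IP_PUBLIC is just an example variable name; it depends on what you put in your CONTEXT section, and the device name is covered in the Thoughts below):

#!/bin/bash
# /etc/init.d/vmcontext -- conceptual sketch of the context init script
# mount the context ISO that OpenNebula attaches to the VM
mount -t iso9660 /dev/sr0 /mnt/context
if [ -f /mnt/context/context.sh ]; then
    # pull in the variables defined in the template's CONTEXT section
    . /mnt/context/context.sh
    # apply whatever networking you exported, e.g. a static address
    ifconfig eth0 "$IP_PUBLIC" netmask 255.255.255.0 up
    route add default gw 192.168.1.1
fi
umount /mnt/context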

General Context Method

So it turns out that installing the contextualization files in the VM is rather simple.

If you are using vmbuilder to make your VM, you can use a post-install hook (other build methods have similar hooks).

Create the copy file (each line maps a host file to a guest file, separated by a space)

copy.cfg

<path>/rc.context /etc/init.d/vmcontext
<path>/postinstall.sh /tmp/postinstall.sh

Create the post-install hook

postinstall.sh

#!/bin/bash
# create mount point for context image
mkdir /mnt/context
# set up vmcontext to start at runlevel 2 with priority 01 (S01)
ln -s /etc/init.d/vmcontext /etc/rc2.d/S01vmcontext

Create a script to chroot into the VM guest and run the hook

chvm.sh

#!/bin/bash
# vmbuilder passes the VM guest root as $1
chroot "$1" /tmp/postinstall.sh
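
One thing to watch for: the chroot step only works if /tmp/postinstall.sh actually has the execute bit inside the guest. If your copy step doesn't preserve it, add something like this to chvm.sh before the chroot line:

# make sure the copied scripts are executable inside the guest
chmod +x "$1"/tmp/postinstall.sh "$1"/etc/init.d/vmcontext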

Finally, edit your vmbuilder conf file for the VM

yourvm.cfg

...
copy = <full_path>/copy.cfg
execscript = <full_path>/chvm.sh
...

Then build the VM with vmbuilder

sudo vmbuilder kvm ubuntu -c yourvm.cfg
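
A VM template that ties the built image, the bridged network, and the context ISO together looks roughly like this (paths and most names are placeholders; the $NIC[...] substitution is how the context can pick up the IP that Nebula leases from the vnet):

NAME   = "ubuntu-guest"
MEMORY = 512
CPU    = 1

DISK = [
  SOURCE = "<full_path>/ubuntu.qcow2",
  TARGET = "hda" ]

NIC = [ NETWORK = "VM LAN" ]

CONTEXT = [
  HOSTNAME  = "$NAME",
  IP_PUBLIC = "$NIC[IP, NETWORK=\"VM LAN\"]",
  TARGET    = "hdc" ]   # may still show up as /dev/sr0 in the guest, see Thoughts below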

Add a Nebula-Based VNC

Include something like this in your VM template

GRAPHICS = [
  LISTEN = 0.0.0.0,
  PORT   = 5900,
  TYPE   = vnc ]

Then SSH tunnel to a machine on the guest's network (in practice the node running the guest, since the tunnel points at its loopback)

ssh -L 5900:127.0.0.1:5900 yourserver.com

And open a VNC client pointed at 127.0.0.1 on your local computer.
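
For example, with a client like vncviewer (just one option; any VNC client works, and display 0 corresponds to port 5900):

vncviewer 127.0.0.1:0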

Thoughts

Nebula can't force kvm/libvirt to put your drives on hd*/sd*, so you'll need to play around to see where they wind up (and edit the rc file to reflect this). E.g. with my Ubuntu setup the qcow2 image ends up at /dev/sda and the context image at /dev/sr0.
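
Concretely, that means the mount line in rc.context has to point at whatever device the context image actually ends up on, e.g.:

mount -t iso9660 /dev/sr0 /mnt/context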

I also had an issue where either kvm or nebula couldn't guess the format of my .qcow2 image, so in DISK I had to include DRIVER=qcow2. The same problem occurs for the processor architecture, so in OS I had to include ARCH=x86_64 (since I was running an amd64 guest).
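
In template terms, those two fixes look like this (SOURCE and TARGET are just placeholders carried over from the template sketch above):

DISK = [
  SOURCE = "<full_path>/ubuntu.qcow2",
  TARGET = "hda",
  DRIVER = "qcow2" ]

OS = [ ARCH = "x86_64" ]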

Good luck