KVM guest cannot connect to anything outside the host, and vice versa

kvm-virtualization, networking, opennebula, ubuntu-12.04

I have a VMware VM running Ubuntu 12.04 Server (name=vmhost), with bridged networking and full internet access.

This vmhost runs the KVM hypervisor and hosts a VM (CentOS 6.4), also with bridged networking.

The vmhost can access the internet and can reach its VM, and the VM can reach the vmhost.
The VM, however, cannot access the internet, nor can I ping/SSH into it from another PC on my subnet.

I have a bridge for the vmhost and its VM and have checked the iptables rules and routes, but haven't found anything wrong. I have also enabled IP forwarding.
Running tcpdump, I see that the vmhost sees the packets but does nothing with them.
I have also tried disabling ufw, but that didn't help.
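For reference, the checks described above can be reproduced on the vmhost with commands like the following (the bridge name virbr0 is taken from the output below; this is a diagnostic sketch, not a fix):

```shell
# Verify IPv4 forwarding is enabled (should print 1)
cat /proc/sys/net/ipv4/ip_forward

# Watch traffic crossing the bridge; -n skips DNS lookups, -i names the interface
sudo tcpdump -n -i virbr0

# Confirm the firewall is inactive
sudo ufw status
```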

Info for VMHOST
route:

Kernel IP routing table

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.0.1     0.0.0.0         UG    100    0        0 virbr0
192.168.0.0     *               255.255.255.0   U     0      0        0 virbr0

The vmhost's iptables -L:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             192.168.122.0/24     state RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination



Interface file:
auto lo
iface lo inet loopback

auto virbr0
iface virbr0 inet static
        address 192.168.0.21
        network 192.168.0.0
        netmask 255.255.255.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
    dns-nameservers 192.168.0.1
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off

brctl show:
bridge name bridge id       STP enabled interfaces
virbr0      8000.000c29f8f8e4   yes     eth0 vnet1
vnet0       8000.000000000000   no   
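One thing worth ruling out on the vmhost (an assumption on my part, not something confirmed by the output above): on Linux, bridged frames can be passed through iptables' FORWARD chain when the bridge-netfilter sysctl net.bridge.bridge-nf-call-iptables is 1, so iptables may be filtering guest traffic even though it is only being bridged. A quick check:

```shell
# Is iptables seeing bridged frames? (1 = yes, 0 = no)
sysctl net.bridge.bridge-nf-call-iptables

# Temporarily stop iptables from filtering bridged traffic (reverts on reboot)
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0
```

If the guest gains connectivity with this set to 0, the problem is an iptables rule, not the bridge itself.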




THE VM
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 02:00:c0:a8:00:20 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.32/24 brd 192.168.0.255 scope global eth0
inet6 fe80::c0ff:fea8:20/64 scope link 
   valid_lft forever preferred_lft forever


ip route
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.32 
default via 192.168.0.1 dev eth0 
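To narrow down where traffic dies, a simple isolation sequence from inside the guest (the gateway address is taken from the routing table above; 8.8.8.8 is just an arbitrary external IP used for illustration):

```shell
# Step 1: can the guest reach its gateway on the local subnet?
ping -c 3 192.168.0.1

# Step 2: can it reach an external IP directly, bypassing DNS?
ping -c 3 8.8.8.8

# Step 3: does name resolution work?
ping -c 3 google.com
```

If step 1 fails, the problem is on the bridge/L2 path; if only steps 2-3 fail, look at routing or the gateway; if only step 3 fails, it is DNS.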

I will post tcpdump results shortly.

It is also worth mentioning that I am running OpenNebula with vmhost as my VM host, but I don't think this is the problem.

Best Answer

You may want to try unbridging all the virtual machines from NAT so that your router assigns each its own IP address via DHCP. Alternatively, keep each virtual machine bridged and enable "Replicate physical connection state" for each machine. Intuitively, either of these should present a consistent, mutually visible network environment for the virtual machines.