Google Compute Engine instances unable to connect between nodes using internal IP

google-compute-engine

It was working fine before I created a new instance from my existing instance.
When I redeployed the snapshots of both instances to another zone,
I wasn't able to ping or connect to either instance via its internal (LAN) IP.

One instance is on 10.240.0.5 and the other on 10.240.0.6,
both within 10.240.0.0/16.

They are on the same network,
and I have an existing firewall rule that allows all ports between nodes, but for some reason it isn't working.
So I deleted the existing rule and entered a new one:
default-all-lan

Network: default
Source IP ranges: 10.240.0.0/16
Allowed protocols and ports: tcp:1-65535, udp:1-65535, icmp
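
For reference, this is roughly the equivalent gcloud command for the rule I created (the name default-all-lan is just what I picked):

    gcloud compute firewall-rules create default-all-lan \
        --network default \
        --source-ranges 10.240.0.0/16 \
        --allow tcp:1-65535,udp:1-65535,icmp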

I still can't ping or connect to any TCP port, including web port 80.
But when I try wget http://node2_externalip from node1, it works!
What could be the issue?

Best Answer

That is strange behavior. If a VM is recreated in the same network, the Google Compute Engine firewall rules will still apply. In general, there are a couple of things you might want to look at (a sketch of example commands for each check follows the list):

1) Check that the Google firewall rule is still valid. For example, if you recreated the VMs without the instance tags that the firewall rule targets, traffic will be blocked.

2) Confirm that the VM itself is not running a local firewall (iptables, for example) that is dropping the packets.

3) Check that the VM has a service listening on the appropriate port (netstat --listen).

4) As already mentioned, check the routes in the Google Cloud project.
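
A rough sketch of those checks, assuming instances named node1 and node2 and the rule name from the question (adjust names and zone to your setup):

    # 1) Inspect the firewall rule and the tags set on an instance
    gcloud compute firewall-rules describe default-all-lan
    gcloud compute instances describe node1 --format="get(tags.items)"

    # 2) Look for a local firewall on the VM that might drop internal traffic
    sudo iptables -L -n -v

    # 3) Confirm a service is actually listening on the expected port
    sudo netstat -ltnp

    # 4) Review the routes defined in the project
    gcloud compute routes list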

Tools like nmap or telnet can also help with troubleshooting.
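
For example, from node1 (assuming 10.240.0.6 is the other node's internal IP, as in the question):

    # Probe the other node's internal IP directly
    nmap -p 80 10.240.0.6
    telnet 10.240.0.6 80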
