Azure VPN Site-to-site connected but host not reachable

azure, checkpoint, site-to-site-vpn, vpn, windows-server-2016

Using an Azure VPN gateway, I created a site-to-site connection with another VPN device (a Check Point) over which I have no control (customer endpoint).

I created the connection using their public IP, entered the pre-shared key, and for the local address space I discussed with the client which 'local' IP range is desired on both sides. We agreed on an address in the 172.0.0.0 range.

The gateway connection says succeeded/connected, and I see very little traffic in the data-out field (KBs, not MBs).

However, when I try to ping the local address space (172.xxx.xxx.xxx) from my Windows Server 2016 VM, I only get 'Request timed out' errors.

Do I need to create additional routes in Windows? I tried adding the route

  route -p ADD 172.xxx.xxx.xxx MASK 255.255.255.255 0.0.0.0

but the host is still unreachable.
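
For reference, the current routes and the point where the ping dies can be inspected from PowerShell; the 172.0.0.10 address below is just a placeholder for the remote host:

  # List IPv4 routes to verify the new entry and its next hop
  Get-NetRoute -AddressFamily IPv4 |
      Format-Table DestinationPrefix, NextHop, InterfaceAlias

  # Ping with a built-in trace to see at which hop the packets stop
  Test-NetConnection -ComputerName 172.0.0.10 -TraceRoute

A next hop of 0.0.0.0 is not a usable gateway, which is likely why the route above had no effect; the edit below supplies a real next hop.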

Any ideas? Thanks.

EDIT: added some progress below

Thanks, I allowed the ping, and I can now ping my VPN gateway (10.XXX.XXX.4) from my Azure VM. I then added the route

  route -p ADD 172.xxx.xxx.xxx MASK 255.255.255.255 10.XXX.XXX.4

and via tracert I can see the 172 address is routed via the VPN gateway, but then it times out. Does this mean the issue is now on the on-premises side?
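
One way to confirm the Azure side is programming the route correctly is to dump the effective routes on the VM's NIC; a sketch with the Az PowerShell module (resource and NIC names are placeholders):

  # Show the routes actually applied to the VM's NIC; the on-premises 172.x
  # prefix should appear with NextHopType 'VirtualNetworkGateway'
  Get-AzEffectiveRouteTable -ResourceGroupName "MyResourceGroup" `
      -NetworkInterfaceName "myvm-nic" |
      Format-Table AddressPrefix, NextHopType, NextHopIpAddress

If that route is present, Azure is already steering the prefix to the gateway and a manual route on the VM is normally unnecessary; a timeout past the gateway then points at the tunnel or the remote side.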

EDIT 2

By now, when running the VPN diagnostics log, I do see some traffic, but I still cannot reach the other side.

  Connectivity State : Connected
  Remote Tunnel Endpoint : XXX.XXX.XXX.XXX
  Ingress Bytes (since last connected) : 360 B
  Egress Bytes (since last connected) : 5272 B
  Ingress Packets (since last connected) : 3 Packets
  Egress Packets (since last connected) : 130 Packets
  Ingress Packets Dropped due to Traffic Selector Mismatch (since last connected) : 0 Packets
  Egress Packets Dropped due to Traffic Selector Mismatch (since last connected) : 0 Packets
  Bandwidth : 0 b/s
  Peak Bandwidth : 0 b/s
  Connected Since : 9/18/2017 5:33:18 AM
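
The same counters can be pulled on demand with the Az PowerShell module (connection and resource group names are placeholders). Egress far exceeding ingress, as above, usually means packets leave the tunnel but replies never come back:

  # Read the tunnel status and traffic counters for the S2S connection
  Get-AzVirtualNetworkGatewayConnection -Name "MyS2SConnection" `
      -ResourceGroupName "MyResourceGroup" |
      Select-Object ConnectionStatus, IngressBytesTransferred, EgressBytesTransferred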

Best Answer

First of all, check that Windows Firewall is not blocking ICMP; by default, Windows Server drops inbound echo requests.

Search for Windows Firewall and click to open it.

  1. Click Advanced Settings on the left.
  2. From the left pane of the resulting window, click Inbound Rules.
  3. In the right pane, find the rules titled File and Printer Sharing (Echo Request - ICMPv4-In).
  4. Right-click each rule and choose Enable Rule (or use the PowerShell one-liner below).
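
The same rules can be enabled from an elevated PowerShell prompt; a minimal sketch, assuming the default English rule names:

  # Enable all ICMPv4 echo-request rules matching the built-in display name
  Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"

  # Verify they are now enabled
  Get-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)" |
      Select-Object DisplayName, Profile, Enabled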

Second, make sure you have the proper routing in place. The servers in your on-premises environment need to know how to reach the Azure environment. If your on-premises gateway can ping the Azure servers and vice versa, that part is fine, but the only device that knows the route is the gateway itself. Make sure the servers in your network also know how to reach the Azure network by adding a route to the Azure network through the gateway. Example:

Next hop is the on-premises VPN device:

  VMs -> default Windows gateway / VPN device -> Azure VPN gateway
  route -p ADD <azure_network> MASK <azure_net_mask> <on_prem_vpn_device_ip>

As your VMs' next hop is usually the default Windows gateway, this route makes sure that the next hop to reach <azure_network> is the on-premises VPN device, which then tunnels the traffic to Azure. Also make sure the local network gateway configuration in Azure includes your on-premises network segment.
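
For instance, with made-up values, if the Azure VNet were 10.1.0.0/16 and the on-premises VPN device's LAN address were 172.16.0.1, the persistent route on an on-premises server would be:

  # Hypothetical values: Azure VNet 10.1.0.0/16, on-prem VPN device at 172.16.0.1
  route -p ADD 10.1.0.0 MASK 255.255.0.0 172.16.0.1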
