OpenVPN Routing to LAN Behind Server

ip-routing, linux-networking, tcpdump, ubuntu-16.04, vpn

I have a site-to-site VPN configured using OpenVPN. The tunnel comes up just fine (and I can ping from one end of it to the other), but I cannot get the networks on the two ends to see each other.

My topology is as follows:

Net1 (192.168.13.0/24)
 |
 | 
 |
192.168.13.35
ens160
-----------
OVPN Client
-----------
tun0
10.13.10.2
 |
 |
10.13.10.1
tun0
-----------
OVPN Server
-----------
ens160
10.1.121.6
 |
 |
Net2 (10.1.121.0/26)

I can ping across the tunnel (here, from the server to the client's tunnel address):

server# ping 10.13.10.2
PING 10.13.10.2 (10.13.10.2) 56(84) bytes of data.
64 bytes from 10.13.10.2: icmp_seq=1 ttl=64 time=5.46 ms
64 bytes from 10.13.10.2: icmp_seq=2 ttl=64 time=5.01 ms

I can get from the client to Net2 (after adding the appropriate routes, of course):

client# ping 10.1.121.8
PING 10.1.121.8 (10.1.121.8) 56(84) bytes of data.
64 bytes from 10.1.121.8: icmp_seq=1 ttl=63 time=48.0 ms

However, I am completely unable to go the other way around (ping something on the client's network, Net1, from the server). I cannot even reach the client's address on Net1 from the server:

server# ping 192.168.13.35
PING 192.168.13.35 (192.168.13.35) 56(84) bytes of data.
^C
--- 192.168.13.35 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2014ms

I do have the appropriate routes:

server# ip route
default via 10.1.121.1 dev ens160 onlink 
10.1.121.0/26 dev ens160  proto kernel  scope link  src 10.1.121.6 
10.13.10.0/24 dev tun0  proto kernel  scope link  src 10.13.10.1 
192.168.13.0/24 via 10.13.10.2 dev tun0 

client# ip route
default via 192.168.13.1 dev ens160 onlink 
10.1.121.0/24 via 10.13.10.1 dev tun0 
10.13.10.0/24 dev tun0  proto kernel  scope link  src 10.13.10.2 
192.168.13.0/24 dev ens160  proto kernel  scope link  src 192.168.13.35 
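
For what it's worth, routes like these can also be created by the OpenVPN configs themselves rather than by hand. A minimal sketch of the directives that would produce the same tables (an assumption on my part, since the post does not show the configs):

# server side: send Net1 traffic into the tunnel
route 192.168.13.0 255.255.255.0

# client side: send Net2 traffic into the tunnel
route 10.1.121.0 255.255.255.0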

iptables is not blocking anything (all chain policies are set to ACCEPT):

client# iptables -L -vn
Chain INPUT (policy ACCEPT 56 packets, 3839 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 40 packets, 4343 bytes)
 pkts bytes target     prot opt in     out     source               destination   

server# iptables -L -vn
Chain INPUT (policy ACCEPT 736 packets, 75398 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    2   168 ACCEPT     all  --  tun0   *       0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT 4 packets, 236 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    1    84 ACCEPT     all  --  tun0   *       0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 449 packets, 43393 bytes)
 pkts bytes target     prot opt in     out     source               destination 
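
A quick way to double-check this is to watch the counters while a ping is running; if the pkts column of the FORWARD chain stays at zero, the traffic never reaches netfilter at all (the command is my suggestion, not from the post):

server# watch -n1 'iptables -L FORWARD -vn'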

If I run tcpdump on the tunnel interfaces, I see the ICMP packets leaving the server, but I never see them arrive on the client:

server# ping 192.168.13.35
PING 192.168.13.35 (192.168.13.35) 56(84) bytes of data.

Meanwhile, tcpdump on the server's tun0 shows the echo requests going out:

16:57:40.262004 IP 10.13.10.1 > 192.168.13.35: ICMP echo request, id 1562, seq 1, length 64
16:57:41.269165 IP 10.13.10.1 > 192.168.13.35: ICMP echo request, id 1562, seq 2, length 64
16:57:42.277154 IP 10.13.10.1 > 192.168.13.35: ICMP echo request, id 1562, seq 3, length 64
16:57:43.285163 IP 10.13.10.1 > 192.168.13.35: ICMP echo request, id 1562, seq 4, length 64

...but the same capture on the client's tun0 never sees them arrive:

client# tcpdump -i tun0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun0, link-type RAW (Raw IP), capture size 262144 bytes

Both endpoints are Ubuntu 16.04 Server LTS (x64, installed mostly with the defaults).

I thought I knew a little bit about Linux networking, but it appears I was wrong 🙂. I have no idea why this is not working, and I have run out of things to try. Can anyone point me in the right direction?

Thank you!

Best Answer

You may be missing an iroute. Over and above pushing routes, you need an iroute directive in the server-side configuration for the client that owns the LAN. Here is the relevant excerpt from the OpenVPN man page:

--iroute network [netmask]

Generate an internal route to a specific client. The netmask parameter, if omitted, defaults to 255.255.255.255.

This directive can be used to route a fixed subnet from the server to a particular client, regardless of where the client is connecting from. Remember that you must also add the route to the system routing table as well (such as by using the --route directive). The reason why two routes are needed is that the --route directive routes the packet from the kernel to OpenVPN. Once in OpenVPN, the --iroute directive routes to the specific client.

This option must be specified either in a client instance config file using --client-config-dir or dynamically generated using a --client-connect script.

The --iroute directive also has an important interaction with --push "route ...". --iroute essentially defines a subnet which is owned by a particular client (we will call this client A). If you would like other clients to be able to reach A's subnet, you can use --push "route ..." together with --client-to-client to effect this. In order for all clients to see A's subnet, OpenVPN must push this route to all clients EXCEPT for A, since the subnet is already owned by A. OpenVPN accomplishes this by not pushing a route to a client if it matches one of the client's iroutes.

You would need an entry like the one below on the server side, in the connecting client's client-config-dir file (not in the client's own config):

iroute 192.168.13.0 255.255.255.0
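
Putting it together for this topology, a minimal sketch could look like the following (the paths and the client name site1 are assumptions; the CCD file must be named after the client certificate's Common Name):

# /etc/openvpn/server.conf (relevant lines only)
route 192.168.13.0 255.255.255.0     # kernel hands Net1 traffic to OpenVPN
client-config-dir /etc/openvpn/ccd

# /etc/openvpn/ccd/site1 (named after the client's CN)
iroute 192.168.13.0 255.255.255.0    # OpenVPN hands Net1 traffic to this client

This would also explain the tcpdump output above: the kernel route gets the echo requests onto the server's tun0, but without the iroute OpenVPN does not know which connected client owns 192.168.13.0/24 and drops them, so nothing ever arrives on the client.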

Further, you may need to check out client-config-dir (CCD) if you have multiple networks behind multiple clients.

client-config-dir

This directive sets a client configuration directory, which the OpenVPN server will scan on every incoming connection, searching for a client-specific configuration file (see the manual page for more information). Files in this directory can be updated on the fly, without restarting the server. Note that changes in this directory will only take effect for new connections, not existing connections. If you would like a client-specific configuration file change to take immediate effect on a currently connected client (or one which has disconnected, but where the server has not timed out its instance object), kill the client instance object by using the management interface (described below). This will cause the client to reconnect and use the new client-config-dir file.
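
As a sketch, a multi-client layout could look like this (the client names, the second subnet, and the management port are purely illustrative):

# /etc/openvpn/server.conf
client-config-dir /etc/openvpn/ccd
management localhost 7505            # optional: lets you kill/reconnect clients

# /etc/openvpn/ccd/site1
iroute 192.168.13.0 255.255.255.0

# /etc/openvpn/ccd/site2 (hypothetical second site)
iroute 192.168.23.0 255.255.255.0

Assuming the management directive above, a changed CCD file can be applied to an already-connected client by connecting to the management interface (e.g. telnet localhost 7505) and issuing "kill site2"; the client then reconnects and picks up the new file.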