If I increase the MTU from 1500 to 9000 for OpenVPN when connecting to a remote server over the internet, will I get fragmentation and no performance gain, since I have no way of knowing whether the routers that handle my packets support jumbo frames?
Many providers still use 1500-byte IP MTUs; you cannot depend on anything larger. It is very unlikely that 9000-byte IP packets will make it to another internet destination without fragmentation.
FYI, fragmentation almost always happens in a router's CPU packet-processing path; ASICs normally don't handle it, so fragments get punted to the CPU of the router / L3 switch. Thus you stand a decent chance of making your performance worse by sending jumbo frames across the internet: the first hop with a 1500-byte MTU would punt every jumbo packet to its CPU, which would limit your transfer speed by itself. On the other end, you wind up reassembling the packets, which adds yet another complication to making transfers faster.
Caveat: I personally don't know whether OpenVPN sets or clears the DF bit in the IP header. If it's set, packets larger than the provider's IP MTU will simply get dropped.
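You can check what a given path will actually carry yourself. On Linux, ping can set the DF bit and an explicit payload size; 8972 bytes of ICMP payload + 8 bytes of ICMP header + 20 bytes of IP header = a 9000-byte packet (the destination below is just a placeholder, substitute your remote server):
ping -M do -s 8972 192.0.2.1   # DF set; fails (e.g. "message too long") if the path can't carry 9000 bytes
ping -M do -s 1472 192.0.2.1   # the same test sized for a standard 1500-byte MTU
If the 1472-byte probe succeeds and the 8972-byte one doesn't, jumbo frames aren't going to survive that path.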
Just did a quick test to see whether tcpdump (I assumed tshark behaves the same; see the EDIT below, where I confirm it) shows the NAT'd or the original source IP address on the egress interface when NAT is configured using iptables, even without OpenVPN configured.
Setup:
Ubuntu eth1 <-> eth1 CentOS eth0 <-> "Internet"
Ubuntu config:
sudo ip addr add 172.16.1.2/30 dev eth1    # inside address
sudo ip link set dev eth1 up
sudo ip route add default via 172.16.1.1   # the CentOS box is the gateway
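(A quick sanity check on the Ubuntu side, using standard iproute2 commands:)
ip addr show dev eth1   # should list 172.16.1.2/30
ip route                # should show "default via 172.16.1.1 dev eth1"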
CentOS config:
dhclient -v eth0                         # grab the "Internet" address via DHCP
ip addr add 172.16.1.1/30 dev eth1       # inside address, facing Ubuntu
ip link set dev eth1 up
echo 1 > /proc/sys/net/ipv4/ip_forward   # enable IPv4 forwarding
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # NAT everything leaving the outside interface
iptables -F FORWARD                                    # flush any existing FORWARD rules first
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT          # allow inside -> outside
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT   # allow return traffic only
(Just ask if you need any of the above config explained.)
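Before testing, you can confirm the CentOS box will actually forward and translate (again, standard commands, nothing specific to this lab):
sysctl net.ipv4.ip_forward             # should print "net.ipv4.ip_forward = 1"
iptables -t nat -L POSTROUTING -n -v   # should show the MASQUERADE rule, with packet counters
iptables -L FORWARD -n -v              # should show the two ACCEPT rules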
Now we ping from Ubuntu (172.16.1.2) to 8.8.8.8 (Google DNS)...
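(The exact invocation doesn't matter; on the Ubuntu box it was an ordinary ping along the lines of the following.)
ping -c 3 8.8.8.8   # three echo requests, matching the three request/reply pairs captured below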
Internal:
[root@localhost ~]# tcpdump -ni eth1 'icmp'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
11:02:44.251212 IP 172.16.1.2 > 8.8.8.8: ICMP echo request, id 2996, seq 1, length 64
11:02:44.269621 IP 8.8.8.8 > 172.16.1.2: ICMP echo reply, id 2996, seq 1, length 64
11:02:45.252338 IP 172.16.1.2 > 8.8.8.8: ICMP echo request, id 2996, seq 2, length 64
11:02:45.268138 IP 8.8.8.8 > 172.16.1.2: ICMP echo reply, id 2996, seq 2, length 64
11:02:46.253904 IP 172.16.1.2 > 8.8.8.8: ICMP echo request, id 2996, seq 3, length 64
11:02:46.270498 IP 8.8.8.8 > 172.16.1.2: ICMP echo reply, id 2996, seq 3, length 64
External:
[root@localhost ~]# tcpdump -ni eth0 'icmp'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
11:02:44.251252 IP 10.0.2.15 > 8.8.8.8: ICMP echo request, id 2996, seq 1, length 64
11:02:44.269581 IP 8.8.8.8 > 10.0.2.15: ICMP echo reply, id 2996, seq 1, length 64
11:02:45.252363 IP 10.0.2.15 > 8.8.8.8: ICMP echo request, id 2996, seq 2, length 64
11:02:45.268126 IP 8.8.8.8 > 10.0.2.15: ICMP echo reply, id 2996, seq 2, length 64
11:02:46.253942 IP 10.0.2.15 > 8.8.8.8: ICMP echo request, id 2996, seq 3, length 64
11:02:46.270469 IP 8.8.8.8 > 10.0.2.15: ICMP echo reply, id 2996, seq 3, length 64
Success!
More importantly, we see that tcpdump shows the NAT'd address!
In your output, the captured IP is 172.16.1.6, which is a clear indication of why no traffic is being returned: the source address is never being translated.
Your ISP (or some internet router along the way) is dropping the traffic as soon as it sees the private source address; even if nothing dropped it, replies to an RFC 1918 address couldn't be routed back to you anyway.
As another comment mentioned, check your NAT configuration, or better yet, get NAT working as I did above before you even touch OpenVPN.
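The fastest way to verify is exactly what I did above: capture on your real egress interface and look at the source address (substitute your internet-facing interface for eth0):
tcpdump -ni eth0 'icmp'   # if the source still shows 172.16.1.6 here, your NAT rule is never being applied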
Cheers!
EDIT: Just tested with tshark, and it does behave the same as tcpdump, showing the post-NAT source IP address:
[root@localhost ~]# tshark -ni eth0 'icmp'
Running as user "root" and group "root". This could be dangerous.
Capturing on eth0
0.000000000 10.0.2.15 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0bbb, seq=1/256, ttl=63
0.015220791 8.8.8.8 -> 10.0.2.15 ICMP 98 Echo (ping) reply id=0x0bbb, seq=1/256, ttl=54
Looking at your iptables config more closely...
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
iptables -A INPUT -i tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -j ACCEPT
The first line tells iptables to perform NAT on tun0, i.e. it treats tun0 as the "external networking device".
The second line isn't really relevant to forwarding, since the INPUT chain applies to traffic destined for the local server itself, not to traffic passing through it, if I'm not mistaken.
The third line treats tun0 as the "internal / inside" interface.
Seems to me that:
1) your NAT'd (outside) interface should be eth0, not the tunnel interface;
2) you need to specify the outside interface (eth0) with something like "-o eth0 -j ACCEPT" on the same FORWARD line where you specify your inside interface (tun0); and
3) you shouldn't forget a separate iptables rule for the return traffic, as in my example above!
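Putting those three points together, the rules would look something like this (a sketch only; I'm assuming eth0 is your internet-facing interface, since I can't see your full config):
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE                                 # NAT on the *outside* interface
iptables -A FORWARD -i tun0 -o eth0 -j ACCEPT                                        # tunnel -> internet
iptables -A FORWARD -i eth0 -o tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT   # return traffic only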
Best Answer
IMHO, the biggest disadvantage of OpenVPN is that it's not interoperable with the vast majority of products from the "big name" network vendors out there. Cisco's and Juniper's security and router products don't support it; they only support IPsec and their own proprietary SSL VPNs. Palo Alto, Fortinet, Check Point, etc. don't support it, either. So if your organization / enterprise wants to set up a site-to-site extranet VPN to another company and all you've got is an OpenVPN appliance, you're probably going to be out of luck.
That being said, some network hardware & software companies are starting to embrace OpenVPN. MikroTik is one of them. It's been supported since RouterOS 3.x:
http://wiki.mikrotik.com/wiki/OpenVPN
Also, for the longest time, the only way to run an OpenVPN client on Apple's iOS was to jailbreak the device. That's no longer the case:
https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8
Overall, the situation is improving. However, without vendors like Cisco & Juniper implementing it in their products, I can't see large enterprises adopting it without facing interoperability problems.