I've been stuck on this problem for several days now.
I'm currently running an OpenVPN server on a self-hosted server. Our staff use this so that after logging into the VPN from remote locations, they are able to access resources in our office network. The primary use is connecting to Windows machines over RDP.
We need to migrate this to the cloud. We've set up a test environment in which AWS VPN connections link our office to a Transit Gateway, which in turn is connected to various VPCs. One of those VPCs contains an instance running an OpenVPN server whose configuration almost matches that of the current VPN, with a few small changes. However, while RDP works perfectly fine over the current OpenVPN installation, it is unusable over the cloud-hosted one. By unusable I mean:
- when playing YouTube clips so that we can test framerates, playback freezes almost immediately for up to 10 seconds, then the RDP session gets disconnected
- when doing anything not involving video, the session usually works for a minute or two, then everything freezes and it gets disconnected shortly afterwards
So we are able to connect fine, but clearly something isn't performing as it should, and I've tried everything I can think of. These are the details of the current, working, self-hosted OpenVPN server:
- CentOS 6.10
- OpenVPN 2.4.7
And the contents of server.conf:
local 192.168.1.103
port 1194
proto tcp
dev tun
cert /etc/openvpn/keys2/mycert
key /etc/openvpn/keys2/mykey
dh /etc/openvpn/keys2/dh.pem
server 10.8.0.0 255.255.255.0
topology subnet
route 192.168.2.0 255.255.255.0
push "route 10.8.0.0 255.255.255.0"
push "route 192.168.1.0 255.255.255.0"
push "dhcp-option DNS 192.168.1.1"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
client-config-dir /etc/openvpn/ccd
client-to-client
duplicate-cn
keepalive 20 600
cipher AES-128-CBC
max-clients 100
user nobody
group nobody
persist-key
persist-tun
status /var/log/openvpn/openvpn-status.log
log /var/log/openvpn/openvpn.log
verb 4
username-as-common-name
plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so openvpn
reneg-sec 0
management localhost 17505
compress lz4
mssfix 1432
mute 10
ifconfig-pool-persist ipp.txt
key-direction 0
tcp-queue-limit 256
verify-client-cert none
The details of the cloud OpenVPN server:
- CentOS 7.6.1810
- OpenVPN 2.4.7
And the contents of server.conf:
port 1194
proto udp
dev tun
username-as-common-name
ca /etc/certs/ca.crt
cert /etc/certs/server.crt
key /etc/certs/server.key
dh /etc/certs/dh2048.pem
server 10.8.0.0 255.255.255.0
topology subnet
push "route 10.8.0.0 255.255.255.0"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
client-config-dir /etc/openvpn/ccd
client-to-client
duplicate-cn
keepalive 20 600
tcp-queue-limit 256
cipher AES-256-CBC
auth SHA256
max-clients 100
user nobody
group nobody
persist-key
persist-tun
status /var/log/openvpn/openvpn-status.log
log /var/log/openvpn/openvpn.log
verb 4
plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so openvpn
reneg-sec 0
compress lz4
mssfix 1432
mute 10
ifconfig-pool-persist ipp.txt
key-direction 0
verify-client-cert none
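Since the two configs share most directives, it can help to isolate the deltas mechanically rather than by eye. A quick sketch using comm; the file names old-server.conf and new-server.conf are placeholders for wherever you keep copies of the two configs:

```shell
# Print the directives unique to either config, ignoring ordering
# differences. old-server.conf / new-server.conf are placeholder names.
config_delta() {
  comm -3 <(sort "$1") <(sort "$2")
}

# config_delta old-server.conf new-server.conf
```

Lines unique to the first file appear in the first column, lines unique to the second are indented; anything shared is suppressed.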
You can see some of the routing is different by necessity. Also, although the new server is currently using UDP, that is one of the changes I made while trying to resolve the issue, as noted below.
This is what I've tried:
- tried both TCP and UDP (UDP never worked well for us in the office so we have used TCP for years)
- many MTU values between 500 and 2500, in various increments
- changing the cipher to AES-128-CBC
- all available RDP colour and bandwidth settings
- various RDP display size settings
- TightVNC, which worked perfectly well, but unfortunately isn't an option as a solution. I just wanted to be sure the issue was specific to RDP
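On the MTU point, one way to stop guessing values is to probe the actual path MTU with ping's don't-fragment flag and derive mssfix from the result. A rough sketch, assuming a Linux client; the endpoint hostname is a placeholder, and the ~40-byte OpenVPN overhead figure is an approximation that varies with cipher and auth settings:

```shell
# Find the largest ICMP payload that passes without fragmentation.
# 1472 bytes of payload + 28 bytes of ICMP/IP headers = 1500 path MTU.
# ping -M do -s 1472 -c 3 vpn.example.com   # placeholder endpoint

# Derive mssfix from a known path MTU:
# subtract 20 (IP) + 8 (UDP) + ~40 of OpenVPN crypto/framing overhead.
path_mtu=1500
mssfix=$((path_mtu - 20 - 8 - 40))
echo "$mssfix"
```

With a clean 1500-byte path this gives 1432, which matches the mssfix value already in both configs above.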
Would appreciate any ideas as I'm out of them.
Best Answer
The issue here was the DoS defense settings in the router used by the clients we were attempting to RDP to, specifically the UDP flood defense setting (we use a DrayTek Vigor 2926). The threshold was set at 50 packets per second. Disabling it massively improved the experience, so after some experimentation we settled on a threshold of 5000 packets/second. It now works perfectly.
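For anyone hitting something similar: before touching router settings, you can check how far an RDP-over-OpenVPN stream sits above such a per-second threshold by counting tunnel packets. A sketch, assuming tcpdump is available; eth0 and port 1194 are placeholders for your actual interface and OpenVPN port:

```shell
# Capture OpenVPN packets for a fixed window (seconds), one line per packet.
# eth0 and 1194 are assumptions; substitute your own interface/port.
capture_count() {
  timeout "$1" tcpdump -i eth0 -nn "udp port 1194" 2>/dev/null | wc -l
}

# packets / seconds -> packets per second
pps() {
  echo $(( $1 / $2 ))
}

pps 62000 10   # e.g. 62000 packets in a 10 s capture -> prints 6200
```

Even a modest RDP session will sit orders of magnitude above a 50 pps threshold, which is why the sessions were being cut off.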