SSH – Port Forwarding via Remote Hostname Unable to Connect

internal-dns, networking, ssh, ssh-tunnel

I'm trying to set up local port forwarding over SSH through a remote server (the "jump server"), which has access to a MariaDB server (the "DB server"). The jump server has the public IP address 1.2.3.4, while the DB server has the internal IP address 10.5.6.7. The following command works as expected:

ssh -v -N user@1.2.3.4 -L 3306:10.5.6.7:3306

However, the DB server's internal IP address is not static, so I'd like to use its internal hostname, mariadb.local, instead. The following does not work:

ssh -v -N user@1.2.3.4 -L 3306:mariadb.local:3306

This produces the following output from sshd on the jump server:

debug1: active: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
debug1: Entering interactive session for SSH2.
debug1: server_init_dispatch
debug1: server_input_global_request: rtype no-more-sessions@openssh.com want_reply 0
debug1: server_input_channel_open: ctype direct-tcpip rchan 2 win 2097152 max 32768
debug1: server_request_direct_tcpip: originator 127.0.0.1 port 58953, target mariadb.local port 3306
connect_to mariadb.local: unknown host (Try again)
debug1: server_input_channel_open: failure direct-tcpip

So it seems like a DNS resolution issue, but I'm not sure how DNS resolution works during port forwarding, so this is where my knowledge breaks down. To be clear, this is the result of ping from the jump server:

/ # ping mariadb.local
PING mariadb.local (10.5.6.7): 56 data bytes
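To clarify the part about how resolution works during forwarding: with -L, the target hostname is resolved by sshd on the jump server, not by the local client. A quick server-side check is getent, which goes through the same NSS//etc/resolv.conf path that sshd uses when it connects out (mariadb.local is, of course, specific to this environment):

```shell
# Resolve the forwarding target the same way sshd would on the jump server.
# Prints the address records on success; reports failure otherwise.
getent hosts mariadb.local && echo "jump server can resolve it" \
    || echo "jump server cannot resolve it"
```

If this succeeds in an interactive shell but sshd still logs "unknown host", the process handling the forwarded connection is seeing a different resolver environment than your shell.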

A note: This whole environment is actually within kubernetes, so the hostname mariadb.local is being made available via as service, and resolved through k8s coreDNS. However, I can't see why that would affect anything, so I omitted it from the main description of the problem to avoid complicating the matter further, and avoid suggestions of "use kubectl port-forward"; this isn't an option as I want to make this service available to users I don't want to give kubectl access to.

Best Answer

The issue was that the user was chrooted (via ChrootDirectory in sshd_config). As it turns out, when privilege separation is in effect (which is mandatory as of OpenSSH 7.5, where the UsePrivilegeSeparation option was removed), the forked process that handles the forwarded connection is also chrooted.

Concretely, this meant the process could not read /etc/resolv.conf, which it needs in order to resolve hostnames. There are a few ways to fix it, but if you're seeing errors like this, it's likely that resolv.conf is unreadable from the process's point of view, either because it's absent from the chroot or because of the file's permissions.
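One such fix can be sketched as follows: make a copy of /etc/resolv.conf available inside the chroot. The CHROOT path here is a placeholder (a temp dir, so the sketch runs as-is); on a real jump server it would be the ChrootDirectory from your sshd_config.

```shell
# Stage /etc/resolv.conf inside the chroot so the forked sshd process
# can read it. CHROOT stands in for your actual ChrootDirectory; a temp
# dir is used here only so the sketch is runnable as-is.
CHROOT=$(mktemp -d)
mkdir -p "$CHROOT/etc"
cp /etc/resolv.conf "$CHROOT/etc/resolv.conf"
chmod 644 "$CHROOT/etc/resolv.conf"   # must be readable by the chrooted user
```

Instead of a copy, a bind mount (mount --bind /etc/resolv.conf "$CHROOT/etc/resolv.conf") keeps the file current if the host's resolver configuration changes.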