When the user's home directory is encrypted with ecryptfs, sshd cannot read the authorized_keys file from the user's home directory before the home directory has been mounted.
During login, sshd uses pam to authenticate the user, and pam uses the password entered by the user to mount the encrypted home directory.
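On systems using ecryptfs-utils, this mounting is typically performed by the pam_ecryptfs module. A minimal sketch of the relevant PAM configuration follows; the exact file paths and options vary by distribution, so treat this as illustrative:

```
# /etc/pam.d/common-auth (sketch; captures the password for unwrapping)
auth    optional    pam_ecryptfs.so unwrap

# /etc/pam.d/common-session (sketch; mounts the home directory)
session optional    pam_ecryptfs.so unwrap
```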
This is problematic if you want to restrict sshd to permit only public key authentication. However, you can place an unencrypted authorized_keys file on the server as well. This permits the user to log in using a key, but since this does not invoke pam, the home directory will not be mounted, and mounting the home directory without knowing the password won't work either.
Since the unencrypted home directory gets hidden by the encrypted home directory once it is mounted, placing the unencrypted authorized_keys file in the first place can be a bit tricky. A bind mount of the underlying file system can help with that.
If, for example, /home is just a directory on the root file system, you can do the following:

mkdir /mnt/rootfs
mount --bind / /mnt/rootfs

Then you can create /mnt/rootfs/home/$USER/.ssh/authorized_keys
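For illustration, here is a sketch of creating that file with safe ownership and permissions. A temporary directory stands in for /mnt/rootfs so the sketch can run without root; the user name and key material are placeholders:

```shell
# ROOTFS would be /mnt/rootfs on a real system; a temp dir stands in here
# so this can run unprivileged. USER_NAME and the key are examples only.
ROOTFS="${ROOTFS:-$(mktemp -d)}"
USER_NAME="${USER_NAME:-alice}"
SSH_DIR="$ROOTFS/home/$USER_NAME/.ssh"

# sshd refuses keys in world-readable locations, so set strict modes.
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
printf '%s\n' 'ssh-rsa AAAA...example alice@example' > "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
# On the real system, also: chown -R "$USER_NAME:" "$SSH_DIR"
```

On the real system, remember that sshd also checks the ownership and modes of the home directory itself.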
There is more you can do. Since the encrypted and unencrypted versions of authorized_keys are two different files, you can put different contents in them. For example, the unencrypted version can invoke a script to mount the encrypted home directory:

command="/usr/local/bin/ecryptfs-mount-from-ssh" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDM1Ot12ThbTcPOGpfh7AiRqp3P4BMm3DNo4mDg7gDFPwCmM9rKRHTH0fBVSqkSGlXm84q29bckDukg7vfqkbTpbkP3e2YmTkP6p1J2SoX2QMUnBRRgL9It/ZiAfA2I4QzUrcywVvokO1F2DqcRLy5e5wKTUFfvIm6D2QfBmGbnW2Kkpn16hQyLT1ClXjFC1qXUhazePv0cAtWUCUGjRcLr/ipOphS7eOB46cGhYqtbMkKx0t93ZG4f6jM0o32cYy3RqprpZpTmCeG1gDyG+IlSLBYXYggr72iwTKsTZ9pMDTCBQ8Pb7l317TPOcJzTtDxnpgpGE3x4Vu/Ww+zhsIeT kasperd 2014 May 24
The important part is the command option specified before the key. It gets invoked instead of the shell. But that happens only when this particular public key is used, and only if the user's home directory is not mounted.
If the user's home directory is already mounted, this authorized_keys file is hidden and the encrypted version is used instead. The encrypted version of authorized_keys does not have the command option, so the script to mount the home directory is not run.
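To make the two variants concrete, the pair of files could look like this. The key material is abbreviated and the restriction options are illustrative; note that no-agent-forwarding is deliberately not used, since the agent trick described later relies on agent forwarding:

```
# Unencrypted ~/.ssh/authorized_keys (visible only before the home
# directory is mounted) -- forces the mount script to run:
command="/usr/local/bin/ecryptfs-mount-from-ssh",no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... kasperd

# Encrypted ~/.ssh/authorized_keys (visible once mounted) -- same key,
# no command option, so the normal shell is started:
ssh-rsa AAAA... kasperd
```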
So, what goes in the script? Here is my version:
#!/bin/bash -e
# With an argument: derive a passphrase from an ssh-agent signature and
# insert the unwrapped mount passphrase into the kernel keyring.
if [ "$#" -eq 1 ]
then
    # Extract the base64 blob of the matching public key.
    PUBKEY="$(
        grep "$1" "$HOME/.ssh/authorized_keys" |
        sed -e 's/.* ssh-rsa //;s/ .*//')"
    /usr/local/bin/ssh-agent-ecryptfs-decryption.py "$PUBKEY" "$1" |
        ecryptfs-unwrap-passphrase "$HOME/.ecryptfs-ssh-wrapped/$1" - |
        ecryptfs-add-passphrase --fnek
fi
# Mount the encrypted home directory; without the keyring prepared
# above, this prompts for the user's login password.
ecryptfs-mount-private
cd "$HOME"
# Run the command requested by the ssh client, if any, else a login shell.
if [ -n "$SSH_ORIGINAL_COMMAND" ]
then
    exec /bin/bash -c "$SSH_ORIGINAL_COMMAND"
fi
exec /bin/bash -l
In the example above, the command in the authorized_keys file is invoked without arguments, so the first if block is skipped. The ecryptfs-mount-private command will thus ask for the user's password. But this does not require sshd to have password authentication enabled, and thus works on an sshd configured for public key authentication only.
The next command changes to the user's encrypted home directory (until that point, the script is running inside the unencrypted home directory).
The last part of the script runs the command given as an argument to the ssh command, if any, or the user's login shell if no command was given.
One caveat is that this does not work with X11 forwarding, because the home directory is not yet available when the cookie would be stored. But any other session opened while the home directory is already mounted will be able to handle X11 forwarding.
Using ~/.ssh/rc instead could possibly solve the X11 forwarding issue. This is something I have not looked into yet.
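For reference, an untested sketch of such an rc file, following the xauth recipe from the sshd(8) man page: when ~/.ssh/rc exists, sshd passes the X11 authentication protocol and cookie on its standard input and skips its own xauth call, so the cookie could be stored after the home directory is mounted:

```
# ~/.ssh/rc (untested sketch): store the X11 cookie that sshd hands
# us on stdin, instead of letting sshd call xauth itself.
if read proto cookie && [ -n "$DISPLAY" ]
then
    echo add "$DISPLAY" "$proto" "$cookie" | xauth -q -
fi
```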
The first if block is a bit of a hack, which I came up with to allow the user's home directory to be mounted without needing a password. Instead it uses a forwarded ssh-agent to mount the user's home directory. That part comes with disclaimers about not having had any peer review, so trusting the cryptography in ssh-agent-ecryptfs-decryption.py is entirely at your own risk.
The Python script looks like this:
#!/usr/bin/env python
# Python 2: speaks just enough of the ssh-agent protocol to request a
# signature over a fixed message and turn the signature into a passphrase.
from sys import argv
from os import environ
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(environ['SSH_AUTH_SOCK'])

def encode_int(v):
    # 32-bit big-endian integer, as used by the agent protocol
    return ('%08x' % v).decode('hex')

def encode_string(s):
    # length-prefixed string
    return encode_int(len(s)) + s

def encode_mpint(v):  # unused here, kept for completeness
    h = '%x' % v
    if len(h) & 1: h = '0' + h
    return ('%04x%s' % (len(h) * 4, h)).decode('hex')

key_blob = argv[1].decode('base64')
msg = 'ecryptfs-decrypt ' + argv[2]
# SSH2_AGENTC_SIGN_REQUEST is message number 13
s.send(encode_string(chr(13) +
                     encode_string(key_blob) +
                     encode_string(msg) +
                     encode_int(0)))
response = s.recv(1024)
# Expect SSH2_AGENT_SIGN_RESPONSE (message number 14)
assert response == encode_string(chr(14) + response[5:]), argv[1]
passphrase = response[-48:].encode('base64').replace('\n', '')
print passphrase
So how does the decryption work? First of all, the argument given to the script in authorized_keys can be any random value; a UUID generated with uuidgen would work. The shell script uses grep to find the relevant line in the authorized_keys file and extracts the public key from it.
The base64-encoded public key as well as the UUID are given to the Python script. The public key used is exactly the one the user authenticated with. The Python script asks the forwarded agent for a signature on a specific message using the public key in question (because signing messages is exactly what ssh-agent can do). Part of the signature is then base64-encoded to produce a password.
This password is used to decrypt an ecryptfs-wrapped passphrase file. The primary wrapped-passphrase file is encrypted using the user's login password; this additional one is encrypted with the password derived from the ssh key.
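For completeness, setting up such a wrapped file might look roughly like this. This is an untested sketch: it must run in a session where the home directory is mounted and the agent holding the key is forwarded, MOUNT_PASSPHRASE stands for the real ecryptfs mount passphrase, and the exact stdin convention of ecryptfs-wrap-passphrase should be verified against your version of ecryptfs-utils:

```
# Untested setup sketch; file names follow the shell script above.
uuid=$(uuidgen)
pubkey=$(sed -e 's/.* ssh-rsa //;s/ .*//' "$HOME/.ssh/authorized_keys" | head -n 1)
sig_pass=$(/usr/local/bin/ssh-agent-ecryptfs-decryption.py "$pubkey" "$uuid")
mkdir -p "$HOME/.ecryptfs-ssh-wrapped"
# Wrap the real mount passphrase under the signature-derived password.
printf '%s\n%s\n' "$MOUNT_PASSPHRASE" "$sig_pass" |
    ecryptfs-wrap-passphrase "$HOME/.ecryptfs-ssh-wrapped/$uuid" -
echo "$uuid"
```

The printed UUID is what a client would then pass as the argument when invoking the mount script.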