SSH login fails for EC2 instances created from an image of a working EC2 instance

amazon-ec2, amazon-ami, ssh, ssh-keys

I have a functioning EC2 instance with several users, some of whom are chrooted to their home directories, some of whom are ftp-only and have no shell access, etc… ec2-user is the main admin account, though others also have root access and full ssh logins. Everything works great on the running instance.

I can take a snapshot image of the instance and launch new instances from it. No matter which key pair I associate with the new instance (the original keypair for ec2-user, a new keypair, or no keypair at all), once the new instance is launched and running, I am unable to ssh into the server as ec2-user or any other ssh-enabled user. ftp works fine, however.
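
(For concreteness, the image/launch step with the AWS CLI would look roughly like this; the instance ID, AMI ID, and key pair name are placeholders, and doing the same thing in the console behaves identically:)

# Create an image (AMI) from the working instance (IDs are placeholders)
aws ec2 create-image --instance-id i-xxxxxxxx --name "working-server-image"

# Launch a new instance from that image, explicitly choosing a key pair
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name original-keypair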

Security groups are not an issue: as far as I can tell, the incoming traffic is allowed (and it's the same security group as the original instance, anyway).
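
(To rule the security group out from the AWS CLI as well, something like the following can be used; the group ID is a placeholder:)

# Confirm that inbound port 22 is allowed in the security group
aws ec2 describe-security-groups \
    --group-ids sg-xxxxxxxx \
    --query 'SecurityGroups[].IpPermissions'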

For the login attempts, /var/log/secure gives me:

sshd[1739]: debug1: userauth-request for user ec2-user service ssh-connection method none
sshd[1739]: debug1: attempt 0 failures 0
sshd[1738]: debug1: user ec2-user does not match group list sftponly at line 142
sshd[1738]: debug1: PAM: initializing for "ec2-user"
sshd[1738]: debug1: PAM: setting PAM_RHOST to "..."
sshd[1738]: debug1: PAM: setting PAM_TTY to "ssh"
sshd[1739]: debug1: userauth-request for user ec2-user service ssh-connection method publickey
sshd[1739]: debug1: attempt 1 failures 0
sshd[1738]: debug1: temporarily_use_uid: 500/500 (e=0/0)
sshd[1738]: debug1: trying public key file /etc/ssh/keys/ec2-user
sshd[1738]: debug1: restore_uid: 0/0
sshd[1738]: debug1: temporarily_use_uid: 500/500 (e=0/0)
sshd[1738]: debug1: trying public key file /etc/ssh/keys/ec2-user
sshd[1738]: debug1: restore_uid: 0/0
sshd[1738]: Failed publickey for ec2-user from xx.xx.xx.xx port 60597 ssh2
sshd[1739]: Connection closed by xx.xx.xx.xx 
sshd[1739]: debug1: do_cleanup

It's the same error for all ssh-enabled users. As you can see from the log, I had changed my sshd_config on the original instance so that it looks for the public keys in the /etc/ssh/keys folder.
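
(The failure can also be reproduced from the client side with verbose output; the key file and hostname below are placeholders:)

# Verbose client-side attempt against a new instance
ssh -vvv -i original-keypair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com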

I have mounted the failed instances' volumes on the working instance. The keys are in the folder, with the same permissions and the same key values, all as expected. Here is the ls -al output for the keys folder (.) and the ec2-user file.

drwxr-xr-x. 2 root     root     4096 Dec  1 16:59 .
-rw-------. 1 ec2-user ec2-user  388 Jul 25 13:27 ec2-user
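
(For reference, this is roughly how a failed instance's volume is attached and mounted for inspection; the volume ID, instance ID, and device names are placeholders, and the partition layout may differ:)

# Attach the failed instance's root volume to the working instance
aws ec2 attach-volume \
    --volume-id vol-xxxxxxxx \
    --instance-id i-xxxxxxxx \
    --device /dev/sdf

# On the working instance, mount it read-only and inspect the keys
# (the kernel may expose /dev/sdf as /dev/xvdf or /dev/xvdf1)
sudo mkdir -p /mnt/failed
sudo mount -o ro /dev/xvdf /mnt/failed
ls -al /mnt/failed/etc/ssh/keys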

What could be causing this problem? I would like to solve it at the point of saving the snapshot and launching from it, or in setting up the original instance, rather than by mounting each problematic instance's volume and making manual changes, which would make that instance functional but wouldn't fix the larger problem.

UPDATE:
Here are the active settings in the sshd_config file:

#...
Protocol 2
#...
SyslogFacility AUTHPRIV
LogLevel DEBUG
#...
AuthorizedKeysFile /etc/ssh/keys/%u
#...
PasswordAuthentication no
#...
ChallengeResponseAuthentication no
#...
GSSAPIAuthentication yes
#...
GSSAPICleanupCredentials yes
#...
UsePAM yes
#...
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
#...
X11Forwarding yes
#...
Subsystem       sftp    internal-sftp -f AUTH -l VERBOSE
#...
Match group sftponly
ChrootDirectory /home/%u
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp -l VERBOSE -f AUTH
#...
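
(A quick way to double-check which of these settings sshd actually uses, assuming an OpenSSH recent enough to support sshd -T, which Amazon Linux's openssh is:)

# Syntax-check the configuration
sudo /usr/sbin/sshd -t

# Dump the effective configuration and confirm the key file location
sudo /usr/sbin/sshd -T | grep -i authorizedkeysfile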

Best Answer

I suspect this is an SELinux problem. Check the context of the folder you are using; I expect it won't be correct for holding keys.
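
One way to confirm that is to look for AVC denials against sshd in the audit log, either on the instance itself or on its mounted volume (this assumes the standard audit log location on Amazon Linux/RHEL):

# Look for SELinux denials involving sshd
sudo grep sshd /var/log/audit/audit.log | grep denied

# Or, if the audit tools are installed:
sudo ausearch -m avc -c sshd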

I'm afraid I no longer have access to a Red Hat box to establish exactly what the context should be. That said, try this:

ls -lZ /root/.ssh

This will yield the SELinux context that your new folder needs. If I remember correctly, the file type should be something like ssh_home_t (for example, system_u:object_r:ssh_home_t).
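
(The exact user, role, and level fields will vary, but the output should look roughly like this; the part that matters is the ssh_home_t type:)

ls -lZ /root/.ssh
-rw-------. root root system_u:object_r:ssh_home_t:s0 authorized_keys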

Then we need to associate that security context with your new authorized keys location:

semanage fcontext -a -t ssh_home_t "/etc/ssh/keys(/.*)?"

This associates the correct context with the new keys location. Finally, we can apply that context to the new location:

restorecon -Rv /etc/ssh/keys
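
A quick way to verify the result (this just re-checks the labels, using the same paths as above):

# Confirm the new labels were applied
ls -lZ /etc/ssh/keys

# And confirm what the policy now expects for a given file
matchpathcon /etc/ssh/keys/ec2-user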