Ubuntu – RSA fingerprint changed after moving EBS and EIP to new instance

amazon-ec2, ssh-keys, ubuntu

I'm running Ubuntu on an EBS-backed EC2 instance.

In order to change the security group of my instance, I followed the instructions here for moving the EBS volumes to a new instance, then reassigned my Elastic IP to the new instance.

Now ssh complains that the RSA host key has changed, but I don't see any mention of RSA key generation in the console log. Why does this happen? How can I get the "new" host RSA fingerprint or restore the "old" one?

Update: The procedure I detailed below is much more involved than necessary. The easiest way to manage SSH host keys on an Ubuntu EC2 server is to specify them at instance launch with user data.
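
For reference, here's a minimal sketch of that approach using boto. cloud-init reads an ssh_keys section out of cloud-config user data and installs those keys as the host keys at first boot; the AMI ID, region, key pair name, and key material below are all placeholders.

import boto.ec2

# The ssh_keys section of cloud-config tells cloud-init to install these
# host keys instead of generating fresh ones on first boot.
# The key material here is a placeholder for your saved host key pair.
USER_DATA = """\
#cloud-config
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    (contents of your saved ssh_host_rsa_key)
    -----END RSA PRIVATE KEY-----
  rsa_public: ssh-rsa AAAA... root@myhost
"""

# Region, AMI ID, and key pair name are placeholders; substitute your own.
conn = boto.ec2.connect_to_region("us-east-1")
conn.run_instances("ami-xxxxxxxx", key_name="my-keypair",
                   instance_type="t1.micro", user_data=USER_DATA)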

Here's how I was able to get the new server's RSA fingerprint:

  1. Run new EBS-backed instance, record new temporary RSA fingerprint from console log.
  2. Stop the new instance
  3. Detach EBS vol from new instance
  4. Attach the old volume to /dev/sda1 on the new instance
  5. Start the new instance with old volume attached.
    This is when, as Michael Lowman points out, the ssh_host_rsa_key was (silently) regenerated. If I had skipped straight to step 7 instead of booting from the old volume, I would have seen the original host_rsa_key from the old instance.
  6. Stop the new instance
  7. Detach the old volume from /dev/sda1 and re-attach to /dev/sdb
  8. Re-attach the new instance's original EBS boot volume to /dev/sda1
  9. Start the new instance, connect via SSH (RSA fingerprint should match the temporary one noted in step 1)
  10. Copy the regenerated ssh_host_rsa_key.pub from the old EBS volume (now attached as /dev/sdb) into my local known_hosts file (see the sketch after this list).
  11. Stop the new instance, detach the new volume from /dev/sda1 and delete it.
  12. Detach and re-attach the old volume to /dev/sda1.
  13. Bring up the new instance
  14. ssh doesn't complain about the host RSA fingerprint
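
For step 10, the copy boils down to something like the sketch below, assuming the old volume's filesystem is mounted at /mnt/oldvol; the hostname is a placeholder for whatever name you ssh to.

from pathlib import Path

# Placeholder: the DNS name (or IP) you use to ssh in.
HOST = "ec2-203-0-113-10.compute-1.amazonaws.com"

# Public key files look like "<type> <base64-key> <comment>";
# known_hosts lines are "<host> <type> <base64-key>".
key_type, key_data = Path(
    "/mnt/oldvol/etc/ssh/ssh_host_rsa_key.pub").read_text().split()[:2]

with open(Path.home() / ".ssh" / "known_hosts", "a") as known_hosts:
    known_hosts.write(f"{HOST} {key_type} {key_data}\n")

If there's already a stale entry for that host, ssh-keygen -R <host> will remove it first.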

The question still remains: why did it change?

Best Answer

The host key is generated on the first boot of any instance. Init scripts that read the instance metadata run at every boot. The init script saves the instance ID in a particular file; if that file is absent or contains a different ID, the first-boot system initialization is run.
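
To illustrate the idea (this is a sketch, not the actual init script), the check amounts to something like the following; the metadata URL is the standard EC2 endpoint, while the cache path is a hypothetical stand-in for wherever the script records the last-seen ID.

import urllib.request
from pathlib import Path

# Hypothetical path; the real init script keeps its own record.
CACHE = Path("/var/lib/cloud/data/instance-id.cache")

# Standard EC2 instance metadata endpoint.
current_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id").read().decode()

if not CACHE.exists() or CACHE.read_text().strip() != current_id:
    # Looks like a first boot on this instance: regenerate host keys,
    # fetch the user's public key, set the hostname, and so on...
    CACHE.parent.mkdir(parents=True, exist_ok=True)
    CACHE.write_text(current_id)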

That includes generating the host keys (stored at /etc/ssh/ssh_host_{rsa,dsa}_key), downloading the user's public key from the instance metadata and storing it in the authorized_keys file, setting the hostname, and performing any other instance-specific initialization.
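
The authorized_keys step, for instance, comes down to roughly this sketch; the metadata endpoint is the standard EC2 one, and the target path assumes the default ubuntu user.

import urllib.request
from pathlib import Path

# Fetch the key pair's public half from the standard metadata endpoint.
pubkey = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key"
).read().decode().strip()

# Assumes the default "ubuntu" user; adjust for your image.
auth = Path("/home/ubuntu/.ssh/authorized_keys")
existing = auth.read_text() if auth.exists() else ""
if pubkey not in existing:
    with auth.open("a") as f:
        f.write(pubkey + "\n")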

Since the determining factor is not the hard disk but the instance ID (unique to each instance), these steps will always run when you boot an EBS volume attached to a new instance.

Edit:

I looked deeper into Ubuntu specifically and launched an Ubuntu AMI (3ffb3f56). I'm not a big Ubuntu guy (I usually prefer Debian), so this was getting a little deeper into Ubuntu's upstart-based init sequence than I usually go. It seems what you're looking for is /etc/init/cloud*.conf. These run /usr/bin/cloud-init and friends, which contain lines like

cloud.sem_and_run("set_defaults", "once-per-instance",
                  set_defaults, [cloud], False)

All the code is in Python, so it's pretty readable. The base is provided by the cloud-init package, and the backend for the scripts by cloud-tools. You could look at how it determines "once-per-instance" and trick it that way, or work around your problem with some other solution. Best of luck!
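
As a starting point, here's a hedged sketch for inspecting those semaphores; the directory layout is an assumption based on the version I looked at and varies between cloud-init releases, so check your own system.

from pathlib import Path

# Assumption: "once-per-instance" semaphores are files named
# <task>.<instance-id> under /var/lib/cloud/sem on this cloud-init version.
SEM_DIR = Path("/var/lib/cloud/sem")

if SEM_DIR.is_dir():
    for sem in sorted(SEM_DIR.iterdir()):
        print(sem.name)  # e.g. "set_defaults.i-1234abcd"

Pre-creating the semaphore for the new instance ID before first boot should, in principle, convince cloud-init that the per-instance tasks have already run, but that's untested, so verify against your version.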
