An AMI, as you note, is a machine image. It's a total snapshot of a system stored as an image that can be launched as an instance. We'll get back to AMIs in a second.
Let's look at EBS. Your other two items are sub-items of this. EBS is a virtual block device. You can think of it as a hard drive, although it's really a bunch of software magic that links into another kind of storage backend but makes it look like a hard drive to the instance.
EBS is just the name for the whole service. Inside of EBS you have what are called volumes. These are the "unit" Amazon is selling you. You create a volume, they allocate you X gigabytes, and you use it like a hard drive that you can plug into any of your running computers (instances). Volumes can either be created blank or from a snapshot copy of a previous volume, which brings us to the next topic.
Snapshots are ... well ... snapshots of volumes: an exact capture of what a volume looked like at a particular moment in time, including all its data. You could have a volume, attach it to your instance, fill it up with stuff, then snapshot it, but keep using it. The volume contents would keep changing as you used it as a file system, but the snapshot would be frozen in time. You could create a new volume using this snapshot as a base. The new volume would look exactly like your first disk did when you took the snapshot. You could start using the new volume in place of the old one to roll back your data, or maybe attach the same data set to a second machine. You can keep taking snapshots of volumes at any point in time. It's like a freeze-frame backup that can then easily be made into a new live disk (volume) whenever you need it.
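Using today's unified AWS CLI (the original era of this answer used the older ec2-api-tools, so the command style is newer than the answer itself; the volume ID below is a placeholder), taking a snapshot looks roughly like:

```shell
# Snapshot an existing volume (vol-0abc1234 is a placeholder ID).
# The volume stays attached and usable; the snapshot is frozen in time.
aws ec2 create-snapshot \
    --volume-id vol-0abc1234 \
    --description "freeze-frame backup before upgrade"
```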
So volumes can be based on new blank space or on a snapshot. Got that? Volumes can be attached to and detached from any instance, but connected to only one instance at a time, just like the physical disks they are a virtual abstraction of.
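The create-from-snapshot and attach steps above, sketched with the AWS CLI (all IDs are placeholders; the availability zone must match the instance's zone, and the device name is just a common choice):

```shell
# Create a fresh volume from a snapshot...
aws ec2 create-volume \
    --snapshot-id snap-0def5678 \
    --availability-zone us-east-1a

# ...then attach it to one (and only one) instance.
aws ec2 attach-volume \
    --volume-id vol-0aaa1111 \
    --instance-id i-0bbb2222 \
    --device /dev/sdf
```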
Now back to AMIs. These are tricky because there are two types. One creates an ephemeral instance, where the root file system looks like a drive to the computer but actually sits in memory somewhere and vaporizes the minute it stops being used. The other kind is called an EBS-backed instance. This means that when your instance loads up, it loads its root file system onto a new EBS volume, basically layering the EC2 virtual machine technology on top of their EBS technology. A regular EBS volume is something that sits next to EC2 and can be attached, but the root disk of an EBS-backed instance also IS a volume itself.
A regular AMI is just a big chunk of data that gets loaded up as a machine. An EBS-backed AMI will get loaded onto an EBS volume, so you can shut the instance down and it will start back up from where you left off, just like a real disk would.
Now put it all together. If an instance is EBS backed, you can also snapshot it. Basically this does exactly what a regular snapshot would: a freeze-frame of the root disk of your computer at a moment in time. In practice, it does two things differently. First, it shuts down your instance so that you get a copy of the disk as it would look to an OFF computer, not an ON one. This makes it easier to boot up :) So when you snapshot an instance, EC2 shuts it down, takes the disk picture, then starts it up again. Second, it saves that image as an AMI instead of as a regular disk snapshot. Basically it's a bootable snapshot of a volume.
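With the AWS CLI, that instance-snapshot-to-AMI step is `create-image` (placeholder instance ID; `--no-reboot` is the real flag for skipping the shutdown, at the risk of capturing an inconsistent disk):

```shell
# Register an AMI from a running EBS-backed instance.
# By default EC2 stops the instance first so the root disk is captured
# "off"; add --no-reboot to skip that.
aws ec2 create-image \
    --instance-id i-0bbb2222 \
    --name "my-server-backup"
```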
The host key is generated on the first boot of any instance. Init scripts that access the machine's instance metadata run at every boot. The init script saves the instance ID in a particular file; if that file is absent or contains a different ID, the system initialization stuff is run.
That includes generating the host key (stored at /etc/ssh/ssh_host_{rsa,dsa}_key), downloading the user's public key from the metadata and storing it in the authorized_keys file, setting the hostname, and performing any other system-specific initialization.
Since the determining factor is not the hard disk but the instance ID (unique to each instance), these things will always be done when you boot an EBS volume attached to a new instance.
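The check those init scripts perform can be sketched like this. This is a simplified model, not the distro's actual code: the lookup against the EC2 metadata service (http://169.254.169.254/latest/meta-data/instance-id) is stubbed out as a plain argument, and the cache path is illustrative.

```python
import os

def is_first_boot(current_instance_id: str, cache_path: str) -> bool:
    """True if the cached instance ID is absent or differs from the current one."""
    if not os.path.exists(cache_path):
        return True
    with open(cache_path) as f:
        return f.read().strip() != current_instance_id

def mark_initialized(current_instance_id: str, cache_path: str) -> None:
    """Record the instance ID so later boots skip the first-boot steps."""
    with open(cache_path, "w") as f:
        f.write(current_instance_id)
```

Attaching the same volume to a different instance changes the ID seen in the metadata, so the check fires again and the host key and authorized_keys get regenerated.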
Edit:
I looked deeper into Ubuntu specifically and installed an Ubuntu AMI (3ffb3f56). I'm not a big Ubuntu guy (I usually prefer Debian), so this was getting a little deeper into the Ubuntu upstart-based init sequence than I usually go. It seems what you're looking at is /etc/init/cloud*.conf. These run /usr/bin/cloud-init and friends, which have lines like
cloud.sem_and_run("set_defaults", "once-per-instance",
set_defaults,[ cloud ],False)
All the code is in Python, so it's pretty readable. The base is provided by the package cloud-init, and the backend for the scripts is provided by cloud-tools. You could look and see how it determines "once-per-instance" and trick it that way, or work around your problem with some other solution. Best of luck!
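The "once-per-instance" behavior boils down to a semaphore file keyed on the instance ID. A rough sketch of the idea (cloud-init's real sem_and_run differs in its details; the directory layout and naming here are made up for illustration):

```python
import os

def sem_and_run(name: str, instance_id: str, func, sem_dir: str) -> bool:
    """Run func at most once per (action name, instance ID) pair."""
    marker = os.path.join(sem_dir, f"{name}.{instance_id}")
    if os.path.exists(marker):
        return False                 # already ran on this instance: skip
    func()
    open(marker, "w").close()        # drop the semaphore so reboots skip it
    return True
```

Deleting the marker file, or making the recorded instance ID not match, is the "trick it" approach mentioned above: the action looks like it never ran, so it runs again on the next boot.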
Best Answer
Not sure if this will work, but try this instead of what you did above.
Attach the old EBS volume to the new instance at /dev/sdb2 and don't detach the current /dev/sdb1, since that one belongs to the running instance that you can still SSH into.
Inside the new running instance you should be able to run
Followed by:
Now detach the volume in the EC2 console, reattach it to your old instance, and start it back up. Hopefully you'll now be able to SSH back in.
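The commands themselves are missing from the answer as quoted above. One plausible sequence, purely a guess at what was intended (mount point, device name, and key path are all assumptions), is to mount the old volume and put a working key onto it:

```shell
# Guesswork reconstruction -- the original commands are not shown.
# Mount the old volume attached at /dev/sdb2, copy a known-good
# authorized_keys onto it so SSH works after reattaching, then unmount.
mkdir -p /mnt/oldroot
mount /dev/sdb2 /mnt/oldroot
cp ~/.ssh/authorized_keys /mnt/oldroot/root/.ssh/authorized_keys
umount /mnt/oldroot
```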