By default, terminating an EC2 instance automatically deletes the EBS volumes that were created with the instance at launch, but this can be changed. It does not, by default, delete EBS volumes that were attached after the instance started running, and this too can be changed.
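For example, you can control this per volume at launch time with a block-device mapping. A minimal sketch with the aws-cli (the AMI id and device name are placeholders):

```
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"DeleteOnTermination":false}}]'
```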
Here's an article I wrote on how to protect your important data with EBS boot instances:
Three Ways to Protect EC2 Instances from Accidental Termination and Loss of Data
http://alestic.com/2010/01/ec2-instance-locking
Note: Instance failure should not automatically delete your EBS volume. However, EBS volume failure is itself one failure mode, so make sure you are creating regular EBS snapshots. Not only does this give you a backup to rely on, but it also magically and transparently increases the reliability of the EBS volume itself.
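For example, taking a snapshot from the command line is a one-liner (assuming the aws-cli is configured; the volume id is a placeholder):

```
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "nightly backup of data volume"
```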
If you launch an instance from an EBS-root community AMI, EBS volumes are created under your account from the snapshots associated with that AMI. After that you have no real connection to the original AMI; you will be modifying the local EBS volumes that you now own.
By default, most AMIs are set to delete the root volume on termination, even though the root volume is an EBS volume. You can change this by modifying the instance attributes. If you make such a change, terminating the instance will not delete the EBS volume, so you can attach it to another instance you start later, or snapshot the volume after the instance is terminated.
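With the aws-cli, that attribute change looks roughly like this (the instance id and device name are placeholders for your own):

```
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"DeleteOnTermination":false}}]'
```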
You must use a snapshot to create an EBS-backed AMI, since you define the root volume by referencing your snapshot. You can also create S3-backed instances that have attached EBS volumes by pointing the block-device-mapping at an EBS snapshot. (So an image is only EBS-backed if you use a snapshot for the root volume.)
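A sketch of registering an EBS-backed AMI from a snapshot (the name, device, and snapshot id are illustrative):

```
aws ec2 register-image \
  --name "my-ebs-backed-ami" \
  --root-device-name /dev/sda1 \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-0123456789abcdef0"}}]'
```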
Snapshots persist independently of the volumes they are associated with or the instances those volumes may have been associated with.
Typically EBS volumes are not deleted by default when an instance terminates (the exception being the root volume, as mentioned above). So, if you create an EBS volume and attach it to an instance, make changes to it, and terminate that instance, the EBS volume will persist, despite the instance being terminated (even in the absence of a snapshot).
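For example (ids and zone are placeholders; the volume must be created in the same availability zone as the instance):

```
aws ec2 create-volume --availability-zone us-east-1a --size 10
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf
# After the instance is terminated, this volume remains in your account.
```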
Snapshots are point-in-time backups. The EBS volume is a block device; Amazon creates a map of these blocks in its snapshots and tracks which blocks have changed. So EBS snapshots are differential (only changed blocks are stored), point in time (you can delete any previous snapshot without affecting any other, and any snapshot can be restored at any time), and compressed (only the data actually present is stored; empty blocks are ignored).
Changes made to an EBS volume do not affect any pre-existing snapshots - they will only be added to a snapshot if you explicitly take a new snapshot. So, when you restore your snapshot, the resulting EBS volume will be an identical block copy of the EBS volume from which the snapshot originated (this means that deleted files can be undeleted from a restored snapshot using the usual methods - it is not a file copy, and is file system agnostic). Just to reiterate, nothing added after a snapshot is taken will be available when a snapshot is restored.
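Restoring a snapshot is just a matter of creating a new volume from it (the snapshot id and zone are placeholders):

```
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --snapshot-id snap-0123456789abcdef0
```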
As per [Amazon's page on EBS][1], snapshots are stored in S3 and benefit from S3's redundancy. They do not show up in your buckets - or on your S3 usage reports. Usually the only way to determine how much snapshot space you are using is to look under your EC2 usage report, under the EBS category - where it lists snapshot data stored.
A few other interesting points about snapshots: a) they load lazily, so you can access an EBS volume created from a snapshot before all the data has loaded, and the necessary data will be fetched from S3 on request (handy if you have large volumes); b) you can create larger (but not smaller) EBS volumes from a snapshot, although you will need to resize the file system after doing so; c) it is possible to create RAID setups of EBS volumes and snapshot them, since snapshots work at the block level.
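As a sketch of point b), assuming an ext4 file system on an unpartitioned volume (ids and device names are placeholders):

```
# Create a 100 GiB volume from a snapshot of a smaller volume
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --snapshot-id snap-0123456789abcdef0 \
  --size 100

# After attaching it to an instance, grow the file system to fill the volume
sudo resize2fs /dev/xvdf
```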
There are a couple of approaches to consider for how to terminate an instance from itself (both are sketched after the list):
Start the EC2 instance with the instance-initiated-shutdown-behavior set to "terminate", then "sudo halt" or equivalent from inside the instance.
Start the EC2 instance with an IAM role that allows it to terminate itself, then invoke the ec2 terminate-instances API from the instance (e.g., using the aws-cli). Get the instance id from the instance metadata.
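Rough sketches of both approaches (the AMI id is a placeholder, and the second assumes the aws-cli is installed with a region and credentials configured):

```
# Approach 1: make an instance-initiated shutdown terminate the instance
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-initiated-shutdown-behavior terminate
# ...later, from inside the instance:
sudo halt

# Approach 2: self-terminate through the API (needs an IAM role that
# allows ec2:TerminateInstances)
instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 terminate-instances --instance-ids "$instance_id"
```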
The first method is quite a bit easier and carries less risk of the instance being able to terminate other instances, but you're already calling the AWS API from the instance, so you're halfway to the second method as well.
Now for the question of how to trigger the termination after the create-image reboot.
You could simply drop the desired halt/terminate command into a startup script like /etc/rc.local and it would get run when the system comes back up. As @AlexB points out in the comments, you need to make sure this doesn't cause new instances launched from the image to halt, so perhaps test the instance_id.
There's no need to wait for the new AMI creation to complete. It will finish just fine even though your instance is no longer running.
Here's a quick hack that has lots of room for improvement:
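Something along these lines (a sketch only: the metadata check and the create-image call are illustrative, and it assumes the aws-cli is configured on the instance and that this runs as root):

```
#!/bin/bash
# Install a one-shot boot script that deletes itself and then halts --
# but only on this instance, so new instances launched from the
# resulting AMI boot normally.
this_instance=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

cat > /etc/rc2.d/S90halt-after-create-image <<EOF
#!/bin/bash
rm -f /etc/rc2.d/S90halt-after-create-image
if [ "\$(curl -s http://169.254.169.254/latest/meta-data/instance-id)" = "$this_instance" ]; then
  halt
fi
EOF
chmod +x /etc/rc2.d/S90halt-after-create-image

# Kick off the image build; the reboot it causes triggers the script above.
aws ec2 create-image --instance-id "$this_instance" --name "my-image" --reboot
```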
This code creates a startup script that removes itself and halts the system if certain conditions are true. It could cause general havoc if things go wrong. Tested on Ubuntu 12.04; may not work elsewhere.