Does EC2 monitor the health of services inside the guests?
If not, and that is something you want, then Pacemaker would be relevant here.
Corosync probably isn't an option yet, as it only supports multicast and broadcast and EC2's network allows neither, so it would be a Pacemaker+heartbeat scenario.
Here's a guide to how people do it with Linode instances; much of it is likely to be relevant on EC2 as well: http://library.linode.com/linux-ha/
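For reference, here's a minimal sketch of the heartbeat side of such a setup, assuming two nodes named db1 and db2 with private addresses 10.0.0.1 and 10.0.0.2 (all names and addresses are placeholders, and directives vary slightly between heartbeat versions); the ucast directive is what lets heartbeat work without multicast:

# /etc/ha.d/ha.cf on db1 (on db2, point ucast at 10.0.0.1 instead)
autojoin none
ucast eth0 10.0.0.2   # unicast keepalives; EC2 has no multicast/broadcast
keepalive 1           # send a heartbeat every second
deadtime 10           # declare the peer dead after 10s of silence
node db1
node db2
crm respawn           # hand resource management over to Pacemaker

You'd also need a matching /etc/ha.d/authkeys on both nodes so the heartbeats are authenticated.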
To answer the question of what the pieces are, Pacemaker is the thing that starts and stops services and contains logic for ensuring both that they're running, and that they're running in only one location (to avoid data corruption).
But it can't do that without the ability to talk to itself on the other node(s), which is where heartbeat and/or corosync come in.
Think of heartbeat and corosync as a bus that any node can throw messages on and know that they'll be received by all its peers. The bus also ensures that everyone agrees who is (and is not) connected to the bus and tells you when that list changes.
For two nodes Pacemaker could just as easily use sockets, but beyond that the complexity grows quite rapidly and is very hard to get right - so it really makes sense to use existing components that have proven to be reliable.
First of all, let me say that I've set up quite a few clusters over the last decade, and I've never seen one with a dependency like the one you've described. Usually you set things up so that the services provided don't depend on which host is active and which is standby; you don't care which host holds the resource, as long as it's up on one of them.
The only way I can come up with to implement what you want is to make the slave node's resources something the master node initiates, for example by SSHing over to the slave to run IPaddr2 and whatever other resources you need. This would likely use SSH public key authentication with an identity file and an authorized_keys entry, so that the master can run commands on the slave without a password.
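For illustration, the key setup might look like this (the key path and the dbslave1 hostname are placeholders):

# on the master: generate a passwordless key dedicated to cluster use
ssh-keygen -t rsa -N "" -f /path/to/ssh-identity
# install the public key in the slave's authorized_keys
ssh-copy-id -i /path/to/ssh-identity.pub root@dbslave1
# verify that it works without a password prompt
ssh -i /path/to/ssh-identity root@dbslave1 true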
So this would require creating a "slaveIPaddr2" resource script that just wraps a command like:
#!/bin/sh
# derive the slave's hostname from this host's name, e.g. db1 -> dbslave1
HOST=$(hostname)
exec ssh -i /path/to/ssh-identity "dbslave${HOST#db}" /path/to/IPaddr2 "$@"
Then change the ip_dbslave resource to use "slaveIPaddr2" instead of "IPaddr2" as the agent it runs.
As far as scripts to run before and after migration go, these mostly sound like ordinary resource scripts that make up a resource group, with precedence set via the "group" and "order" configuration items. For example, create "master_pre" (the "before" script you want to run on the master), "slave_pre", "master_post", etc. as resources, then use "order" to make them run in the appropriate sequence (master_pre, slave_pre, ip_dbmaster, ip_dbslave, master_post, slave_post), as sketched below. Here you'll also likely need to wrap the slave items with the SSH wrapper, to effectively treat them as a single host as I mentioned above.
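As a rough sketch of what that could look like in the crm shell (the primitives themselves are assumed to already exist under these names):

# chain the resources so each one starts only after the previous has
crm configure order o_pre1 inf: master_pre slave_pre
crm configure order o_pre2 inf: slave_pre ip_dbmaster
crm configure order o_ip inf: ip_dbmaster ip_dbslave
crm configure order o_post1 inf: ip_dbslave master_post
crm configure order o_post2 inf: master_post slave_post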
It sounds like you want the "pre" script to run before a migration is even attempted, rather than as part of starting the resource? Pacemaker won't migrate a service unless you tell it to, or unless the node currently running the service is failing. In the failing-node case your service is down anyway, so there's no point running checks to avoid the migration. If what concerns you is preventing a migration you've requested yourself, the best answer may be to make a "migrate" script that runs your pre-service checks and only proceeds with the migration request if the tests succeed.
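A minimal sketch of such a wrapper, assuming a resource named ip_dbmaster and a check script of your own at /usr/local/bin/pre-migration-check (both placeholders):

#!/bin/sh
# migrate: run pre-flight checks, then ask Pacemaker to move the resource
TARGET="$1"
if ! /usr/local/bin/pre-migration-check "$TARGET"; then
    echo "pre-migration checks failed, refusing to migrate" >&2
    exit 1
fi
exec crm resource move ip_dbmaster "$TARGET"

Note that "crm resource move" works by adding a location constraint, so you'll want to clear it afterwards with "crm resource unmove ip_dbmaster".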
I don't know of a way in Pacemaker to test the other hosts in the cluster before doing a migration, if that's what you're trying to achieve with #4, so it'll likely have to be an external check that enforces it.
Running resources other than just IPaddr2 is easily done via the "group" and "order" directives.
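For example, something along these lines would tie a database daemon to the IP so they start together, stop in reverse order, and always stay on the same node (the mysqld name and agent are assumptions about your stack):

crm configure primitive mysqld ocf:heartbeat:mysql op monitor interval=30s
crm configure group grp_dbmaster ip_dbmaster mysqld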
The timeouts specified in a resource agent's metadata are not defaults, but rather the advised minimum values set by the resource agent's author.
The default value, if unspecified, is actually 20s, as mentioned in the "Clusters from Scratch" documentation.
It is considered good practice to specify timeout values; I will often specify values even when using the default of 20s.
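For instance, with the crm shell (the address and values are only illustrative):

crm configure primitive ip_dbmaster ocf:heartbeat:IPaddr2 \
    params ip=10.0.0.10 \
    op start timeout=20s \
    op stop timeout=20s \
    op monitor interval=30s timeout=20s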