First of all, let me say that I've set up quite a few clusters over the last decade, and I've never seen one with a dependency like you've described. Usually you'd set things up so that the services provided don't depend on which host is active and which is standby; you don't care which host has the resource, as long as it's up on one of them.
The only way I can come up with to implement what you want is to treat the slave node as a resource that is driven by the master node, for example by SSHing over to the slave to run IPaddr2 and the other resources you need, likely using SSH public-key authentication with an identity file and an authorized_keys entry so that the master can run commands on the slave without a password.
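A one-time key setup for that might look roughly like this (host names are illustrative, and the identity path is just a placeholder):

```shell
# Run on the master: generate a dedicated passwordless key for the
# cluster scripts...
ssh-keygen -t ed25519 -N '' -f /path/to/ssh-identity
# ...then install the public key on the slave (hypothetical host name
# "dbslave1") so the master can run commands there non-interactively:
ssh-copy-id -i /path/to/ssh-identity.pub root@dbslave1
```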
So this would require creating a "slaveIPaddr2" resource script that just wraps a command like:
HOST=$(hostname)
exec ssh -i /path/to/ssh-identity "dbslave${HOST#db}" /path/to/IPaddr2 "$@"
Then change the ip_dbslave resource to use "slaveIPaddr2" instead of "IPaddr2" as the resource agent.
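With the crm shell, that swap might look like this — a sketch only, where the agent name, IP, and netmask are assumptions, and the wrapper script has to be installed under the OCF resource directory:

```shell
# hypothetical: the wrapper installed as
# /usr/lib/ocf/resource.d/heartbeat/slaveIPaddr2
crm configure primitive ip_dbslave ocf:heartbeat:slaveIPaddr2 \
    params ip=192.168.1.20 cidr_netmask=24 \
    op monitor interval=30s
```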
As far as scripts to run before and after migration, these mostly just sound like the normal multiple resource scripts that make up a resource group, with precedence set using the "group" and "order" configuration items. For example, create "master_pre" (the "before" script you want to run on the master), "slave_pre", "master_post", etc. as resources, then use "order" to specify that they run in the appropriate order (master_pre, slave_pre, ip_dbmaster, ip_dbslave, master_post, slave_post). Here you'll also likely need to wrap the slave items with the SSH wrapper, to effectively treat the pair as a single host as I mentioned above.
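As a sketch with the crm shell (all resource names are the hypothetical ones from the paragraph above) — note that a group by itself already starts its members in the listed order and stops them in reverse:

```shell
crm configure group dbgroup \
    master_pre slave_pre ip_dbmaster ip_dbslave master_post slave_post
# an equivalent explicit ordering constraint for two of the members:
# crm configure order pre_before_ip inf: master_pre ip_dbmaster
```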
It sounds like you want the "pre" script to be run before a migration is even attempted, rather than as part of starting the resource? Pacemaker isn't going to migrate a service unless it's told to by you, or the node currently running the service is failing. In the case of a failing node, your service is down anyway, so there's no reason to run checks to try to avoid the migration. So if you are concerned with preventing the migration when you tell it to migrate, the best answer may be to make a "migrate" script that runs your pre-service checks, and only goes on with the migration request if the tests succeed.
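A minimal sketch of such a "migrate" wrapper, assuming a hypothetical external check script and a placeholder group name "dbgroup":

```shell
#!/bin/sh
# migrate: run site-specific pre-checks, and only ask Pacemaker to move
# the group if they pass. The check command defaults to a hypothetical
# script; a second argument can override it.
migrate() {
    target="$1"
    check="${2:-/usr/local/bin/pre-migrate-check}"  # hypothetical check
    if "$check" "$target"; then
        # crm is the Pacemaker shell; "dbgroup" is a placeholder name
        crm resource move dbgroup "$target"
    else
        echo "pre-migration checks failed; not migrating to $target" >&2
        return 1
    fi
}
```

Something like `migrate node2` would then run the checks first and only issue the move request if they pass.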
I don't know of a way in Pacemaker to test the other hosts in the cluster before doing a migration, if that is what you are trying to achieve with #4, so that will likely have to be an external check that enforces it.
Running resources other than just IPaddr2 is easily done via the "group" and "order" directives.
There are a few things that you can try. First, make sure that the nodes can "see" each other:
node1:~# ping -c 3 node2
node2:~# ping -c 3 node1
The other thing that you shouldn't forget is that Pacemaker must control DRBD: DRBD can't already be running when Pacemaker starts. If it is, you will get all kinds of weird behavior. Other than that, you could post your DRBD configuration.
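Concretely, that usually means disabling DRBD's init script and defining DRBD as a master/slave resource under Pacemaker — roughly like this with the crm shell, where the DRBD resource name "r0" is a placeholder:

```shell
# keep the init system from starting DRBD behind Pacemaker's back
chkconfig drbd off        # or: systemctl disable drbd
# let Pacemaker manage it via the linbit OCF agent instead
crm configure primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=15s role=Master \
    op monitor interval=30s role=Slave
crm configure ms ms_drbd_r0 p_drbd_r0 \
    meta master-max=1 clone-max=2 notify=true
```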
Hope this was in some way helpful. Keep us posted; I'm an avid DRBD user myself, so I'm interested to know the solution.
Best Answer
1: You need to be sure the resource agent is there.
2: I don't see nginx in your previous output.
3: I'm using SUSE 11 SP2 and I have nginx installed, without any extra package.
I know Red Hat has removed many resource agents; for more information, you can check the clusterlabs mailing list archive.
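For point 1, two quick ways to check whether the nginx agent is actually present (standard OCF path; adjust for your distribution):

```shell
# the agent is just a script under the OCF resource directory
ls /usr/lib/ocf/resource.d/heartbeat/nginx
# or ask the crm shell about it directly
crm ra info ocf:heartbeat:nginx
```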