I think you have a cart-before-the-horse problem here: when you export a file system via NFS, the export locks on to the source directory as it exists at that moment. You are trying to have nothing at that source directory at export time, and only put something there via a mount later.
This will not work, because once you give NFS a handle on something to share, it will always share that thing, even if it ends up underneath a layer of mounts.
Let's say you set up a directory with a file 'frog' and export it with NFS:
[server] $ mkdir /mnt/test && touch /mnt/test/frog
[server] $ echo '/mnt/test *(ro)' >> /etc/exports
[server] $ exportfs -a
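If you want to double-check what the server is actually exporting at this point, exportfs and showmount will show it (a quick sanity check; the exact output format varies by distribution, but both should list /mnt/test):
[server] $ exportfs -v
[server] $ showmount -e localhost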
Then you mount it on a client somewhere, you will see the file frog as expected:
[client] $ mkdir /mnt/test
[client] $ mount -t nfs server:/mnt/test /mnt/test
[client] $ ls /mnt/test
frog
Now let's say you mount something else on top of that folder on the server:
[server] $ mkdir /mnt/test2 && touch /mnt/test2/fish
[server] $ mount -o bind /mnt/test2 /mnt/test
[server] $ ls /mnt/test
fish
Spiffy. But what is NFS serving up?
[client] $ ls /mnt/test
frog
On the server you can't even get to the file frog any more, because something else is mounted on top of it, yet NFS is still serving up that under layer!
To make a long story short: if you want to export your file systems via NFS, they need to be mounted properly at the time NFS starts up and exports them, and they need to stay mounted. Exporting file systems that are themselves mounted using autofs will never work quite right. You will need to permanently mount those ISOs in order to export them via NFS.
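For the ISO case, here is a sketch of what "permanently mounted" might look like (the paths and the fsid value are examples, adjust to your layout): loop-mount each ISO from /etc/fstab so it is in place at boot, before NFS exports run, and export the mount point. Note that some filesystems without a stable device number may require an explicit fsid= option in /etc/exports.

# /etc/fstab -- loop-mount the ISO at boot (example path)
/srv/isos/disc1.iso  /mnt/disc1  iso9660  loop,ro  0  0

# /etc/exports -- export the mounted ISO read-only
/mnt/disc1  *(ro,fsid=1001)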
Best Answer
The problem here is that you have built a redundant storage array using DRBD, but you are running two disjoint NFS daemons on the same shared data. NFS is stateful: unless you can transfer that state as well, you will have serious problems on failover. Solaris HA setups have daemons that take care of this problem. For a Linux installation, you will have to make sure that your NFS state directory (configurable, typically /var/lib/nfs) is located on the shared disk for both servers.
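One way to arrange that on Linux (a sketch; /srv/drbd is an assumed mount point for the DRBD-backed filesystem) is to keep the real state directory on the shared disk and bind-mount it into place on whichever node is active, before the NFS daemons start:
[server] $ mkdir -p /srv/drbd/nfs-state
[server] $ cp -a /var/lib/nfs/. /srv/drbd/nfs-state/   # one-time seed from the existing state
[server] $ mount -o bind /srv/drbd/nfs-state /var/lib/nfs
Your cluster manager should do the bind mount as part of bringing up the NFS service, so the state follows the service from node to node.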
Stick with Heartbeat or Corosync for failure detection and failover - they generally do the Right Thing (tm) when configured with a quorum. Other failover techniques are often focused solely on providing a virtual IP (e.g. VRRP) and would not suit your needs. See http://linux-ha.org for further details and additional components for a cluster setup.
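As a rough illustration, with classic Heartbeat this can be a single resource-group line in /etc/ha.d/haresources (node name, IP, DRBD device, mount point, and init-script name here are all assumptions for the sketch): the virtual IP, the DRBD resource, the filesystem mount, and the NFS service fail over together, in order.

# /etc/ha.d/haresources -- preferred node first, resources started left to right
node1 IPaddr::192.168.1.100/24 drbddisk::r0 Filesystem::/dev/drbd0::/srv/drbd::ext3 nfs-kernel-server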