Firstly, what do you mean by "properly through DHCP"? It's generally a bad idea to use DHCP for servers.
Secondly, it's a really bad idea to use DHCP for cluster VIPs.
The most probable explanation for that behavior has to do with the quorum configuration. Take a look at http://technet.microsoft.com/en-us/library/cc731739.aspx.
Basically, when your network switch went down, the two nodes lost communication with each other. At that point, neither node knew what the other one was doing. If one node decided that it was going to assume ownership of all of the clustered resources (i.e., virtual machines) and boot them up, who's to say that the other one wouldn't do the same thing? You'd end up in a scenario where both nodes are trying to take total ownership of all the virtual machines, and you'd have some really nasty hard drive corruption on your hands.
The Quorum Configuration solves this problem by stating that in order for a node to function, it must be in contact with a majority of the nodes (and optionally, also a disk or file share). If it can't do that, it'll stop functioning as a member of the cluster.
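The majority rule is easy to see in a few lines of Python. This is just an illustration of the arithmetic, not anything the cluster actually runs: with two nodes and no witness, a clean network split leaves neither side with a strict majority, so both stop; a witness adds a tie-breaking third vote.

```python
def has_quorum(votes_held: int, total_votes: int) -> bool:
    """A partition keeps quorum only if it holds a strict majority of votes."""
    return votes_held > total_votes // 2

# Two-node cluster, no witness: a split gives each node 1 of 2 votes.
print(has_quorum(1, 2))  # False -- neither side has a majority, both stop

# Same split with a disk witness adding a third vote: whichever node
# owns the witness holds 2 of 3 votes and keeps running.
print(has_quorum(2, 3))  # True
```

This is exactly why an even number of nodes with Node Majority is fragile, and why adding a witness (disk or file share) fixes it.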
To verify that this is the case, open the Failover Cluster Manager and check the "Quorum Configuration" on the summary page for the cluster. If it's Node Majority and you have an even number of nodes, then what I described is almost certainly what happened.
The solution is to set up a small disk, called the Disk Witness (50 MB is more than enough), and add it to the storage for your cluster (but NOT to the cluster shared volumes). Then, change the Quorum Configuration to Node and Disk Majority. With this setting, if you experienced the same failure as before, the node that had ownership of the disk at the time of failure would continue functioning (and would actually assume ownership of all of the resources from the other node), and the other node would stop. The VMs that failed over to the functioning node would experience a brutal restart, but at least they'd be online as quickly as possible.
As you stated, the ideal scenario would be to have your switches on the UPS also. That would've prevented the failure altogether; however, you should also make sure that you're using the recommended quorum configuration for the number of nodes that you have.
Best Answer
I don't have a Windows Server 2008 R2 failover cluster to check at the moment, but deleting the VM from Failover Cluster Manager should remove it as a clustered role/service while leaving the VM itself intact. It should be equivalent to the Remove option in a Windows Server 2012 Failover Cluster.