We are thinking of implementing Hyper-V clustering on Windows Server 2012 servers. We aim to use the live migration feature to eliminate downtime when one of the servers fails. My question is: is live migration hot? That is, if the server currently hosting the clustered Hyper-V VM suddenly shuts down (power failure or something), will the clustered VM resume working immediately on the second server without interruption, or will it shut down and then start on the second server?
cluster | hyper-v | live-migration | windows-server-2012
Related Solutions
I've been there! Last year I set up a similar cluster (except my boxes were Fujitsu) using an iSCSI SAN for Hyper-V.
Actually it's not that hard, but there will be snags along the way. If you are colocating, I would definitely rehearse your installation in a server rack on your own premises before moving it to the datacentre (I used a soundproofed server cabinet for this).
Oh, one other thing in preparation: you don't mention it, but one thing I wouldn't bother with is iSCSI boot, which is offered on some iSCSI systems. It's a pain to set up and it doesn't always work with redundancy. It's always better to have one or two physical boot disks in the nodes so that if you have a network configuration problem or an iSCSI issue, you can still boot them up. I use small (40 GB) solid state drives in each of my servers as boot disks.
You definitely need a separate AD DC. In fact I started with a 3-node cluster and then limited it to 2 nodes, plus an unclustered 'master node' which runs backups on DPM 2010 and a virtualised DC.
You mention 6 ports, and that may be enough, but allow me to illustrate my node configuration, which has 10 ports (see the sketch after this list):
- You will always need 2 ports per node for the iSCSI network (each one belongs to a different subnet for MPIO redundancy and should be on separate NICs)
- 1 port for heartbeat that doesn't have any other traffic on it
- 1 port for live migration and transfers (this is the only one you might want to upgrade to 10 GbE or InfiniBand, but unless you are provisioning tens of VMs per day it is not worth it)
- 1 port for remote desktop access
- 1 port for server management card
- 1 set of teamed ports that constitutes the main access network*
- 1 set of teamed ports that constitutes a DMZ network (optional)*
*It is often pointed out by detractors that Microsoft does not officially support port teaming (whereas VMware does), but the official word is actually that they don't discourage it; they simply consider support to be in the hands of the NIC vendors. I use Intel NICs of the ET generation, which have specific virtual network features and which I find work very well with Hyper-V. They actually allow you to split a team between switches, so that if one of the switches fails you still have consistent team access: a bit like MPIO, but for virtual machines.
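Once the cluster exists, you can tell it which of those networks it may use. Here's a minimal PowerShell sketch of the idea; the network names are placeholders for whatever your cluster networks end up being called:

```powershell
# List the cluster networks and how the cluster currently classifies them
Get-ClusterNetwork | Format-Table Name, Role, Address

# Role values: 0 = not used by the cluster (iSCSI), 1 = cluster traffic
# only (heartbeat/CSV), 3 = cluster and client traffic (management)
(Get-ClusterNetwork "iSCSI-A").Role    = 0
(Get-ClusterNetwork "iSCSI-B").Role    = 0
(Get-ClusterNetwork "Heartbeat").Role  = 1
(Get-ClusterNetwork "Management").Role = 3
```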
Hyper-V is actually really resilient and good to use. I would approach your job in this sequence (a rough PowerShell sketch of the key commands follows the list):
1) Set up the nodes individually: install the iSCSI initiator, install MPIO, and give your iSCSI ports, transport and heartbeat ports, and management ports addresses on different subnets.
2) Set up Hyper-V and assign your chosen ports to your virtual network.
3) Then run the cluster validation wizard and form the cluster.
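To make that concrete, here's a rough sketch of steps 1 and 3 in PowerShell, run on each node. All the adapter names, addresses, and computer names are placeholders, and your iSCSI target portals will obviously differ:

```powershell
# Step 1: roles/features, iSCSI initiator service, MPIO
# (the Hyper-V role needs a reboot before you continue with step 2)
Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# One address per role, each on its own subnet (placeholder names/addresses)
New-NetIPAddress -InterfaceAlias "iSCSI-A"   -IPAddress 10.0.1.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-B"   -IPAddress 10.0.2.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Heartbeat" -IPAddress 10.0.3.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Transfer"  -IPAddress 10.0.4.11 -PrefixLength 24

# Connect both iSCSI paths (one portal per subnet) with multipathing enabled
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.100
New-IscsiTargetPortal -TargetPortalAddress 10.0.2.100
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true

# Step 3: validate first, then form the cluster
Test-Cluster -Node HV-NODE1, HV-NODE2
New-Cluster -Name HVCLUSTER -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.0.50
```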
You should always assign ports to your virtual network first, because this prevents them from being used by the cluster. This sounds counter-intuitive, but basically you are keeping your virtual network agnostic to your cluster network. This still gives you redundancy, so don't worry. But to achieve this (and there is no other way) you will need either a separate set of switches for your cluster and Hyper-V networks (two each for redundancy) or VLANs on your switches. I do the latter (using untagged VLANs) and it works great.
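For step 2, the equivalent one-liner looks something like this (switch and team names are placeholders). Declining the management vNIC is what keeps that network invisible to the cluster:

```powershell
# Bind the virtual switch to the teamed access ports; -AllowManagementOS $false
# means the host itself gets no address on this network, so the cluster
# never sees (or claims) it for cluster traffic
New-VMSwitch -Name "Access" -NetAdapterName "Team-Access" -AllowManagementOS $false
```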
Some other posts here have suggested that you use a consultant for this work. If they are familiar with Hyper-V, this may be a good idea. It won't give you the in-depth knowledge you'd otherwise acquire from DIY, but it will save you time. I had plenty of time last year, and I'm not embarrassed to admit that it took me several months of work and figuring things out to get it all up and running.
Good luck!
You didn't mention which services delegation is configured for. You should have delegation to the following services on each Hyper-V host:
- Microsoft Virtual System Migration Service/COMPUTER
- Microsoft Virtual System Migration Service/COMPUTER.FQDN
- cifs/COMPUTER
- cifs/COMPUTER.FQDN
Have you tried (as a test) allowing delegation to all services?
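If you have the ActiveDirectory module handy, you can inspect and fix the constrained delegation list from PowerShell instead of clicking through ADUC. A sketch along these lines, where the host and domain names are placeholders for yours:

```powershell
Import-Module ActiveDirectory

# What is HV-NODE1 currently allowed to delegate to?
Get-ADComputer HV-NODE1 -Properties msDS-AllowedToDelegateTo |
    Select-Object -ExpandProperty msDS-AllowedToDelegateTo

# Add the four entries for its peer (repeat the mirror image on HV-NODE2)
$services = "cifs/HV-NODE2",
            "cifs/HV-NODE2.example.local",
            "Microsoft Virtual System Migration Service/HV-NODE2",
            "Microsoft Virtual System Migration Service/HV-NODE2.example.local"
Set-ADComputer HV-NODE1 -Add @{ "msDS-AllowedToDelegateTo" = $services }
```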
Best Answer
Live migration and virtual machine failover are two different things. Live migration is for planned migrations of a virtual machine from one Hyper-V host to another with no downtime of the virtual machine or its services and applications.
Failover of a virtual machine occurs when the host it is running on fails and the cluster restarts the virtual machine on another cluster host, in which case there is downtime for the virtual machine and its services and applications. When a cluster host fails, the state of the virtual machines running on that host is lost.
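In PowerShell terms, the distinction is that only the planned move is something you initiate; failover is what the cluster does on its own when a node dies. A sketch, where "SQL-VM" and the node names are placeholders:

```powershell
# Planned: live migration, the VM stays running the whole time
Move-ClusterVirtualMachineRole -Name "SQL-VM" -Node HV-NODE2 -MigrationType Live

# Unplanned: nothing to run by hand. The cluster restarts the role on a
# surviving node, i.e. the VM cold-boots there; you can only watch it come back
Get-ClusterGroup "SQL-VM" | Get-ClusterResource | Format-Table Name, State
```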
From Microsoft:
Live migration: When you initiate live migration, the cluster copies the memory being used by the virtual machine from the current node to another node, so that when the transition to the other node actually takes place, the memory and state information is already in place for the virtual machine. The transition is usually fast enough that a client using the virtual machine does not lose the network connection. If you are using Cluster Shared Volumes, live migration is almost instantaneous, because no transfer of disk ownership is needed. A live migration can be used for planned maintenance but not for an unplanned failover.
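To get that "almost instantaneous" behaviour, the VM's disk needs to live on a Cluster Shared Volume. A hedged one-liner, assuming the LUN already exists as a cluster disk resource named "Cluster Disk 1":

```powershell
# Converting the clustered disk to a CSV means every node mounts it under
# C:\ClusterStorage\..., so a migration never has to move disk ownership
Get-ClusterResource "Cluster Disk 1" | Add-ClusterSharedVolume
```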