Dynamic quorum doesn't work by changing the quorum type; it works by modifying the NodeWeight property on one or more nodes as required.
For example, let's say I have a three-node cluster running in Node Majority quorum mode. You would choose this mode because it gives the quorum an odd number of participating votes, which is required in order to reach a decision.
Now let's say I lose a node. With a traditional (static) quorum configuration, I am still running in Node Majority mode but with only two votes. This means that if I then lose a second node without changing any of the quorum settings (either the quorum type or the NodeWeight of one of the nodes), my cluster will go down.
With dynamic quorum, the cluster recalculates the NodeWeights on the fly. It sees that I have an even number of voting nodes, so it sets the NodeWeight of one of my remaining nodes to zero. That node then effectively has no vote in the quorum, and I'm back to an odd vote count. Now if another node fails, my cluster will stay up on the remaining node.
You still need to select the appropriate quorum type for your configuration; dynamic quorum just works within that type, setting NodeWeights as necessary to maintain cluster uptime. You still decide from the beginning whether you are going to use, for example, a file share witness or a witness disk as a quorum participant. Dynamic quorum will only work if the failures are sequential rather than simultaneous, meaning the cluster has had time to recover and recalculate between events.
Let me start by explaining some basics about Windows Server Failover Clustering.
"The Cluster" maintains some core resources, called "Core Cluster Resources" or "Cluster Group" which contains the "Cluster IP" address, and some others.
This "Cluster IP" address is not to be used for anything else than logging into the cluster remotely with Failover Cluster Manager. It is not the "access" for any clustered application.
Furthermore, "The Cluster" maintains a group called "Available Storage" this group contains disks available for clustering, but which are not used for any clustered application. The disks in this group should not be used directly for any clustered application. The group exists to "protect" the disks of becoming read/write on more than one node of the cluster. In case that happens you will be facing data corruption or data loss. Hence cluster puts those disks in a group.
To install a clustered application, you follow one of the wizards in Failover Cluster Manager. It will create a group for you, put an IP address (different from the Cluster IP) in that group along with a network name and, if required, a disk (removing the disk from "Available Storage" and placing it in the newly created group), and it will create the application resources for you.
This "group" is now your clustered application, a collection of resources making up your application, and you can move this group from node to node. As this is a "group" all resources needed for your application (IP, network name, disk, executable) will move together. "Clients" will access your clustered application by the IP-address/network-name which is in this group (NOT the Cluster IP address, NOR Cluster Network Name)
In fact, you should leave the "Core Cluster Resources" or "Cluster Group" to be managed by cluster, there are very few occasions where you need to move this group administratively!
q1) Is the above observed behavior normal, or are my
configurations wrong somewhere?
Yes, the behaviour you are witnessing is by design. You are administratively moving the Cluster Core Resources, and your application should NEVER be in this group.
q2) Why isn't the cluster disk considered part of the core cluster
resources? What would be a reason that I would want my cluster IP on
node2 and my cluster disk mounted on node1?
There may or may not be a disk in your Cluster Core Resources; if there is, it is a witness disk, present only if you have configured one in your quorum settings. Even if there is a disk in your Cluster Core Resources, it should NEVER be used as a data disk for an application. You should create an application (group) for your clustered application. The same goes for the disk or disks in "Available Storage".
q3) I also realize that the "Move available storage" option moves
ALL cluster disks, not a particular cluster disk. What if I
want some disks mounted on Node1 and some mounted on Node2?
As you create separate cluster groups for your applications, the wizards will assign a disk resource or resources to each application. You can then place those groups on the nodes you want, and the disk(s) will move with the application group. You can have multiple application groups on different nodes, so your disks will be online on whichever node each application group is online. Again, "Available Storage" is not to be used directly; it is only a placeholder for unused disks (in order to protect them).
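To place individual application groups (and hence their disks) on different nodes, you can move each group separately. For example, with the FailoverClusters PowerShell module (the group name "App1" and node name "Node2" are hypothetical):

```shell
# List all cluster groups and the node each one is currently online on
Get-ClusterGroup

# Move just one application group (and the disks in it) to Node2;
# other groups stay where they are
Move-ClusterGroup -Name "App1" -Node Node2
```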
I hope this explains your situation.
You can set permissions for users unknown to the local system by specifying the SID instead of the user name. You need a tool that accepts SIDs, of course. SetACL does.
It might be simpler to make "charley" a member of one of the predefined local groups like "users" which have a well-known SID (i.e. the same SID on every computer) and set permissions for that group instead of the local user.
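For example, the built-in "Users" group always has the well-known SID S-1-5-32-545. The built-in icacls tool accepts a SID if you prefix it with an asterisk (SetACL accepts SID strings as well; check its documentation for the exact syntax), so a sketch might look like this (the path C:\data is hypothetical):

```shell
:: Grant Modify (inheritable) on C:\data to the local "Users" group,
:: addressed by its well-known SID rather than its (localizable) name
icacls C:\data /grant *S-1-5-32-545:(OI)(CI)M
```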