You basically have it.
User vs Machine vs Share Authentication
SMB/CIFS bases access on user credentials of some sort (whether Kerberos tickets, username/password pairs, or what have you) per session, where each session is mapped to one user. NFSv3 uses host-based authentication, where all users of a given remote machine share the same connection. SMB/CIFS, specifically the Samba implementation, also allows host-based allow/deny if you need that feature; a Windows file server probably does as well, and if not in the file-server subsystem itself, the firewall can handle it.
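As an illustration (the share name, path, users, and subnet are made up), host-based allow/deny in Samba is just a couple of per-share parameters in smb.conf layered on top of the normal user authentication:

```
# /etc/samba/smb.conf (hypothetical share)
[projects]
    path = /srv/projects
    # normal per-user authentication
    valid users = alice bob
    # host-based allow/deny: only clients from this subnet may connect
    hosts allow = 192.168.1.
    hosts deny = ALL
```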
SMB/CIFS also implements share-based authentication where the share has its own password.
NFSv4 can be configured to use per-user authentication via Kerberos.
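Roughly, and assuming a working Kerberos realm (the hostnames and paths below are placeholders), the export advertises a Kerberos security flavor and the client mounts with the matching sec= option:

```
# /etc/exports on the server
/srv/export  *.example.com(rw,sync,sec=krb5i)

# on the client: per-user Kerberos authentication over NFSv4
mount -t nfs4 -o sec=krb5i server.example.com:/srv/export /mnt/export
```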
Trust Model
NFSv3 relies strongly on the remote machine to enforce permissions: the server essentially hopes the client sends truthful, cross-machine-consistent numeric IDs in its requests. SMB/CIFS, by contrast, enforces permissions on the server's local disk based on the remote user authenticated for the connection (session).
As a consequence, in NFSv3, if a user has root on the remote box, they generally (i.e. by default) have read-only root access to the entire NFSv3 share and can impersonate any other user ID. For a share exported to a single-user machine, NFS has all_squash as a workaround, but this is set per client IP.
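For illustration (the path, address, and IDs are made up), an /etc/exports entry that squashes every request from one single-user client down to a single local account looks something like this:

```
# /etc/exports on the NFSv3 server
# every UID/GID coming from 192.168.1.42 is mapped to local UID/GID 1001
/srv/share  192.168.1.42(rw,sync,all_squash,anonuid=1001,anongid=1001)
```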
On the flip side, most Unix-like SMB mount implementations (Linux pre-3.3, FreeBSD, Solaris) do not support system-wide multi-session (multiuser) mounts, so when you mount a remote SMB filesystem, your system holds only the session of the user set at mount time; that is, all users act with the permissions of the username given when the share was mounted. Linux 3.3 and later mitigate this with the multiuser mount option and cifscreds, and there are FUSE SMB/CIFS implementations available. As expected, this was never a problem with Windows clients.
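On a Linux 3.3+ client, a rough sketch (server, share, and usernames are placeholders) looks like this:

```
# root mounts the share once with the "multiuser" option
mount -t cifs //server/share /mnt/share -o multiuser,sec=ntlmssp,username=mounter

# each logged-in user then adds their own credentials to the kernel keyring,
# getting their own SMB session over the same mount
cifscreds add server
```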
ID Mapping
Also, in NFSv3 your numeric UIDs must map exactly: user 1001 on the client machine will be given the permissions of user 1001 on the server; there is no textual username mapping. Since SMB/CIFS binds the identity to the session, the mapping is automatic: the identity you act as on the share follows your credentials.
NFSv4 has a daemon (idmapd) for mapping the IDs of GSS-domain-authenticated users, but if you don't already have a GSS domain deployed, it's likely easier to synchronize your UIDs.
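The mapping itself is mostly a matter of both ends agreeing on an ID-mapping domain; a minimal (hypothetical) configuration looks like this:

```
# /etc/idmapd.conf on both client and server
[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
# "nogroup" on Debian-like systems, "nobody" on others
Nobody-Group = nogroup
```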
ACLs
NFSv3 and earlier can be a bit sketchy with ACL support (and xattrs are right out). NFS's "POSIX ACLs" are implemented in a sideband RPC (not in the main protocol) so there are a few more things that can go wrong and not all OSes support NFS's POSIX ACLs.
SMB/CIFS generally has no trouble with ACLs. If you need to modify them, Windows and Unix-like clients can modify them on Samba shares with their standard mechanisms (the security GUI and setfacl, respectively). I am not sure whether Unix-like clients can modify ACL-like permissions on a Windows file server share.
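For instance (the user and path are placeholders), the setfacl route is just the usual POSIX ACL tooling, run wherever the underlying filesystem is mounted with POSIX ACL support:

```
# grant bob read/write (and traverse on directories), recursively
setfacl -R -m u:bob:rwX /srv/projects

# inspect the result
getfacl /srv/projects
```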
NFSv4 has ACLs built in.
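NFSv4 ACLs have their own richer model and tooling (the nfs4-acl-tools package on Linux); a small sketch with a made-up principal and path:

```
# show the NFSv4 ACL on a file
nfs4_getfacl /mnt/export/report.txt

# allow bob (identified as name@domain) to read and write the file data
nfs4_setfacl -a A::bob@example.com:rw /mnt/export/report.txt
```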
Best Answer
For a direct connection to a server, for true server-related storage, iSCSI is the way to go. You would then manage user access, via SMB/CIFS or NFS, on the server itself.
But part of your description makes it a bit confusing what your question actually is, and where/how this storage is connected to the main server to begin with:
Is this simply a physical Windows server with four 6TB Western Digital RED drives? Or is this a server that operates on its own and the four 6TB Western Digital RED drives exist on a NAS?
Or are you describing your connection from the client side? Meaning you will have this Windows server with four 6TB Western Digital RED drives and you then want to connect to it?
My guess is the latter. In general, you only need iSCSI if you need storage set up as if it were a physical drive connected directly to your machine, even though it is over the network, since iSCSI is purely raw space. Meaning that when you connect to a freshly set up iSCSI volume, you need to format it. I only do that when there is a need for massive storage and the connection is fairly permanent, since iSCSI allocates raw space for use.
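As a rough sketch of why iSCSI behaves like a locally attached disk (the target address, IQN, and device name are made up), on a Linux initiator you log in to the target and then format the resulting block device yourself:

```
# discover and log in to the target (open-iscsi initiator)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2024-01.com.example:storage.lun1 -p 192.168.1.10 --login

# the LUN appears as a plain block device (e.g. /dev/sdb) and must be
# formatted before first use, just like a new local disk
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt/iscsi
```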
SMB/CIFS and NFS are the more common ways sundry remote clients would connect to a machine to get at data stored on a share. SMB/CIFS would be the best and most common way to connect. The only times I have used NFS are when a non-Windows OS needs to reach the server, such as a Linux server needing to access data in some way. But be forewarned: NFS can be a pain because it's simply not as easy to set up on the client side as SMB/CIFS.
So the breakdown would be:
iSCSI: Permanent, preallocated, network-connected storage for a server that needs it. It's basically the same as having an external drive on your desktop, and all sharing functionality would need to be managed by your server itself. In your case, I would recommend preallocating raw space on that device for the Hyper-V storage, and then using the remaining space for SMB/CIFS or NFS shares.
SMB/CIFS: This would be the way most any client can remotely connect to your sundry shared storage. You just allocate space on the server for shares, set permissions, and away you go (a quick command-level sketch follows after this list). This is not raw space but server-managed space, and it lets pretty much any client on any OS connect remotely. But you can't do the things you can do with iSCSI, such as treating that space as directly connected raw storage.
NFS: Basically the best fallback when SMB/CIFS does not work well (also sketched below). I use NFS mounts mainly for Linux setups that need general file-share connectivity but somehow just act "weird" with SMB/CIFS.
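For concreteness, here is a hedged sketch of both options; the share names, paths, hostnames, and accounts are all placeholders, and the commands run on different machines as noted in the comments:

```
# SMB/CIFS: create a share on the Windows server (PowerShell)
New-SmbShare -Name "Shared" -Path "D:\Shares\Shared" -FullAccess "EXAMPLE\Staff"

# ... map it from a Windows client
net use S: \\server\Shared

# ... or mount it from a Linux client
mount -t cifs //server/Shared /mnt/shared -o username=alice
```

The NFS side is a little more hands-on, which is the "pain" mentioned above:

```
# see what the server exports
showmount -e server.example.com

# mount an export (NFSv3 shown; use -o vers=4 for NFSv4)
mount -t nfs -o vers=3 server.example.com:/srv/share /mnt/nfs
```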