You're looking for the "List Folder Contents" permission (which includes the "Traverse Folder" right) applied to folders without inheritance. For access-based enumeration (ABE) to work, though, you can't inherit that permission down the hierarchy, so you have to get a bit crazy with it.
At the root of the share, add the permission "HR Managers - List Folder Contents", and then in the "Advanced" settings, set that permission to apply to "This folder only". Because you're not inheriting the new permission down to subfolders or files, ABE will "hide" the subfolders and files the user doesn't have access to, but still allow the "HR Managers" users to traverse the top-level folder of the share.
Repeat that, moving down each level of the hierarchy, until you reach the level where "HR Managers" actually have access.
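If you end up scripting this, the per-level ACEs can be applied with icacls, since a /grant with no inheritance flags applies to "This folder only". Here's a minimal dry-run sketch (the share path, target folder, and group name are hypothetical); it only prints the commands so you can review them before running them on the file server:

```python
# Sketch: build one "traverse only" icacls grant per level between the
# share root and the folder the group can actually access. A /grant with
# no (OI)/(CI) flags applies to that folder only, so nothing is inherited.
from pathlib import PureWindowsPath

def traverse_only_commands(share_root: str, target: str, group: str) -> list[str]:
    """One icacls command per level from share_root down to target's
    parent, each granting the group read/traverse on that folder only."""
    root = PureWindowsPath(share_root)
    parent = PureWindowsPath(target).parent
    levels = [parent] + list(parent.parents)
    # keep only levels at or below the share root
    levels = [p for p in levels if root == p or root in p.parents]
    levels.reverse()  # top of the share first
    return [f'icacls "{p}" /grant "{group}:(RX)"' for p in levels]

# Hypothetical share layout and group name -- dry run, prints only.
for cmd in traverse_only_commands(r"D:\Shares", r"D:\Shares\HR\Reviews",
                                  "CONTOSO\\HR Managers"):
    print(cmd)
```

Reviewing the printed commands before running them matters here, because a stray (OI)(CI) flag would inherit the grant and defeat the whole "This folder only" trick.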
Doing this for a lot of different groups can make for large ACLs on folders and a lot of potential administrative headache. I end up using "Authenticated Users - List Folder Contents" applied to the root of shares that have restricted folders right off the root. I also try to keep my permission hierarchies as shallow as possible so that, where possible, I don't have to repeat this "This folder only" trick with other groups at lower levels.
It's an ugly hack, but it's the best way I know to get access-based enumeration to do what you want. An "inherited rights filter" would be SO nice and would do exactly what we want, but Microsoft didn't implement such a thing.
(I never particularly liked NetWare, but its filesystem permission model, with respect to real-time inheritance and inheritance filtering, is pretty sweet.)
OK, there are a few things you need to watch out for here.
First is redundancy / failover. If you have 5 machines running at 90% capacity and one of them fails, the other 4 machines have to pick up the slack ... whoops ... that takes them over 100% capacity, and you will likely have a cascade failure on your hands.
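As a back-of-the-envelope check, the redistribution math above is just total work divided among the survivors (a sketch, assuming the load rebalances evenly):

```python
# Toy headroom check for the failover point above: if the surviving
# machines would exceed 100% utilization, you don't have real redundancy.
def load_after_failures(machines: int, utilization: float, failures: int) -> float:
    """Per-machine utilization (%) after `failures` machines drop out,
    assuming the total workload redistributes evenly among survivors."""
    survivors = machines - failures
    if survivors <= 0:
        raise ValueError("no machines left")
    return machines * utilization / survivors

print(load_after_failures(5, 90.0, 1))  # 5 machines at 90%, one fails -> 112.5
```

So to survive one failure in a 5-node pool, steady-state utilization needs to stay at or below 80% per machine, not 90%.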
Second, if you're running multiple processes, remember that the OS spends compute cycles switching between processes too. This means that if the system load gets too high, the system can start spending more and more time suspending and resuming tasks, and less and less time actually executing them.
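To see why, here's a toy model (the numbers are illustrative, not measured): if each time slice costs a fixed overhead to switch into, the useful fraction of CPU time is roughly quantum / (quantum + switch cost), which shrinks as switching gets more frequent:

```python
# Back-of-the-envelope model of the switching overhead described above:
# every quantum of real work pays a fixed context-switch cost, so the
# useful CPU fraction is quantum / (quantum + switch_cost).
def useful_cpu_fraction(quantum_us: float, switch_cost_us: float) -> float:
    return quantum_us / (quantum_us + switch_cost_us)

# A long quantum amortizes the switch cost...
print(round(useful_cpu_fraction(10_000, 10), 3))  # -> 0.999
# ...but heavy load that forces frequent switches wastes real capacity.
print(round(useful_cpu_fraction(100, 10), 3))     # -> 0.909
```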
Third, if you're running MS SQL Server, for goodness' sake configure it correctly or get someone to do it for you. SQL Server will suck up all available RAM for its cache and can bog down the machine if you don't limit its memory usage. I've had clients who complained about RAM usage on a server, doubled the RAM, and saw no performance gain because SQL Server sucked it all up again!
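A sketch of what "configure it correctly" means here: cap SQL Server's memory with `sp_configure 'max server memory (MB)'`. The 80%-of-RAM rule of thumb below is my assumption, not a universal recommendation, and the snippet just prints the T-SQL rather than connecting to anything:

```python
# Sketch: generate the T-SQL to cap SQL Server's memory use, leaving
# headroom for the OS. The 80% share is a hypothetical rule of thumb;
# tune it for your workload. This is a dry run -- it only builds a string.
def max_memory_tsql(total_ram_mb: int, sql_share: float = 0.8) -> str:
    cap_mb = int(total_ram_mb * sql_share)
    return (
        "EXEC sp_configure 'show advanced options', 1;\n"
        "RECONFIGURE;\n"
        f"EXEC sp_configure 'max server memory (MB)', {cap_mb};\n"
        "RECONFIGURE;"
    )

print(max_memory_tsql(16384))  # 16 GB box -> cap SQL Server at 13107 MB
```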
Hope those help :-)
I've made 30TB volumes before. They were holding large files, so that greatly assists in avoiding performance degradation. No problems there versus smaller volumes.
Where problems might begin to occur is if that large filesystem accumulates enough files and directories to reach the insanely big range. I'm talking 20 million files and directories or more. At that point the MFT is likely to be pretty fragmented, and on RAM-constrained servers that might start to introduce some directory-display performance issues. Actual access to file data should not be affected, just fileOpen() and directory-scanning operations.
Is it a real possibility? Yes, but the effect holds true for similarly large filesystems on other platforms too, and for the same reason. Also, the operations most impacted by the performance drop may not even be the ones you're worried about. Since this is a new setup, you presumably won't be putting this 10TB on a server with only 1GB of RAM, so you shouldn't have to worry.
The worry about size likely dates from the WinNT and Win2K era, when MBR limitations made large volumes tricky to create. Working around those limits took some trickery, and in that era the trickery had a performance cost, which magnified the lots-of-little-files penalty. GPT doesn't have this problem. GPT volumes first appeared in Windows 2003, but their newness meant conservative sysadmins didn't use them much at first.
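For reference, the MBR ceiling falls out of its 32-bit sector addressing with 512-byte sectors, which is why pre-GPT multi-terabyte volumes needed trickery:

```python
# The MBR limit comes from 32-bit LBA sector addressing:
# 2**32 addressable sectors * 512 bytes/sector = 2 TiB per volume.
# GPT uses 64-bit LBAs, so the ceiling effectively disappears.
mbr_limit_bytes = 2**32 * 512
print(mbr_limit_bytes)                 # -> 2199023255552
print(mbr_limit_bytes / 2**40, "TiB")  # -> 2.0 TiB
```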
Old attitudes die hard, sometimes. Especially if there is trauma reinforcing them.