To my knowledge, NTFS itself does not have any performance problems associated with larger cluster sizes.
If you're really looking to eke out all the speed you can, I'd recommend simulation and benchmarking. How your application reads data (4K blocks, 8K blocks, etc) is going to make a difference, as is the cache hit pattern on the NT cache and the underlying RAID cache. The disk / storage hardware (RAID layout, SAN configuration, etc) is going to make a difference, too.
Ultimately, the behavior of the application is going to be the biggest determinant of performance. You see "planning guides" for various applications (Exchange, SQL Server, etc) out on the 'net. All of the serious ones are based on real-world benchmarking with load simulation. You can write "rules of thumb", but with any given system there may be quirks in implementation at lower levels that turn rules of thumb on their ear.
If your application is suited to simulated work, spin up a simulated corpus of files and simulate a workload on them, using various filesystem / RAID / disk configurations. That's going to be the only way to know for sure.
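To make that concrete, here's a minimal sketch of the kind of micro-benchmark I mean, in Python. The file size, block sizes, and helper names are all made up for illustration; a real test would use a corpus matching your application's actual file-size distribution and access pattern, on the actual storage under test.

```python
import os
import tempfile
import time

def time_read(path, block_size):
    """Read the whole file sequentially in block_size chunks; return seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    return time.perf_counter() - start

def run_benchmark(size_mb=10, block_sizes=(4096, 8192, 65536)):
    # Spin up one simulated file. Caution: after the first pass the OS
    # cache will likely serve the data, so per-configuration runs on a
    # cold cache are what you want for real numbers.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(size_mb * 1024 * 1024))
        path = f.name
    try:
        return {bs: time_read(path, bs) for bs in block_sizes}
    finally:
        os.unlink(path)

if __name__ == "__main__":
    for bs, secs in run_benchmark().items():
        print(f"{bs:>6}-byte blocks: {secs:.4f} s")
```

Run it once per filesystem / cluster-size / RAID configuration you're considering and compare; the relative differences matter more than the absolute times.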
(Aside: Does anybody else find it funny to hear a 10MB file called "small"? God, I'm old...)
Best Answer
I spent hours in front of a partition with my hex editor and discovered that the $VOLUME_NAME attribute of the $Volume metafile is actually just that: the textual volume name seen in 'Computer' and the like, e.g. "My Disk".
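For anyone following along in their own hex editor, a small sketch of what that attribute content looks like, assuming (as I observed) that the resident $VOLUME_NAME content is just the label encoded as UTF-16LE with no header or terminator of its own:

```python
def decode_volume_name(raw: bytes) -> str:
    """Decode raw $VOLUME_NAME attribute content into the volume label.

    Assumes the bytes are the resident attribute value only (the length
    comes from the surrounding attribute record, not the content itself).
    """
    return raw.decode("utf-16-le")

# The bytes you'd see in the hex editor for a volume labeled "My Disk":
raw = "My Disk".encode("utf-16-le")
print(decode_volume_name(raw))  # My Disk
```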
It turns out that the GUID style I asked about above is stored only in the mount manager database within the registry, under HKLM\SYSTEM\MountedDevices. What finally led me to this is that the same disk (with the same serial number on its NTFS partition) will get a different GUID if you plug it into a different machine.
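As a sketch of what the mount manager actually stores there: for an MBR basic disk, the value data under MountedDevices is (as far as I can tell) a 12-byte blob, the 4-byte disk signature followed by the 8-byte byte offset of the partition start, both little-endian. GPT disks and dynamic volumes use different formats, so treat this as illustrative only, not a complete parser:

```python
import struct

def parse_mbr_mounted_device(blob: bytes):
    """Decode a 12-byte MBR-style MountedDevices value.

    Assumed layout: 4-byte little-endian disk signature, then the
    8-byte little-endian byte offset of the partition's start.
    """
    if len(blob) != 12:
        raise ValueError("not an MBR-style 12-byte MountedDevices value")
    signature, offset = struct.unpack("<IQ", blob)
    return {"disk_signature": f"{signature:08x}", "partition_offset": offset}

# Hypothetical blob: signature 0xA1B2C3D4, partition starting at 1 MiB
blob = struct.pack("<IQ", 0xA1B2C3D4, 1048576)
print(parse_mbr_mounted_device(blob))
```

This also explains the different-GUID-per-machine behavior: the \??\Volume{...} name is minted by each machine's mount manager and merely mapped to the disk identity in this database, rather than being stored on the disk itself.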