No synchronization tool can synchronize files that are open without running the risk of making inconsistent copies. Unless the tool has hooks into the application holding the file open to request that it "quiesce" the file, there will always be a risk that a copy made of an open file will end up inconsistent and unusable.
It sounds, to me, like you're going to be served poorly by just about any tool, given the profile of open files you're describing. I wonder if a version control system or document management system might be a better fit for you.
I've used the SureSync synchronization tool from Software Pursuits, albeit not in the scenario you're describing, and have been very pleased with it. It runs as a Windows service on the servers in the replication set and does delta transfers (with the "SPI Agent" add-on). It can replicate open files (and can quiesce VSS-aware applications), though you could potentially run into consistency issues, as I said above.
Response re: comments:
This is the classic fast/cheap/good triangle tradeoff. If you want your replicas to stay in sync throughout the day you're going to need to shell out a lot of money for fast connectivity. If you don't care that the replicas fall out of sync (but "catch up" overnight) then you can spend less money on fast connectivity.
I don't have any Customers who expect all files replicated in such a manner to be "in sync" at all times on all servers. They don't have the money to spend on LAN-speed WAN connectivity to support it.
If you have a small corpus of files that needs to be kept more tightly in sync, you could use a more real-time replication solution for those files and handle the rest with a slower, less bandwidth-intensive one.
You have to pay the piper somehow is, I guess, what I'm saying.
It sounds like BranchCache may be a good fit for you, though you may also benefit from some performance improvements in DFS-R made in Server 2008 and Server 2008 R2.
There have been some case studies written about BranchCache that may tell you more about BranchCache performance in the real world. Just search for "BranchCache case study".
BranchCache is careful to always honor the most current access control settings on any piece of content (file, web page, etc.). Before a client PC can download data from the cache in the branch office (either on a hosted cache server or on a peer), it must obtain content identifiers from the main office server. If the client doesn't have permission to access the data, the main office server won't send the identifiers. There are a bunch of documents explaining how this works on branchcache.com.
If you want, you can pre-load the BranchCache cache by having one of the clients in the branch (or the actual hosted cache server) access data ahead of time. This might be scriptable in some cases if you want to preload the cache before workers get in.
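If you want to script that preloading, here's a minimal sketch in Python, assuming a BranchCache-enabled client in the branch and a made-up UNC path (adjust both for your environment). Reading each file over SMB is what populates the cache, so all the script has to do is walk the share and read:

```python
# Hypothetical cache-warming sketch: run from a BranchCache-enabled branch
# client (or the hosted cache server) before staff arrive, e.g. via Task
# Scheduler. The share path below is an assumption, not a real server name.
import os

SHARE_ROOT = r"\\mainoffice-fs01\projects"  # assumed UNC path to the main-office share
CHUNK = 1024 * 1024  # read in 1 MB chunks so we never hold a whole file in memory

def warm_cache(root):
    """Walk the share and read every file; the SMB reads populate the local cache."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while f.read(CHUNK):
                        pass
            except OSError as err:
                # Skip files we can't read (locked, no permission, etc.)
                print(f"skipped {path}: {err}")

if __name__ == "__main__":
    warm_cache(SHARE_ROOT)
```

Scheduled before office hours, this means the first user to open a file gets it from the branch cache rather than across the WAN.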
If you're going to keep a server in the branch, and you're going to upgrade to R2, there's no reason why you can't deploy a combination of BranchCache and DFS-R. A single box can act as a DFS-R replication point and as a hosted cache server simultaneously. You can get SharePoint and SMB optimization this way, and by spreading your data across the two technologies, you can get the best properties of each for your various categories of data.
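For what it's worth, the hosted-cache half of that combined box is configured through netsh on Server 2008 R2 (the DFS-R membership is set up separately in DFS Management). Here's a rough sketch that just shells out to netsh from Python; I'm quoting the syntax from memory, so verify it against your OS version before relying on it:

```python
# Rough sketch: put the BranchCache service on a branch server into hosted
# cache server mode using netsh, then confirm the state. This only covers the
# BranchCache half; DFS-R for the same box is configured via DFS Management.
import subprocess

def run(cmd):
    print(">", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Switch the BranchCache service into hosted cache server mode...
run("netsh branchcache set service mode=HOSTEDSERVER")
# ...and confirm the resulting configuration.
run("netsh branchcache show status")
```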
I hope this helps!
-Tyler
You basically have two main options:

1. Synchronize the data out to a server in each branch office.
2. Centralize the data and have remote users work on it through a remote-display solution (Terminal Services / Citrix).
When dealing with larger data files, the latter has given us the best results, with the added benefits of better control over our data and improved work-from-home options for local staff.
Citrix's ICA protocol has proven to deal well with relatively high-latency / low-bandwidth links while still providing an acceptable experience for our users.
The newer virtual desktop solutions are even better at providing features such as video acceleration, which might be needed for CAD.