The hard bit of a backup is the bare-metal restore. User files are, after all, just files, and can be backed up and restored in a wide variety of ways.
If your C: partition is small enough, I recommend using one of the many partition snapshot apps. My own favorite is Drive Snapshot (www.drivesnapshot.de), which I've used for several years, but there are several similar products out there. Drive Snapshot takes a sector-level copy of a partition and (this is the impressive bit) it can copy the system partition while the server is running. If you lose the server you simply boot off a WinPE (or BartPE) CD and use Drive Snapshot to copy the snapshot back onto the disk. Snapshot can even create the partitions for you from the info stored in the snapshot. I've done this in anger, and under considerable stress, and Drive Snapshot has never let me down.
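Drive Snapshot can be driven from a script as well as interactively, which is what makes the nightly job practical. Here's a minimal Python sketch of assembling such an invocation; the `snapshot.exe` path and argument order are assumptions, so verify them against the Drive Snapshot documentation before using this:

```python
from datetime import date

def snapshot_command(drive: str, dest_dir: str) -> list[str]:
    """Build a Drive Snapshot command line for one partition.

    Assumes snapshot.exe takes "<drive>: <output file>" arguments
    (hypothetical -- check the Drive Snapshot docs for the real syntax).
    """
    stamp = date.today().isoformat()          # e.g. 2024-01-15
    out = rf"{dest_dir}\{drive}-{stamp}.sna"  # one .sna file per day
    return [r"C:\tools\snapshot.exe", f"{drive}:", out]

# In a scheduled task you would hand this list to subprocess.run().
cmd = snapshot_command("C", r"\\nas\backups")
```

Datestamping the output file is what makes keeping a week of generations trivial later on.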
I tend to use tape for backups where I can, as you can take the tapes off site, but I do have some servers that back up to a NAS device. Either way, I have the backup script run Snapshot, then write the snapshot file to tape, copy it to a NAS box, or both. If the snapshot isn't too big I keep the last week or so of daily snapshot files.
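Keeping "the last week or so" of daily snapshots is easy to automate at the end of the backup script. A small sketch (the `.sna` naming and flat folder layout are assumptions) that deletes all but the newest N snapshot files:

```python
from pathlib import Path

def prune_snapshots(folder: str, keep: int = 7) -> list[str]:
    """Delete all but the newest `keep` .sna files; return names removed."""
    snaps = sorted(Path(folder).glob("*.sna"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    removed = []
    for old in snaps[keep:]:      # everything past the newest `keep` files
        old.unlink()
        removed.append(old.name)
    return removed
```

Run it after the copy to the NAS has succeeded, never before, so a failed backup doesn't eat into your retained generations.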
Re rsync: I use this a lot myself, but on Windows you'd be using the Cygwin rsync, and this tends to hang when syncing folders with a large number of files (e.g. 100,000). If you're syncing on a LAN there are more reliable alternatives. An app called "reconcile" is my own favourite, but then I wrote it :-) See http://www.ratsauce.co.uk/winsrc/ if you want a play. If you're syncing over a WAN link then rsync is the obvious way to do it.
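To make concrete what these tools are doing on a LAN, here's a toy one-way mirror in Python: copy a file when it's missing on the destination or newer on the source. It's purely illustrative (no deletions, no permissions or retry handling, none of the delta transfer rsync does over a WAN):

```python
import shutil
from pathlib import Path

def mirror(src: str, dst: str) -> int:
    """Copy files from src to dst when missing or newer; return copy count."""
    src_root, dst_root = Path(src), Path(dst)
    copied = 0
    for f in src_root.rglob("*"):
        if not f.is_file():
            continue
        target = dst_root / f.relative_to(src_root)
        # Copy only if the target is absent or older than the source file
        if not target.exists() or target.stat().st_mtime < f.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied += 1
    return copied
```

A real sync tool also has to cope with locked files, deletions, and huge directory trees, which is exactly where the Cygwin rsync falls over.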
Some other fairly obvious points that you've probably already considered. Is it possible to upgrade the disks in the two servers to provide enough space for them to sync to each other? Then you could lose one server and have the remaining one take up the slack. Also note that, in general, you can't restore a system partition from one server onto another with different hardware and expect it to work well. If the disk controllers are identical it will probably boot, but you'll lose the network config.
One last point (possibly more for the future): for application servers like Exchange I tend to use virtualisation these days. VMware works very well on Server 2003, and of course Hyper-V is built into Server 2008. Virtual servers are very easy to back up because you just copy the files from the host server.
My standard backup advice:
The whole point of backing up is to be able to restore. Unless you're fully confident that you can get your stuff back, your backups are useless. Everything you implement in your backup solution should be approached from the perspective of "how do I restore from this?"
Tape isn't that expensive, and it has the advantage that it's far more durable than disk. Fewer moving parts, no live electrical current going through it on a constant basis; all good stuff. If it saves your ass once then it's already paid for itself in my book.
As well as "how much data can you afford to lose?" you also need to consider "how long can you afford to be down in a DR scenario?" A 3-day restore time is 3 days of lost business. You should be counting your restore times in hours, and on the fingers of one hand.
You can very quickly get into silly money if you allow yourself to get too paranoid about this, however, so you should look to divide your servers into 2 or 3 lots: those you absolutely need back NOW in order to continue your core business functions, and those you can defer until after the core ones are back. Put the heavy investment into the first lot, and ensure that you have fully documented restore procedures (for the OS, for applications and for data) that a blind leprous monkey with one hand tied behind its back can follow. Print and bind a copy and keep it in a fireproof safe; you're screwed if all you have is an electronic copy and that gets lost or destroyed. But don't think that this means you can get lax with the second lot, just that you can delay getting them back or take a little longer doing so (e.g. by putting them on slower media).
Specific examples: your core file server goes into the first lot, for sure. Your HR server goes into the second lot. It's important to the HR people, but will your core business functions be OK for a couple of days without an HR system? Yup, I reckon they will.
Keep your backup solution simple and boring. Far too often I have seen people implement fancy backup solutions that just end up being too complex, fiddly and unreliable. Backups are boring because backups should be boring. The simpler they are, the easier it will be to restore. You want a "me Og, Og click button, Og get data back" approach. Keep a daily manual element in there. This helps to establish a drill, which can avoid situations where someone forgets to change a tape or rotate an HD in the pool. You can fire the person responsible afterwards if this happens, but guess what? You're still in a position where you've lost a month of data.
Best Answer
I'd strongly consider running Windows Server 2008 R2 on the Dell T310 and making it a secondary DNS server, Domain Controller, and Global Catalog server. You could even go so far as to keep the filesystem layout similar on the T310, allowing you to "fail over" to it as a file server (by adding an alias name, for example) in the event the production file server fails. Since you're talking about so little data (6GB seems pretty low if you're including the OS; are you sure you didn't mean 60GB?) you could easily keep a copy of the "live" data for this fail-over purpose in addition to the backups that your backup software maintains.
Insofar as the backup goes, using whatever third-party software you want seems reasonable. I don't think you're going to get much delta compression from rsync, but it'll certainly mirror the files to a remote server effectively.
Edit:
The built-in backup utility works well but has some limitations. Chief among these is the handling of multiple generations of backup.
If you expose the storage on the T310 via iSCSI and mount it "locally" on the source server you can use the native Windows Backup functionality to store multiple generations of backups on the T310 in a single backup folder. If you expose the storage via SMB (i.e. a "shared folder" accessed via a UNC) you can't store multiple generations of backups in the same folder. You can do incremental backups but only the most recent generation will be available.
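Once the iSCSI LUN is mounted with a local drive letter, the native backup can be scheduled like any other job. A sketch of assembling the `wbadmin` invocation in Python; the switches shown are my recollection of the Server 2008 `wbadmin start backup` syntax, so check `wbadmin /?` before relying on them:

```python
def wbadmin_command(target_drive: str, volumes: list[str]) -> list[str]:
    """Build a wbadmin backup command for an iSCSI disk mounted locally.

    Assumes the iSCSI LUN already has a drive letter, so Windows Backup
    treats it as local storage and keeps multiple backup generations.
    Switch names are assumptions -- verify with `wbadmin /?`.
    """
    return [
        "wbadmin", "start", "backup",
        f"-backupTarget:{target_drive}:",
        f"-include:{','.join(v + ':' for v in volumes)}",
        "-quiet",   # run non-interactively, suitable for a scheduled task
    ]

cmd = wbadmin_command("T", ["C", "D"])
```

The same command pointed at a UNC path would still run, but per the limitation above you'd only ever keep the most recent generation.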