My standard backup advice:
The whole point of backing up is to be able to restore. Unless you're fully confident that you can get your stuff back, your backups are useless. Everything you implement in your backup solution should be coming from the perspective of "how do I restore from this?"
Tape isn't that expensive, and it has the advantage of being far more durable than disk: fewer moving parts, no live electrical current running through it on a constant basis, all good stuff. If it saves your ass once, it's already paid for itself in my book.
As well as "how much data can you afford to lose?" you also need to consider "how long can you afford to be down in a DR scenario?" A 3-day restore time is 3 days of lost business. You should be counting your restore times in hours, on the fingers of one hand.
You can very quickly get into silly money if you allow yourself to get too paranoid about this, however, so you should be looking to divide your servers into two or three lots: those you absolutely need back NOW in order to continue your core business functions, and those you can defer until after the core ones are back. Put the heavy investment into the first lot, and ensure that you have fully documented restore procedures (for the OS, for applications and for data) that a blind leprous monkey with one hand tied behind its back can follow. Print and bind a copy and keep it in a fireproof safe - you're screwed if all you have is an electronic copy and that gets lost or destroyed. But don't think this means you can get lax with the second lot; it just means you can delay getting them back or take a little longer doing so (e.g. by putting them on slower media).
Specific examples: your core fileserver goes into the first lot, for sure. Your HR server goes into the second lot. It's important to the HR people, but will your core business functions be OK for a couple of days without an HR system? Yup, I reckon they will.
Keep your backup solution simple and boring. Far too often I have seen people implement fancy backup solutions that just end up being complex, fiddly and unreliable. Backups are boring because backups should be boring. The simpler they are, the easier it will be to restore. You want a "me Og, Og click button, Og get data back" approach. Keep a daily manual element in there: it helps to establish a drill, which avoids situations where someone forgets to change a tape or rotate a disk in the pool. You can fire the person responsible afterwards if that happens, but guess what? You're still in a position where you've lost a month of data.
You could continue doing what you're currently doing, with a few minor changes to your rsync backup scripts.
rsync can run inside a VM and back up to a remote host via ssh, just as it can from a physical machine. For example, I back up /etc, /usr/local, /home, parts of /var and a few other directories from all my machines to /var/backups/hosts/$HOSTNAME on my backup server (which, in turn, gets backed up with rsync to another machine and also to tape). Database servers also run scripts which dump their DBs to text before the rsync.
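To make that concrete, here's a minimal sketch of what such a nightly script could look like -- the host name "backuphost" and the exact directory list are assumptions for illustration, not a definitive implementation:

    #!/bin/sh
    # push selected directories to the backup server over ssh.
    # "backuphost" and DIRS are placeholders; adjust for your setup.
    HOST=$(hostname)
    DIRS="/etc /usr/local /home /var/lib /var/spool"

    # on a database server, dump to text first, e.g.:
    # pg_dumpall > /var/backups/dbdump.sql

    # -a preserves permissions/ownership; -R keeps the full source paths,
    # so everything lands under /var/backups/hosts/$HOST/etc, .../home, etc.
    rsync -aR --delete $DIRS backuphost:/var/backups/hosts/$HOST/

Run it from cron, and restrict the ssh key it uses to the backup area on the server.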
To restore, just create a new VM (it's handy to have a few minimal-install images of various sizes that you can clone) and rsync the backed-up files back in.
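With the same made-up names as above, the restore is just the rsync inverted:

    # run on the freshly cloned VM; because the backup preserved full
    # paths, the target is simply /
    rsync -a backuphost:/var/backups/hosts/oldhost/ /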
BTW, I usually don't bother backing up /bin, /sbin or /usr, because I run Debian on almost all machines; it would waste disk space and time to back up programs I've already got packaged in my local Debian mirror. Instead I back up the list of installed packages with dpkg --get-selections "*" > $hostname.sel and restore them with cat $hostname.sel | dpkg --set-selections ; apt-get dselect-upgrade.
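Spelled out (the .sel file name and location are arbitrary; dpkg --set-selections reads from stdin, so the redirect below is equivalent to the cat pipeline):

    # on the machine being backed up, alongside the rsync:
    dpkg --get-selections "*" > /var/backups/$(hostname).sel

    # on the replacement machine, after restoring /etc and friends:
    dpkg --set-selections < /var/backups/$(hostname).sel
    apt-get dselect-upgrade    # installs everything marked in the .sel file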
This is how I currently clone physical machines. I'm in the process of converting several machines to virtual (running under KVM), and so far I haven't found any reason why I'd have to make more than minor changes to the procedure.
One of these days I'll change to using rdiff-backup rather than rsync, so I can have versioned backups online as well.
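For what it's worth, rdiff-backup is close to a drop-in replacement for the rsync invocation -- same assumed host and paths as in the sketch above:

    # push as before, but the server keeps reverse increments
    rdiff-backup /etc backuphost::/var/backups/hosts/$(hostname)/etc

    # later: restore /etc/fstab as it was 3 days ago
    rdiff-backup -r 3D backuphost::/var/backups/hosts/$(hostname)/etc/fstab ./fstab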
Finally, you could also try searching the http://libvirt.org/ web site, or googling for "+libvirt +rsync". Someone may have come up with an efficient method of rsyncing VM images directly.
We're just trying to bring our Macs into the fold here. My original plan was to use Backup Exec's Mac agent. Then I found out that the agent doesn't support 10.9, or even 10.8, so if you're keeping the OS up to date, that's out. Legend has it that the next SP will bring it up to speed, but I'm not holding my breath.
It has been a few years, but Retrospect used to be the gold (and only) standard for Mac backup. You'd install the agent and set a schedule, and the Macs would back up once connected to the network. I don't have recent experience with it, though it did work via VPN many moons ago. You'd then want to have it save the backup sets to storage that you can sweep into your existing backup environment.
If you get a Mac Mini with OS X Server, you can redirect Time Machine on the laptops to the network, then sweep that up with another disk backup tool. I don't know if there's any granularity to Time Machine, though -- I believe it grabs the entire disk, or nothing.
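If you go that route, pointing each laptop at the server share is scriptable -- a sketch, assuming an AFP share named "TimeMachine" on the Mini (tmutil has shipped with OS X since 10.7; the server name and account are made up):

    # run once per laptop to redirect Time Machine to the network share
    sudo tmutil setdestination "afp://tmuser:password@macmini.local/TimeMachine"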
I know you mentioned cloud may not be an option, but if that's because of the VMs (which are now out of scope?), then perhaps that makes your CrashPlan/Backblaze/Carbonite options more palatable.
If you do want to bring the VMs in scope, you could install a Windows-based agent in the VM, and treat that as you would anything else.