Windows Server 2003 R2 and later support DFSR, which I used extensively to sync and back up large amounts of data over a rather small pipe across three sites (80GB+ over a T1<-->T1<-->T1 topology).
msdn.microsoft.com/en-us/library/bb540025(VS.85).aspx
Replicating data to multiple servers increases data availability and gives users in remote sites fast, reliable access to files. DFSR uses a new compression algorithm called Remote Differential Compression (RDC). RDC is a "diff over the wire" protocol that can be used to efficiently update files over a limited-bandwidth network. RDC detects insertions, removals, and rearrangements of data in files, enabling DFSR to replicate only the deltas (changes) when files are updated.
DFSR is fully multimaster and can be configured however you want. That will keep your data in sync at the "backup" location, using very little bandwidth and CPU. From there, you can use the Volume Shadow Copy Service.
technet.microsoft.com/en-us/library/cc785914.aspx
The Volume Shadow Copy Service can produce consistent shadow copies by coordinating with business applications, file-system services, backup applications, fast-recovery solutions, and storage hardware. Several features in the Windows Server 2003 operating systems use the Volume Shadow Copy Service, including Shadow Copies for Shared Folders and Backup.
The shadow copies reside on disk and take "no space" aside from the data that changes from snapshot to snapshot. The process can run against a live dataset with no ill effects, apart from slightly increased disk I/O while the snapshot is being created.
I used this solution for quite some time with great success. Changes to files were written out to the other sites within seconds (even over the low-bandwidth links), even in cases where just a few bytes out of a very large file changed. The snapshots can be accessed independently of any other snapshot taken at any point in time, which gives you both backups in case of emergency and very little overhead. I set the snapshots to fire at 5-hour intervals, in addition to once before the workday started, once during the lunch hour, and once after the day was over.
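If you'd rather script snapshot creation than click through the Shadow Copies GUI, a small wrapper dropped into a scheduled task works too. This is only a rough sketch, assuming a server edition where "vssadmin create shadow" is available, and D: is a placeholder I picked for the replicated data volume:

```python
# create_shadow.py - minimal sketch: ask VSS for a new shadow copy of the
# data volume. Assumes a Windows Server edition where "vssadmin create shadow"
# exists; D: is a placeholder for whatever volume DFSR replicates.
import subprocess
import sys

DATA_VOLUME = "D:"  # assumption: the replicated data volume

def create_shadow(volume):
    result = subprocess.run(
        ["vssadmin", "create", "shadow", "/for=" + volume],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.exit("vssadmin failed: " + result.stderr.strip())
    print(result.stdout.strip())

if __name__ == "__main__":
    create_shadow(DATA_VOLUME)
```

Point the Task Scheduler at that script on whatever interval you like; it does the same thing as the schedule under the volume's Shadow Copies tab.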
With this, you can store all data in parallel at both locations, kept relatively up to date and "backed up" (which really amounts to versioned) as often as you like.
The Shadow Copy Client can be installed on the client computers to give them access to the versioned files, too.
www.microsoft.com/downloads/details.aspx?FamilyId=E382358F-33C3-4DE7-ACD8-A33AC92D295E&displaylang=en
If a user accidentally deletes a file, they can right-click the folder, open Properties > Shadow Copies, select the latest snapshot, and copy the file out of the snapshot and back into the live copy, right where it belongs.
MSSQL backups can be written out to a specific folder (or network share), which would then automatically be synced between sites and versioned on the schedule you define.
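As a rough sketch of what that could look like (the database name, server, and folder path are placeholders I made up, and it assumes the pyodbc package plus a SQL Server ODBC driver):

```python
# mssql_backup.py - minimal sketch: dump a SQL Server database into the
# DFSR-replicated folder so the .bak files ride along with the sync/versioning.
import datetime
import pyodbc  # assumes pyodbc and a SQL Server ODBC driver are installed

REPLICATED_FOLDER = r"D:\ReplicatedData\SQLBackups"  # placeholder path inside the DFSR replica
DATABASE = "ProductionDB"                            # placeholder database name

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,  # BACKUP DATABASE cannot run inside a transaction
)

stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
backup_file = REPLICATED_FOLDER + "\\" + DATABASE + "_" + stamp + ".bak"

cursor = conn.cursor()
cursor.execute(
    "BACKUP DATABASE [" + DATABASE + "] TO DISK = N'" + backup_file + "' WITH INIT"
)
# BACKUP emits informational messages as extra result sets; drain them so the
# statement finishes before we disconnect.
while cursor.nextset():
    pass
print("Backup written to " + backup_file)
```

DFSR then ships the .bak files to the other sites, and the shadow copy schedule versions them.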
I've found that the data redundancy and versioning these provide make an awesome backup system. They also give you the option to copy a specific snapshot offsite without interfering with the workflow, since the files being read aren't in use...
This should work with your setup, as the second backup site can be configured as a read-only sync/mirror.
Best Answer
Funnily enough, I've been doing research on this.
Your backups to S3 can fail, depending on your region, because of eventual consistency. The basic warning is that if you do this often enough, at some point you'll hit errors opening or finding files while Amazon's back-end storage syncs among servers, so your backups may not be reliable.
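One way to blunt that, if you go the S3 route anyway, is to never trust an upload until you've read it back. This is just a sketch of the idea using the boto3 library; the bucket name is made up:

```python
# s3_verify.py - minimal sketch: upload a backup object, then read it back
# and compare checksums before trusting it, to catch the "file isn't really
# there yet" failures that eventual consistency can cause.
import hashlib
import time

import boto3                          # assumes the boto3 package is installed
from botocore.exceptions import ClientError

BUCKET = "my-backup-bucket"           # placeholder bucket name
s3 = boto3.client("s3")

def upload_and_verify(path, key, retries=5):
    with open(path, "rb") as f:
        data = f.read()
    local_md5 = hashlib.md5(data).hexdigest()

    s3.put_object(Bucket=BUCKET, Key=key, Body=data)

    for attempt in range(retries):
        try:
            body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
            if hashlib.md5(body).hexdigest() == local_md5:
                return True           # the object is visible and intact
        except ClientError:
            pass                      # not visible yet; back off and retry
        time.sleep(2 ** attempt)
    return False                      # never verified; treat the run as failed
```

If the verification never succeeds, treat that backup run as failed and alert on it rather than assuming the data made it.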
As for whether you need to save them another way, this depends on your risk management. Do you trust Amazon to hold your data?
It's possible they may lose something or have a larger failure of their storage system, and they no doubt have clauses in their contracts specifying that if they lose your data, that's your problem, not theirs. Also, since your data is housed somewhere else, you don't know what will be done with it; if law enforcement wants your data, you may not even know someone else accessed it.
Do you trust it? If the data isn't key to your business and you're willing to accept the risk, then there's no need to download it to offsite storage. If you're not willing to bet that your data will be safe on Amazon's storage servers, you should make arrangements to periodically dump it to your own storage.
In other words, I don't think there's a straight answer to this, as it depends on your risk tolerance and business needs. Many people wouldn't stake their income solely on cloud storage; personally, I feel a little wary of that...
Another approach to consider, which has come up in discussions and research, is to create an EBS volume large enough to hold the data, attach it to the EC2 instance, and save your data there; you can then unmount the volume and save that data to S3. I'm in the middle of researching whether that means saving the volume itself to S3 or just its contents... but either way you can delete the EBS volume when done to save storage costs.
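For the "save the volume itself" flavor of that idea, the sketch below snapshots the EBS volume (EBS snapshots are stored in S3 behind the scenes) and then deletes the volume; the IDs and region are placeholders, and it assumes boto3:

```python
# ebs_snapshot.py - minimal sketch of the "snapshot the volume itself" route:
# detach the EBS volume, snapshot it (the snapshot lands in S3 behind the
# scenes), then delete the volume so you stop paying for it.
import boto3  # assumes the boto3 package is installed

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
VOLUME_ID = "vol-0123456789abcdef0"                 # placeholder volume ID
INSTANCE_ID = "i-0123456789abcdef0"                 # placeholder instance ID

# Unmount the filesystem inside the instance first, then detach the volume so
# the snapshot is consistent.
ec2.detach_volume(VolumeId=VOLUME_ID, InstanceId=INSTANCE_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

# Take the snapshot and wait for it to finish.
snap = ec2.create_snapshot(VolumeId=VOLUME_ID, Description="data volume backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# With the snapshot complete, the volume itself can go away to save costs.
ec2.delete_volume(VolumeId=VOLUME_ID)
print("Snapshot " + snap["SnapshotId"] + " complete; volume deleted.")
```

Restoring is just creating a new volume from the snapshot and attaching it to an instance.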
EDIT: On re-reading, I see that you're saving FROM S3 TO the EC2 instance, not vice versa (although I don't know whether the eventual-consistency issue could still cause problems there). You're trying to save data to an EC2 instance as a backup? Cost-wise, I don't think that's a sound tactic; it may be cheaper to back things up to a local drive once you factor in long-term storage of that kind of data along with VM time. With drive prices what they are, you could copy the data down to a local disk as a backup.
I'd still keep in mind the warnings about trusting Amazon with your storage. If you want to keep everything in Amazon S3 but have more redundancy, duplicate your S3 buckets across regions, so an outage affecting one region shouldn't knock out all of them. You'd hope. Anything is possible, though.
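If you do go that way, mirroring a bucket is basically just a copy loop. Here's a rough sketch with boto3; the bucket names and regions are made up:

```python
# s3_mirror.py - minimal sketch: copy every object from one bucket into a
# bucket in another region, so a single-region outage doesn't take out every
# copy of your data.
import boto3  # assumes the boto3 package is installed

SRC_BUCKET = "my-backup-bucket"      # placeholder, e.g. lives in us-east-1
DST_BUCKET = "my-backup-bucket-eu"   # placeholder, e.g. lives in eu-west-1

src = boto3.client("s3")
dst = boto3.client("s3", region_name="eu-west-1")

paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC_BUCKET):
    for obj in page.get("Contents", []):
        # Server-side copy: S3 pulls the object straight from the source
        # bucket, so the data never has to come down to this machine.
        dst.copy_object(
            Bucket=DST_BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": SRC_BUCKET, "Key": obj["Key"]},
        )
        print("copied " + obj["Key"])
```

Run it on a schedule, or after each backup upload, and both regions stay populated.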
It comes down to how much you value your data, how much you're willing to pay for it and how much risk you want to tolerate.