My standard backup advice:
The whole point of backing up is to be able to restore. Unless you're fully confident that you can get your stuff back, your backups are useless. Everything you implement in your backup solution should come from the perspective of "how do I restore from this?"
Tape isn't that expensive, and it has the advantage that it's far more durable than disk. Fewer moving parts, no live electrical current running through it constantly, all good stuff. If it saves your ass once, it's already paid for itself in my book.
As well as "how much data can you afford to lose?", you also need to consider "how long can you afford to be down in a DR scenario?" A 3-day restore time is 3 days of lost business. You should be counting your restore times in hours, on the fingers of one hand.
You can very quickly get into silly money if you allow yourself to get too paranoid about this, however, so you should divide your servers into two or three lots: those you absolutely need back NOW in order to continue your core business functions, and those you can defer until after the core ones are back. Put the heavy investment into the first lot, and ensure that you have fully documented restore procedures (for the OS, for applications and for data) that a blind leprous monkey with one hand tied behind its back can follow. Print and bind a copy and keep it in a fireproof safe; you're screwed if all you have is an electronic copy and that gets lost or destroyed. But don't think this means you can get lax with the second lot, just that you can delay getting them back or take a little longer doing so (e.g. by putting them on slower media).
Specific examples: your core fileserver goes into the first lot, for sure. Your HR server goes into the second lot. It's important to the HR people, but will your core business functions be OK for a couple of days without an HR system? Yup, I reckon they will.
Keep your backup solution simple and boring. Far too often I have seen people implement fancy backup solutions that end up being too complex, fiddly and unreliable. Backups are boring because backups should be boring. The simpler they are, the easier it will be to restore. You want a "me Og, Og click button, Og get data back" approach. Keep a daily manual element in there. This helps to establish a drill, which avoids situations where someone forgets to change a tape or rotate an HD in the pool. You can fire the person responsible afterwards if that happens, but guess what? You're still in a position where you've lost a month of data.
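To make "boring" concrete, here's a minimal sketch in Python of the kind of dumb nightly job I mean. The source and destination paths are hypothetical; the part that matters is re-reading the archive straight away, because that's the "can I actually restore?" check:

    import tarfile
    from datetime import date
    from pathlib import Path

    SOURCE = Path("/srv/data")   # hypothetical: what to back up
    DEST = Path("/mnt/backup")   # hypothetical: mounted backup media (tape staging, removable disk)

    archive = DEST / f"nightly-{date.today():%Y-%m-%d}.tar.gz"

    # Write the archive.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)

    # Immediately re-read it: a backup you haven't verified is a backup you can't trust.
    with tarfile.open(archive, "r:gz") as tar:
        names = tar.getnames()

    print(f"{archive}: {len(names)} entries written and read back")

The daily manual element is whatever DEST points at (swapping the tape or the disk in the pool); the script itself stays dumb on purpose.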
Your approach sounds very good, but I can think of one possible way to improve it.
To reduce the impact of data loss since the last backup, and of an EBS volume failure (unlikely, but still possible), you could store your data on a separate EBS volume from your system files and back up the data volume more frequently than the system volume.
With your current strategy, you'll lose any data created between the time of the last backup and the time your instance failed. With the new approach, the data volume is being written to right up until the instance fails, so you can simply reattach it to your new instance once it's up and running.
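As a rough illustration of that last step, here's what reattaching the surviving data volume might look like with boto3 (a current AWS SDK, used here as an assumption; the volume and instance IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    DATA_VOLUME_ID = "vol-0123456789abcdef0"   # placeholder: the surviving data volume
    NEW_INSTANCE_ID = "i-0123456789abcdef0"    # placeholder: the replacement instance

    # Wait until the failed instance has released the volume
    # (detach it explicitly first if it hasn't).
    ec2.get_waiter("volume_available").wait(VolumeIds=[DATA_VOLUME_ID])

    # Attach it to the replacement instance; mount it from inside the OS afterwards.
    ec2.attach_volume(
        VolumeId=DATA_VOLUME_ID,
        InstanceId=NEW_INSTANCE_ID,
        Device="/dev/sdf",
    )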
Best Answer
My recommendations:
1. Always document and/or script the setup of each new instance so that you can reproduce the software installation and system configuration in the event you lose the instance. Test this by starting a new instance and following the procedure. You can use a custom, private AMI if the installation takes a long time and you need to start instances quickly, but that AMI itself should be built using a documented and/or scripted procedure (a scripted launch sketch follows this list).
2. Keep your important data on separate EBS volume(s) and not on the root EBS volume. This has many benefits, including making it easier to port your data to new instances (e.g., based on different AMIs) and making it easier to get copies of your data on other instances (e.g., with snapshots and new volumes).
3. Create regular snapshots of the EBS data volumes. If possible/applicable, use a tool like my ec2-consistent-snapshot to improve the chances that you are taking a snapshot of a consistent filesystem / database (a bare-bones snapshot sketch follows this list). Also back up the data outside of AWS/EC2, as your AWS account itself is a single point of failure.
4. Create snapshots of the root EBS volume from time to time on important instances. Though this may help you in the event of instance or EBS volume failure, that part is not so critical because of #1 and #2 above. The main reason I do this is that creating snapshots reduces the risk of failure of the root EBS volume itself: the rate of failure of an EBS volume is directly related to the number of blocks that have been modified on that volume since the last EBS snapshot.
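For recommendation 1, a minimal sketch of a scripted, reproducible launch using boto3; the AMI ID, instance type and package list are all placeholders, and the point is simply that the whole setup lives in version-controlled text rather than in someone's memory:

    import boto3

    ec2 = boto3.client("ec2")

    # The documented setup procedure, expressed as a cloud-init user-data script.
    SETUP_SCRIPT = """#!/bin/bash
    apt-get update
    apt-get install -y nginx postgresql   # placeholder package list
    """

    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",          # placeholder instance type
        MinCount=1,
        MaxCount=1,
        UserData=SETUP_SCRIPT,
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # Block until the replacement is actually up.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    print(f"replacement instance {instance_id} is running")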
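And for recommendation 3, a bare-bones snapshot call, again with boto3 and a placeholder volume ID. Note this gives only a crash-consistent copy; a tool like ec2-consistent-snapshot earns its keep by getting the filesystem and database into a consistent state before the snapshot is initiated, which this sketch does not do:

    import boto3
    from datetime import datetime, timezone

    ec2 = boto3.client("ec2")

    DATA_VOLUME_ID = "vol-0123456789abcdef0"  # placeholder: the EBS data volume

    # Point-in-time snapshot; run this on a schedule (cron or similar).
    snap = ec2.create_snapshot(
        VolumeId=DATA_VOLUME_ID,
        Description=f"data backup {datetime.now(timezone.utc):%Y-%m-%d %H:%MZ}",
    )
    print("started", snap["SnapshotId"])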