MongoDB Disaster Prep on AWS

amazon-web-services, backup, mongodb, replica-set, snapshot

I'm looking for best practice advice covering MongoDB disaster recovery within an AWS hosted environment.

Our setup is fairly standard at this point: a replica set of 3 servers (1 primary, 1 secondary, and 1 arbiter),
with the mongo volumes on the primary and secondary EBS-backed. Everything is in a single region, spread across
multiple availability zones. Eventually we'll need to span regions, but that's a discussion for another day.

The backup advice I've seen in the Mongo documentation talks about EBS snapshots (which are easy enough to automate).
However, should disaster strike, they are not going to get us back to the time of failure.
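(For the automation piece, something along these lines with boto3 is what I have in mind - the volume ID and region below are placeholders:)

```python
import datetime
import boto3

# Placeholder values - substitute your own region and the EBS volume
# holding the MongoDB data directory (and journal).
VOLUME_ID = "vol-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name="us-east-1")

# Kick off a snapshot with a timestamped description so backups are easy to find.
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="mongodb-backup-" + datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ"),
)
print("Started snapshot:", snapshot["SnapshotId"])
```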

  • Do I need to record oplogs and use those in conjunction to restore
    after a failure?
  • Should I spin up another instance within the replica set specifically for backups and snapshot that vs. taking snapshots of primary and secondary? If so, we're back to the oplog issue aren't we?
  • Should I snapshot each replica volume and rely on the replica set completely to cover the time between failure and the last snapshot?

I'm looking for the most robust strategy available. Up-to-the-second data protection and speed of system restoration after a failure are higher priorities than price. We can optimize on price later on.

Thanks in advance for all suggestions…

Best Answer

First, if you take a snapshot, it will include the oplog - the oplog is just a capped collection living in the local database. Snapshots will get you back to a point in time, and assuming you have journaling enabled (it is on by default), you do not need to do anything special for the snapshot to function as a backup.

The only absolute requirement is that the EBS snapshot has to be recent enough to fall within your oplog window - that is, the last (most recent) operation recorded in the snapshot's oplog must still be in the oplog of the current primary so that the two can find a common point. If that is the case, it will work something like this:

  1. You restore a secondary from an EBS snapshot backup
  2. The mongod starts, looks for (and applies) any relevant journal files
  3. Next, the secondary connects to the primary and finds a common point in the two oplogs
  4. Any subsequent operations from the primary are applied on the RECOVERING secondary
  5. Once the secondary catches up sufficiently, it moves to the SECONDARY state and the restore is complete

If the snapshot is not recent enough, then it can be discarded - without a common point in the oplog, the secondary will have to resync from scratch anyway.
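As a rough illustration, you can check that window by reading the first and last oplog entries directly - a minimal sketch with pymongo (the connection string is a placeholder):

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

# Placeholder connection string for the current primary.
client = MongoClient("mongodb://primary.example.com:27017")
oplog = client.local.oplog.rs  # the oplog is a capped collection in the local database

# Oldest and newest oplog entries, in insertion order.
first = oplog.find_one(sort=[("$natural", ASCENDING)])
last = oplog.find_one(sort=[("$natural", DESCENDING)])

# 'ts' is a BSON Timestamp; .time is seconds since the epoch.
window_hours = (last["ts"].time - first["ts"].time) / 3600.0
print("Oplog window is roughly %.1f hours" % window_hours)
# A snapshot older than this window has no common point with the primary
# and the restored member would have to resync from scratch instead.
```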

To answer your specific questions:

Do I need to record oplogs and use those in conjunction to restore after a failure?

As explained above, if you take a snapshot, you are already backing up the oplog.

Should I spin up another instance within the replica set specifically for backups and snapshot that vs. taking snapshots of primary and secondary? If so, we're back to the oplog issue aren't we?

There's no oplog issue beyond the common point/window one I mentioned above. Some people do choose to have a secondary (usually hidden) for this purpose, to avoid adding load to a normal node. Note: even a hidden member gets a vote, so if you added one for backup purposes you could remove the arbiter from your config; you would still have 3 voting members.
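If you do go that route, marking the backup member hidden is just a reconfiguration - a rough sketch with pymongo (host names are placeholders; a hidden member must also have priority 0 so it can never become primary):

```python
from pymongo import MongoClient

# Placeholder connection string; reconfiguration must run against the primary.
client = MongoClient("mongodb://primary.example.com:27017")

# Fetch the current replica set configuration.
config = client.admin.command("replSetGetConfig")["config"]

# Mark the dedicated backup member (placeholder host) as hidden.
# It still replicates all data and keeps its vote, but clients never read
# from it and it can never be elected primary.
for member in config["members"]:
    if member["host"] == "backup.example.com:27017":
        member["hidden"] = True
        member["priority"] = 0

# Bump the config version and apply the new configuration.
config["version"] += 1
client.admin.command("replSetReconfig", config)
```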

Should I snapshot each replica volume and rely on the replica set completely to cover the time between failure and the last snapshot?

Every member of a replica set is intended to be identical - the data is the same, any secondary can become primary, etc. These are not slaves; every replica set member contains the full oplog and all the data.

So, taking multiple snapshots (assuming you trust the process) is going to be redundant (of course you may want that redundancy). And yes, the whole intention of the replica set functionality is to ensure that you don't need to take extraordinary measures to use a secondary in this way (with the caveats above in mind, of course).
