Database – Fault tolerance through replication of SQL databases

database, database-design

Suppose the middle-tier servers are replicated three ways and the backend database (MySQL, PostgreSQL, etc.) is replicated three ways. A user request ends up creating some user data on the middle-tier servers, and I would like to commit this to the backend database in a way that is resilient to failures.

As a candidate solution: if I send the data to one SQL database and have it replicate the data to the other databases, then if that one database's hard drive crashes before it can replicate the data, the data is lost.
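For concreteness, that failure mode is the risk of asynchronous replication, and synchronous replication is the usual counter to it. Below is a minimal sketch of how this might be configured in PostgreSQL, where a COMMIT is not acknowledged until at least one standby has flushed the data to disk; the standby names are hypothetical.

```
# postgresql.conf on the primary (sketch; standby names are hypothetical)

# Wait for each commit's WAL to be flushed to disk on a standby
# before acknowledging the COMMIT to the client.
synchronous_commit = on

# Require acknowledgement from at least one of the two named standbys.
synchronous_standby_names = 'ANY 1 (replica1, replica2)'
```

With a setup like this, a hard-drive crash on the primary immediately after a successful COMMIT cannot lose the transaction, since at least one standby already holds it; the trade-off is added commit latency.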

What is the best-practice solution for fault tolerance that is used in the real world?

Best Answer

Replication would typically be used for site-to-site data transfer.

Fault tolerance is all about redundancy: the more redundant your system is, the more tolerant it is to faults. A lot of this can be solved through infrastructure rather than code, especially for a database.

Hopefully the server has redundant hard drives that can be hot-swapped on the fly if there is a failure. The server itself should have redundant components such as hard drives, power supplies, and so on.
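Hot-swappable drives are typically a hardware RAID controller feature, but as an illustration of the same idea, a mirrored pair under Linux software RAID might be created like this (device names hypothetical):

```
# Create a RAID 1 mirror from two disks (sketch; device names hypothetical).
# Either disk can fail and be replaced without losing data.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
```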

But what if the whole server crashes? Then we are into failover clustering: another server takes over seamlessly should one server fail. In this case, multiple nodes are available; usually one is active and the other is passive, waiting for a failure.
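On the client side, the application can be pointed at all cluster nodes so that a failover does not require reconfiguration. A sketch using PostgreSQL's libpq multi-host connection string (host names hypothetical):

```
# Connection string listing both nodes (sketch; host names hypothetical).
# The driver tries each host in order and settles on the one that
# currently accepts writes, i.e. the active node.
postgresql://db-node1:5432,db-node2:5432/appdb?target_session_attrs=read-write
```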

Then there is the more serious failure: a data-center outage, for example from an earthquake or some other act of God. In that case another data center would take over. In this scenario the data is most likely replicating from one site to another.
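As a sketch of what seeding such a remote replica could look like with PostgreSQL, pg_basebackup can copy the primary and write the settings the standby needs to start streaming changes (host and paths hypothetical):

```
# Run on the standby in the remote data center (sketch; host/paths hypothetical).
# -R writes standby.signal and primary_conninfo so the node comes up
# as a streaming replica of the primary; -X stream includes the WAL.
pg_basebackup -h primary.dc1.example.com -U replicator \
    -D /var/lib/postgresql/data -R -X stream -P
```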

All of these cost more money to implement and maintain. It depends on how critical the data is and what the client demands. In some cases, just a nightly backup will do.
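At that simplest tier, the backup can be as little as a cron job running a logical dump; a sketch assuming PostgreSQL's pg_dump (database name and path hypothetical):

```
# /etc/cron.d entry: nightly dump at 02:00 (sketch; names hypothetical).
# -Fc writes a compressed custom-format archive restorable with pg_restore.
0 2 * * * postgres pg_dump -Fc appdb -f /backups/appdb_$(date +\%F).dump
```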

Hopefully your online banking system is using more than that.

I don't know what your requirements are, but those are the most common scenarios and levels of escalation.
