Fault tolerance through replication of SQL databases
Suppose the middle-tier servers are replicated three ways and the backend database (MySQL, PostgreSQL, etc.) is replicated three ways. A user request creates some user data on the middle-tier servers, and I would like to commit this data to the backend database in a way that is resilient to failures.
One candidate solution, for example, is to send the data to one SQL database and have it replicate the data to the other databases. But if that one SQL database has a hard drive crash before it can replicate the data, the data is lost.
What is the best-practice solution for fault tolerance that is actually used in the real world?
Many databases offer clustering as an out-of-the-box solution to the requirement you sketch. I'd strongly recommend using such an out-of-the-box solution rather than rolling your own - there are some nasty problems in this space that you don't want to solve on your own.
A classic example is primary keys - if you have 3 back-end servers which could receive a "create new record in table x" instruction from the middleware servers, and you want to replicate the data, you have to ensure there are no clashes in the primary key. If you use integers as the primary key data type, you have to make sure db server 1 doesn't create a customer record with ID 1 if that ID is already used for a record on server 2. It's not an impossible challenge - but you could easily burn a few weeks building and testing a solution.
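One common way around the integer-key clash is to give each server its own arithmetic progression of IDs (MySQL exposes this as the `auto_increment_increment` and `auto_increment_offset` settings). Here is a minimal Python sketch of that scheme - the function name and the three-server setup are illustrative assumptions, not part of any database's API:

```python
# Sketch: each of 3 servers hands out integer primary keys from its own
# arithmetic progression, so no two servers can ever generate the same ID.
# (Mirrors MySQL's auto_increment_increment/auto_increment_offset idea.)

NUM_SERVERS = 3  # assumed cluster size, matching the 3-way replication above

def id_sequence(server_offset, count):
    """Return the first `count` primary keys for the server at `server_offset` (1-based)."""
    return [server_offset + n * NUM_SERVERS for n in range(count)]

# Server 1 issues 1, 4, 7, ...; server 2 issues 2, 5, 8, ...; server 3 issues 3, 6, 9, ...
all_ids = []
for offset in range(1, NUM_SERVERS + 1):
    all_ids.extend(id_sequence(offset, 5))

# Every ID is unique across the cluster - no clashes to reconcile on replication.
assert len(all_ids) == len(set(all_ids))
```

The other common escape hatch is to drop sequential integers entirely and use UUIDs, which trade clash-freedom for larger, non-ordered keys.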
The other thing you need to consider is how long your application can be offline in the event of a database outage - no outage at all, minutes, hours, or days. The shorter the acceptable outage window, the more expensive and complex the clustering solution needs to be.