Best design for distributed databases
I have a project where we have one central system that exposes an API on top of MySQL. We now need to replicate that same service locally on several different boxes (which could be 50+). We wanted to have a local cache of the DB on each of those boxes to ensure quick responses and failover if the "central" system goes down.
Any ideas on the best design for this? I was thinking of some sort of master/slave setup, but I'm not sure that works with 50+ servers.
What about MySQL's own replication solution? If you've already ruled that out, you should say why.
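For context, here's a minimal sketch of what pointing one local box at the central master looks like with stock MySQL replication, driven from Python with mysql-connector-python; the hostnames, credentials, and binlog coordinates are hypothetical placeholders (the real coordinates come from SHOW MASTER STATUS on the master):

```python
# A minimal sketch: make one local box a slave of the central master via
# classic binlog replication. Hostnames, credentials, and the binlog
# file/position below are hypothetical placeholders.
import mysql.connector

local = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = local.cursor()
cur.execute(
    "CHANGE MASTER TO "
    "MASTER_HOST='central-mysql.example.com', "
    "MASTER_USER='repl', MASTER_PASSWORD='repl-secret', "
    "MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4"
)
cur.execute("START SLAVE")
cur.close()
local.close()
```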
With the MySQL replication I've seen, you have a master and one or more slaves; if the master goes down, one of the slaves is promoted to take over. With 50+ slaves, you'd end up with a long (and confusing) chain of masters.
Not knowing anything about the type of data you have or the read/write percentages, I would suggest one of the following:
- Cache static data locally (memcached, etc.). Reads stay local, with writes going back to the MySQL master. This works well for mostly-static configuration data; I have 6 servers in that setup now. (A read-through sketch follows this list.)
- Shard your data. With 50 servers, set them up as 25 master/slave pairs and put 1/25th of the data on each shard. Get one more server for N+1 redundancy. (See the shard-routing sketch below.)
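For the first option, here's a minimal read-through cache sketch, assuming pymemcache and mysql-connector-python; the `config` table, column names, and hostnames are hypothetical:

```python
# A minimal read-through cache sketch: reads are served from a local
# memcached, writes go back to the central MySQL master. The `config`
# table, column names, and hostnames are hypothetical.
import mysql.connector
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))  # local memcached on each box
db = mysql.connector.connect(
    host="central-mysql.example.com",  # hypothetical central master
    user="app", password="secret", database="app",
)

def get_config(name, ttl=300):
    """Serve reads from the local cache; fall through to the master on a miss."""
    cached = cache.get(name)
    if cached is not None:
        return cached.decode("utf-8")
    cur = db.cursor()
    cur.execute("SELECT value FROM config WHERE name = %s", (name,))
    row = cur.fetchone()
    cur.close()
    if row is None:
        return None
    cache.set(name, row[0].encode("utf-8"), expire=ttl)  # reads stay local for ttl seconds
    return row[0]

def set_config(name, value):
    """Send writes back to the master, then drop the stale local copy."""
    cur = db.cursor()
    cur.execute("REPLACE INTO config (name, value) VALUES (%s, %s)", (name, value))
    db.commit()
    cur.close()
    cache.delete(name)
```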
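For the second option, the routing piece is just a deterministic hash from key to pair; a minimal sketch, where the 25-pair layout and the shardNN hostname scheme are assumptions:

```python
# A minimal shard-routing sketch for 25 master/slave pairs. The
# shardNN-master / shardNN-slave hostname scheme is an assumption.
import zlib

NUM_SHARDS = 25  # 50 servers arranged as 25 master/slave pairs

def shard_for(key: str) -> int:
    """Deterministically map a key to a shard number in 0..24."""
    return zlib.crc32(key.encode("utf-8")) % NUM_SHARDS

def hosts_for(key: str) -> tuple:
    """Writes go to the pair's master; reads can fail over to its slave."""
    n = shard_for(key)
    return (f"shard{n:02d}-master.example.com",
            f"shard{n:02d}-slave.example.com")

# e.g. hosts_for("user:1234") routes every lookup for that user to the same pair
```

Note that a plain modulo makes adding shards painful later (most keys move to a different pair), so consistent hashing is worth a look if the shard count might grow.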
Hope that helps.