
Techniques for Working With Very Large Tables in Highly-Available Systems

We have a large table of ~20 million records in MySQL InnoDB (v5.0.85). The table records state for user actions and is shared by multiple app servers. Occasionally a new requirement comes along that means we need to add a new column to this table to store additional information. Running the alter command takes about 20 minutes, during which the application cannot perform its duties because the table is locked - this is no longer acceptable to our business model, as the application needs to remain as available as possible, with only a very short outage to push new server code.
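
For illustration, the kind of statement involved looks something like this (the table and column names here are placeholders):

    -- Placeholder names; in MySQL 5.0 this copies the whole InnoDB table
    -- and holds a lock on it for the duration of the rebuild.
    ALTER TABLE user_action ADD COLUMN new_column VARCHAR(255) NULL;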

I am trying to find ways to re-design the architecture so that we can add new columns to support new use cases while keeping the application online. We are looking at introducing an archiving process to remove old records from the table, but it is still expected to be very large.

Can anyone point me to any books, online resources, or just your own experiences about what has and hasn't worked for managing the balance between availability, table size and changing requirements?

Thanks!


You can't get something for nothing. Almost every solution I can think of for adding a new field will result in either a short one-time outage, a permanent long-term performance problem with no outage, or the cost of new equipment and a structure better suited to the size of your data. Personally, I find the short one-time outage (scheduled during the least busy hours) to be the best long-term solution.


If you need to add new columns on a regular basis, you should probably find a technique for adding new information that doesn't require altering tables while under load.

Here are a few solutions that come to mind:

  • Use PropertyType/PropertyValue tables, where PropertyType is an enum to which you can add a new entry whenever a new piece of information has to be stored (see the sketch after this list). As HLGEM noted, this kind of schema has drawbacks: EAV allows very dynamic models, but it is difficult to manage if queries are not generated by a dedicated layer.

  • Have a completely normalized schema where you create a new independent table for each new piece of information.

  • You don't say anything about your read/write statistics, but if a 20-minute read-only window is acceptable, you can keep a replicated read-only copy that handles the query load while the ALTER TABLE runs.
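
A minimal sketch of the first option, using hypothetical table and column names (the existing user-action table is assumed here to be called user_action):

    -- Hypothetical EAV-style tables: new "columns" become rows, so no ALTER TABLE is needed.
    CREATE TABLE user_action_property_type (
        property_type_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name             VARCHAR(64)  NOT NULL UNIQUE
    ) ENGINE=InnoDB;

    CREATE TABLE user_action_property (
        user_action_id   BIGINT UNSIGNED NOT NULL,   -- refers to the existing large table
        property_type_id INT UNSIGNED    NOT NULL,   -- refers to the type table above
        value            VARCHAR(255)    NULL,
        PRIMARY KEY (user_action_id, property_type_id)
    ) ENGINE=InnoDB;

    -- Storing a new piece of information is an INSERT, not a schema change:
    INSERT INTO user_action_property_type (name) VALUES ('retry_count');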


You should be able to create an extension table holding the new column, with a one-to-one relationship to the existing table. Later, if you want, you can fold those columns into the main table as a more planned maintenance task.
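
A rough sketch of what such an extension table might look like, with placeholder names:

    -- Hypothetical extension table holding only the new column(s),
    -- joined one-to-one on the main table's primary key.
    CREATE TABLE user_action_ext (
        user_action_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,  -- same key as the main table
        new_column     VARCHAR(255)    NULL
    ) ENGINE=InnoDB;

    -- Reads join the two tables; rows with no extension entry simply come back NULL.
    SELECT a.*, e.new_column
    FROM user_action a
    LEFT JOIN user_action_ext e ON e.user_action_id = a.user_action_id;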


How about having two separate database server setups? One cluster is taken down, altered, brought back up, and then replicated to with the new schema. Replication might still work for just-added columns even while the source lacks them. When the altered cluster is up to date again, switch all traffic to it, alter the now-passive one, and start replicating back to it.

Quite a lot of work, but for zero downtime nothing is gonna be easy.
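
A rough outline of the switchover in MySQL terms, assuming standard master/slave replication and placeholder host, table, and column names; the exact CHANGE MASTER coordinates (log file, position, credentials) are omitted:

    -- 1. On the passive replica: pause replication, apply the schema change, resume,
    --    while the active master keeps serving traffic.
    STOP SLAVE;
    ALTER TABLE user_action ADD COLUMN new_column VARCHAR(255) NULL;
    START SLAVE;

    -- 2. Let the replica catch up, then switch application traffic over to it
    --    (the connection switch happens at the application or load-balancer level, not in SQL).

    -- 3. On the old master, now passive: apply the same change, then replicate back from the new master.
    ALTER TABLE user_action ADD COLUMN new_column VARCHAR(255) NULL;
    CHANGE MASTER TO MASTER_HOST = 'new-master.example.com';
    START SLAVE;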
