Database sync between Mercurial updates

So, I have a development and a production environment that both access the same BitBucket repository; changes I push to the repo, I pull down on the production server with hg pull and hg update.

This keeps my PHP code all up to date and works fine.

But I could use some advice keeping my MySQL schemas in sync between the two environments. For example I quite often make changes on the development machine that I need to reflect on the production server.

Any advice on how to do this would be very gratefully received.


What you are trying to do, in a nutshell, is version your database schema so that it stays in line with the code as things change. The critical parts of being able to do that are tracking the changes to the DB schema and tracking the current state of the DB schema (i.e. what version it is at).

One way to track the changes to the schema would be to manually script all changes to the schema. These change scripts are essentially your "diffs" between versions of the schema. Another way to generate these change files would be to use a program that can generate a diff between two databases, or between a database and a create script. In theory, you should be able to develop a pre-commit hook script that can generate the alter script from the current database for that working copy and the previous database for that working copy, but this isn't a trivial task.
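To make the "manually script all changes" idea concrete, here is a minimal sketch of one common convention: each change script gets a numbered file name, so sorting the file names gives you the order in which the "diffs" must be applied. The file names and the numbering scheme are assumptions for illustration, not something the answer prescribes.

```python
import re

def pending_scripts(all_scripts, applied):
    """Return the change scripts not yet applied, in version order.

    Each script is assumed to be named like '002_add_email.sql', where
    the leading number is its position in the schema's history.
    """
    def version(name):
        m = re.match(r"(\d+)_", name)
        return int(m.group(1)) if m else 0
    return sorted((s for s in all_scripts if s not in applied), key=version)

# Hypothetical change scripts sitting alongside the code in the repo:
scripts = ["001_create_users.sql", "003_add_index.sql", "002_add_email.sql"]
print(pending_scripts(scripts, applied={"001_create_users.sql"}))
# → ['002_add_email.sql', '003_add_index.sql']
```

Because the scripts live in the repository next to the PHP code, hg pull brings the schema history down to production along with everything else.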

Once you have your DB being versioned, you then have to solve the problem of applying those changes on update. To do this, you will need to develop a post-update hook that can look at the database (probably at some sort of version table within it that links to the Mercurial changeset ID) and determine which scripts need to be run to bring the DB up to date.

Since Mercurial allows you to update to a previous version, you will either have to make only non-breaking changes to your database, or simply not allow (in the social sense or the technical sense) the production working copy to be updated to previous versions. Regardless of how you handle it, the post-update hook script that does the actual DB updates needs to be smart enough not to re-apply alter scripts it has already run.
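The core of that post-update hook might look something like this rough sketch: a version table records which change scripts have run, so running the hook twice is a safe no-op. sqlite3 stands in for MySQL here purely to keep the example self-contained; the table name, script names, and embedded SQL are all assumptions for illustration.

```python
import sqlite3

# Hypothetical ordered list of (script name, SQL) pairs; in practice these
# would be read from the versioned change-script files in the working copy.
MIGRATIONS = [
    ("001_create_users.sql", "CREATE TABLE users (id INTEGER PRIMARY KEY)"),
    ("002_add_email.sql", "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    # The version table tracks which scripts have already been applied.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (script TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}
    for name, sql in MIGRATIONS:
        if name in applied:   # already run on this database: skip it
            continue
        conn.execute(sql)
        conn.execute("INSERT INTO schema_version VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # second run applies nothing: every script is already recorded
print([r[0] for r in
       conn.execute("SELECT script FROM schema_version ORDER BY script")])
# → ['001_create_users.sql', '002_add_email.sql']
```

Wired up as an update hook in .hg/hgrc on the production server, this would run automatically after every hg update, which is exactly the automation being described above.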

There are obviously a number of issues to resolve and lots of testing to do to get this all to work, and it isn't a pre-built solution for you by any means, but it should get you well on your way to automating your DB updates to keep them in line with your code. Good luck!


Take a look at the Rails framework. It uses database migrations to manipulate (and even create) the database, and it integrates well with testing and across development machines for teams. It's Ruby (which many find preferable to PHP), so it won't work for you unless you switch, but it might give you some ideas on how to implement this for your application.

http://guides.rubyonrails.org/migrations.html

Migrations are a convenient way for you to alter your database in a structured and organized manner. You could edit fragments of SQL by hand but you would then be responsible for telling other developers that they need to go and run them. You’d also have to keep track of which changes need to be run against the production machines next time you deploy.

Active Record tracks which migrations have already been run so all you have to do is update your source and run rake db:migrate. Active Record will work out which migrations should be run. It will also update your db/schema.rb file to match the structure of your database.
