
Firebird backup/restore is frustrating, is there a way to avoid it?

I am using Firebird, but lately the database has been growing really fast. There are a lot of DELETE statements running, as well as UPDATEs and INSERTs, and the database file size grows quickly. Even after deleting tons of records the file size doesn't decrease, and worse, I have the feeling that queries are actually getting a bit slower. To deal with this, a daily backup/restore process has been put in place, but because of how long it takes to complete, I have to say it makes Firebird really frustrating to use.

  • Any ideas on workarounds or solutions for this would be welcome.

  • I am also considering switching to InterBase, because a friend told me it doesn't have this issue - is that so?


We have a lot of huge Firebird databases in production and have never had an issue with database growth. Yes, every time a record is deleted or updated an old version of it is kept in the file, but sooner or later the garbage collector will sweep it away. Once both processes balance each other, the database file will only grow by the size of new data and indices.

As a general precaution against enormous database growth, try to keep your transactions as short as possible. In our applications we use one READ ONLY transaction for reading all the data; this transaction stays open for the whole application lifetime. For every batch of INSERT/UPDATE/DELETE statements we use short, separate transactions.
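
If it helps, here is a minimal sketch of the short-write-transaction part of that pattern, using the Python fdb driver; the DSN, credentials and MY_TABLE are made-up placeholders, and the long-lived read-only transaction is omitted:

    import fdb  # Python driver for Firebird; any DB-API driver would do

    # Placeholder DSN and credentials -- adjust for your environment.
    con = fdb.connect(dsn='localhost:/data/mydb.fdb',
                      user='SYSDBA', password='masterkey')

    def insert_batch(rows):
        # Keep every write transaction short: execute the batch, commit at once.
        cur = con.cursor()
        cur.executemany("INSERT INTO MY_TABLE (ID, NAME) VALUES (?, ?)", rows)
        con.commit()  # ends the transaction immediately, so it never lingers

    insert_batch([(1, 'first'), (2, 'second')])
    con.close()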

Slowdowns in database operations can also be caused by obsolete index statistics. Here you can find an example of how to recalculate statistics for all indices: http://www.firebirdfaq.org/faq167/
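
That FAQ essentially walks RDB$INDICES and issues SET STATISTICS INDEX for each entry; a rough Python/fdb equivalent might look like this (DSN and credentials are placeholders):

    import fdb

    con = fdb.connect(dsn='localhost:/data/mydb.fdb',
                      user='SYSDBA', password='masterkey')
    cur = con.cursor()

    # List every index registered in the system table RDB$INDICES.
    cur.execute("SELECT RDB$INDEX_NAME FROM RDB$INDICES")
    names = [row[0].strip() for row in cur.fetchall()]

    # Recalculate the selectivity of each index, one statement per index.
    for name in names:
        cur.execute('SET STATISTICS INDEX "%s"' % name)

    con.commit()
    con.close()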


Check whether you have unfinished transactions in your applications. If a transaction is started but never committed or rolled back, the database has to keep its own record version for every transaction after the oldest active one.

You can check the database statistics (with gstat or an external tool); they show the oldest transaction and the next transaction. If the difference between those numbers keeps growing, you have a stuck-transaction problem.
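
On Firebird 2.1 or later the same counters are also exposed through the MON$DATABASE monitoring table, so you can watch the gap from a script; a small sketch with the Python fdb driver (placeholder DSN and credentials):

    import fdb

    con = fdb.connect(dsn='localhost:/data/mydb.fdb',
                      user='SYSDBA', password='masterkey')
    cur = con.cursor()

    # MON$DATABASE (Firebird 2.1+) exposes the same counters that gstat -h prints.
    cur.execute("SELECT MON$OLDEST_TRANSACTION, MON$OLDEST_ACTIVE, "
                "MON$NEXT_TRANSACTION FROM MON$DATABASE")
    oldest, oldest_active, next_tx = cur.fetchone()
    con.commit()
    con.close()

    # A gap that keeps growing between runs points to a stuck transaction.
    print("oldest active vs. next transaction gap:", next_tx - oldest_active)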

There are also monitoring tools that check this situation; one I've used is Sinatica Monitor for Firebird.

Edit: also note that the database file never shrinks automatically. Parts of it get marked as unused (after a sweep operation) and will be reused. http://www.firebirdfaq.org/faq41/


The space occupied by deleted records will be reused as soon as it is garbage collected by Firebird. If GC is not happening (transaction problems?), the DB will keep growing until GC can do its job.

There is also a problem when you do a massive delete in a table (e.g. millions of records): the next SELECT on that table will trigger garbage collection, and performance will drop until GC finishes. The only workaround is to do the massive deletes at a time when the server is not heavily used, and to run a sweep afterwards, making sure there are no stuck transactions.
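
If you script that quiet-hours maintenance window yourself, the sweep can be started with the standard gfix tool; a minimal sketch, with the database path and credentials as placeholders:

    import subprocess

    # Kick off a manual sweep with the stock gfix tool once the bulk delete
    # has been committed; path, user and password are placeholders.
    subprocess.run(
        ["gfix", "-sweep",
         "-user", "SYSDBA", "-password", "masterkey",
         "localhost:/data/mydb.fdb"],
        check=True,
    )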

Also keep in mind that if you use "standard" tables to hold temporary data (i.e. rows that are inserted and deleted over and over), you can end up with a corrupted database in some circumstances. I strongly suggest you start using the Global Temporary Tables feature.
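
For reference, a global temporary table (available since Firebird 2.1) is declared once and each transaction or connection only ever sees its own rows; a minimal sketch of the DDL, issued here through the Python fdb driver with made-up names:

    import fdb

    con = fdb.connect(dsn='localhost:/data/mydb.fdb',
                      user='SYSDBA', password='masterkey')
    cur = con.cursor()

    # Rows vanish when the transaction ends (ON COMMIT DELETE ROWS), so the
    # scratch data never creates garbage in the permanent tables.
    cur.execute("""
        CREATE GLOBAL TEMPORARY TABLE TMP_IMPORT (
            ID   INTEGER NOT NULL,
            NAME VARCHAR(100)
        ) ON COMMIT DELETE ROWS
    """)
    con.commit()
    con.close()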
