
How to "defragment" MongoDB index effectively in production?

I've been looking at MongoDB, and I like it so far.

I added some indexes to a collection, uploaded a bunch of data, then removed all the data, and I noticed the indexes did not change size, similar to the behavior reported here.

If I call

db.repairDatabase()

the indexes are then squashed to near-zero. Similarly, if I don't remove all the data but call repairDatabase(), the indexes are squashed somewhat (perhaps because unused extents are truncated?). I am getting the index size from "totalIndexSize" in db.collection.stats().
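Roughly, this is what I'm doing in the mongo shell (the collection name mycoll and the indexed field x are just placeholders):

// Build an index, load data, delete it, and watch totalIndexSize at each step.
db.mycoll.ensureIndex({ x: 1 })
for (var i = 0; i < 100000; i++) { db.mycoll.insert({ x: i }) }
db.mycoll.stats().totalIndexSize   // large after the inserts
db.mycoll.remove({})               // delete every document
db.mycoll.stats().totalIndexSize   // roughly unchanged
db.repairDatabase()                // rebuild data files and indexes
db.mycoll.stats().totalIndexSize   // now close to zero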

However, that takes a long time (I've read it could be hours on a large database), and it's unclear to me how available the database remains for reads and writes while it runs. I'm guessing not very.

Since I want to run as few instances of mongod as possible, I want to understand more about how indexes are managed after deletes. Can anyone point me to anything or give any advice?


To summarize David's linked question:

  • There is no way other than db.repairDatabase().
  • If you need to minimize downtime, set up a master/slave configuration, run repairDatabase on the slave, then swap the slave and the master (a rough sketch follows below).
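
A rough sketch of the slave side in the mongo shell, assuming a slave that is already replicating from the master and a placeholder database name mydb; the restart and promotion steps depend on your deployment:

// Run on the slave once it has caught up with the master.
db.printSlaveReplicationInfo()            // confirm replication lag is near zero
db.getSiblingDB("mydb").repairDatabase()  // compact data files and indexes on the slave
// When the repair completes and the slave has caught up again, restart this
// node with --master, point clients at it, and re-attach the old master as a
// slave (repairing it in turn if needed).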


I think "Reducing MongoDB database file size" answers your question.
