
MongoDB's performance on aggregation queries

After hearing so many good things about MongoDB's performance, we decided to give MongoDB a try on a problem we have. I started by moving all the records we had in several MySQL databases into a single collection in MongoDB. The result is a collection of 29 million documents (each with at least 20 fields) that takes up around 100 GB of disk space. We put them all in one collection because the documents share the same structure and we want to query and aggregate across all of them.

I created some indexes to match my queries; otherwise even a simple count() would take ages. However, queries such as distinct() and group() still take far too long.

Example:

// creation of a compound index    
db.collection.ensureIndex({'metadata.system':1, 'metadata.company':1})

// query to get all combinations of company and system
db.collection.group({key: { 'metadata.system':true, 'metadata.company':true }, reduce: function(obj,prev) {}, initial: {} });

I took a look at the mongod log and it has a lot of lines like these (while executing the query above):

Thu Apr  8 14:40:05 getmore database.collection cid:973023491046432059 ntoreturn:0 query: {}  bytes:1048890 nreturned:417 154ms
Thu Apr  8 14:40:08 getmore database.collection cid:973023491046432059 ntoreturn:0 query: {}  bytes:1050205 nreturned:414 430ms
Thu Apr  8 14:40:18 getmore database.collection cid:973023491046432059 ntoreturn:0 query: {}  bytes:1049748 nreturned:201 130ms
Thu Apr  8 14:40:27 getmore database.collection cid:973023491046432059 ntoreturn:0 query: {}  bytes:1051925 nreturned:221 118ms
Thu Apr  8 14:40:30 getmore database.collection cid:973023491046432059 ntoreturn:0 query: {}  bytes:1053096 nreturned:250 164ms
...
Thu Apr  8 15:04:18 query database.$cmd ntoreturn:1 command  reslen:4130 1475894ms

This query took 1475894ms (roughly 25 minutes), which is far longer than I would expect given that the result list has only around 60 entries. First of all, is this expected given the large number of documents in my collection? Are aggregation queries in general this slow in MongoDB? Any thoughts on how I can improve the performance?

I am running mongod on a single machine with a dual-core CPU and 10 GB of memory.

Thank you.


The idea is that you improve the performance of aggregation queries by using MapReduce on a sharded database that is distributed over multiple machines, along the lines sketched below.
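As a concrete starting point, a minimal map-reduce job for the grouping in the question might look like the following; the same call can be issued against a sharded cluster through mongos. This is a sketch, and the output collection name 'system_company_combos' is a hypothetical choice:

// map: emit one key per (system, company) pair found in a document
var map = function() {
    emit({ system: this.metadata.system, company: this.metadata.company }, 1);
};

// reduce: count how many documents share each pair
var reduce = function(key, values) {
    return Array.sum(values);
};

// write one output document per distinct pair to a separate collection
db.collection.mapReduce(map, reduce, { out: 'system_company_combos' });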

I compared the performance of Mongo's MapReduce with a GROUP BY query in Oracle on the same machine, using a collection/table with approximately 14 million documents/rows. I found that Mongo was approximately 25 times slower. Assuming near-linear scaling, this means I would have to shard the data over at least 25 machines to get the same performance from Mongo that Oracle delivers on a single machine.

Exporting the data from Mongo via mongoexport.exe, loading the exported data into Oracle as an external table, and running the GROUP BY in Oracle was much faster than using Mongo's own MapReduce.
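For reference, an export along these lines might look like the following (a sketch; the database, collection, and field names are placeholders taken from the question):

mongoexport -d database -c collection -f metadata.system,metadata.company --csv -o export.csv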


A couple of things:

1) Your group query is processing lots of data. While your result set is small, it looks like it's doing a full table scan of all of the data in your collection in order to generate that small result. This is probably the root cause of the slowness. To confirm, look at the disk performance of your server through iostat while the query is running, as that is likely the bottleneck; see the sketch below.
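A minimal way to do that (a sketch; the 2-second sampling interval is arbitrary) is to run iostat in extended mode in a second terminal while the group() executes:

# extended device statistics, refreshed every 2 seconds
iostat -x 2

# consistently high %util and await on the data volume while the
# query runs would confirm that disk I/O is the bottleneck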

2) As has been pointed out in other answers, the group command uses the JavaScript interpreter, which is going to limit performance. You might try the new aggregation framework that was released as a beta in 2.1 (note: this is an unstable release as of Feb 24, 2012). See http://blog.mongodb.org/post/16015854270/operations-in-the-new-aggregation-framework for a good introduction. This won't overcome the data volume problem in (1), but the framework is implemented in C++, so if JavaScript time is the bottleneck it should be much faster.
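For example, the group() query from the question might translate into the aggregation framework roughly like this (a sketch based on the 2.1 beta syntax):

// group on the (system, company) pair; the distinct _id values
// are the same combinations the original group() query produced
db.collection.aggregate([
    { $group: { _id: { system: '$metadata.system', company: '$metadata.company' } } }
]);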

3) Another approach would be to use incremental map-reduce to generate a second collection with your grouped results. The idea is that you'd run a map-reduce job to aggregate your results once, and then periodically run another map-reduce job that re-reduces new data into the existing collection. Then you can query this second collection from your app rather than running a group command every time; a rough sketch follows.
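A sketch of the incremental step, assuming the documents carry a timestamp (the field 'created_at', the tracking variable 'lastRunTime', and the output collection 'grouped_results' are all hypothetical names):

// same map/reduce shape as a plain grouping job
var map = function() {
    emit({ system: this.metadata.system, company: this.metadata.company }, 1);
};
var reduce = function(key, values) { return Array.sum(values); };

// e.g. persisted by the app at the end of the previous run
var lastRunTime = ISODate('2012-02-01T00:00:00Z');

db.collection.mapReduce(map, reduce, {
    query: { created_at: { $gt: lastRunTime } },  // only re-process new documents
    out: { reduce: 'grouped_results' }            // re-reduce into the existing results
});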


Aggregation (map-reduce or otherwise) is very slow in Mongo because it is done by the JavaScript VM, not the database engine. This continues to be a limitation of this (very good, imo) DB for time-series data.

