MongoDB MapReduce is much slower than pure Java processing?
I wanted to count all keys (including embedded ones) of the documents in a collection. First I wrote a Java client to solve this; it took less than 4 seconds to show the result. Then I wrote a map/reduce function. The result was correct, but running the function took over 30 seconds! I expected the map/reduce function to be faster since it is executed server side. The Java client has to fetch every document from the server, yet it is much faster. Why is that?
// Here is my map function:
map = function () {
    for (var key in this) {
        emit(key, {count : 1});
        // recurse into embedded documents to emit their dotted key paths
        if (isNestedObject(this[key])) {
            m_sub(key, this[key]);
        }
    }
}
// Here is my reduce function:
reduce = function (key, emits) {
    var total = 0;
    // sum the per-key counts emitted by map()
    for (var i in emits) {
        total += emits[i].count;
    }
    return {count : total};
}
// Here is the call to mapreduce:
mr = db.runCommand({"mapreduce" : "keyword", "map" : map, "reduce" : reduce,
    // scope makes these helper functions visible inside map()
    "scope" : {
        isNestedObject : function (v) {
            return v && typeof v === "object";
        },
        m_sub : function (base, value) {
            // emit one key per field, recursing with a dotted prefix
            for (var key in value) {
                emit(base + "." + key, {count : 1});
                if (isNestedObject(value[key])) {
                    m_sub(base + "." + key, value[key]);
                }
            }
        }
    }
})
// Here is the output:
{
    "result" : "tmp.mr.mapreduce_1292252775_8",
    "timeMillis" : 39087,
    "counts" : {
        "input" : 20168,
        "emit" : 986908,
        "output" : 1934
    },
    "ok" : 1
}
// Here is my Java client:
import java.text.SimpleDateFormat;
import java.util.HashSet;
import java.util.Set;

import com.google.common.collect.HashMultiset;
import com.google.common.collect.Multiset;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

public class KeyCounter {

    // Recursively collects the key paths of a document, using dot notation
    // for embedded documents. Arrays (BasicDBList) are not descended into.
    public static Set<String> recursiv(DBObject o) {
        Set<String> keysIn = o.keySet();
        Set<String> keysOut = new HashSet<String>();
        for (String s : keysIn) {
            if (o.get(s).getClass().getSimpleName().contains("Object")) {
                DBObject o2 = (DBObject) o.get(s);
                for (String s2 : recursiv(o2)) {
                    keysOut.add(s + "." + s2);
                }
            } else {
                keysOut.add(s);
            }
        }
        return keysOut;
    }

    public static void main(String[] args) throws Exception {
        final Mongo mongo = new Mongo("xxx.xxx.xxx.xxx");
        final DB db = mongo.getDB("keywords");
        final DBCollection keywordTable = db.getCollection("keyword");
        Multiset<String> count = HashMultiset.create();

        long start = System.currentTimeMillis();
        DBCursor curs = keywordTable.find();
        while (curs.hasNext()) {
            DBObject o = curs.next();
            for (String s : recursiv(o)) {
                count.add(s);
            }
        }
        long end = System.currentTimeMillis();
        long duration = end - start;

        System.out.println(new SimpleDateFormat("mm:ss:SS").format(Long.valueOf(duration)));
        System.out.println("duration:" + duration + " ms");
        //System.out.println(count);
        System.out.println(count.elementSet().size());
    }
}
// Here is the output:
00:03:726
duration:3726 ms
1898
Don't worry about the different number of results (1934 vs. 1898): map/reduce also counts keys inside arrays, which the Java client does not. I'd be grateful if someone could shed some light on the different execution times.
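For example (a made-up document, just to illustrate where the extra keys come from): in JavaScript an array is an object, so isNestedObject() is true for it and m_sub() recurses over its indices, while the Java client only recurses when the class name contains "Object", which BasicDBList does not:

// Illustration with a hypothetical document: arrays are objects in
// JavaScript, so isNestedObject() is true and m_sub() recurses into them.
var doc = { tags : ["a", "b"] };
print(typeof doc.tags === "object"); // true -> map/reduce emits "tags.0", "tags.1"
for (var key in doc.tags) {
    print("tags." + key);            // prints tags.0 and tags.1
}
// The Java client's recursiv() casts only values whose class name contains
// "Object"; arrays decode to BasicDBList, so it counts "tags" as one key.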
This isn't much of an answer, but in the O'Reilly MongoDB book, Kristina says that map/reduce queries are among the slowest things you can do, but they are also the most flexible and the most scalable. Mongo can break the query apart and spread the processing across all the nodes, which means you should get linear scalability with each node you add. But on a single node, even a group-by query will be faster than map/reduce.
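For instance, a simple grouped count on a single node can be done with the group command; a minimal sketch, assuming a hypothetical category field that is not from the question above:

// A minimal sketch of a group-by count; "category" is a hypothetical field.
db.keyword.group({
    key : { category : 1 },            // group by this field
    reduce : function (doc, out) {     // called once per matching document
        out.count++;
    },
    initial : { count : 0 }            // per-group accumulator
});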
Another reason is that MongoDB has problems with its JavaScript engine, which restricts it to a single thread. MongoDB plans to switch to Google's V8 JavaScript engine, which will hopefully allow map/reduce to run multi-threaded. See http://www.mongodb.org/display/DOCS/MapReduce#MapReduce-Parallelism and https://jira.mongodb.org/browse/SERVER-2407.
If you can, you should look into the aggregation framework. It is not quite as flexible as MapReduce, but its performance is impressive. I used it to aggregate a large amount of collection data into hourly, daily, and monthly summaries; in our situation it was more than 50 times faster than MapReduce.
We opted for a design with segmented collections of identical structure, which allowed us to run small but numerous aggregation jobs; the pipeline concept of the aggregation command works great.
I also found the $group command very performant, but its limits on output size and on sharded collections restrain its usage.
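To give an idea of the shape of such a job, here is a minimal sketch of an hourly rollup with the aggregation framework; the events collection and its ts/value fields are hypothetical names, not taken from our actual schema:

// A minimal sketch of an hourly summary; "events", "ts" and "value" are
// hypothetical names, not from a real schema.
db.events.aggregate([
    { $match : { ts : { $gte : ISODate("2012-01-01"), $lt : ISODate("2012-01-02") } } },
    { $group : {
        _id   : { hour : { $hour : "$ts" } },  // one bucket per hour
        total : { $sum : "$value" },           // summed measurement
        count : { $sum : 1 }                   // number of events
    } }
]);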