
How to tell MapReduce how many mappers to use?

I am trying to optimize the speed of a MapReduce job.

Is there any way I can tell Hadoop to use a particular number of mapper/reducer processes? Or, at least, a minimum number of mapper processes?

In the documentation, it is specified that you can do that with the method

public void setNumMapTasks(int n)

of the JobConf class.

That way is now obsolete, though, since I am starting the job with the Job class. What is the right way of doing this?


The number of map tasks is determined by the number of blocks in the input. If the input file is 100 MB and the HDFS block size is 64 MB, then the input file takes 2 blocks, so 2 map tasks will be spawned. JobConf.setNumMapTasks() (1) is only a hint to the framework.
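To illustrate, here is a minimal driver sketch using the new org.apache.hadoop.mapreduce API (the class name, job name and paths are placeholders, not anything from your job): it passes the map-task count as a hint via the mapred.map.tasks property and, more reliably, lowers the maximum split size so the input is cut into more splits, and therefore more map tasks.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapperCountSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hint only: the framework may ignore this, just like JobConf.setNumMapTasks().
        conf.setInt("mapred.map.tasks", 10);

        Job job = new Job(conf, "mapper-count-sketch");   // placeholder job name
        job.setJarByClass(MapperCountSketch.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // placeholder input path
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // placeholder output path

        // More reliable than the hint above: cap the split size so a 100 MB
        // input yields ~4 splits (and therefore ~4 map tasks) instead of the
        // 2 that a 64 MB block size alone would give.
        FileInputFormat.setMaxInputSplitSize(job, 32L * 1024 * 1024);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Going the other way, FileInputFormat.setMinInputSplitSize() can be used to get fewer, larger map tasks if task start-up overhead is the bottleneck.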

The number of reducers is set by the JobConf.setNumReduceTasks() function. This determines the total number of reduce tasks for the job. Also, the mapred.tasktracker.reduce.tasks.maximum parameter determines the maximum number of reduce tasks that can run in parallel on a single task tracker node.
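With the new API the equivalent call is Job.setNumReduceTasks(). A minimal sketch, again with placeholder class, job and path names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReducerCountSketch {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "reducer-count-sketch"); // placeholder name
        job.setJarByClass(ReducerCountSketch.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // placeholder input path
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // placeholder output path

        // Unlike the map-task hint, this value is honoured exactly:
        // the job will run with 4 reduce tasks in total.
        job.setNumReduceTasks(4);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that mapred.tasktracker.reduce.tasks.maximum is a per-node TaskTracker setting (normally in mapred-site.xml, read when the daemon starts), so it caps concurrency on each node rather than being something you set per job.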

You can find more information on how the number of map and reduce tasks is chosen at (2).

(1) - http://hadoop.apache.org/mapreduce/docs/r0.21.0/api/org/apache/hadoop/mapred/JobConf.html#setNumMapTasks%28int%29
(2) - http://wiki.apache.org/hadoop/HowManyMapsAndReduces
