
Why are all the reduce tasks ending up on a single machine?

I wrote a relatively simple map-reduce program on the Hadoop platform (Cloudera distribution). Each map and reduce task writes some diagnostic information to standard output in addition to its regular work.

However, when I look at these log files, I find that the map tasks are distributed fairly evenly among the nodes (I have 8 nodes), but the reduce tasks' standard output logs can only be found on a single machine.

I guess that means all the reduce tasks ended up executing on a single machine, which is problematic and confusing.

Does anybody have any idea what's happening here? Is it a configuration problem? How can I make the reduce tasks distribute evenly as well?


If the output records from your mappers all have the same key, they will all be sent to a single reducer.
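
For reference, the default HashPartitioner picks a reducer purely from the key's hash, so identical keys always land on the same reducer. Here is a minimal sketch of that logic (the class name is made up for illustration; the real implementation ships with Hadoop's mapreduce API):

// Minimal sketch of the default HashPartitioner's logic: records with equal
// keys always hash to the same partition, so a job whose mappers emit only
// one distinct key keeps just one reducer busy.
public class DefaultHashPartitionerSketch<K, V> extends org.apache.hadoop.mapreduce.Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}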

If your job has multiple reducers, but they all queue up on a single machine, then you have a configuration issue.
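
One setting worth checking (assuming the classic MRv1 TaskTracker setup that CDH used) is how many reduce slots each node advertises; if only one TaskTracker has reduce slots configured, every reducer will queue up on that node. The relevant mapred-site.xml entry looks roughly like this (the value 2 is only an illustrative default):

<!-- mapred-site.xml on each worker node: maximum number of reduce tasks
     this TaskTracker will run simultaneously. If this is 0 on all but one
     node, all reducers end up on that one node. -->
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>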

Use the web interface (http://MACHINE_NAME:50030) to monitor the job and see which reducers it has as well as which machines are running them. You can also drill into further details there that should help you figure out what is going on.

A couple of questions about your configuration:

  • How many reducers are running for the job?
  • How many reducer slots are available on each node?
  • Does the node running the reducers have better hardware than the other nodes?


Hadoop decides which reducer will process which output keys by using a Partitioner. If you are only outputting a few keys and want an even distribution across your reducers, you may be better off implementing a custom Partitioner for your output data, e.g.

import org.apache.hadoop.mapreduce.Partitioner;

public class MyCustomPartitioner extends Partitioner<KEY, VALUE>
{
    @Override
    public int getPartition(KEY key, VALUE value, int numPartitions) {
        // Decide, based on the key and/or value, which partition (reducer)
        // this record should go to; the result must be in [0, numPartitions).
        return 0; // replace with your own partitioning logic
    }
}

You can then set this custom partitioner in the job configuration with

Job job = new Job(conf, "My Job Name");
job.setPartitionerClass(MyCustomPartitioner.class);

You can also implement the Configurable interface in your custom Partitioner if you want to do any further configuration based on job settings. Also, check that you haven't set the number of reduce tasks to 1 anywhere in the configuration (look for "mapred.reduce.tasks") or in code, e.g.

job.setNumReduceTasks(1); 
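
Conversely, you can explicitly ask for several reduce tasks when setting up the job; the value 8 below is only an illustrative choice matching the 8-node cluster described in the question:

// Explicitly request multiple reduce tasks (8 is just an illustrative value;
// tune it to your cluster's reduce slot capacity).
job.setNumReduceTasks(8);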