
Error while running MapReduce program

I am getting the following error while running a MapReduce program.

The program sorts the output using TotalOrderPartitioner.

I have a 2-node cluster.
When I run the program with -D mapred.reduce.tasks=2 it works fine,
but it fails with the error below when run with -D mapred.reduce.tasks=3.


java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
        at org.apache.hadoop.mapred.MapTask$OldOutputCollector.<init>(MapTask.java:448)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
        ... 6 more
Caused by: java.lang.IllegalArgumentException: Can't read partitions file
        at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:91)
        ... 11 more
Caused by: java.io.IOException: Split points are out of order
        at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:78)
        ... 11 more

Please let me know what's wrong here.

Thanks
R


The maximum number of reducers that can be specified is equal to the number of nodes in your cluster. Since the number of nodes here is 2, you cannot set the number of reducers to be greater than 2.


Sounds like you don't have enough keys in your partition file. The docs say that TotalOrderPartitioner requires at least N - 1 keys in your partition SequenceFile, where N is the number of reducers.
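To see why N - 1 keys are needed, here is a minimal self-contained sketch (not the Hadoop source; the class and method names are made up for illustration) of how a total-order partitioner routes a key to one of N reducers using N - 1 sorted split points:

```java
import java.util.Arrays;

// Sketch of total-order partitioning: N - 1 strictly increasing split
// points divide the key space into N contiguous ranges, one per reducer.
public class SplitPointDemo {
    // splitPoints must be sorted and strictly increasing; its length
    // is numReducers - 1. Returns the reducer index for the given key.
    static int partition(String key, String[] splitPoints) {
        int pos = Arrays.binarySearch(splitPoints, key);
        // binarySearch returns (-(insertionPoint) - 1) when not found;
        // the insertion point is exactly the target partition.
        return pos < 0 ? -pos - 1 : pos + 1;
    }

    public static void main(String[] args) {
        // Two split points -> three reducers (mapred.reduce.tasks=3)
        String[] splits = {"g", "p"};
        System.out.println(partition("apple", splits)); // 0
        System.out.println(partition("mango", splits)); // 1
        System.out.println(partition("zebra", splits)); // 2
    }
}
```

With fewer than N - 1 distinct keys in the partition file, there is no way to carve the key space into N ranges, which is why the partitioner refuses to configure itself.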


I also ran into this problem. Checking the source code, I found it is caused by sampling: increasing the number of reducers can make the sampled split points contain duplicate elements, which throws this error. It depends on your data. Run `hadoop fs -text _partition` to look at the generated partition file; if your tasks are failing, it will contain duplicate elements.
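The check that produces "Split points are out of order" can be sketched like this (a standalone approximation, not the actual Hadoop code): the partitioner requires the keys in the partition file to be strictly increasing, and duplicate sampled split points violate that.

```java
// Sketch of the ordering check behind "Split points are out of order":
// split points must be strictly increasing; duplicates fail the check.
public class SplitPointCheck {
    static boolean splitsInOrder(String[] splitPoints) {
        for (int i = 1; i < splitPoints.length; i++) {
            // compareTo >= 0 means a duplicate or out-of-order key
            if (splitPoints[i - 1].compareTo(splitPoints[i]) >= 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(splitsInOrder(new String[]{"g", "p"}));      // true
        // A duplicate split point, as sampling can produce when the
        // reducer count is too high for the key distribution:
        System.out.println(splitsInOrder(new String[]{"g", "g", "p"})); // false
    }
}
```

This is why the job works with 2 reducers (one split point) but fails with 3: with two split points, the sampler picked the same key twice.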

