
Error in Hadoop MapReduce

When I run a MapReduce program using Hadoop, I get the following error.

10/01/18 10:52:48 INFO mapred.JobClient: Task Id : attempt_201001181020_0002_m_000014_0, Status : FAILED
  java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
10/01/18 10:52:48 WARN mapred.JobClient: Error reading task outputhttp://ubuntu.ubuntu-domain:50060/tasklog?plaintext=true&taskid=attempt_201001181020_0002_m_000014_0&filter=stdout
10/01/18 10:52:48 WARN mapred.JobClient: Error reading task outputhttp://ubuntu.ubuntu-domain:50060/tasklog?plaintext=true&taskid=attempt_201001181020_0002_m_000014_0&filter=stderr

What is this error about?


One reason Hadoop produces this error is that the directory containing the task log files has too many entries. This is a limit of the ext3 filesystem, which allows at most 32000 links per inode, and every subdirectory counts as a link against its parent directory.

Check how many entries there are in your logs directory, hadoop/userlogs.

A simple test for this problem is to try creating a directory from the command line, for example:

$ mkdir hadoop/userlogs/testdir

If userlogs already has too many subdirectories, the mkdir should fail and report that there are too many links.
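A quick way to count the entries (assuming userlogs sits under your Hadoop installation directory, as above) is:

$ ls hadoop/userlogs | wc -l

If that count is close to 32000, clearing out old task logs should make the error go away.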


I was having the same issue when I ran out of disk space on the partition holding the log directory.
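You can check the free space on that partition with, for example (substitute your actual log directory for the placeholder path):

$ df -h /path/to/hadoop/logs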


Another cause can be a JVM error: you try to allocate more dedicated heap space to the JVM than is actually available on your machine.

Sample code:
conf.set("mapred.child.java.opts", "-Xmx4096m");

Error message:
Error occurred during initialization of VM
Could not reserve enough space for object heap

Solution: Replace the -Xmx value with an amount of memory that your machine can actually provide to the JVM (e.g. "-Xmx1024m").
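Applied to the sample code above, that would be:

conf.set("mapred.child.java.opts", "-Xmx1024m");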


Increase your ulimit to unlimited, or alternatively reduce the allocated memory.
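For example (this answer does not say which limit is meant; given the memory context, the virtual memory limit is a likely candidate, and it must be raised for the user that actually runs the Hadoop tasks):

$ ulimit -v            # show the current virtual memory limit, in KB
$ ulimit -v unlimited  # remove the limit for the current shell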


If you create a runnable JAR file in Eclipse, it can cause this error on the Hadoop system. In the export dialog, choosing to extract the required libraries into the generated JAR (rather than packaging them) solved my problem.
