
Running multiple MapReduce jobs in Hadoop

I want to run a chain of MapReduce jobs, so the easiest solution seems to be JobControl. Say I have two jobs, job1 and job2, and I want to run job2 after job1. Well, I ran into some problems. After hours of debugging, I narrowed the code down to these lines:

JobConf jobConf1 = new JobConf();  
JobConf jobConf2 = new JobConf();  
System.out.println("*** Point 1");
Job job1 = new Job(jobConf1);  
System.out.println("*** Point 2");
Job job2 = new Job(jobConf2);
System.out.println("*** Point 3");

I keep getting this output when running the code:

*** Point 1    
10/12/06 17:19:30 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
*** Point 2    
10/12/06 17:19:30 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
*** Point 3

My guess is that my problem is somehow related to the "Cannot initialize JVM Metrics ..." line. What does it mean? And how can I instantiate more than one job, in order to pass them to JobControl?

When I added job1.waitForCompletion(true) before initializing the second job, it gave me this error:

10/12/07 11:28:21 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/workspace/WikipediaSearch/__TEMP1
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
        at ch.ethz.nis.query.HadoopQuery.run(HadoopQuery.java:353)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at ch.ethz.nis.query.HadoopQuery.main(HadoopQuery.java:308)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

__TEMP1 is the output folder of the first job, which I want to use as the input for the second one. And even though I have this waitForCompletion line in my code, it still complains that the path doesn't exist.


Wow, after two days of debugging, it turns out the problem is Hadoop's naming rule for internal directories. Apparently, you cannot choose a MapReduce input or output directory whose name starts with an underscore "_", because Hadoop reserves such names for its own internal files (e.g. _logs) and filters them out when listing inputs. That's frustrating, and the warnings and errors were no help at all.
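To illustrate (this is a standalone check, not Hadoop code): as far as I can tell, FileInputFormat's built-in path filter skips any name starting with "_" or ".", which is why a directory named __TEMP1 is invisible to the second job and gets reported as "Input path does not exist":

```java
public class HiddenPathCheck {
    // Mirrors the behavior of FileInputFormat's hidden-file filter:
    // names starting with "_" or "." are skipped when listing input paths.
    static boolean isHidden(String name) {
        return name.startsWith("_") || name.startsWith(".");
    }

    public static void main(String[] args) {
        System.out.println(isHidden("__TEMP1")); // true  -> filtered out, "path does not exist"
        System.out.println(isHidden("temp1"));   // false -> visible to the second job
    }
}
```

Renaming the intermediate directory to something like TEMP1 makes the error go away.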


Is it possible that you can't create a job while another one hasn't finished? I use Hadoop 0.20.2 (note that JobConf is deprecated; Hadoop claims to support backwards compatibility, but in my experience it doesn't quite hold) and I've done basically the same thing and never had that problem. Do you still have the problem if you add job1.waitForCompletion(true) before job2 is created?
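For what it's worth, if you want the dependency handled for you instead of blocking on waitForCompletion, here is a minimal sketch using the old-API JobControl from org.apache.hadoop.mapred.jobcontrol (present in 0.20.2). The group name and the job configuration are placeholders; you'd set your mapper/reducer classes and paths as usual:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

public class ChainJobs {
    public static void main(String[] args) throws Exception {
        JobConf conf1 = new JobConf();
        JobConf conf2 = new JobConf();
        // ... configure mapper/reducer classes, input and output paths here ...

        Job job1 = new Job(conf1);
        Job job2 = new Job(conf2);
        job2.addDependingJob(job1);        // job2 starts only after job1 succeeds

        JobControl jc = new JobControl("chain");
        jc.addJob(job1);
        jc.addJob(job2);

        // JobControl implements Runnable: run it in its own thread and poll
        Thread runner = new Thread(jc);
        runner.start();
        while (!jc.allFinished()) {
            Thread.sleep(1000);
        }
        jc.stop();
    }
}
```

Note that this requires the Hadoop jars on the classpath and a configured cluster (or local mode) to actually run.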

