Why does the Hadoop plugin for Eclipse ignore changes I make to my program?
I recently set up a VMware instance with Hadoop on my Windows 7 machine. I also set up the Hadoop plugin within Eclipse and successfully ran an example map-reduce program on the VM. However, the changes I make to the map-reduce program in Eclipse don't get reflected in the run. When I run it through Eclipse, it still runs the initial program. I tried setting up a new map-reduce project from scratch using my changed code, and I was able to run it with my changes, but any change I make after the first run doesn't take effect. If the code has compile errors, Eclipse complains and won't run it; but when it does run, it still runs the first version. I am using Hadoop 0.18.0, the VMware image from Yahoo's tutorial, and Eclipse 3.3.2. What am I missing?
In case anybody falls into the same trap, here's how I resolved this problem.
The solution is to run the job via "Run → Run As → Run on Hadoop". This is what creates the .jar file (and the site.conf file) that gets passed to the Hadoop instance. After the .jar file is created, it is copied to a folder listed under "Run/Debug settings → Classpath" of the project, and that jar is what Hadoop actually executes.
If instead you run your map-reduce program as a regular Java application (e.g. using the run shortcut keys), as I was doing, it still runs a Hadoop program, but the .jar file for Hadoop does not get recreated. The result is that the same stale jar is run again and again, no matter what you change in the source.
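If you want to sidestep the plugin's jar-caching behavior entirely, you can package and submit the job by hand. This is only a sketch: the paths (`bin/` for Eclipse's compiled classes, `/usr/local/hadoop` for the Hadoop install) and the driver class name `MyDriver` are assumptions for illustration, not values from the original setup.

```shell
# Package the classes Eclipse compiled (assumed to be in bin/) into a
# fresh jar, so Hadoop can never pick up a stale build:
jar cf myjob.jar -C bin .

# Submit the freshly built jar to the Hadoop instance yourself
# (HADOOP_HOME and the driver class are hypothetical examples):
/usr/local/hadoop/bin/hadoop jar myjob.jar MyDriver input output
```

Because the jar is rebuilt on every invocation, any edit that compiles in Eclipse is guaranteed to be in the jar that Hadoop runs.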