
distributed cache

I am working with Hadoop 0.19 on openSUSE Linux. I am not using a cluster; I run my Hadoop code on my machine itself. I follow the standard technique for putting files in the distributed cache, but instead of accessing the files from the distributed cache again and again, I store the contents of the file in an array. This extraction from the file is done in the configure() function. I get a NullPointerException when I try to use the file name. This is the part of the code:

// ... part of main() ...
DistributedCache.addCacheFile(new URI("/home/hmobile/hadoop-0.19.2/output/part-00000"), conf2);
DistributedCache.addCacheFile(new URI("/home/hmobile/hadoop-0.19.2/output/part-00001"), conf2);
// ...

// ... part of the mapper ...
public void configure(JobConf conf2)
{
    String wrd;
    String line;
    try {
        localFiles = DistributedCache.getLocalCacheFiles(conf2);
        System.out.println(localFiles[0].getName()); // error: NullPointerException
    } catch (IOException ex) {
        Logger.getLogger(blur2.class.getName()).log(Level.SEVERE, null, ex);
    }
    for (Path f : localFiles) // error: NullPointerException
    {
        if (!f.getName().endsWith("crc"))
        {
            BufferedReader br = null;
            try {
                br = new BufferedReader(new FileReader(f.toString()));
                // ... (rest of the method truncated in the post)
Can such processing not be done in configure()?
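For reference, here is a defensive variant of that configure() which guards against getLocalCacheFiles() returning null. This is a sketch rather than code from the post: the class name CacheAwareMapper is invented, and the null guard reflects the local-runner behavior described in the answer below.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;

public class CacheAwareMapper extends MapReduceBase {
    private Path[] localFiles;

    @Override
    public void configure(JobConf conf2) {
        try {
            localFiles = DistributedCache.getLocalCacheFiles(conf2);
        } catch (IOException ex) {
            return; // cache lookup failed; leave localFiles null
        }
        if (localFiles == null) {
            return; // local job runner: the files were never localized
        }
        for (Path f : localFiles) {
            if (f.getName().endsWith("crc")) {
                continue; // skip checksum files
            }
            try {
                BufferedReader br = new BufferedReader(new FileReader(f.toString()));
                String line;
                while ((line = br.readLine()) != null) {
                    // store each line in the in-memory array, as in the post
                }
                br.close();
            } catch (IOException ex) {
                // log and continue, as the original code does
            }
        }
    }
}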


This will depend on whether you're using the local job runner (mapred.job.tracker=local) or running in pseudo-distributed mode (i.e. mapred.job.tracker=localhost:8021 or =mynode.mydomain.com:8021). The distributed cache does NOT work in local mode, only in pseudo-distributed and fully distributed modes.

Otherwise, using the distributed cache in configure() is fine.
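One way to confirm which mode a job will run in is to read mapred.job.tracker from the driver's JobConf before submitting. A minimal sketch, assuming conf2 is the JobConf built in main(); the helper name warnIfLocal and the message text are mine, not Hadoop's:

import org.apache.hadoop.mapred.JobConf;

public class ModeCheck {
    // Warn when the job would use the local runner, where the distributed
    // cache is never localized. The property defaults to "local" if unset.
    static void warnIfLocal(JobConf conf2) {
        String tracker = conf2.get("mapred.job.tracker", "local");
        if ("local".equals(tracker)) {
            System.err.println("mapred.job.tracker=local: "
                    + "DistributedCache.getLocalCacheFiles() will return null.");
        }
    }
}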
