
STDIN or file as mapper input in Hadoop environment?

As we need to read in a bunch of files into the mapper, in a non-Hadoop environment I use os.walk(dir) and file=open(path, mode) to read each file.
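For reference, here is a rough sketch of that non-Hadoop pattern (the directory path is a made-up example):

    import os

    # Non-Hadoop version: walk a local directory tree and read every
    # file line by line ('/home/user/data' is a hypothetical path).
    for root, dirs, files in os.walk('/home/user/data'):
        for name in files:
            with open(os.path.join(root, name), 'r') as f:
                for line in f:
                    pass  # parse each line here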

However, in a Hadoop environment, since I read that Hadoop Streaming converts the file input into the mapper's stdin and converts the reducer's stdout into file output, I have a few questions about how to handle file input:

  1. Do we have to read input from STDIN in mapper.py and let Hadoop Streaming convert the files in the HDFS input directory into STDIN?

  2. If I want to read each file separately and parse each line, how can I read input from a file in mapper.py?

My previous Python code for the non-Hadoop environment used: for root, dirs, files in os.walk('path of non-hdfs') .....

However, in the Hadoop environment I need to change 'path of non-hdfs' to the HDFS path I copyFromLocal to, but I have tried many variations with no success, such as os.walk('/user/hadoop/in') (which is the path I verified by running bin/hadoop dfs -ls), os.walk('home/hadoop/files') (my local path in the non-Hadoop environment), and even os.walk('hdfs://host:fs_port/user/hadoop/in')....

Can anyone tell me whether I can read input from a file using file operations in mapper.py, or whether I have to read input from STDIN?

Thanks.


Hadoop Streaming has to take input from STDIN. I think the confusion is that you're trying to write code to do some of the things Hadoop Streaming already does for you. I did the same thing when I first started with Hadoop.

Hadoop Streaming can read in multiple files, and even multiple zipped files, which it then parses, one line at a time, into the STDIN of your mapper. This is a helpful abstraction because it lets you write your mapper to be independent of file names and locations, so you can reuse your mappers and reducers for any input, which is handy later. You also don't want your mapper trying to grab files itself, because you have no way of knowing how many mappers you will have: if file names were coded into a mapper and that single mapper failed, you would never get output from the files hard-coded into it. So let Hadoop do the file management and keep your code as generic as possible.
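To make that concrete, here is a minimal sketch of a streaming mapper that reads lines from STDIN regardless of which file each line came from (the word-count logic is just an illustration; the tab-separated key/value output is the streaming default):

    #!/usr/bin/env python
    import sys

    # mapper.py: Hadoop Streaming feeds the contents of every file in
    # the HDFS input directory to this script, one line at a time, on STDIN.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        # Illustrative parsing: split the line into words.
        for word in line.split():
            # Emit key<TAB>value on STDOUT; streaming collects these,
            # sorts by key, and feeds them to the reducer's STDIN.
            print('%s\t%s' % (word, 1))

You would then launch the job with something like bin/hadoop jar contrib/streaming/hadoop-streaming-*.jar -input /user/hadoop/in -output /user/hadoop/out -mapper mapper.py -file mapper.py (the streaming jar's exact location varies by Hadoop version), and streaming handles enumerating the files under /user/hadoop/in for you.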
