
Hadoop: splitting a file into equal-sized chunks

I'm trying to learn how to divide a file stored in HDFS into splits and read them from different processes (on different machines).

What I expect is that if I have a SequenceFile containing 1200 records and 12 processes, I would see around 100 records per process. The way I divide the file is to get the length of the data, divide it by the number of processes, derive chunk/beg/end values for each split, and then pass that split to e.g. a SequenceFileRecordReader and retrieve the records in a simple while loop. The code is below.

private InputSplit getSplit(int id) throws IOException {
...
    for(FileStatus file: status) {
        long len = file.getLen();
        BlockLocation[] locations =
            fs.getFileBlockLocations(file, 0, len);
        if (0 < len) {
            long chunk = len/n;
            long beg = (id*chunk)+(long)1;
            long end = (id)*chunk;
            if(n == (id+1)) end = len;
            return new FileSplit(file, beg, end, locations[locations.length-1].getHosts());
        } 
    }
...
}
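
For reference, the standard FileSplit constructor takes a Path, a start offset, a length, and a host list (not an end offset). A minimal sketch of the byte-range arithmetic the loop above is aiming for, under that assumption, would be something like the following; note that byte offsets computed this way do not necessarily land on SequenceFile record boundaries:

    // n = number of processes, id = 0 .. n-1 (names taken from the snippet above)
    long chunk = len / n;
    long start = id * chunk;                              // first byte of this split
    long length = (id == n - 1) ? (len - start) : chunk;  // last split absorbs the remainder
    return new FileSplit(file.getPath(), start, length,
            locations[locations.length - 1].getHosts());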

However, the result shows that the sum of the records counted by all processes is different from the number of records stored in the file. What is the right way to divide the SequenceFile into even chunks and distribute them to different hosts?

Thanks.


I can't help but wonder why you are trying to do this yourself. Hadoop automatically splits your files, and 1200 records split into groups of 100 doesn't sound like a lot of data. If you elaborate on what your underlying problem is, someone might be able to help you more directly.

Here are my two ideas:


Option 1: Use Hadoop's automatic splitting behavior

Hadoop automatically splits your files. The number of blocks a file is split up into is the total size of the file divided by the block size. By default, one map task will be assigned to each block (not each file).

In your conf/hdfs-site.xml configuration file, there is a dfs.block.size parameter. Most people set this to 64 or 128 MB. However, if you are trying to do something tiny, like 100 sequence file records per block, you could set this really low... to say 1000 bytes. I've never heard of anyone wanting to do this, but it is an option.
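
For illustration, a minimal hdfs-site.xml entry might look like the following. (dfs.block.size is the property name in the older releases this assumes; newer releases use dfs.blocksize, and the value generally has to be a multiple of the checksum chunk size, 512 bytes by default.)

    <property>
      <name>dfs.block.size</name>
      <value>1048576</value> <!-- 1 MB, deliberately tiny for illustration -->
    </property>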


Option 2: Use a MapReduce job to split your data.

Have your job use an "identity mapper" (basically, implement Mapper and don't override map). Also have it use an "identity reducer" (basically, implement Reducer and don't override reduce). Set the number of reducers to the number of splits you want. Say you have three sequence files that you want split into a total of 25 files: you would load up those 3 files and set the number of reducers to 25. Records will get spread more or less evenly across the reducers (the default partitioner hashes each record's key), and what you will end up with is close to 25 equal splits.

This works because the identity mappers and reducers effectively don't do anything, so your records stay the same. Each record gets sent to some reducer and then written out, one file per reducer, as part-r-xxxx files. Each of those files will contain your sequence file(s) split into somewhat even chunks.
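
A minimal driver sketch of this idea, assuming the newer org.apache.hadoop.mapreduce API and a SequenceFile whose keys and values are Text (adjust the key/value classes and the reducer count to match your data):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

    public class ResplitSequenceFiles {
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "resplit sequence files");
            job.setJarByClass(ResplitSequenceFiles.class);

            // The base Mapper and Reducer classes are identity implementations:
            // they pass every record through unchanged.
            job.setMapperClass(Mapper.class);
            job.setReducerClass(Reducer.class);

            // One output file (part-r-xxxx) per reducer -> 25 roughly even splits.
            job.setNumReduceTasks(25);

            // Must match the key/value classes stored in the input SequenceFile.
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);

            job.setInputFormatClass(SequenceFileInputFormat.class);
            job.setOutputFormatClass(SequenceFileOutputFormat.class);

            SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
            SequenceFileOutputFormat.setOutputPath(job, new Path(args[1]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Keep in mind that with the default hash partitioner the output files are only roughly equal in size; they are not guaranteed to contain exactly the same number of records.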
