
Processing paragraphs in text files as single records with Hadoop

Simplifying my problem a bit, I have a set of text files with "records" that are delimited by double newline characters, like this:

'multiline text'

'empty line'

'multiline text'

'empty line'

and so forth.

I need to transform each multiline unit separately and then perform mapreduce on them.

However, I am aware that with the default WordCount boilerplate in the Hadoop code, the input to the value parameter of the following function is just a single line, and there is no guarantee that it is contiguous with the previous input line.

public void map(LongWritable key, Text value, 
                OutputCollector<Text, IntWritable> output, 
                Reporter reporter) throws IOException ;

And I need the input value to actually be one whole unit of the double-newline-delimited multiline text.

Some searching turned up a RecordReader class and a getSplits method but no simple code examples that I could wrap my head around.

An alternative solution is to just replace all newline characters within each multiline record with space characters and be done with it. I'd rather not do this: there is quite a bit of text, so that preprocessing is expensive at runtime, and I would also have to modify a lot of code. Dealing with it through Hadoop would be most attractive for me.


If your files are small, then they won't get split: essentially, each file is one split assigned to one mapper instance. In this case, I agree with Thomas. You can build your logical record in your mapper class by concatenating strings, and you can detect a record boundary by looking for an empty string coming in as the value to your mapper.
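The accumulation logic inside such a mapper might look like the following plain-Java sketch (the class and method names are hypothetical, and the Hadoop plumbing is omitted so the core idea stands alone; a real mapper would emit each flushed record to the OutputCollector instead of returning a list):

```java
import java.util.ArrayList;
import java.util.List;

public class ParagraphGrouper {
    // Group incoming lines into paragraph records, flushing the buffer
    // whenever an empty line (the record boundary) arrives. Lines within
    // a record are joined with a space here; pick whatever separator
    // your downstream transform expects.
    public static List<String> group(List<String> lines) {
        List<String> records = new ArrayList<>();
        StringBuilder buf = new StringBuilder();
        for (String line : lines) {
            if (line.isEmpty()) {
                if (buf.length() > 0) {
                    records.add(buf.toString()); // record complete: flush
                    buf.setLength(0);
                }
            } else {
                if (buf.length() > 0) buf.append(' ');
                buf.append(line);
            }
        }
        if (buf.length() > 0) records.add(buf.toString()); // trailing record
        return records;
    }
}
```

In a Hadoop mapper the same pattern works because, within a single unsplit file, map() sees the lines in order, so the buffer state carried across calls stays consistent.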

However, if the files are big and do get split, then I don't see any option other than implementing your own text input format class. You could clone the existing Hadoop LineRecordReader and LineReader Java classes. You have to make a small change in your version of the LineReader class so that the record delimiter is two newlines instead of one. Once this is done, your mapper will receive multiple lines as the input value.
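Depending on your Hadoop version, you may not even need to clone anything: newer releases let you override the record delimiter used by TextInputFormat's LineRecordReader through the `textinputformat.record.delimiter` configuration property (check that your version supports it before relying on this). A minimal driver-side fragment, assuming the new `mapreduce` API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Configuration fragment: treat a blank line (two consecutive newlines)
// as the record separator, so each map() call receives one whole
// paragraph in its Text value.
Configuration conf = new Configuration();
conf.set("textinputformat.record.delimiter", "\n\n");
Job job = Job.getInstance(conf, "paragraph records");
// ... set mapper, reducer, and input/output paths as usual.
```

This keeps split handling in Hadoop's stock code path rather than in a hand-maintained fork of LineReader.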


What's the problem with it? Just append the previous lines to a StringBuilder and flush it when you reach a new record.
When you are using text files, they won't get split. For these cases Hadoop uses FileInputFormat, which only parallelizes up to the number of files available.

