
Objects from memory as input for Hadoop/MapReduce?

I am working on parallelizing an algorithm, which roughly does the following:

  1. Read several text documents with a total of 10k words.
  2. Create an object for every word in the text corpus.
  3. Create a pair between all word-objects (yes, O(n²)) and return the most frequent pairs (see the in-memory sketch below).
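To make step 3 concrete, here is a minimal in-memory sketch of what I mean (the `PairCount` class and variable names are just illustrative, not my actual code):

```java
import java.util.*;

public class PairCount {
    public static void main(String[] args) {
        // Stand-in for the word objects created in step 2.
        List<String> words = Arrays.asList("alpha", "beta", "alpha", "gamma");

        // Count every pair of words -- this is the O(n^2) part of step 3.
        Map<String, Integer> pairCounts = new HashMap<>();
        for (int i = 0; i < words.size(); i++) {
            for (int j = i + 1; j < words.size(); j++) {
                String pair = words.get(i) + "," + words.get(j);
                pairCounts.merge(pair, 1, Integer::sum);
            }
        }

        // The most frequent pairs can then be read off by sorting by count.
        pairCounts.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(10)
                .forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
    }
}
```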

I would like to parallelize step 3 by creating the pairs between the first 1000 word-objects and the rest on the first machine, between the second 1000 word-objects and the rest on the next machine, etc.

My question is: how do I pass the objects created in step 2 to the Mapper? As far as I am aware, I would require input files for this and hence would need to serialize the objects (though I haven't worked with serialization before). Is there a direct way to pass the objects to the Mapper?

Thanks in advance for the help

Evgeni

UPDATE: Thank you for the answers. Serialization seems to be the best way to solve this (see java.io.Serializable). Furthermore, I have found this tutorial useful for reading data from serialized objects into Hadoop: http://www.cs.brown.edu/~pavlo/hadoop/
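For anyone with the same problem, here is a minimal sketch of the java.io.Serializable approach, assuming a simple `Word` class of my own (the class name and fields are just illustrative):

```java
import java.io.*;

// Illustrative word object; implementing Serializable lets it be written to a file.
class Word implements Serializable {
    private static final long serialVersionUID = 1L;
    final String text;
    final int count;

    Word(String text, int count) {
        this.text = text;
        this.count = count;
    }
}

public class SerializeWords {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // Write the objects from step 2 to disk so they can be used as job input.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("words.ser"))) {
            out.writeObject(new Word("hadoop", 3));
            out.writeObject(new Word("mapreduce", 5));
        }

        // Reading them back (e.g. in a custom RecordReader) is the reverse operation.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("words.ser"))) {
            Word w = (Word) in.readObject();
            System.out.println(w.text + " " + w.count);
        }
    }
}
```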


How about parallelizing all steps? Use your text documents from step 1 as input to your Mapper. Create the object for every word in the Mapper. In the Mapper, your key-value pair will be the word-object pair (or object-word, depending on what you are doing). The Reducer can then count the unique pairs.

Hadoop will take care of bringing all the same keys together into the same Reducer.
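A rough sketch of that idea using the newer org.apache.hadoop.mapreduce API (the class names and the choice to pair words within a single input line are my own assumptions, not requirements):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PairCountJob {

    // Emits every pair of words that co-occur on the same input line with a count of 1.
    public static class PairMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text pair = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] words = value.toString().split("\\s+");
            for (int i = 0; i < words.length; i++) {
                for (int j = i + 1; j < words.length; j++) {
                    pair.set(words[i] + "," + words[j]);
                    context.write(pair, ONE);
                }
            }
        }
    }

    // Sums the counts for each unique pair; Hadoop groups identical keys for us.
    public static class PairReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word pair count");
        job.setJarByClass(PairCountJob.class);
        job.setMapperClass(PairMapper.class);
        job.setReducerClass(PairReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```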


Use Twitter's protobufs (elephant-bird). Convert each word into a protobuf object and process it however you want. Protobufs are also much faster and lighter than default Java serialization. Refer to Kevin Weil's presentation on this: http://www.slideshare.net/kevinweil/protocol-buffers-and-hadoop-at-twitter
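For illustration only, round-tripping a word through a generated protobuf class might look roughly like this (the `WordProto.Word` message is a hypothetical definition of your own; elephant-bird's Hadoop input/output wrappers are not shown):

```java
import com.google.protobuf.InvalidProtocolBufferException;

public class ProtobufWordDemo {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        // WordProto.Word is a hypothetical class generated by protoc from e.g.
        //   message Word { optional string text = 1; optional int32 count = 2; }
        WordProto.Word word = WordProto.Word.newBuilder()
                .setText("hadoop")
                .setCount(3)
                .build();

        byte[] bytes = word.toByteArray();                       // compact binary encoding
        WordProto.Word parsed = WordProto.Word.parseFrom(bytes); // decode back into an object
        System.out.println(parsed.getText() + " " + parsed.getCount());
    }
}
```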
