
Hadoop Streaming Multiline Input

I'm using Dumbo for some Hadoop Streaming jobs. I have a bunch of JSON dictionaries, each containing an article (multiline text) and some metadata. I know Hadoop performs best when given large files, so I want to concatenate all the JSON dictionaries into a single file.

The problem is that I don't know how to make Hadoop read each dictionary/article as a separate value instead of splitting on newlines. How can I tell Hadoop to use a custom record separator? Or maybe I can put all of the JSON dictionaries into a list data structure and have Hadoop read that in?

Or maybe encoding the string (base64?) would remove all of the newlines, so the normal "reader" would be able to handle it?
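For what it's worth, a quick sketch of the base64 idea (the article string here is just a made-up example): standard base64 output contains no newline characters, and the original text round-trips losslessly.

```python
import base64

# Hypothetical multiline article text.
article = "First paragraph of the article.\nSecond paragraph.\n"

# b64encode produces a single line with no embedded newlines
# (unlike base64.encodebytes, which inserts them every 76 chars).
encoded = base64.b64encode(article.encode("utf-8")).decode("ascii")
assert "\n" not in encoded

# The mapper can decode each record back to the original text.
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == article
```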


You can just replace all newlines with spaces in each dictionary when concatenating your JSON files. A newline doesn't have any special meaning in JSON beyond being a whitespace character.
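A minimal sketch of this approach (the file paths and helper name are illustrative): serializing each dictionary with `json.dumps` yields a single line, because literal newlines inside string values are escaped as `\n`, so the default newline-based record reader sees one record per article.

```python
import json

def concat_json(paths, out_path):
    """Merge per-article JSON files into one newline-delimited file."""
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as f:
                doc = json.load(f)
            # json.dumps emits one line: newlines inside string
            # values become the two-character escape \n.
            out.write(json.dumps(doc) + "\n")
```

Each line of the output file is then a complete, parseable JSON record, which is exactly what Hadoop Streaming's default TextInputFormat expects.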


concatenated-json-mapreduce is a custom input format and record reader that splits the JSON objects by pushing and popping on the opening/closing brackets.

It was written to handle streaming JSON (rather than newline-separated JSON), so as long as the input consists of well-formed JSON objects using `\n` escapes instead of actual newlines, it should work.
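Note that the standard library's serializer already satisfies that requirement; a quick check with a made-up document:

```python
import json

# Hypothetical article with embedded newlines in a string value.
doc = {"title": "Example", "body": "line one\nline two"}

s = json.dumps(doc)
# The literal newline is serialized as the escape sequence \n,
# so the output contains no actual newline characters.
assert "\n" not in s
# Parsing restores the original newline.
assert json.loads(s)["body"] == "line one\nline two"
```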

