
Hadoop gzip compressed files

I am new to Hadoop and trying to process the Wikipedia dump. It's a 6.7 GB gzip-compressed XML file. I read that Hadoop supports gzip-compressed files, but such a file can only be processed by a single mapper, because only one mapper can decompress it. This seems to put a limitation on the processing. Is there an alternative? For example, decompressing the XML file, splitting it into multiple chunks, and recompressing each chunk with gzip.

I read about the hadoop gzip from http://researchcomputing.blogspot.com/2008/04/hadoop-and-compressed-files.html

Thanks for your help.


A file compressed with the GZIP codec cannot be split because of the way this codec works. A single SPLIT in Hadoop can only be processed by a single mapper; so a single GZIP file can only be processed by a single Mapper.

There are at least three ways of getting around that limitation:

  1. As a preprocessing step: Uncompress the file and recompress using a splittable codec (LZO).
  2. As a preprocessing step: Uncompress the file, split it into smaller sets and recompress. (See this; a rough sketch of this approach follows the list.)
  3. Use this patch for Hadoop (which I wrote) that allows for a way around this: Splittable Gzip
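
For option 2, a minimal sketch of what that preprocessing could look like (plain Java, run outside Hadoop): stream the big .gz file, cut it into fixed-size line chunks and re-gzip each chunk. The file names and chunk size here are made-up placeholders, and a real preprocessor for the Wikipedia dump would want to split on &lt;page&gt; boundaries rather than on arbitrary lines:

```java
import java.io.*;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch: split one large gzip-compressed XML file into many smaller
// gzip files, each of which can then be handled by its own mapper.
public class SplitAndRegzip {
    public static void main(String[] args) throws IOException {
        final long LINES_PER_CHUNK = 5_000_000L;   // illustrative chunk size
        BufferedReader in = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new FileInputStream("enwiki-dump.xml.gz")), "UTF-8"));

        int chunk = 0;
        long lines = 0;
        BufferedWriter out = newChunkWriter(chunk);
        for (String line; (line = in.readLine()) != null; ) {
            out.write(line);
            out.newLine();
            if (++lines % LINES_PER_CHUNK == 0) {  // start a new chunk file
                out.close();
                out = newChunkWriter(++chunk);
            }
        }
        out.close();
        in.close();
    }

    private static BufferedWriter newChunkWriter(int chunk) throws IOException {
        return new BufferedWriter(new OutputStreamWriter(
                new GZIPOutputStream(new FileOutputStream(
                        String.format("chunk-%05d.xml.gz", chunk))), "UTF-8"));
    }
}
```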

HTH


This is one of the biggest misunderstandings about HDFS.

Yes, files compressed with gzip are not splittable by MapReduce, but that does not mean that GZip as a codec has no value in HDFS or cannot be made splittable.

GZip as a codec can be used with RCFiles, Sequence Files, Avro files, and many other file formats. When the Gzip codec is used within these splittable container formats, you get gzip's great compression and pretty good speed, plus splittability.
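
As a rough illustration (assuming the Hadoop 2.x SequenceFile API; the path and records are made up), this is roughly what writing a block-compressed SequenceFile with the Gzip codec looks like. Compression is applied per block inside the container, so the resulting file stays splittable even though the codec is gzip:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.GzipCodec;

// Sketch: write records into a block-compressed SequenceFile using GzipCodec.
public class GzipSequenceFileWriter {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path out = new Path("pages.seq");   // illustrative output path

        SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(out),
                SequenceFile.Writer.keyClass(LongWritable.class),
                SequenceFile.Writer.valueClass(Text.class),
                SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK,
                                                new GzipCodec()));

        // In a real preprocessing job the values would be wiki pages;
        // here we just write a couple of dummy records.
        writer.append(new LongWritable(1), new Text("<page>...</page>"));
        writer.append(new LongWritable(2), new Text("<page>...</page>"));
        writer.close();
    }
}
```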


GZIP files cannot be partitioned in any way, due to a limitation of the codec. 6.7 GB really isn't that big, so just decompress it on a single machine (it will take less than an hour) and copy the XML up to HDFS. Then you can process the Wikipedia XML in Hadoop.
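
If you want to avoid staging the uncompressed XML on local disk first, you can decompress while streaming it straight into HDFS. A minimal sketch (paths are placeholders, assuming the standard Hadoop FileSystem API):

```java
import java.io.FileInputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Sketch: stream the local .gz dump through GZIPInputStream and write the
// uncompressed XML directly into HDFS, with no local uncompressed copy.
public class UploadUncompressed {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        GZIPInputStream in = new GZIPInputStream(
                new FileInputStream("enwiki-dump.xml.gz"));            // local compressed dump
        OutputStream out = fs.create(new Path("/wikipedia/enwiki-dump.xml")); // HDFS target

        // copyBytes closes both streams when the last argument is true
        IOUtils.copyBytes(in, out, 64 * 1024, true);
    }
}
```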

Cloud9 contains a WikipediaPageInputFormat class that you can use to read the XML in Hadoop.


Why not ungzip it and use splittable LZO compression instead?

http://blog.cloudera.com/blog/2009/11/hadoop-at-twitter-part-1-splittable-lzo-compression/
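
For completeness, here is roughly what the job side looks like once the data has been converted to .lzo and indexed. Class names such as LzoTextInputFormat and DistributedLzoIndexer come from the hadoop-lzo project described in the linked post; I'm quoting them from memory, so verify the packages before relying on this:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Assumed to come from the hadoop-lzo project; verify the package name.
import com.hadoop.mapreduce.LzoTextInputFormat;

// Sketch of a map-only job reading splittable LZO text input. It assumes the
// .lzo file was already indexed (e.g. with hadoop-lzo's DistributedLzoIndexer),
// which is what makes the input splittable across mappers.
public class LzoInputExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "read splittable lzo");
        job.setJarByClass(LzoInputExample.class);
        job.setInputFormatClass(LzoTextInputFormat.class);
        job.setNumReduceTasks(0);                      // identity map-only job
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /wikipedia/dump.xml.lzo
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```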
