Why is the elephantbird Pig JsonLoader only processing part of my file?
I'm using Pig on Amazon's Elastic Map-Reduce to do batch analytics. My input files are on S3 and contain events that are represented by one JSON dictionary per line. I use the elephantbird JsonLoader library to parse the input files. So far so good.
I'm running into problems processing a large file stored on the local filesystem or HDFS in an interactive Pig session. It looks like, if the input file is large enough to be split, only one of the splits is ever processed by elephantbird, and processing stops with no error message at the end of that split. I don't have the same problem if I stream the input from S3 (no file splitting on S3 input), or if I convert the file to a format Pig can read directly.
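For context on what "the end of the split" means here: Hadoop-style line readers cooperate across splits by skipping the partial first record of any split that starts mid-file, and by reading past their own split end to finish the record that straddles the boundary. The sketch below is not elephant-bird's actual code, just a minimal Python illustration of that contract; a loader that instead stops dead at the split boundary would show exactly the symptom described above.

```python
def read_split(data: bytes, start: int, end: int):
    """Yield the complete lines belonging to the split [start, end)."""
    pos = start
    if start > 0:
        # Skip up to the first newline: the record straddling `start`
        # belongs to the previous split's reader.
        nl = data.find(b"\n", start)
        if nl == -1:
            return
        pos = nl + 1
    # A line that *starts* inside the split (pos <= end) is ours, even if
    # it extends past `end`; that's how no record is lost at a boundary.
    while pos <= end and pos < len(data):
        nl = data.find(b"\n", pos)
        if nl == -1:
            yield data[pos:]
            return
        yield data[pos:nl]
        pos = nl + 1

# Ten one-line JSON records, split at an arbitrary midpoint.
data = b"".join(b'{"event": %d}\n' % i for i in range(10))
mid = len(data) // 2

both = list(read_split(data, 0, mid)) + list(read_split(data, mid, len(data)))
assert both == data.splitlines()  # no record lost or duplicated
```

If the second reader never skipped its partial first line, or the first reader truncated the straddling record at `mid`, the concatenated output would be missing data, which is the behavior you want to rule out in the loader.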
For a concrete example: a file with 833,138 lines is only processed up to line 379,751 (watching the completion percentage in Pig, it climbs smoothly to 50% and then jumps to 100%). A file with 400,000 lines was processed fine.
So my question is: why is only one split processed by elephantbird? Am I misunderstanding how Pig in interactive mode is supposed to work or is there something wildly wrong going on?
Katia, you'll get help much faster if you email the Pig user list :).
Please try Pig 0.8.1 (the current release) and let us know if you still get errors. For what it's worth I've been using the EB Json loader for over a year on hundred-gig files and they process fine, so perhaps there's something about your data.
Spike Gronim -- that's been fixed, local mode is now mostly identical (except for things like distributed cache and skewed joins) to non-local mode. Upgrade.