Can Hadoop read binary files with arbitrarily placed keys?
It looks like Hadoop MapReduce requires a key-value pair structure in text or binary files. In reality, we might have files that need to be split into chunks for processing, but the keys may be spread across the file; it is not clear-cut that one key is followed by one value. Is there any InputFormat that can read this type of binary file? I don't want to chain one MapReduce job after another; that would slow down performance and defeat the purpose of using MapReduce. Any suggestions? Thanks.
According to Hadoop: The Definitive Guide:
The logical records that FileInputFormats define do not usually fit neatly into HDFS blocks. For example, a TextInputFormat’s logical records are lines, which will cross HDFS boundaries more often than not. This has no bearing on the functioning of your program—lines are not missed or broken, for example—but it’s worth knowing about, as it does mean that data-local maps (that is, maps that are running on the same host as their input data) will perform some remote reads. The slight overhead this causes is not normally significant.
If HDFS splits the file at block boundaries, the Hadoop framework takes care of it: a record reader skips the partial record at the front of its split and reads past the end of its split to finish a record that crosses the boundary. But if you split the file manually, you have to take record boundaries into consideration yourself. A sketch of how such a reader can work for a binary format follows.
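For illustration, here is a minimal sketch of a custom RecordReader for a hypothetical binary format in which every record starts with a 2-byte sync marker followed by a 4-byte payload length. The marker value (0xFA 0xCE), the framing, and the class name are assumptions for the example, not an existing Hadoop format. It mirrors what LineRecordReader does for text: every split except the first skips the partial record at its front, and every reader finishes the record it is in even if that runs past the split's end.

```java
import java.io.EOFException;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Assumed record layout: [0xFA 0xCE][4-byte big-endian length][payload].
// Also assumed: the marker bytes never occur inside a payload, so scanning
// for the marker is a safe way to resynchronize at a split boundary.
public class SyncMarkerRecordReader extends RecordReader<LongWritable, BytesWritable> {
    private static final byte[] SYNC = {(byte) 0xFA, (byte) 0xCE};

    private FSDataInputStream in;
    private long start, end, pos;
    private final LongWritable key = new LongWritable();
    private final BytesWritable value = new BytesWritable();

    @Override
    public void initialize(InputSplit genericSplit, TaskAttemptContext context)
            throws IOException {
        FileSplit split = (FileSplit) genericSplit;
        Configuration conf = context.getConfiguration();
        Path file = split.getPath();
        start = split.getStart();
        end = start + split.getLength();
        in = file.getFileSystem(conf).open(file);
        in.seek(start);
        pos = start;
        if (start != 0) {
            // Every split but the first skips the partial record at its front;
            // the previous split's reader consumes that record instead.
            skipToNextMarker();
        }
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (pos >= end) {
            return false; // anything past here belongs to the next split
        }
        key.set(pos); // key = byte offset of the record, like TextInputFormat
        byte[] marker = new byte[SYNC.length];
        try {
            in.readFully(marker);
            if (marker[0] != SYNC[0] || marker[1] != SYNC[1]) {
                return false; // corrupt stream; a real reader would report this
            }
            int length = in.readInt();
            byte[] payload = new byte[length];
            // This may read past 'end' into the next HDFS block -- that is the
            // "remote read" the book mentions, handled transparently by HDFS.
            in.readFully(payload);
            pos += SYNC.length + 4 + length;
            value.set(payload, 0, payload.length);
            return true;
        } catch (EOFException eof) {
            return false;
        }
    }

    private void skipToNextMarker() throws IOException {
        int prev = -1, cur;
        while ((cur = in.read()) != -1) {
            pos++;
            if (prev == (SYNC[0] & 0xFF) && cur == (SYNC[1] & 0xFF)) {
                pos -= SYNC.length; // rewind to the start of the marker
                in.seek(pos);
                return;
            }
            prev = cur;
        }
        pos = end; // no marker left in (or after) this split
    }

    @Override public LongWritable getCurrentKey() { return key; }
    @Override public BytesWritable getCurrentValue() { return value; }
    @Override public float getProgress() {
        return start == end ? 0.0f
                : Math.min(1.0f, (pos - start) / (float) (end - start));
    }
    @Override public void close() throws IOException { if (in != null) in.close(); }
}
```

The key design point is the hand-off convention: a record that straddles a split boundary is read entirely by the split it starts in, and the next split's reader scans forward past it, so no record is lost or duplicated.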
"In reality, we might have files that need to be split into chunks for processing, but the keys may be spread across the file; it is not clear-cut that one key is followed by one value."
What exactly is your scenario? With more detail we can look at a workaround. If records really cannot be recognized by scanning from an arbitrary offset, one fallback is shown below.
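As a sketch of that fallback (reusing the hypothetical reader above; the class names are ours, not Hadoop's): mark the format as non-splittable, so each mapper receives one whole file and can parse it front to back with whatever structure it has.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Fallback when record boundaries cannot be detected mid-file: disable
// splitting so each mapper parses one complete file. Parallelism then comes
// from having many input files rather than many blocks per file, so this
// trades intra-file parallelism for correctness.
public class WholeBinaryFileInputFormat
        extends FileInputFormat<LongWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // one split per file, regardless of block count
    }

    @Override
    public RecordReader<LongWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        // With isSplitable=false the split always starts at offset 0,
        // so the reader never has to skip a partial leading record.
        return new SyncMarkerRecordReader();
    }
}
```

Note that a non-splittable format still benefits from data locality for files of one block; for larger files the single mapper will do remote reads, which is the cost of this approach.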