
How to handle a datanode that dies during map/reduce

What happens when the datanode that the map/reduce job is using goes down? Shouldn't the job be redirected to another datanode? How should my code handle this exceptional condition?


If a datanode goes down, the tasks running on that node (assuming you are using it as a tasktracker as well) will fail, and those failed tasks will be reassigned to other tasktrackers for re-execution. The data blocks that were on the dead datanode will still be available on other datanodes, since data is replicated across the cluster. So even if a datanode goes down, there won't be any loss except for a brief delay while the failed tasks are re-executed. All of this is handled by the framework; your code does not need to worry about it.
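If you want to make the job more tolerant of repeated node failures, you can raise the per-task attempt limit in the driver. This is a minimal sketch, assuming a Hadoop 2.x-style job driver; the property names `mapreduce.map.maxattempts` / `mapreduce.reduce.maxattempts` are the newer spellings (older releases use `mapred.map.max.attempts` / `mapred.reduce.max.attempts`), and the value 6 is just an illustrative choice:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class AttemptsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The framework already retries failed tasks on other nodes; these
        // settings only tune how many attempts are made before the whole
        // job is declared failed (the default is usually 4).
        conf.setInt("mapreduce.map.maxattempts", 6);
        conf.setInt("mapreduce.reduce.maxattempts", 6);

        Job job = Job.getInstance(conf, "node-failure-tolerant-job");
        // configure mapper/reducer/input/output as usual, then:
        // job.waitForCompletion(true);
    }
}
```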


This depends mainly on your HDFS replication factor. If it is greater than 1, the job will ask for a copy of the block that is not on the downed server. If a valid replica exists, it will be streamed to the task and the job can continue with that copy of the block.
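As a sketch of how you might verify this from code: the snippet below checks the replication factor of an input file and raises it if needed, using the standard `FileSystem` API. The path `/user/me/input/data.txt` and the target factor of 3 are placeholders, not values from the question:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Placeholder path; point this at your own input file.
        Path input = new Path("/user/me/input/data.txt");
        short current = fs.getFileStatus(input).getReplication();
        System.out.println("current replication: " + current);

        // With replication >= 2, a block lost with one datanode still has
        // at least one live copy that tasks can read.
        if (current < 2) {
            fs.setReplication(input, (short) 3);
        }
        fs.close();
    }
}
```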

How should my code handle this exceptional condition?

You won't see an exception like that in your own code; the only case you need to handle is the whole job failing. In that case you could reschedule the job and hope that the datanode comes back up.
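A minimal sketch of that rescheduling, assuming a driver where `buildJob()` is a hypothetical helper that sets up your mapper, reducer, and paths: since a submitted `Job` instance cannot be resubmitted, the retry builds a fresh one.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class RetryDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The framework retries individual tasks itself; only if the whole
        // job reports failure do we resubmit it once from the driver.
        boolean ok = buildJob(conf).waitForCompletion(true);
        if (!ok) {
            ok = buildJob(conf).waitForCompletion(true); // one manual retry
        }
        System.exit(ok ? 0 : 1);
    }

    // Hypothetical helper: fill in your own job setup here.
    private static Job buildJob(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "retry-example");
        // job.setMapperClass(...); job.setReducerClass(...); etc.
        return job;
    }
}
```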

