What is the default task memory?
Hadoop's map-reduce configuration provides the mapred.task.limit.maxvmem and mapred.task.default.maxvmem properties.
According to the documentation, both of these are values of type long, that is, a number of bytes that represents the default/upper VMEM task-limit associated with a task. It appears that "long" in this context means 32 bits, and setting values higher than 2GB may lead to negative values being used as the limit.
I am running on a 64-bit system, and 2GB is a much lower limit than I actually want to impose.
Is there any way around this limitation?
I am using Hadoop version 0.20.1.
The "long" in this context refers to the type used to store the setting, not the amount of memory that can be addressed. Hadoop is written in Java, and a Java long is always 64 bits, so the value can range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 inclusive; a limit of a few GB is nowhere near that range.
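As a quick sanity check, here is a minimal sketch (assuming the org.apache.hadoop.conf.Configuration API that JobConf extends in 0.20.1; the class name VmemLimitDemo is just for illustration) showing the 32-bit overflow trap next to a 64-bit value round-tripping correctly:

    import org.apache.hadoop.conf.Configuration;

    public class VmemLimitDemo {
        public static void main(String[] args) {
            // 3GB computed in 32-bit int arithmetic wraps to a negative
            // number -- the symptom described in the question:
            int wrapped = 3 * 1024 * 1024 * 1024;
            System.out.println(wrapped);            // prints -1073741824

            // The same value as a 64-bit long (note the L suffix) is fine:
            long threeGb = 3L * 1024 * 1024 * 1024; // 3221225472

            Configuration conf = new Configuration();
            conf.setLong("mapred.task.limit.maxvmem", threeGb);
            conf.setLong("mapred.task.default.maxvmem", threeGb);

            // Reads back 3221225472; no sign of 32-bit truncation:
            System.out.println(conf.getLong("mapred.task.limit.maxvmem", -1));
        }
    }

So the configuration type itself is not the bottleneck. If negative limits still show up at runtime, it is worth checking whether the value is being computed or consumed somewhere with 32-bit int arithmetic, as in the first line of the sketch.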