What is the memory overhead for analyzing a heap with jhat?
jhat is a great tool for analyzing Java heap dumps, but for large heaps it's easy to waste a lot of time. Give jhat a runtime heap that's too small, and it may take 15 minutes just to fail by running out of memory.
What I'd like to know is: is there a rule of thumb for how much -Xmx heap I should give jhat, based on the size of the heap-dump file? I'm only considering binary heap dumps for now.
Some very limited experimentation suggests it's at least 3-4 times the size of the heap dump. I was able to analyze a three-and-change-gigabyte heap file with -J-mx12G.
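For reference, jhat's -J flag just forwards the option to the JVM that runs jhat itself (-mx being the legacy spelling of -Xmx), so the invocation looked roughly like this; the dump filename is a placeholder:

    jhat -J-mx12G heap.hprof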
Does anyone else have more conclusive experimental data, or an understanding of how jhat represents heap objects at runtime?
Data points (a rough sizing sketch follows this list):
- this thread indicates a 5x overhead, but my experimentation with late-model jhats (1.6.0_26) suggests it's not quite that bad
- this thread indicates a ~10x overhead
- a colleague backs up the 10x theory: a 2.5 GB heap file fails even with -J-mx23G
- yet another colleague got a 6.7 GB dump to work with a 30 GB heap, for a 4.4x overhead
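In case it helps frame the question, here is the rough sizing heuristic I'm using for now. The multiplier (5 here) is an arbitrary placeholder rather than a settled answer (pinning it down is exactly what this question is about), the dump filename is made up, and the stat call assumes GNU coreutils:

    # minimal sketch: round the dump size up to whole GB
    # (GNU stat; BSD/macOS would use `stat -f%z` instead of -c%s)
    DUMP=heap.hprof
    DUMP_GB=$(( $(stat -c%s "$DUMP") / 1024 / 1024 / 1024 + 1 ))
    # apply a conservative placeholder multiplier until better data turns up
    jhat -J-mx$(( DUMP_GB * 5 ))G "$DUMP"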