
Unexpectedly large heap size in comparison to % used

Have a memory allocation question I'd like your help with. We've analysed some of our services in top and we note that they have a RES value of about 1.8GB, which as far as I understand things means they're holding on to 1.8GB of memory at that time. That would be fine if we'd just started them (they essentially read from a cache, do processing, and push off to another cache), but since we still see this after the CPU-intensive processing has completed, we're wondering whether it means something isn't being GC'ed as we expected.

We run the program with the following parameters: -Xms256m -Xmx3096m, which as I understand it means an initial heap size of 256MB and a maximum heap size of 3096MB.

Now what I'd expect to see is the heap grow as needed initially, and then shrink as needed as the memory becomes deallocated (though this could be my first mistake). What we actually see with jvisualvm is the following:

  • 3 mins in: used heap is 1GB, heap size is 2GB
  • 5 mins in: processing is done, so used heap drops dramatically to near enough zilch; heap size, however, only drops to about 1.5GB
  • 7 mins onwards: small bits of real-time processing happen periodically; used heap only ever sits between 100-200MB or so, but heap size remains constant at about 1.7GB

My question would be: why hasn't my heap shrunk as I expected it to? Isn't this robbing other processes on the Linux box of valuable memory, and if so, how could I fix it? We do see out-of-memory errors on the box sometimes, and since these processes have the most 'unexpected' memory footprint, I thought it best to start with them.
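For what it's worth, the same used/committed/max split that jvisualvm shows can also be read programmatically via the standard java.lang.management API; here's a minimal sketch (the class name is just for illustration):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapReport {
        public static void main(String[] args) {
            // "used"      = live objects plus garbage not yet collected
            // "committed" = memory the OS has actually handed to the JVM heap
            //               (typically the bulk of what top reports as RES)
            // "max"       = the -Xmx ceiling
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            long mb = 1024L * 1024L;
            System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() / mb, heap.getCommitted() / mb, heap.getMax() / mb);
        }
    }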

Cheers, Dave.

(Please excuse any lack of understanding of JVM memory tuning!)


You might want to see this answer about tuning heap expansion and shrinking. By default the JVM is not too aggressive about shrinking the heap. Furthermore, if the heap has enough free space for a long period of time it won't trigger a GC, which I believe is the only time it considers shrinking the heap.

Ideally you configure the maximum to a value that gives your application enough headroom under full load, yet that the rest of the system could still tolerate if it were all in use. It's not uncommon to set the minimum equal to the maximum for predictability and potentially better performance (I don't have anything to reference for that offhand).
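As a rough illustration (the jar name is a placeholder and 3096m simply mirrors your current -Xmx; pick bounds that suit your workload), pinning both ends would look something like:

    java -Xms3096m -Xmx3096m -jar your-service.jar

With the minimum equal to the maximum, the heap size is fixed from the start, so the question of when the JVM grows or shrinks it largely goes away.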


I don't have a complete answer, but a similar question has come up before. From the earlier discussion, you should investigate -XX:MaxHeapFreeRatio= as the tuning parameter to force heap release back to the operating system. There's documentation here, and I believe the default value allows a very large amount of unused heap to remain owned by the JVM.
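A sketch of what that invocation could look like with your original bounds (the ratio values here are purely illustrative, and -XX:MinHeapFreeRatio is the companion flag governing when the heap grows):

    java -Xms256m -Xmx3096m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar your-service.jar

Lowering MaxHeapFreeRatio asks the collector to give memory back sooner once the fraction of free heap after a GC exceeds that percentage; how aggressively this actually happens still depends on the collector in use.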


Well, the GC does not always run when you think it will, and it does not always collect everything that is eligible. It may well only start collecting objects from the old gen space when it is nearly out of heap space (since collecting the old gen normally involves a stop-the-world collection, which the GC tries to avoid until it really has to).
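If you want to test that theory, one crude diagnostic is to request a full collection once the heavy processing is done and watch whether the committed heap drops (in jvisualvm or via code). A rough sketch, with made-up class and method names, and bearing in mind that System.gc() is only a hint the JVM may ignore (and -XX:+DisableExplicitGC turns it off entirely):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    final class GcShrinkProbe {
        // Call this once the batch processing has finished.
        static void logHeapAroundFullGc() {
            MemoryUsage before = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.gc(); // only a request; the JVM is free to ignore it
            MemoryUsage after = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            long mb = 1024L * 1024L;
            System.out.printf("committed before=%dMB, after full GC=%dMB%n",
                    before.getCommitted() / mb, after.getCommitted() / mb);
        }
    }

Whether the committed size actually comes down after the full GC depends on the collector and the free-ratio settings mentioned in the other answer.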


Maybe you could try some profiling, with TPTP, visualvm or JProbe (commercial, but trial available), to find out exactly what happens.

Another thing to look out for is file handles; I don't have the details, but a couple of years ago one of my colleagues ran into a heap saturation problem caused by a process that opened many files, and found that each time a file was opened a 4KB buffer was allocated in the native heap and only freed at the end of that file's processing. I hope these admittedly vague pointers help...
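If file handles do turn out to be part of the picture, the usual safeguard is to release them deterministically instead of waiting for GC/finalization; a generic sketch (Java 7+ try-with-resources; the class, method and path are placeholders):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class FileJob {
        static void process(String path) throws IOException {
            // try-with-resources closes the handle (and releases any native buffer
            // tied to it) as soon as we are done with the file, rather than leaving
            // that to garbage collection / finalization.
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // ... per-line processing goes here ...
                }
            }
        }
    }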
