
Breaking Down Native Memory Usage Within a JVM Process on SLES

I have a WebSphere Portal application running four instances on a single box, and after about 7 days of runtime there is only 130-150 MB of free address space left in native memory (measured with pmap). Over the next 7-10 days the figure drops well below 100 MB (which we deem dangerous, at which point we start to recycle the JVMs). If we don't recycle, the JVM eventually crashes with a SIGSEGV signal.
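
As a side note, here is a minimal sketch of how that address-space figure could be sampled from inside the process, assuming a 32-bit Linux JVM with roughly 3 GB of usable user address space (the 3 GB figure is an assumption; pmap remains the authoritative source):

```java
import java.io.BufferedReader;
import java.io.FileReader;

// Sums the mapped regions listed in /proc/self/maps and subtracts them from an
// assumed 3 GB of 32-bit user address space, approximating what pmap reports.
// Assumes a 32-bit process; 64-bit kernel-mapped regions would need unsigned parsing.
public class AddressSpaceCheck {
    public static void main(String[] args) throws Exception {
        long mapped = 0;
        BufferedReader r = new BufferedReader(new FileReader("/proc/self/maps"));
        String line;
        while ((line = r.readLine()) != null) {
            // Each line begins with "start-end", both hexadecimal addresses.
            String[] range = line.split("\\s+")[0].split("-");
            mapped += Long.parseLong(range[1], 16) - Long.parseLong(range[0], 16);
        }
        r.close();
        long userSpace = 3L * 1024 * 1024 * 1024; // assumption: ~3 GB usable on 32-bit Linux
        System.out.println("mapped: " + (mapped / (1024 * 1024)) + " MB, roughly free: "
                + ((userSpace - mapped) / (1024 * 1024)) + " MB");
    }
}
```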

We've done some initial investigation into class counts and the size of JIT code. Class counts grow, but slowly, from 50k onwards - about a couple hundred per day. The JIT code size reaches about 210 MB after 7 days and grows roughly 1 MB per day after that. Based on previous experience, we don't consider these values sinister.
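
For the class-count trend, a minimal sketch of how it could be sampled from inside the process, assuming the Java 5+ java.lang.management API is available at this JVM level (an assumption for this WebSphere version):

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

// Logs loaded-class counts; run it periodically so that slow growth of a few
// hundred classes per day shows up clearly over time.
public class ClassCountLogger {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        System.out.println("loaded=" + cl.getLoadedClassCount()
                + " totalLoaded=" + cl.getTotalLoadedClassCount()
                + " unloaded=" + cl.getUnloadedClassCount());
    }
}
```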

What we need is a way to break down what is in the native heap, whether it is threads (all thread counts appear normal and we use fixed thread pools), string pools, constant pools, bytecode, or anything else.
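
Thread stacks are one component we can bound from inside the JVM today; a rough sketch, where STACK_SIZE_KB is a placeholder for the actual -Xss setting (an assumption, not a measured value):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Back-of-the-envelope estimate of native address space held by thread stacks:
// live thread count multiplied by an assumed per-thread stack size.
public class ThreadStackEstimate {
    private static final int STACK_SIZE_KB = 256; // placeholder; use the real -Xss value

    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int count = threads.getThreadCount();
        System.out.println(count + " threads, ~" + (count * STACK_SIZE_KB / 1024)
                + " MB of stack address space");
    }
}
```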

One lead we are trying now is reducing the reflection threshold to 0 (shutting off the generated bytecode accessors for reflective calls). This app uses a lot of pointcutting and a lot of reflection, so we're hoping this helps.
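
For context, a small sketch of the inflation behaviour we are trying to suppress: invoking a Method reflectively many times can cause the JVM to load a generated bytecode accessor class for it. The threshold property (commonly sun.reflect.inflationThreshold) and the GeneratedMethodAccessor class name are Sun/IBM conventions we believe apply here, but treat them as assumptions for your exact JVM:

```java
import java.lang.reflect.Method;

// Repeated reflective invocation of one Method; once an internal threshold is
// crossed, many JVMs generate and load a bytecode accessor class for it
// (visible with -verbose:class as something like GeneratedMethodAccessor1).
// Threshold value and class naming vary by vendor/version - assumptions only.
public class InflationDemo {
    public String target() {
        return "hello";
    }

    public static void main(String[] args) throws Exception {
        Method m = InflationDemo.class.getMethod("target");
        InflationDemo instance = new InflationDemo();
        for (int i = 0; i < 100; i++) { // well past the commonly cited default threshold (~15, an assumption)
            m.invoke(instance);
        }
    }
}
```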

Any advice is welcome.


This might take a bit of back-and-forth, but have you enabled GC logging and confirmed the Java heap isn't growing over time? Have you looked at your perm space? The SIGSEGV is interesting, though; I'd expect a more JVM-ish crash for any in-Java memory issue.
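
If it helps, a quick way to compare heap versus non-heap committed sizes from inside the application, assuming the Java 5+ management API is available at your WebSphere level:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Prints used/committed sizes for the Java heap and the non-heap areas
// (perm/class data), to rule out ordinary heap growth before blaming native allocations.
public class HeapVsNonHeap {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap:     " + mem.getHeapMemoryUsage());
        System.out.println("non-heap: " + mem.getNonHeapMemoryUsage());
    }
}
```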


After lengthy investigation, this ended up being a WebSphere bug: PK72252: CALLS TO CLASSLOADER.GETRESOURCEASSTREAM ARE SLOW. Fixed in 6.0.2.33.

