
java.io.ObjectInputStream.readObject intermittently taking extreme time

We are trying to deserialize a pretty hefty object with simple nested objects. This normally takes approximately 5-10 ms. However, recently we have been experiencing random latency of up to 3000 ms during this call. I can run requests over and over and get the exact same content length from the call, but one out of every 20 takes a huge lag hit.

When running a profiler, it seems that the CPU time is eaten up in java.io.ObjectInputStream.readObject. I am really stumped as to what could cause such a jump in time taken, given the calls are identical.

Any ideas would be hugely appreciated.

The JVM has 4 GB of RAM and runs Java 1.6 on CentOS. There don't appear to be any correlated GC events.


Standard Java serialization creates a lot of garbage. If you are seeing spikes in the time it takes, it will be because a) it is waiting for IO (which can take any amount of time) or b) a minor or full GC is being performed.
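
One way to separate those two causes is to deserialize the same bytes from memory in a loop: with a ByteArrayInputStream there is no IO to wait on, so any remaining spikes point at GC. A minimal sketch (DeserializeTiming and buildSampleObject are hypothetical names; substitute your real payload):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.util.ArrayList;
    import java.util.List;

    // Deserialize the same in-memory bytes repeatedly. Reading from a
    // byte array removes IO waits entirely, so any remaining latency
    // spikes point at GC pauses rather than the network or disk.
    public class DeserializeTiming {

        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(buildSampleObject()); // stand-in for the real payload
            oos.close();
            byte[] bytes = bos.toByteArray();

            for (int i = 0; i < 100; i++) {
                long start = System.nanoTime();
                ObjectInputStream ois =
                        new ObjectInputStream(new ByteArrayInputStream(bytes));
                ois.readObject();
                ois.close();
                long tookMs = (System.nanoTime() - start) / 1000000L;
                if (tookMs > 100) { // flag only the outliers
                    System.out.println("iteration " + i + " took " + tookMs + " ms");
                }
            }
        }

        // Hypothetical payload; substitute your actual object graph.
        private static Object buildSampleObject() {
            List<int[]> list = new ArrayList<int[]>();
            for (int i = 0; i < 1000; i++) {
                list.add(new int[256]);
            }
            return list;
        }
    }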

I would run your application with

-verbose:gc

to see when a full GC is being performed. Otherwise you can use jstat:

jstat -gccause {process-id} 5s
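
If you are on a HotSpot JVM you can also get more detail on each collection (generation sizes, pause times) with the standard GC logging flags, e.g.

    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

which lets you line the pause timestamps up against your slow requests.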

Most collections are stop-the-world, so while a GC is in progress readObject will appear to have used a lot of CPU when it was actually waiting for the GC to clean up some data.

When profiling your application, I suggest enabling memory profiling as well.

All the same, to see delays like this you must be deserializing a lot of data. To get a 3 s delay it would typically take about 3 GB of garbage; if it happens on one in every 20 requests, each request might be averaging 150 MB of garbage.

Perhaps it's time to optimise your serialization. You can get the garbage down to almost nothing, but it takes more and more work the lower you go. I would start by implementing readObject/writeObject, as it's the simplest change that could give you the most benefit.
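
As a rough illustration of what that looks like (Payload and its fields are hypothetical; adapt to your own class), you mark the expensive fields transient and stream them by hand, avoiding the per-field reflection and temporary objects of default serialization:

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    // Hypothetical payload class; the fields are illustrative only.
    public class Payload implements Serializable {
        private static final long serialVersionUID = 1L;

        // Marked transient so the default machinery skips them;
        // we write them by hand below instead.
        private transient String label;
        private transient int[] values;

        public Payload(String label, int[] values) {
            this.label = label;
            this.values = values;
        }

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject(); // any remaining non-transient fields
            out.writeUTF(label);
            out.writeInt(values.length);
            for (int i = 0; i < values.length; i++) {
                out.writeInt(values[i]);
            }
        }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            label = in.readUTF();
            int n = in.readInt();
            values = new int[n];
            for (int i = 0; i < n; i++) {
                values[i] = in.readInt();
            }
        }
    }

Writing primitives directly like this keeps the stream compact and allocates little beyond the arrays you actually need.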

