Measuring Java method refactoring performance
I was given a weird task. I have to refactor certain methods in a huge code base (the easy part) and provide performance gain reports. I should focus on speed of execution and memory usage. They want to know the performance improvement per method!
So I have a method like this:
public void processHugeFile(File f) {
    long start = java.lang.System.currentTimeMillis();
    // lots of hashmaps, lots of arrays, weird logic,...
    long end = java.lang.System.currentTimeMillis();
    logger.log("performance comparison - exec time: " + (end - start));
}
Then I have to refactor it:
public void processHugeFile(File f) {
    long start = java.lang.System.currentTimeMillis();
    // just lists, some primitives, simple logic,...
    long end = java.lang.System.currentTimeMillis();
    logger.log("performance comparison - exec time: " + (end - start));
}
In the end I just have to process the logs.
But what about memory usage? I have tried Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() and MemoryMXBean's getHeapMemoryUsage().getUsed(), but they don't seem to work. Also, JVM profilers focus on objects rather than methods, and I am talking about a fairly large code base.
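Roughly, the heap-delta attempt looks like this (a simplified sketch; processHugeFile and logger are the same placeholders as in the snippets above):

Runtime rt = Runtime.getRuntime();
rt.gc(); // only a hint - the JVM may ignore it, which is part of why the numbers are unreliable
long before = rt.totalMemory() - rt.freeMemory();

processHugeFile(f);

long after = rt.totalMemory() - rt.freeMemory();
logger.log("performance comparison - heap delta: " + (after - before));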
Can someone provide me some hints?
Thank you very much.
Broadly, refactoring is not a means to increase performance, but to improve readability and maintainability of a code base. (Which may then help you make optimizations and architectural changes with more confidence and ease.) I assume you're aware of this, and you mean that you're trying to clean up some slow code in the process.
This is really a job for a profiler, not for hand-instrumentation. You will never get precise measurements this way. For example, System.currentTimeMillis() calls add their own overhead and, for short-lived methods, could take longer than the method itself. Runtime can only give you a crude picture of memory usage.
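If hand instrumentation has to stay for now, System.nanoTime() is at least designed for measuring elapsed intervals, although it still doesn't account for warm-up, JIT compilation or GC pauses the way a profiler run does. A rough sketch, reusing the placeholders from the question:

long start = System.nanoTime();
processHugeFile(f); // the method under test, as in the question
long elapsedMs = (System.nanoTime() - start) / 1000000;
logger.log("performance comparison - exec time (ms): " + elapsedMs);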
I don't agree that profilers can't help. JProfiler, for instance, will happily graph heap size over time, including generation sizes. It will break down memory usage by allocation site and object type, and it will show you performance bottlenecks by inclusive/exclusive time. And all of this without touching your code.
You really, really want to use a profiler like JProfiler, not hand-coded stuff.
You could use a profiler, but it would also be useful to enable garbage collection logging and then analyse the GC logs after your program has run.
This will tell you how much memory was being used AND how much time was spent doing garbage collection, which is more useful than just knowing how much memory was used - for example, you may notice excessive GC or stop-the-world collections that affect your throughput and latency requirements.
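GC logging is normally switched on with JVM flags such as -verbose:gc (or -Xlog:gc* on Java 9 and later). If you would rather sample the same kind of data from inside the program, the standard GarbageCollectorMXBean API reports cumulative collection counts and times; a minimal sketch:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    // Prints cumulative GC counts and times for every collector in this JVM.
    public static void dump() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + " - collections: " + gc.getCollectionCount()
                    + ", total time (ms): " + gc.getCollectionTime());
        }
    }
}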
Use a profiler! Results may be distorted if you measure it this way.
If you are using NetBeans, for example, it has a built-in profiler.
I'm puzzled by this requirement. I agree with Sean that refactoring is done for design reasons rather than performance reasons. There is no a priori reason to expect any performance increase in general, and if there is enough doubt about the benefits maybe it shouldn't be done at all.
And maybe you should do an algorithmic analysis instead: this is easy enough most of the time and gives you an analytic result in Big-O terms. If there isn't a clearly visible advantage in Big-O, there probably isn't much of a performance advantage at all.
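As a toy illustration of the kind of difference that does show up in Big-O terms (the method names here are made up): replacing a linear scan per query with a hash lookup turns roughly O(n*m) work into O(n + m), and that is the sort of change worth measuring; shaving constants usually isn't.

// roughly O(n*m): for each query, scan the whole list
static boolean slowContainsAll(java.util.List<String> data, java.util.List<String> queries) {
    for (String q : queries) {
        if (!data.contains(q)) return false; // linear scan per query
    }
    return true;
}

// roughly O(n + m): build a HashSet once, then each lookup is expected constant time
static boolean fastContainsAll(java.util.List<String> data, java.util.List<String> queries) {
    java.util.Set<String> seen = new java.util.HashSet<String>(data);
    for (String q : queries) {
        if (!seen.contains(q)) return false;
    }
    return true;
}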