Is there any technique for profiling an application in isolation?
I am new to profiling, so please tell me what you people do to profile your applications. Which is better: profiling the whole application, or profiling parts of it in isolation? If the choice is isolation, how do you do that?
As far as possible, profile the entire application, running a real (typical) workload. Anything else and you risk getting results that lead you to focus your optimization efforts in the wrong place.
EDIT
Isn't it too hard to get a correct result when profiling the whole application? The test result then depends on user interaction (button clicking, etc.) rather than on an automated task. Tell me if I'm wrong.
Getting the "correct result" depends on how you interpret the profiling data. For instance, if you are profiling an interactive application, you should figure out which parts of the profile correspond to waiting for user interaction, and ignore them.
There are a number of problems with profiling your application in parts. For example:
By deciding beforehand which parts of the application to profile, you don't get a good picture of the relative contribution of the different parts, and you risk wasting effort on the wrong parts.
You pretty much have to use artificial workloads. Whenever you do that there is a risk that the workloads are not representative of "normal" workloads, and your profiling results are biased.
In many applications, the bottlenecks are due to the way that the parts of the application interact with each other, or with I/O or garbage collection. Profiling different parts of the application separately is likely to miss these interactions.
... what I am looking for is the technique
Roughly speaking, you start with the biggest "hotspots" identified by the profile data and drill down until you've figured out why so much time is being spent in a certain area. It really helps if your profiling tool can aggregate and present the data both top down and bottom up.
But, at the end of the day going from the profiling evidence (hotspots, stack snapshots, etc) to the root cause and the remedy is often down to the practical knowledge and intuition that comes from experience.
(Yea ... I'm waffling a bit. But my point is that there is no magic formula for doing this. Ultimately, you've got to use your brain ... like you have to when debugging a complex application.)
First I just time it with a watch to get an overall measurement.
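A minimal sketch of that first step, in Java: wrap the work you care about with `System.nanoTime()` calls to get the overall wall-clock measurement. The `doUsefulStuff` method here is a hypothetical stand-in for whatever task you actually want to measure.

```java
public class OverallTimer {
    // Hypothetical workload stand-in; replace with the real task being measured.
    static long doUsefulStuff() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        doUsefulStuff();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // This is the number you compare before and after each fix.
        System.out.println("Elapsed: " + elapsedMs + " ms");
    }
}
```

This overall number is what tells you whether a fix actually helped.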
Then I run it under a debugger and take stackshots. What these do is tell me which lines of code are responsible for large fractions of time. In particular, this means lines where functions are called without really needing to be, and I/O that I may not have been aware of.
Since it shows me lines of code that take time and can be done a better way, I fix those.
Then I start over at the top and see how much time I actually saved. I repeat these steps until I can no longer find things that a) take significant % of time, and b) I can fix.
This has been called "poor man's profiling". The little secret is that not only is it cheap, it is very effective, because it avoids the common myths about profiling.
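If you want to automate the stackshot part instead of pausing a debugger by hand, a rough sketch in Java is to sample another thread's stack periodically and count which top frames keep showing up. The `busyWork` method and the sample counts here are illustrative, not a real profiler.

```java
import java.util.HashMap;
import java.util.Map;

public class StackSampler {
    // Illustrative hot loop; in a real session this is your application code.
    static void busyWork() {
        double x = 0;
        while (!Thread.currentThread().isInterrupted()) {
            x += Math.sqrt(x + 1);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(StackSampler::busyWork);
        worker.setDaemon(true);
        worker.start();

        // Take 50 "stackshots" of the worker thread, 10 ms apart,
        // and tally which method is on top of the stack each time.
        Map<String, Integer> hits = new HashMap<>();
        for (int i = 0; i < 50; i++) {
            StackTraceElement[] stack = worker.getStackTrace();
            if (stack.length > 0) {
                String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                hits.merge(top, 1, Integer::sum);
            }
            Thread.sleep(10);
        }
        worker.interrupt();

        // Methods with many hits are where the time is going.
        hits.forEach((method, count) -> System.out.println(count + "\t" + method));
    }
}
```

The frames that appear in a large fraction of the samples are the lines responsible for large fractions of time, which is exactly what the manual debugger technique shows you.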
P.S. If it is an interactive application, do all this just to the part of it that is slow, like if you press a "Do Useful Stuff" button, and it finishes a few seconds later. There's no point to taking stackshots when it's waiting for YOU.
P.P.S. Suppose there is some activity that should be faster, but finishes too quickly to take stackshots, like one that takes a second but should take a fraction of a second. Then what you can do is (temporarily) wrap a for loop of 10 or 100 iterations around it. That will make it take long enough to get samples. After you've sped it up, remove the loop.
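That temporary for-loop trick can be sketched like this; `quickOperation` is a hypothetical stand-in for the activity that finishes too fast to sample on its own.

```java
public class LoopAmplifier {
    // Hypothetical fast operation that normally finishes before you can sample it.
    static void quickOperation() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++) {
            sb.append(i);
        }
    }

    public static void main(String[] args) {
        // Temporary amplifier loop: repeat the operation so stack samples
        // land inside it. Remove this loop after you've sped it up.
        final int REPEAT = 100;
        long start = System.nanoTime();
        for (int i = 0; i < REPEAT; i++) {
            quickOperation();
        }
        long perCallMicros = (System.nanoTime() - start) / REPEAT / 1_000;
        System.out.println("Average per call: " + perCallMicros + " us");
    }
}
```

Dividing the total by the iteration count also gives you a usable per-call time for something that is too quick to time with a watch.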
Take a look at JProfiler: http://www.ej-technologies.com/products/jprofiler/overview.html