Simple Python code taking strangely long to run, incorrectly reported by Hotshot
I'm trying to optimize some code using hotshot, which I expected to be great since it's the high-performance profiler and all. But for some reason, I'm getting wildly inaccurate results from it. On my most recent profiling run, hotshot reported the cumulative time of the top-level function I called as 7.946 seconds. But without even timing it I can tell it's taking much longer than this. Timing the run myself simply using time.time() gives me a runtime of 42.465 seconds. I'm guessing this is because hotshot doesn't count system IO time or something and my program is batch-processing a lot of binary files?
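For reference, the comparison I'm making looks roughly like this (a simplified sketch; process_files and the "batch.prof" filename are stand-ins for my actual top-level function and log file):

import time
import hotshot
import hotshot.stats

def process_files():
    # Stand-in for the real top-level batch-processing function.
    pass

prof = hotshot.Profile("batch.prof")
start = time.time()
prof.runcall(process_files)
prof.close()
print("wall-clock time: %.3f s" % (time.time() - start))

# What hotshot itself reports for the same run.
stats = hotshot.stats.load("batch.prof")
stats.sort_stats("cumulative")
stats.print_stats(20)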
However, using some more time.time() blocks I narrowed most of the extra time usage (which hotshot fails to notice) down to a lower-level processing function that doesn't do any IO. hotshot reported the total and cumulative times that this function took as 4.414 and 6.185 seconds, respectively. However, again using time.time() statements I found that its cumulative time was over 30 seconds. The cumulative time spent in one relatively simple block of code was 7.32 seconds, longer than hotshot said was spent in the entire function. The block looks like this:
format = element[1]
if isinstance(format, tuple):
    format, operator, operand = format
else:
    operator, operand = (None, None)
That's it. This block was run over 9 million times, so maybe I shouldn't be surprised that this much time was spent in it. But there is plainly no IO being done here. Why does hotshot underestimate the time this function (and the whole program) is taking by so much?
And before you ask, no, I'm not using multiple threads or anything like that.
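For scale, here is a crude micro-benchmark of just that block in isolation (the element value is made up, since the real ones come from the binary files being processed):

import timeit

# Made-up sample element; the real values come from the binary files.
setup = "element = ('field_name', ('I', '+', 4))"

stmt = """
format = element[1]
if isinstance(format, tuple):
    format, operator, operand = format
else:
    operator, operand = (None, None)
"""

# Run the block roughly as many times as the real program does (over 9 million).
timer = timeit.Timer(stmt, setup=setup)
print(timer.timeit(number=9000000))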
If you are using Python 2.5+, try switching to cProfile. It appears that hotshot is no longer supported.
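A minimal sketch of that (process_files again stands in for the real entry point, and the output file name is arbitrary):

import cProfile
import pstats

def process_files():
    # Stand-in for the real batch-processing entry point.
    pass

# runctx lets us pass the current namespace explicitly.
cProfile.runctx("process_files()", globals(), locals(), "batch.cprof")

stats = pstats.Stats("batch.cprof")
stats.sort_stats("cumulative").print_stats(20)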
You want to find out where all the time is going, right? Don't think of it as measuring time. Think of it as finding lines of code that are on the stack a large fraction of the time, regardless of the total. Here's an example.
Many profilers fall into the same traps as gprof: ignoring blocked time, assuming that lines of code don't matter, assuming that "self time" does, and assuming that measurement has to be accurate.
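One crude way to apply that idea in-process is a small wall-clock stack sampler (a sketch, assuming Python 2.6+ on Unix since it relies on signal.setitimer; process_files is again a hypothetical stand-in, and note that on Python 2 a signal can interrupt blocking IO calls):

import signal
import traceback

def process_files():
    # Stand-in for the real top-level call.
    pass

samples = {}  # (filename, lineno, function) -> number of samples it was on the stack

def _sample(signum, frame):
    # At every tick, count every line currently on the call stack.
    for filename, lineno, func, _ in traceback.extract_stack(frame):
        key = (filename, lineno, func)
        samples[key] = samples.get(key, 0) + 1

# Take a wall-clock sample every 10 ms; blocked/IO time gets counted too.
signal.signal(signal.SIGALRM, _sample)
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)

process_files()

signal.setitimer(signal.ITIMER_REAL, 0, 0)  # stop sampling
for key, count in sorted(samples.items(), key=lambda kv: -kv[1])[:15]:
    print("%s:%d %s  on stack in %d samples" % (key[0], key[1], key[2], count))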