
Stopwatch weird behavior

            Stopwatch sw = new Stopwatch();
            for (int i = 0; i < lines.Length; i++)
            {
                sw.Start();
                fn(); //call function
                sw.Stop();

            }
            Console.WriteLine(sw.ElapsedMilliseconds);


            long total = 0;
            for (int i = 0; i < lines.Length; i++)
            {
                Stopwatch sw = Stopwatch.StartNew();
                fn(); //call function
                sw.Stop();
                total += sw.ElapsedMilliseconds;

            }
            Console.WriteLine(total);

The output is not the same. Do you have any explanation for that?


Leaving aside the fact that you're creating lots of objects in the second loop, which could easily trigger garbage collection inside fn() or otherwise make it take longer while timing, you're also only reading whole elapsed milliseconds on each iteration in the second case.

Suppose each iteration takes 0.1 milliseconds. Your total for the second loop would be 0, because on each iteration the elapsed time is rounded down to 0 milliseconds. The first loop keeps accumulating the elapsed ticks instead, so nothing is lost to rounding.

Leaving all this aside, you shouldn't be starting and stopping the timer this frequently anyway - it will mess with your results. Instead, start the stopwatch once before the loop, and stop it after the loop.
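A minimal sketch of that pattern, with `Fn` standing in for the function from the question and an assumed iteration count:

```csharp
using System;
using System.Diagnostics;

class TimingDemo
{
    static void Fn() { /* stand-in for the real work being measured */ }

    static void Main()
    {
        const int iterations = 1000; // assumed; use lines.Length in the original code

        // Start once before the loop, stop once after it, so the
        // start/stop overhead is paid a single time.
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            Fn();
        }
        sw.Stop();

        // Elapsed.TotalMilliseconds keeps the fractional part instead of
        // truncating to whole milliseconds like ElapsedMilliseconds does.
        Console.WriteLine(sw.Elapsed.TotalMilliseconds);
    }
}
```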

If you want to exclude the overhead of the looping itself, simply time an empty loop to measure that overhead, and subtract it from the time taken by the loop containing the actual work. In reality it's not quite that simple, because of the various complexities of real-world CPUs, such as cache misses, but microbenchmarking is frankly never particularly accurate in that respect. Treat the results as a guide more than anything else.
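A sketch of that baseline-subtraction idea (again with `Fn` as a stand-in for the real work; note the JIT is free to optimize an empty loop, so treat this as a rough guide):

```csharp
using System;
using System.Diagnostics;

class OverheadDemo
{
    static void Fn() { /* stand-in for the real work being measured */ }

    static void Main()
    {
        const int iterations = 1_000_000; // assumed iteration count

        // Time an empty loop to estimate the looping overhead alone.
        Stopwatch baseline = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) { }
        baseline.Stop();

        // Time the same loop with the real work in it.
        Stopwatch work = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) { Fn(); }
        work.Stop();

        // Subtract the baseline to approximate the cost of Fn alone.
        long ticks = work.ElapsedTicks - baseline.ElapsedTicks;
        Console.WriteLine(ticks * 1000.0 / Stopwatch.Frequency + " ms");
    }
}
```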


Because StartNew() and Stop() add overhead. That's the reason you normally run these kinds of tests over hundreds or thousands of iterations: to minimize the relative cost of the performance measurement itself.


You are probably running into the granularity of the system timer. Timing a trivial function will sometimes return 0 ms and sometimes 10 ms, and that error can add up across your test.

You would probably see a similar discrepancy if you ran the first loop twice, or the second loop twice.


The overhead of the loop itself is going to be considerably smaller than the overhead of repeatedly stopping and starting the timer, and smaller still compared with creating a new Stopwatch on every iteration. As such, I'd start the timer before the loop, stop it after the loop, and divide the elapsed time by the number of iterations. That will give you far more accurate results.
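That per-iteration average might look like this (a sketch, with `Fn` standing in for the function being measured and an assumed iteration count):

```csharp
using System;
using System.Diagnostics;

class AverageDemo
{
    static void Fn() { /* stand-in for the real work being measured */ }

    static void Main()
    {
        const int iterations = 10_000; // assumed iteration count

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) { Fn(); }
        sw.Stop();

        // Divide once at the end to get a fractional per-call average,
        // rather than truncating to whole milliseconds per iteration.
        double perCallMs = sw.Elapsed.TotalMilliseconds / iterations;
        Console.WriteLine(perCallMs + " ms per call");
    }
}
```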

