
Is this popular StackOverflow answer incorrect?

The post I am referring to is this. Not only does the first one run faster for me (563 ms compared to 630 ms) when I increase the size to 100,000,000, but in the past, accessing properties inside for loops has caused considerable slowdowns.

Once, when writing a waveform viewer, every frame took 23 milliseconds on average to process. The cause of the slowdown was the for loop that iterated over all of the picture's pixels: instead of storing the picture's width and height in locals before the loop, I accessed them during each iteration. After changing that, the time to process a single frame dropped from 23 milliseconds to a mere 3.
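To illustrate, here is a minimal sketch of that fix, assuming a System.Drawing.Bitmap and a hypothetical SumBrightness helper standing in for the real per-pixel work (the actual viewer code isn't shown):

using System.Drawing;

static long SumBrightness(Bitmap frame)
{
    // Bitmap.Width and Bitmap.Height are property calls into GDI+, so
    // re-reading them on every iteration is far more expensive than
    // reading a local; hoist them out of the loop once.
    int width = frame.Width;
    int height = frame.Height;

    long sum = 0;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            sum += frame.GetPixel(x, y).R;  // placeholder per-pixel work
    return sum;
}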

Also, I made this sample class:

class LOL
{
    private int x;

    public LOL(int x)
    {
        this.x = x;
    }

    public int X
    {
        get
        {
            return x;
        }
    }
}

I then made a for loop that iterates 500,000,000 times. One test stored X in an integer variable before the loop started; the other accessed the X property on each iteration. The former took approximately 1,500 milliseconds and the latter around 8,000.
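The benchmark itself isn't shown; a sketch of what the two timings presumably looked like, using the LOL class above (the accumulator keeps the loops from being optimized away):

using System;
using System.Diagnostics;

var lol = new LOL(42);
const int Iterations = 500000000;

// Cached: read the property once, before the loop.
var sw = Stopwatch.StartNew();
int cached = lol.X;
long sum = 0;
for (int i = 0; i < Iterations; i++)
    sum += cached;
sw.Stop();
Console.WriteLine("cached:   {0} ms ({1})", sw.ElapsedMilliseconds, sum);

// Direct: invoke the property getter on every iteration.
sw = Stopwatch.StartNew();
sum = 0;
for (int i = 0; i < Iterations; i++)
    sum += lol.X;
sw.Stop();
Console.WriteLine("property: {0} ms ({1})", sw.ElapsedMilliseconds, sum);

Worth noting: in an optimized release build the JIT will normally inline a trivial getter like X, so a gap that large suggests the measurement was taken in a debug build.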

Three tests, and each concludes that storing the limit beforehand is the optimal choice for performance. Am I missing something? My program currently needs optimization because it processes large pictures, and in all performance-critical areas I always store the bounds of the loop beforehand for performance gains; these tests seem to confirm that.


No, it looks right to me; if I run this:

using System.Diagnostics;

int[] values = new int[100000000];

// Hoisted: cache Length in a local and use that as the loop bound.
var watch = Stopwatch.StartNew();
int length = values.Length;
for (int i = 0; i < length; i++)
    values[i] = i;
watch.Stop();
var hoisted = watch.ElapsedMilliseconds;

// Direct: test against values.Length on every iteration.
watch = Stopwatch.StartNew();
for (int i = 0; i < values.Length; i++)
    values[i] = i;
watch.Stop();
var direct = watch.ElapsedMilliseconds;

and build with optimisations, running from the console, I get direct as 71 and hoisted as 163, which ties in with what I would expect: the JIT eliminates the out-of-bounds check on a vector, but only when the length is accessed directly in the loop condition.
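As a rough illustration of the pattern involved (a simplification; the exact code generated depends on the runtime version):

int[] values = new int[16];

// The JIT recognizes this idiom: the loop bound is the array's own Length,
// so every values[i] access is provably in range and the per-access range
// check can be elided.
for (int i = 0; i < values.Length; i++)
    values[i] = i;

// With the bound hoisted into an arbitrary local, the JIT of that era could
// not prove i < values.Length, so the range check stayed inside the loop.
int length = values.Length;
for (int i = 0; i < length; i++)
    values[i] = i;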


Compiled in release mode, the cached length is faster; in a debug build it is the opposite.

Test method:

using System;
using System.Diagnostics;
using System.Threading;

public static class Performance
{
    public static void Test(string name, decimal times, bool precompile, Action fn)
    {
        // Optionally run once up front so JIT compilation of fn
        // isn't included in the measurement.
        if (precompile)
        {
            fn();
        }

        // Settle the GC before timing.
        GC.Collect();
        Thread.Sleep(2000);

        var sw = new Stopwatch();

        sw.Start();

        for (decimal i = 0; i < times; ++i)
        {
            fn();
        }

        sw.Stop();

        // Format the elapsed time as HH:mm:ss.fff.
        Console.WriteLine("[{0,15}: {1,-15}]", name, new DateTime(sw.Elapsed.Ticks).ToString("HH:mm:ss.fff"));
        Debug.WriteLine("[{0,15}: {1,-15}]", name, new DateTime(sw.Elapsed.Ticks).ToString("HH:mm:ss.fff"));
    }
}

Test code:

var testAmount = 100;

int[] a1 = new int[10000000];
int[] a2 = new int[10000000];

Performance.Test
(
    "Direct", testAmount, true,
    () =>
    {
        for (int i = 0; i < a1.Length; ++i)
        {
            a1[i] = i;
        }
    }
);

Performance.Test
(
    "Cache", testAmount, true,
    () =>
    {
        var l = a2.Length;
        for (int i = 0; i < l; ++i)
        {
            a2[i] = i;
        }
    }
);

Debug results

[   Direct: 00:00:06.474   ]
[   Cache:  00:00:06.907   ]

Release results

[   Direct: 00:00:05.382   ]
[   Cache:  00:00:04.714   ]