
Does initializing a local variable with null impact performance?

Let's compare two pieces of code:

String str = null;
//Possibly do something...
str = "Test";
Console.WriteLine(str);

and

String str;
//Possibly do something...
str = "Test";
Console.WriteLine(str);

I always thought these two pieces of code were equivalent. But after building this code (Release mode with optimization enabled) and comparing the generated IL, I noticed that the first sample contains two extra IL instructions:

1st sample code IL:

.maxstack 1
.locals init ([0] string str)
IL_0000: ldnull
IL_0001: stloc.0
IL_0002: ldstr "Test"
IL_0007: stloc.0
IL_0008: ldloc.0
IL_0009: call void [mscorlib]System.Console::WriteLine(string)
IL_000e: ret

2nd sample code IL:

.maxstack 1
.locals init ([0] string str)
IL_0000: ldstr "Test"
IL_0005: stloc.0
IL_0006: ldloc.0
IL_0007: call void [mscorlib]System.Console::WriteLine(string)
IL_000c: ret

Is this code perhaps optimized by the JIT compiler? So does initializing a local method variable with null impact performance (I understand it is a very simple operation, but still), and should we avoid it? Thanks in advance.
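
For anyone who wants to reproduce the comparison, here is a minimal complete program that wraps the two snippets (the class and method names NullInitComparison, WithNull and WithoutNull are just placeholders). Compile it in Release mode and inspect the output assembly with ildasm or a decompiler such as ILSpy to get the IL listings above:

using System;

static class NullInitComparison
{
    // Variant 1: local explicitly initialized to null first.
    static void WithNull()
    {
        String str = null;
        //Possibly do something...
        str = "Test";
        Console.WriteLine(str);
    }

    // Variant 2: local declared without an initializer.
    static void WithoutNull()
    {
        String str;
        //Possibly do something...
        str = "Test";
        Console.WriteLine(str);
    }

    static void Main()
    {
        WithNull();
        WithoutNull();
    }
}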


http://www.codinghorror.com/blog/2005/07/for-best-results-dont-initialize-variables.html

To summarize the article: after running various benchmarks, initializing an object to a value (whether at its definition, in the class's constructor, or in a separate initialization method) can be roughly 10-35% slower on .NET 1.1 and 2.0. Newer compilers may optimize away initialization at the definition. The article closes by recommending avoiding initialization as a general rule.
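
If you want a rough sense of scale for the local-variable case from the question, here is a minimal Stopwatch-based sketch (the class, method names and iteration count are mine; a proper harness such as BenchmarkDotNet would give more trustworthy numbers). On a modern runtime both loops will most likely report essentially the same time, since the redundant store is optimized away:

using System;
using System.Diagnostics;

class InitBenchmark
{
    const int Iterations = 100000000;

    static void Main()
    {
        // Warm up so both loops run JIT-compiled code.
        RunWithNull(1000);
        RunWithoutNull(1000);

        var sw = Stopwatch.StartNew();
        RunWithNull(Iterations);
        Console.WriteLine("with null init:    " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        RunWithoutNull(Iterations);
        Console.WriteLine("without null init: " + sw.ElapsedMilliseconds + " ms");
    }

    static void RunWithNull(int count)
    {
        for (int i = 0; i < count; i++)
        {
            string s = null;   // explicit null initialization
            s = "Test";
            Consume(s);
        }
    }

    static void RunWithoutNull(int count)
    {
        for (int i = 0; i < count; i++)
        {
            string s;          // no initializer
            s = "Test";
            Consume(s);
        }
    }

    // NoInlining keeps the JIT from discarding the loop body entirely.
    [System.Runtime.CompilerServices.MethodImpl(
        System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
    static void Consume(string value) { }
}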


It is slightly slower, as Jon.Stromer.Galley's link points out. But the difference is amazingly small; likely on the order of nanoseconds. At that level, the overhead from using a high-level language like C# dwarfs any performance difference. If performance is that much of an issue, you may as well be coding in C or ASM or something.

The value of writing clear code (whatever that means to you) will far outweigh the 0.00001ms performance increase in terms of cost vs. benefit. That's why C# and other high-level languages exist in the first place.

I get that this is probably meant as an academic question, and I don't discount the value of understanding the internals of the CLR. But in this case, it just seems like the wrong thing to focus on.


Today (2019) both the .NET Framework and the .NET Core compilers are smart enough to optimize unneeded initializations away. (Along with the useless stloc.0 - ldloc.0 pair.)

Both versions compile to

        .maxstack 8
        ldstr "Test"
        call void [System.Console]System.Console::WriteLine(string)
        ret

See my SharpLab experiment as reference.
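
In C# terms, that optimized IL is what you would get from writing the call directly; both the null store and the temporary local are gone:

// Equivalent C# for the optimized IL above: no local, no null store.
Console.WriteLine("Test");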

Of course implementations change, but Justin's answer is timeless: I did this experiment out of curiosity; in a real situation, focus on code clarity and expressiveness and ignore micro-optimizations.
