
In C#, between > 0 and >= 1, which is faster and better? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. Closed 12 years ago.

In C#, between > 0 and >= 1, which is faster and better?


Neither; they should both compile down to the same thing, so if one form were faster or better the compiler would emit it in both cases.

More importantly, most programmers will probably find > 0 more readable, and readability is more important than micro-optimizations like this.


The one that is better is the one that most clearly expresses your intent.

If you are testing to see if an integer is in the range [1, 6] then you should write it as:

 if (x >= 1 && x <= 6) { ... }

Writing this would work, but doesn't so obviously fulfil the specification:

 if (x > 0 && x < 7) { ... }

I'm also assuming that you are talking about integer types here. If you are dealing with floating point or decimal numbers then they are not equivalent.
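For example, with a double the two tests can disagree:

 double x = 0.5;
 Console.WriteLine(x > 0);   // True
 Console.WriteLine(x >= 1);  // False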


Unless you've profiled your code and found this to be the bottleneck, you shouldn't worry about micro-optimisations. Even so, it can be interesting to inspect the code generated by the C# compiler in each case, to see whether the two forms compile to the same IL or not. This can be done with .NET Reflector.

if (x >= 1)
{
    Console.WriteLine("True!");
}

Results in:

L_000b: ldloc.0          // Load the value of x
L_000c: ldc.i4.1         // Load the constant 1 
L_000d: blt.s L_0019     // Branch if less than
L_000f: ldstr "True!"
L_0014: call void [mscorlib]System.Console::WriteLine(string)

Whereas:

if (x > 0)
{
    Console.WriteLine("True!");
}

results in the following IL:

L_000b: ldloc.0          // Load the value of x
L_000c: ldc.i4.0         // Load the constant 0
L_000d: ble.s L_0019     // Branch if less than or equal
L_000f: ldstr "True!"
L_0014: call void [mscorlib]System.Console::WriteLine(string)

In both cases the compiler has reversed the comparison: the "greater than or equal to" test was compiled to a "less than" branch, and the "greater than" test to a "less than or equal to" branch. In general the compiler is free to make such transformations, and running a different version of the compiler might produce different (but equivalent) IL.

Given that they don't compile to the same IL, the best way to see which is fastest is to actually run the code in a loop and see how long each version takes to execute. I tried this, but I did not see any measurable performance difference between the two ways of writing the code.
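For reference, a minimal sketch of such a timing loop (the iteration count is an arbitrary choice, and a serious measurement would also need warm-up runs and repeated trials):

using System;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        const int iterations = 100000000;
        int hits = 0;   // accumulate a result so the branch is not optimized away

        Stopwatch sw = Stopwatch.StartNew();
        for (int x = 0; x < iterations; x++)
        {
            if (x > 0) { hits++; }     // the "greater than" form
        }
        sw.Stop();
        Console.WriteLine("x > 0:  " + sw.ElapsedMilliseconds + " ms (" + hits + " hits)");

        hits = 0;
        sw.Restart();
        for (int x = 0; x < iterations; x++)
        {
            if (x >= 1) { hits++; }    // the "greater than or equal" form
        }
        sw.Stop();
        Console.WriteLine("x >= 1: " + sw.ElapsedMilliseconds + " ms (" + hits + " hits)");
    }
}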


Not defined. You explicitly ask about C#, but this depends on the processor architecture, i.e. on the native code the CLR's JIT compiler produces.


The performance difference between the two is going to be negligible, if there is one at all. I'm working on pinning down exactly what it might be (it will be platform dependent, since any difference would come down to the native code emitted and executed by the JIT).

Keep in mind, though, that performance-wise this is an extreme micro-optimization that is most likely unwarranted.

The better choice is whichever form is more readable and conveys your intent best in your code.


I agree with the other answers that micro-optimizations should not usually be a concern. Still, it can be interesting to see which of the two versions produces smaller or apparently faster IL.

So:

using System;

namespace IL_Test
{
    class Program
    {
        static void Main(string[] args)
        {
            int i = 3;
            if (i > 0)
            {
                Console.Write("i is greater than zero");
            }
        }
    }
}

Translates into:

(DEBUG)

.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 2
    .locals init (
        [0] int32 i,
        [1] bool CS$4$0000)
    L_0000: nop 
    L_0001: ldc.i4.3 
    L_0002: stloc.0 
    L_0003: ldloc.0 
    L_0004: ldc.i4.0 
    L_0005: cgt 
    L_0007: ldc.i4.0 
    L_0008: ceq 
    L_000a: stloc.1 
    L_000b: ldloc.1 
    L_000c: brtrue.s L_001b
    L_000e: nop 
    L_000f: ldstr "i is greater than zero"
    L_0014: call void [mscorlib]System.Console::Write(string)
    L_0019: nop 
    L_001a: nop 
    L_001b: ret 
}

(RELEASE)

.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 2
    .locals init (
        [0] int32 i)
    L_0000: ldc.i4.3 
    L_0001: stloc.0 
    L_0002: ldloc.0 
    L_0003: ldc.i4.0 
    L_0004: ble.s L_0010
    L_0006: ldstr "i is greater than zero"
    L_000b: call void [mscorlib]System.Console::Write(string)
    L_0010: ret 
}

while

using System;

namespace IL_Test
{
    class Program
    {
        static void Main(string[] args)
        {
            int i = 3;
            if (i >= 1)
            {
                Console.Write("i is greater than zero");
            }
        }
    }
}

into

(DEBUG)

.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 2
    .locals init (
        [0] int32 i,
        [1] bool CS$4$0000)
    L_0000: nop 
    L_0001: ldc.i4.3 
    L_0002: stloc.0 
    L_0003: ldloc.0 
    L_0004: ldc.i4.1 
    L_0005: clt 
    L_0007: stloc.1 
    L_0008: ldloc.1 
    L_0009: brtrue.s L_0018
    L_000b: nop 
    L_000c: ldstr "i is greater than zero"
    L_0011: call void [mscorlib]System.Console::Write(string)
    L_0016: nop 
    L_0017: nop 
    L_0018: ret 
}

(RELEASE)

.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 2
    .locals init (
        [0] int32 i)
    L_0000: ldc.i4.3 
    L_0001: stloc.0 
    L_0002: ldloc.0 
    L_0003: ldc.i4.1 
    L_0004: blt.s L_0010
    L_0006: ldstr "i is greater than zero"
    L_000b: call void [mscorlib]System.Console::Write(string)
    L_0010: ret 
}

As far as I can see, i >= 1 is marginally faster than i > 0 in debug mode: the >= 1 version needs only a single clt, while the > 0 version uses cgt followed by ceq to negate the result.

In release mode the only difference is at offset L_0004: ble.s versus blt.s. I would expect these two IL opcodes to translate into equally cheap native instructions.


Of course, it depends on the CPU architecture that your program will be run on. On x86 the jge and jg instructions, which are relevant here, take the same number of cycles IIRC. In the specific case of testing for >0, if you're using unsigned integers it may (I really don't know) be faster to use the test instruction instead of cmp, since for unsigned integers >0 is equivalent to != 0. Other architectures may be different. The point is that this is so low-level that, even in the rare case that it is worth optimizing, there's no hardware-independent way to optimize it.
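In C# terms, the unsigned equivalence mentioned above is simply that these two predicates always agree for a uint (whether a test instruction is actually emitted is, as noted, uncertain):

uint x = 5;
Console.WriteLine(x > 0);   // True
Console.WriteLine(x != 0);  // True: for unsigned values the two tests are identical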

Edit: Forgot to mention: Any compiler or VM worth its salt should be able to figure out that testing >= 1 is equivalent to testing >0 and perform such a trivial optimization if it even makes a difference at the assembly language level.


There is no difference, because the CPU internally subtracts the two numbers and inspects the result and the overflow/sign flags. There is no extra step involved for either instruction.

When it comes to code it depends on what you are trying to document. >= 1 means that 1 is the lowest possible number. > 0 means that 0 is not allowed. There is a small semantic difference that pros will notice. They will choose the right operator to document their intent.
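For instance, with some invented counter names, purely to show the intent difference:

int retryCount = 2;   // hypothetical counter
int errorCount = 0;   // hypothetical counter

if (retryCount >= 1)
{
    // reads as: "at least one retry"; 1 is the lowest meaningful value
}
if (errorCount > 0)
{
    // reads as: "any errors at all"; 0 is the disallowed value
}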

If you think that > n and >= n + 1 are always interchangeable, you are mistaken: > int.MaxValue and >= (int.MaxValue + 1) are different, because int.MaxValue + 1 overflows^^
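A quick demonstration of that edge case (the unchecked is required because overflow in a constant expression is otherwise a compile-time error):

int x = int.MaxValue;
Console.WriteLine(x > int.MaxValue);                 // False
Console.WriteLine(x >= unchecked(int.MaxValue + 1)); // True: the constant wraps to int.MinValue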


As for which is faster: I'm not sure, but I think they are equivalent. As for which is better: I think it depends on the context.


You won't notice any difference, except possibly in a very tight, performance-critical loop in your application, and then you would need to profile your code anyway to decide which is better.

Use the one that makes most sense in your application.


So usually when I compare something to > 0 or >= 1, I am trying to see whether an array or collection contains any elements. If that's the case, instead of using .Count > 0, try the Enumerable.Any() extension method in System.Linq, which only has to find a first element rather than enumerate them all (see the example below).

Otherwise, I don't know :)
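To illustrate the Any() suggestion with a lazily evaluated sequence (for a List&lt;T&gt; the Count property is already cheap, so the gain applies to sequences without a stored count):

using System;
using System.Collections.Generic;
using System.Linq;

class AnyExample
{
    static void Main()
    {
        // A lazily evaluated sequence whose length is not known up front.
        IEnumerable<int> numbers = Enumerable.Range(0, 1000000).Where(n => n % 7 == 0);

        Console.WriteLine(numbers.Any());        // True: stops after finding the first element
        Console.WriteLine(numbers.Count() > 0);  // True, but enumerates the entire sequence first
    }
}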


If there were a difference between the two, it would be such a micro-optimization that it shouldn't affect the overall performance of the application.

Moreover, by the time you are really weighing > 0 against >= 1, the cost of figuring out which one is faster already outweighs the (minimal) performance benefit.

Therefore, I'd also say that you should use the option that most expresses the intention.

