Is a += b more efficient than a = a + b in C?
I know in some languages the following:
a += b
is more efficient than:
a = a + b
because it removes the need for creating a temporary variable. Is this the case in C? Is it more efficient to use += (and, therefore, also -=, *=, etc.)?
So here's a definitive answer...
$ cat junk1.c
#include <stdio.h>
int main()
{
long a, s = 0;
for (a = 0; a < 1000000000; a++)
{
s = s + a * a;
}
printf("Final sum: %ld\n", s);
}
michael@isolde:~/junk$ cat junk2.c
#include <stdio.h>
int main()
{
long a, s = 0;
for (a = 0; a < 1000000000; a++)
{
s += a * a;
}
printf("Final sum: %ld\n", s);
}
michael@isolde:~/junk$ for a in *.c ; do gcc -O3 -o ${a%.c} $a ; done
michael@isolde:~/junk$ time ./junk1
Final sum: 3338615082255021824
real 0m2.188s
user 0m2.120s
sys 0m0.000s
michael@isolde:~/junk$ time ./junk2
Final sum: 3338615082255021824
real 0m2.179s
user 0m2.120s
sys 0m0.000s
...for my computer and my compiler running on my operating system. Your results may or may not vary. On my system, however, the time is identical: user time 2.120s.
Now just to show you how impressive modern compilers can be, you'll note that I used the expression a * a
in the assignment. This is because of this little problem:
$ cat junk.c
#include <stdio.h>
int main()
{
long a, s = 0;
for (a = 0; a < 1000000000; a++)
{
s = s + a;
}
printf("Final sum: %ld\n", s);
}
michael@isolde:~/junk$ gcc -O3 -S junk.c
michael@isolde:~/junk$ cat junk.s
.file "junk.c"
.section .rodata.str1.1,"aMS",@progbits,1
.LC0:
.string "Final sum: %ld\n"
.text
.p2align 4,,15
.globl main
.type main, @function
main:
.LFB22:
.cfi_startproc
movabsq $499999999500000000, %rdx
movl $.LC0, %esi
movl $1, %edi
xorl %eax, %eax
jmp __printf_chk
.cfi_endproc
.LFE22:
.size main, .-main
.ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
.section .note.GNU-stack,"",@progbits
The compiler figured out my loop and reduced it to the closed-form cumulative sum, embedding that as a constant which it simply prints out, skipping any kind of looping construct entirely. In the face of optimizers that clever, do you really think you're going to find any meaningful edge in distinguishing between s = s + a and s += a?!
This is a compiler specific question really, but I expect all modern compilers would give the same result. Using Visual Studio 2008:
int main() {
int a = 10;
int b = 30;
a = a + b;
int c = 10;
int d = 50;
c += d;
}
The line a = a + b has disassembly
0014139C mov eax,dword ptr [a]
0014139F add eax,dword ptr [b]
001413A2 mov dword ptr [a],eax
The line c += d has disassembly
001413B3 mov eax,dword ptr [c]
001413B6 add eax,dword ptr [d]
001413B9 mov dword ptr [c],eax
The two are identical: both statements compile to the same code.
It depends on what a is. a += b in C is by definition equivalent to a = a + b, except that from the abstract point of view a is evaluated only once in the former variant. If a is a "pure" value, i.e. if evaluating it once vs. evaluating it many times makes no impact on the program's behavior, then a += b is strictly equivalent to a = a + b in all regards, including efficiency.
In other words, in situations where you actually have a free choice between a += b and a = a + b (meaning that you know they do the same thing), they will generally have exactly the same efficiency. Some compilers might have difficulties when a stands for a function call (for one example; probably not what you meant), but when a is a non-volatile variable the machine code generated for both expressions will be the same.
For another example, if a is a volatile variable, then a += b and a = a + b have different behavior and, therefore, different efficiency. However, since they are not equivalent, your question simply does not apply in such cases.
In the simple cases shown in the question, there is no significant difference. Where the compound assignment operator scores is when you have an expression such as:
s[i]->m[j1].k = s[i]->m[jl].k + 23; // Was that a typo?
vs:
s[i]->m[j1].k += 23;
Two benefits, and I'm not counting less typing. There's no question of a typo like the j1 vs. jl above, because the expression is written only once; and the compiler doesn't evaluate the complex expression twice. The chances are that won't make much difference these days (optimizing compilers are a lot better than they used to be), but you could have still more complex expressions (evaluating a function defined in another translation unit, for example, as part of the subscripting) where the compiler may not be able to avoid evaluating the expression twice:
s[i]->m[somefunc(j1)].k = s[i]->m[somefunc(j1)].k + 23;
s[i]->m[somefunc(j1)].k += 23;
Also, you can write (if you're brave):
s[i++]->m[j1++].k += 23;
But you cannot write:
s[i++]->m[j1++].k = s[i]->m[j1].k + 23;
s[i]->m[j1].k = s[i++]->m[j1++].k + 23;
(or any other permutation) because the order of evaluation is not defined: modifying i and j1 in one part of the expression while also reading them in another leads to undefined behavior.
a += b is more efficient than a = a + b because the former takes you 6 keystrokes and the latter takes you 9 keystrokes.
With modern hardware, even if the compiler is stupid and uses slower code for one than the other, the total time saved over the lifetime of the program may possibly be less than the time it takes you to type the three extra key strokes.
However, as others have said, the compiler almost certainly produces exactly the same code so the former is more efficient.
Even if you factor in readability, most C programmers probably mentally parse the former more quickly than the latter because it is such a common pattern.
In virtually all cases, the two produce identical results.
Other than with a truly ancient or incompetently written compiler, there should be no difference as long as a and b are normal variables.
If you were dealing with C++ rather than C, operator overloading would allow there to be more substantial differences though.