Will an optimizing compiler remove calls to a method whose result will be multiplied by zero?
Suppose you have a computationally expensive method, Compute(p), which returns a float, and another method, Falloff(p), which returns another float between zero and one. If you compute Falloff(p) * Compute(p), will Compute(p) still run when Falloff(p) returns zero? Or would you need to write a special case to prevent Compute(p) from running unnecessarily?
Theoretically, an optimizing compiler could determine that skipping Compute when Falloff returns zero has no observable effect on the program. However, this is hard to test: if you make Compute emit debug output to check whether it runs, that output is itself a side effect, so the compiler can no longer omit the call — a sort of Schrödinger's cat situation.
I know the safe solution to this problem is just to add the special case, but I'm just curious.
Generally speaking, a compiler must assume that a function call could have side effects (infinite loops, exceptions, I/O, etc.) and will not optimize it out. On the other hand, there are such things as whole-program optimizers that can determine that a function has no side effects and thus omit the call when its return value is not used.
Note that if your function returns an IEEE float that is multiplied by 0, you cannot safely omit the function call unless you can prove it always returns a finite number. If it can return an Inf or NaN, the multiplication by 0 yields NaN rather than 0, so it is not a no-op and must be performed.
Unless the term is known to be 0 at compile time, nothing will be optimized out.
Also, note that branches are expensive, so unless Falloff() returns 0 frequently enough, the program will become slower if you put in an explicit test — so the best answer is "Test it!".
EDIT: if Falloff returns 0 frequently, doesn't it already contain a special case such as
if (something) return 0.0f;
? If so, a good idea would be to call Compute() conditionally from inside Falloff(). Also, if Falloff() is declared inline, then
if ((rez = Falloff()) != 0.0f)
    rez *= Compute();
could do that automatically.
This isn't really a problem of figuring out side effects: you can, after all, mark functions pure (i.e., without side effects) with at least gcc and clang. It isn't really even a situation of compiler writers being just about to add this optimization but dammit they remembered floating-point gotchas at the last minute: after all, they don't even have this optimization for integers.
The real problem is that most compilers are not smart enough to take context into account, so they can only somewhat nearsightedly say "I have a multiplication here, what optimizations apply to multiplications in general?" Since your optimization will almost always severely degrade performance in situations where programs multiply (comparison and branching takes time, after all), it won't even have that listed as an optimization to consider.
Since you can consider the context, you'll almost certainly have to add that optimization manually.