
Can a floating-point calculation differ on different processors? (+ passing doubles between C# and C)

I have an application written in C# that invokes some C code as well. The C# code gets a double as input, performs some calculations on it, passes it to the native layer, which performs its own calculations on it, and then passes it back to the C# layer.

If I run the same exe/DLLs on different machines (all of them x64 by Intel), is it possible that the final result I get will be different on different machines?


If you use the same executable(s), the results should be the same. However, it is worth noting that floating-point behaviour is usually controlled by a number of persistent settings (infinity/denormal handling, rounding mode, etc.). This means that the same floating-point instruction can produce different results depending on the current combination of settings. If your application makes sure that all these settings are reset to the same values at the beginning of execution, then the results should be the same. However, if some of these settings are not reset, or depend on external parameters (like environment variables), then in some circumstances you might end up observing different results on different machines.
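As an illustration of the rounding-mode point, here is a minimal C99 sketch (using the standard &lt;fenv.h&gt; interface; the values are purely illustrative) showing the very same division producing different results under different persistent settings:

```c
#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    volatile double x = 1.0, y = 3.0;   /* volatile blocks compile-time constant folding */

    fesetround(FE_DOWNWARD);            /* round toward -infinity */
    printf("down:    %.20f\n", x / y);

    fesetround(FE_UPWARD);              /* round toward +infinity */
    printf("up:      %.20f\n", x / y);

    fesetround(FE_TONEAREST);           /* restore round-to-nearest, the usual default */
    printf("nearest: %.20f\n", x / y);
    return 0;
}
```

If a third-party library (or anything else in the process) changes one of these modes and never restores it, subsequent arithmetic is silently affected, which is exactly how "the same exe" can behave differently between runs or machines.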


The hardware itself should do it the same way, assuming it implements IEEE floating-point operations, and I think most (all?) modern processors do.

http://en.wikipedia.org/wiki/IEEE_754-2008
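If you want to verify that two machines really produce bit-identical results, one simple approach (a sketch, not tied to the asker's code) is to compare the raw bit pattern of the double rather than a rounded decimal printout:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    double r = 0.1 + 0.2;            /* IEEE 754 binary64 gives the same bits everywhere */
    uint64_t bits;
    memcpy(&bits, &r, sizeof bits);  /* well-defined way to view the bit pattern */
    printf("%.17g = 0x%016llx\n", r, (unsigned long long)bits);
    return 0;
}
```

Two machines that both follow IEEE 754 and run the same instructions should print identical hex values.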


Most modern hardware is standardised, as is the representation of double. You can check that both sides are using the same type by checking the memory footprint of each variable, e.g. sizeof(x).

There should also be some information to poll in float.h.

From what I remember, int tends to be more problematic in consistency terms: historically some platforms defaulted to 2 bytes, others to 4. If you need guaranteed sizes, the fixed-width types in stdint.h (e.g. int64_t) are more reliable than long, whose size itself varies between platforms (see the sketch below).
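As a concrete version of the sizeof and float.h checks above, here is a small diagnostic one might compile and run on each target machine (the comments assume typical x64 platforms):

```c
#include <stdio.h>
#include <float.h>
#include <stdint.h>

int main(void)
{
    printf("sizeof(double)  = %zu\n", sizeof(double));   /* 8 on IEEE platforms */
    printf("sizeof(int)     = %zu\n", sizeof(int));      /* historically varied */
    printf("sizeof(long)    = %zu\n", sizeof(long));     /* 4 on Win64, 8 on Linux x64 */
    printf("sizeof(int64_t) = %zu\n", sizeof(int64_t));  /* always 8 by definition */
    printf("DBL_MANT_DIG    = %d\n",  DBL_MANT_DIG);     /* 53 for IEEE binary64 */
    printf("DBL_DIG         = %d\n",  DBL_DIG);
    return 0;
}
```

On the C# side, double is always an 8-byte IEEE 754 binary64, so the interop boundary is safe as long as the C side reports sizeof(double) == 8.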
