Is .NET “double” arithmetic independent of platform/architecture?
If I run a complex calculation involving System.Double
on .NET under Windows (x86 and x64) and then on Mono (Linux, Unix, whatever), am I absolutely guaranteed to get exactly the same result in all cases, or does the specification allow for some leeway in the calculation?
From MSDN:

> In addition, the loss of precision that results from arithmetic, assignment, and parsing operations with Double values may differ by platform. For example, the result of assigning a literal Double value may differ in the 32-bit and 64-bit versions of the .NET Framework.
Hope that helps.
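If you want to check this empirically on a given machine, comparing rounded decimal output can hide differences in the last bits. A minimal sketch (the calculation is just a stand-in) that prints the exact IEEE 754 bit pattern instead:

```csharp
using System;

class BitsDemo
{
    static void Main()
    {
        // Stand-in for whatever calculation you want to compare across
        // platforms; only the technique of printing exact bits matters here.
        double result = Math.Sin(0.5) * Math.Exp(1.3) / 3.0;

        // "R" round-trips all significant digits; DoubleToInt64Bits exposes
        // the raw IEEE 754 encoding, so even a 1-ulp difference is visible.
        Console.WriteLine("value: {0:R}", result);
        Console.WriteLine("bits : 0x{0:X16}", BitConverter.DoubleToInt64Bits(result));
    }
}
```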
No, it's not the same. Depending on the JIT, the code might compile to x87 or SSE instructions, which behave differently (for example, regarding denormal support). I found no way to force .NET to use reproducible floating-point math.
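As a sketch of the kind of expression where the two instruction sets can disagree (whether you actually observe a difference depends on the JIT and optimization settings), consider an intermediate result that overflows a 64-bit double but fits in an 80-bit x87 register:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        double huge = 1e308;

        // Evaluated strictly in 64-bit (SSE2) arithmetic, huge * 10.0
        // overflows to +Infinity and stays there after the division.
        // If the JIT keeps the intermediate on the x87 stack (80 bits,
        // with a 15-bit exponent), it never overflows and the division
        // can bring it back to 1e308.
        double r = huge * 10.0 / 10.0;
        Console.WriteLine(r);
    }
}
```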
There are some alternatives, but all of them are slow and some are a lot of work:
- Implement your own floating-point or fixed-point numbers:
  - 32-bit fixed-point isn't too difficult to code, but its limited range and precision make it hard to work with, and `Log` and `Sqrt` will be slow. If you want, I can dig out my unfinished code for this. (A minimal sketch follows this list.)
  - 64-bit fixed-point is nicer to work with, but you can't easily implement it in a high-performance way in bytecode, since some intermediate values are 96-128 bits wide, and the CLR offers no support for integers that large.
  - A custom floating-point format (I'd look into a 32-bit mantissa and a 16-bit exponent) is nice to work with but hard to implement, since to avoid precision loss you need a quick way to find the highest non-zero bit, and there are no BitScanForward/BitScanReverse intrinsics in C#/.NET.
- Move all your math code into native libraries, since from what I read you can force most C++ compilers into creating reproducible floating-point code.
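For the 32-bit fixed-point route, here is a minimal sketch of a Q16.16 type (the name and format are my own choices; overflow checks, rounding modes, and functions like `Log`/`Sqrt` are deliberately omitted). Because everything reduces to integer arithmetic, results are bit-identical on every platform:

```csharp
using System;

// Sketch of a Q16.16 fixed-point value: 16 integer bits, 16 fraction bits.
struct Fix32
{
    const int FracBits = 16;
    readonly int raw;                      // stored value is x * 2^16

    Fix32(int raw) { this.raw = raw; }

    public static Fix32 FromDouble(double d) =>
        new Fix32(checked((int)Math.Round(d * (1 << FracBits))));

    public double ToDouble() => raw / (double)(1 << FracBits);

    public static Fix32 operator +(Fix32 a, Fix32 b) => new Fix32(a.raw + b.raw);
    public static Fix32 operator -(Fix32 a, Fix32 b) => new Fix32(a.raw - b.raw);

    // Products need a 64-bit intermediate; this is exactly why a 64-bit
    // fixed-point type is awkward in the CLR, whose widest integer is 64 bits.
    public static Fix32 operator *(Fix32 a, Fix32 b) =>
        new Fix32((int)(((long)a.raw * b.raw) >> FracBits));

    public static Fix32 operator /(Fix32 a, Fix32 b) =>
        new Fix32((int)(((long)a.raw << FracBits) / b.raw));

    public override string ToString() => ToDouble().ToString("R");
}
```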
`Decimal` is implemented in software and is thus probably reproducible too, but it isn't fast either.
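To illustrate: because `Decimal` is a 128-bit base-10 type computed entirely in software, the classic repeated-0.1 sum comes out exact, independent of the hardware's floating-point unit:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        decimal dsum = 0m;
        double fsum = 0.0;
        for (int i = 0; i < 10; i++)
        {
            dsum += 0.1m;   // exact base-10 arithmetic, done in software
            fsum += 0.1;    // binary double; 0.1 is not exactly representable
        }
        Console.WriteLine(dsum);          // prints 1.0
        Console.WriteLine(fsum == 1.0);   // prints False
    }
}
```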
I do not believe so. Such phrases as:
> The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented.
and:
> This design allows the CLI to choose a platform-specific high-performance representation for floating-point numbers until they are placed in storage locations. For example, it might be able to leave floating-point variables in hardware registers that provide more precision than a user has requested. At the same time, CIL generators can force operations to respect language-specific rules for representations through the use of conversion instructions.
from section 12.1.3 of Partition I of the CLI specification (ECMA-335) would tend to indicate that rounding differences might occur if all operations take place within the internal representation.
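In C#, those conversion instructions surface as ordinary explicit casts: casting a floating-point expression to `double` emits a `conv.r8`, which rounds any higher-precision intermediate down to true 64-bit precision. A sketch (whether the two methods actually differ in practice depends on the JIT in use):

```csharp
using System;

class ConvDemo
{
    // The runtime may evaluate a * b at higher than 64-bit precision
    // (e.g. in an 80-bit x87 register) before the addition.
    static double MulAdd(double a, double b, double c) => a * b + c;

    // The explicit (double) cast forces the intermediate product to be
    // rounded to 64-bit precision before the addition takes place.
    static double MulAddRounded(double a, double b, double c) =>
        (double)(a * b) + c;

    static void Main()
    {
        Console.WriteLine("{0:R}", MulAdd(1.0 / 3.0, 3.0, -1.0));
        Console.WriteLine("{0:R}", MulAddRounded(1.0 / 3.0, 3.0, -1.0));
    }
}
```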