Is .NET “decimal” arithmetic independent of platform/architecture?
I asked about System.Double recently and was told that computations may differ depending on platform/architecture. Unfortunately, I cannot find any information to tell me whether the same applies to System.Decimal.
Am I guaranteed to get exactly the same result for any particular decimal computation independently of platform/architecture?
Am I guaranteed to get exactly the same result for any particular decimal computation independently of platform/architecture?
The C# 4 spec is clear that the value you get will be computed the same on any platform.
As LukeH's answer notes, the ECMA version of the C# 2 spec grants leeway to conforming implementations to provide more precision, so an implementation of C# 2.0 on another platform might provide a higher-precision answer.
For the purposes of this answer I'll just discuss the C# 4.0 specified behaviour.
The C# 4.0 spec says:
The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position [...]. A zero result always has a sign of 0 and a scale of 0.
Since the calculation of the exact value of an operation should be the same on any platform, and the rounding algorithm is well-defined, the resulting value should be the same regardless of platform.
However, note the parenthetical and that last sentence about the zeroes. It might not be clear why that information is necessary.
One of the oddities of the decimal system is that almost every quantity has more than one possible representation. Consider the exact value 123.456. A decimal is the combination of a 96-bit integer, a 1-bit sign, and an eight-bit scale that specifies a power of ten, from 0 to 28, by which the integer is divided. That means the exact value 123.456 could be represented by the decimals 123456 x 10^-3 or 1234560 x 10^-4 or 12345600 x 10^-5. Scale matters.
The C# specification also mandates how information about scale is computed. The literal 123.456m would be encoded as 123456 x 10^-3, and 123.4560m would be encoded as 1234560 x 10^-4.
Observe the effects of this feature in action:
decimal d1 = 111.111000m;  // scale 6
decimal d2 = 111.111m;     // scale 3
decimal d3 = d1 + d1;      // scale 6
decimal d4 = d2 + d2;      // scale 3
decimal d5 = d1 + d2;      // scale 6 -- the larger operand scale is preserved
Console.WriteLine(d1);
Console.WriteLine(d2);
Console.WriteLine(d3);
Console.WriteLine(d4);
Console.WriteLine(d5);
Console.WriteLine(d3 == d4);
Console.WriteLine(d4 == d5);
Console.WriteLine(d5 == d3);
This produces
111.111000
111.111
222.222000
222.222
222.222000
True
True
True
Notice how information about significant zero figures is preserved across operations on decimals, and that decimal.ToString knows about that and displays the preserved zeroes if it can. Notice also how decimal equality knows to make comparisons based on exact values, even if those values have different binary and string representations.
I don't think the spec actually says that decimal.ToString() needs to correctly print out values with trailing zeroes based on their scales, but it would be foolish of an implementation not to do so; I would consider that a bug.
I also note that the internal memory format of a decimal in the CLR implementation is 128 bits, subdivided into: 16 unused bits, 8 scale bits, 7 more unused bits, 1 sign bit and 96 mantissa bits. The exact layout of those bits in memory is not defined by the specification, and if another implementation wants to stuff additional information into those 23 unused bits for its own purposes, it can do so. In the CLR implementation the unused bits are supposed to always be zero.
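If you want to see that layout for yourself, decimal.GetBits exposes the four 32-bit chunks, and the flags word can be decoded exactly as described above. The following is just an illustrative sketch (the ScaleInspector and Dump names are mine, not anything from the spec or the answers above):
using System;
using System.Numerics;

static class ScaleInspector
{
    // Decode a decimal into coefficient, scale and sign using decimal.GetBits.
    // Element 3 is the flags word: scale in bits 16-23, sign in bit 31.
    static void Dump(decimal d)
    {
        int[] bits = decimal.GetBits(d);
        int scale = (bits[3] >> 16) & 0xFF;
        bool negative = bits[3] < 0;

        // Reassemble the 96-bit coefficient from the three 32-bit parts.
        BigInteger coefficient = new BigInteger((uint)bits[2]);
        coefficient = (coefficient << 32) | (uint)bits[1];
        coefficient = (coefficient << 32) | (uint)bits[0];

        Console.WriteLine($"{d} = {(negative ? "-" : "")}{coefficient} x 10^-{scale}");
    }

    static void Main()
    {
        Dump(123.456m);               // 123456 x 10^-3
        Dump(123.4560m);              // 1234560 x 10^-4
        Dump(111.111000m + 111.111m); // 222222000 x 10^-6 -- the scale survives the addition
    }
}
An alternative implementation might lay the bits out differently internally, but GetBits documents the form in which it returns them.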
Even though the format of floating point types is clearly defined, floating point calculations can indeed have differing results depending on architecture, as stated in section 4.1.6 of the C# specification:
Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations.
While the decimal type is subject to approximation in order for a value to be represented within its finite range, that range is, by definition, suitable for financial and monetary calculations. Therefore, it has higher precision (and a smaller range) than float or double. It is also more clearly defined than the other floating point types, such that it would appear to be platform-independent (see section 4.1.7 - I suspect this platform independence is more because there isn't standard hardware support for types with the size and precision of decimal rather than because of the type itself, so this may change with future specifications and hardware architectures).
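To put rough numbers on that trade-off, the framework's own constants show it; this little snippet is mine, not from the spec, and the digit counts in the comments are the usual documented figures:
using System;

static class RangeVsPrecision
{
    static void Main()
    {
        // decimal: about 28-29 significant digits, range topping out near 7.9 x 10^28.
        Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335

        // double: only about 15-17 significant digits, but a vastly larger range.
        Console.WriteLine(double.MaxValue);    // roughly 1.8 x 10^308

        // float: about 6-9 significant digits.
        Console.WriteLine(float.MaxValue);     // roughly 3.4 x 10^38
    }
}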
If you need to know whether a specific implementation of the decimal type is correct, you should be able to craft some unit tests from the specification that test that correctness.
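A handful of such checks might look like the sketch below; it uses plain console assertions rather than any particular test framework, and the expected strings follow from the scale-preservation rules quoted in the answer above:
using System;
using System.Globalization;

static class DecimalSpecChecks
{
    static void Check(bool condition, string description)
    {
        Console.WriteLine($"{(condition ? "PASS" : "FAIL")}: {description}");
    }

    static void Main()
    {
        // Addition preserves the larger of the two operand scales.
        Check((1.00m + 1.000m).ToString(CultureInfo.InvariantCulture) == "2.000",
              "addition preserves the larger operand scale");

        // Equality is based on exact values, not on representation.
        Check(2.000m == 2m, "values with different scales compare equal");

        // Trailing zeroes in a literal are encoded in the scale byte.
        Check(((decimal.GetBits(1.000m)[3] >> 16) & 0xFF) == 3,
              "literal 1.000m is encoded with scale 3");
    }
}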
The decimal type is represented in what amounts to base-10 using a struct (containing integers, I believe), as opposed to double and other floating-point types, which represent non-integral values in base-2. Therefore, decimals are exact representations of base-10 values, within a standardized precision, on any architecture. This is true for any architecture running a correct implementation of the .NET spec.
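The usual 0.1 example makes the base-2 versus base-10 point concrete (nothing implementation-specific here, just a quick illustration):
using System;

static class BaseTenVsBaseTwo
{
    static void Main()
    {
        // 0.1 has no finite base-2 representation, so the double sum drifts.
        double dsum = 0.0;
        for (int i = 0; i < 10; i++) dsum += 0.1;
        Console.WriteLine(dsum == 1.0);   // False

        // 0.1m is stored exactly as 1 x 10^-1, so the decimal sum is exact.
        decimal msum = 0.0m;
        for (int i = 0; i < 10; i++) msum += 0.1m;
        Console.WriteLine(msum == 1.0m);  // True
    }
}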
So to answer your question: since the behavior of decimal is standardized this way in the specification, decimal values should be the same on any architecture conforming to that spec. If they don't conform to that spec, then they're not really .NET.
"Decimal" .NET Type vs. "Float" and "Double" C/C++ Type
A reading of the specification suggests that decimal (like float and double) might be allowed some leeway in its implementation so long as it meets certain minimum standards.
Here are some excerpts from the ECMA C# spec (section 11.1.7):
The decimal type can represent values including those in the range 1 x 10^-28 through 1 x 10^28 with at least 28 significant digits.

The finite set of values of type decimal are of the form (-1)^s x c x 10^-e, where the sign s is 0 or 1, the coefficient c is given by 0 <= c < Cmax, and the scale e is such that Emin <= e <= Emax, where Cmax is at least 1 x 10^28, Emin <= 0, and Emax >= 28. The decimal type does not necessarily support signed zeros, infinities, or NaN's.

For decimals with an absolute value less than 1.0m, the value is exact to at least the 28th decimal place. For decimals with an absolute value greater than or equal to 1.0m, the value is exact to at least 28 digits.
Note that the wording of the Microsoft C# spec (section 4.1.7) is significantly different to that of the ECMA spec. It appears to lock down the behaviour of decimal a lot more strictly.
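If you want to see what precision the implementation you are actually running delivers, beyond those minimums, a non-terminating division is a quick probe; this snippet is mine, and the commented results are what the Microsoft CLR produces:
using System;

static class PrecisionProbe
{
    static void Main()
    {
        decimal third = 1m / 3m;
        Console.WriteLine(third);            // 0.3333333333333333333333333333 (28 digits)
        Console.WriteLine(third * 3m);       // 0.9999999999999999999999999999
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (2^96 - 1, 29 digits)
    }
}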