
Why does a C# System.Decimal remember trailing zeros?

Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with? See the following example:

public void DoSomething()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;
    Console.WriteLine(dec1);            //Output: 0.5
    Console.WriteLine(dec2);            //Output: 0.50
    Console.WriteLine(dec1 == dec2);    //Output: True
}

The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?


It can be useful to represent a number including its accuracy - so 0.5m could be used to mean "anything between 0.45m and 0.55m" (with appropriate limits) and 0.50m could be used to mean "anything between 0.495m and 0.505m".
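
For example, the scale carries through arithmetic, so those extra digits propagate into results (a small sketch; the outputs in the comments reflect standard .NET behaviour, where addition keeps the larger operand scale and multiplication adds the scales):

public void ShowScalePropagation()
{
    Console.WriteLine(0.5M + 0.5M);     //Output: 1.0
    Console.WriteLine(0.50M + 0.50M);   //Output: 1.00
    Console.WriteLine(0.50M * 0.50M);   //Output: 0.2500
}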

I suspect that most developers don't actually use this functionality, but I can see how it could be useful sometimes.

I believe this ability first arrived in .NET 1.1, btw - I think decimals in 1.0 were always effectively normalized.


I think it was done to provide a better internal representation for numeric values retrieved from databases. Database engines have a long history of storing numbers in a decimal format (avoiding rounding errors) with an explicit specification for the number of digits in the value.

Compare the SQL Server decimal and numeric column types for example.
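
As a rough sketch of how that mapping might look (not tied to any particular data provider): the decimal(int, int, int, bool, byte) constructor takes an explicit scale, so a value read from, say, a NUMERIC(10, 3) column can be rebuilt with its declared scale intact.

public void MaterializeColumnValue()
{
    //A NUMERIC(10, 3) value of 1.250 rebuilt with its scale preserved
    decimal fromDb = new decimal(1250, 0, 0, false, 3);
    Console.WriteLine(fromDb);          //Output: 1.250
}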


Decimal values carry their scale (the number of decimal places) as part of the value. The literal 0.50M has a scale of 2 embedded in it, and so the decimal variable created from it remembers that it is a 2-decimal-place value. This behaviour is entirely by design.

The comparison of the values is an exact numerical equality check on the values, so here, trailing zeroes do not affect the outcome.
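
One way to see both facts at once (a sketch using decimal.GetBits, which exposes the 96-bit integer and the scale stored in bits 16-23 of the flags element): the two values compare equal even though their stored scales differ.

public void InspectScale()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;

    //Bits 16-23 of the fourth element hold the scale
    int scale1 = (decimal.GetBits(dec1)[3] >> 16) & 0xFF;
    int scale2 = (decimal.GetBits(dec2)[3] >> 16) & 0xFF;

    Console.WriteLine(scale1);          //Output: 1
    Console.WriteLine(scale2);          //Output: 2
    Console.WriteLine(dec1 == dec2);    //Output: True
}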
