How to decide what to use - double or decimal? [duplicate]

This question already has answers here. Closed 11 years ago.

Possible Duplicate:

decimal vs double! - Which one should I use and when?

I'm using the double type for prices in my trading software. I've noticed that sometimes there are odd errors. They occur when the price contains 4 digits after the dot, like 2.1234.

When I send "2.1234" from my program, the order appears on the market at a price of "2.1235".

I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need at most 6 digits after the dot.

The question is: where is the line? When should I use decimal?

Should I use decimal for any financial operations? Even if I need just one digit after the dot? (1.1, 1.2, etc.)

I know decimal is pretty slow, so I would prefer to use double unless decimal is absolutely required.


Use decimal whenever you're dealing with quantities that you want (and can) represent exactly in base-10. That includes monetary values, because you want 2.1234 to be stored exactly as 2.1234.

Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
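As a minimal C# sketch of that difference (the exact noise digits a double prints depend on the runtime's formatting), here is how each type holds the asker's price of 2.1234:

    using System;

    class PricePrecision
    {
        static void Main()
        {
            // double stores the nearest base-2 value to 2.1234, not 2.1234 itself.
            double d = 2.1234;
            // "G17" round-trips the full stored value; expect trailing noise digits.
            Console.WriteLine(d.ToString("G17"));

            // decimal stores 2.1234 exactly in base 10.
            decimal m = 2.1234m;
            Console.WriteLine(m); // 2.1234
        }
    }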

Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation (a quick check follows the list):

  • double has a larger range (it can handle very large and very small magnitudes);
  • decimal has more precision (more significant digits);
  • you may need to use double to interact with older APIs that are not aware of decimal;
  • double is faster than decimal;
  • decimal has a larger memory footprint.
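These trade-offs are easy to verify yourself; a quick hedged check (the printed values are approximate and the formatting varies by runtime):

    using System;

    class RangeVsPrecision
    {
        static void Main()
        {
            Console.WriteLine(sizeof(double));   // 8 bytes
            Console.WriteLine(sizeof(decimal));  // 16 bytes
            Console.WriteLine(double.MaxValue);  // about 1.8E+308: huge range
            Console.WriteLine(decimal.MaxValue); // about 7.9E+28: smaller range, but 28-29 significant digits
        }
    }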


When accuracy is needed and important, use decimal.

When accuracy is not that important, you can use double.

In your case, you should be using decimal, as it's a financial matter.


For financial operations I always use the decimal type.


Use decimal; it's built to represent base-10 quantities (i.e. prices) exactly.


Decimal is the way to go when dealing with prices.


If it's financial software, you should probably use decimal. This wiki article summarises it quite nicely.


A simple answer is in this example:

    decimal d = 0.3M + 0.3M + 0.3M;
    bool ret = d == 0.9M;   // true
    double db = 0.3 + 0.3 + 0.3;
    bool dret = db == 0.9;  // false

The test with the double fails because 0.3 in its binary representation (base 2) is periodic, so you lose precision. A decimal is stored in base 10 (a 96-bit integer scaled by a power of ten, not binary), so you don't lose significant digits unexpectedly. Decimals are unfortunately dramatically slower than doubles. Usually we use decimal for financial calculations, where every digit has to be considered to avoid tolerance issues, and double/float for engineering.
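You can inspect that base-10 representation directly; decimal.GetBits exposes the 96-bit integer and the power-of-ten scale (a small sketch):

    using System;

    class DecimalInternals
    {
        static void Main()
        {
            // 0.9m is stored as the integer 9 scaled by 10^-1.
            int[] parts = decimal.GetBits(0.9m);
            Console.WriteLine(parts[0]);                // 9: low word of the 96-bit integer
            Console.WriteLine((parts[3] >> 16) & 0xFF); // 1: the power-of-ten scale
        }
    }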


Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).


There is an explanation of it on MSDN.


As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while a decimal uses a decimal representation, preserving the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database, without doing any rounding, you will not actually lose any precision.
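A small sketch of that round-trip claim, using the "R" round-trip format specifier (on newer .NET runtimes the default ToString round-trips as well):

    using System;

    class RoundTrip
    {
        static void Main()
        {
            double price = 2.1234;
            string text = price.ToString("R"); // round-trip format: keeps every stored bit
            double back = double.Parse(text);
            Console.WriteLine(back == price);  // True: serialization alone loses nothing
        }
    }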

However, decimals are much better suited for representing monetary values, where you care about the decimal digits (and not the binary digits a double uses internally). But if you need to do complex calculations (e.g. the integrals used in actuarial computations) you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimal.
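For example (a hypothetical compound-interest sketch; the names and numbers are illustrative), anything that goes through Math.Pow or similar must leave decimal and come back:

    using System;

    class DecimalToDouble
    {
        static void Main()
        {
            decimal principal = 1000.00m;
            // Math.Pow only takes doubles, so the exact decimal value must be
            // converted, temporarily giving up the exact base-10 representation.
            double grown = (double)principal * Math.Pow(1.05, 10);
            decimal result = Math.Round((decimal)grown, 2);
            Console.WriteLine(result); // about 1628.89
        }
    }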

A decimal also "remembers" how many digits it has: even though decimal 1.230 is equal to 1.23, the first is still aware of the trailing zero and can display it if formatted as text.
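That scale-keeping behavior is easy to see (a short sketch):

    using System;

    class TrailingZero
    {
        static void Main()
        {
            decimal a = 1.230m;
            decimal b = 1.23m;
            Console.WriteLine(a == b); // True: numerically equal
            Console.WriteLine(a);      // 1.230: the stored scale keeps the trailing zero
            Console.WriteLine(b);      // 1.23
        }
    }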


If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still working very fast.

The simplest way to use fixed point is to store the number as an integer count of its smallest unit. For example, if a price always has 2 decimals, you would store the amount in cents ($12.45 is stored in an int with value 1245, representing 1245 cents). With four decimals you would store ten-thousandths (12.3456 is stored in an int with value 123456, representing 123456 ten-thousandths), and so on.

The disadvantage is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). If you are going to use other mathematical functions you also have to take this into consideration, as in the sketch below.
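Here is a minimal sketch of that rescaling, assuming four implied decimals and hypothetical ToFixed/FromFixed helpers:

    using System;

    static class FixedPointPrice
    {
        // Prices stored as an integer number of ten-thousandths (4 implied decimals).
        const long Scale = 10_000;

        static long ToFixed(decimal price) => (long)(price * Scale);
        static decimal FromFixed(long value) => (decimal)value / Scale;

        static void Main()
        {
            long a = ToFixed(2.1234m); // 21234
            long b = ToFixed(1.5000m); // 15000

            // Addition keeps the unit (ten-thousandths).
            long sum = a + b;
            // Multiplication squares the unit, so divide by Scale once to rescale.
            long product = a * b / Scale;

            Console.WriteLine(FromFixed(sum));     // 3.6234
            Console.WriteLine(FromFixed(product)); // 3.1851
        }
    }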

On the other hand, if the number of decimals varies a lot, fixed point is a bad idea. And if high-precision floating-point calculations are needed, the decimal datatype was constructed for exactly that purpose.
