
C++ floating point precision [duplicate]


Possible Duplicate:

Floating point inaccuracy examples

double a = 0.3;
std::cout.precision(20);
std::cout << a << std::endl;

result: 0.2999999999999999889

double a, b;
a = 0.3;
b = 0;
for (char i = 1; i <= 50; i++) {
  b = b + a;
};
std::cout.precision(20);
std::cout << b << std::endl;

result: 15.000000000000014211

So 'a' is smaller than it should be, but if we add 'a' 50 times, the result comes out bigger than it should be.

Why is this? And how do I get the correct result in this case?


To get the correct results, don't set the precision higher than the number of decimal digits available for this numeric type:

#include <iostream>
#include <limits>

int main()
{
    // digits10 is the number of decimal digits a double can hold without
    // noise (15 for IEEE 754 doubles).
    std::cout.precision(std::numeric_limits<double>::digits10);

    double a = 0.3;
    std::cout << a << std::endl;

    double b = 0;
    for (char i = 1; i <= 50; i++) {
        b = b + a;
    }
    std::cout << b << std::endl;
}

Although if that loop runs for 5000 iterations instead of 50, the accumulated error will show up even with this approach -- it's just how floating-point numbers work.
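
For example, here's a quick sketch of how the accumulated error grows with the number of additions (the iteration counts are just illustrative):

#include <initializer_list>
#include <iostream>
#include <limits>

int main()
{
    std::cout.precision(std::numeric_limits<double>::digits10);
    for (long n : {50L, 5000L, 5000000L}) {
        double sum = 0.0;
        for (long i = 0; i < n; ++i)
            sum += 0.3;
        // The 50-step sum still prints as 15, but the longer sums
        // eventually drift visibly away from n * 0.3.
        std::cout << n << " additions of 0.3: " << sum << std::endl;
    }
}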


Why is this?

Because floating-point numbers are stored in binary, in which 0.3 is 0.01001100110011001... repeating, just as 1/3 is 0.333333... repeating in decimal. When you write 0.3, you actually get 0.299999999999999988897769753748434595763683319091796875 (the infinite binary representation rounded to 53 significant bits).
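
You can see the stored value yourself by asking for far more digits than a double meaningfully carries (a small sketch; the exact output depends on your standard library, but on common implementations you get the full value):

#include <iomanip>
#include <iostream>

int main()
{
    // Printing 0.3 with 60 fixed digits reveals the double that was actually
    // stored; on common implementations this shows
    // 0.299999999999999988897769753748434595763683319091796875 (then zeros).
    std::cout << std::fixed << std::setprecision(60) << 0.3 << std::endl;
}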

Keep in mind that for the applications for which floating-point is designed, it's not a problem that you can't represent 0.3 exactly. Floating-point was designed to be used with:

  • Physical measurements, which are often measured to only 4 sig figs and never to more than 15.
  • Transcendental functions like logarithms and the trig functions, which are only approximated anyway.

For both of these, binary-decimal conversion error is insignificant compared to the other sources of error.

Now, if you're writing financial software, for which $0.30 means exactly $0.30, it's different. There are decimal arithmetic classes designed for this situation.
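
If you don't want to pull in a library, the usual workaround is fixed-point arithmetic on integers, e.g. keeping money as a count of cents. A minimal sketch (the variable names and the cents representation are just illustrative):

#include <cstdint>
#include <iomanip>
#include <iostream>

int main()
{
    std::int64_t price_cents = 30;   // $0.30, stored exactly as the integer 30
    std::int64_t total_cents = 0;

    for (int i = 0; i < 50; ++i)
        total_cents += price_cents;  // exact integer addition, no drift

    // 50 * $0.30 is exactly $15.00
    std::cout << "$" << total_cents / 100 << "."
              << std::setfill('0') << std::setw(2) << total_cents % 100
              << std::endl;
}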

And how do I get the correct result in this case?

Limiting the precision to 15 significant digits is usually enough to hide the "noise" digits. Unless you actually need an exact answer, this is usually the best approach.


Computers store floating point numbers in binary, not decimal.

Many numbers that look ordinary in decimal, such as 0.3, have no exact representation of finite length in binary.
Therefore, the compiler picks the closest number that has an exact binary representation, just like you write 0.33333 for 1⁄3.

If you add many floating-point numbers, these tiny differences add up, and you get unexpected results.
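
A small sketch of that effect (the 1e-9 tolerance is just an arbitrary example):

#include <cmath>
#include <iostream>

int main()
{
    double sum = 0.0;
    for (int i = 0; i < 50; ++i)
        sum += 0.3;                 // fifty roundings accumulate

    std::cout << std::boolalpha;
    std::cout << (sum == 15.0) << std::endl;                  // false: the errors added up
    std::cout << (50 * 0.3 == 15.0) << std::endl;             // typically true: one rounding lands back on 15.0, but don't rely on it
    std::cout << (std::fabs(sum - 15.0) < 1e-9) << std::endl; // true: compare with a tolerance instead
}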


It's not that the value is inherently bigger or smaller; it's simply impossible to store "0.3" as an exact value inside a binary floating-point number.

The way to get the "correct" result is to not display 20 decimal places.
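
For instance, just leaving std::cout at its defaults already rounds the noise away (a trivial sketch):

#include <iostream>

int main()
{
    double a = 0.3;
    // The default stream precision is 6 significant digits,
    // so this simply prints 0.3.
    std::cout << a << std::endl;
}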


To get the "correct" result, try

Wikipedia's list of arbitrary-precision arithmetic libraries: http://en.wikipedia.org/wiki/Arbitrary-precision

or

http://speleotrove.com/decimal
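
For example, a minimal sketch with Boost.Multiprecision's decimal type (assuming Boost is installed; the libraries behind the links above each have their own interfaces):

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>

int main()
{
    using boost::multiprecision::cpp_dec_float_50;  // 50 decimal digits of precision

    cpp_dec_float_50 a("0.3");  // constructed from a string, 0.3 is exact in decimal
    cpp_dec_float_50 b = 0;

    for (int i = 0; i < 50; ++i)
        b += a;

    std::cout << b << std::endl;  // prints exactly 15
}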

