
Why do I need 17 significant digits (and not 16) to represent a double?

Can someone give me an example of a floating point number (double precision), that needs more than 16 significant decimal digits to represent it?

I have found in this thread that sometimes you need up to 17 digits, but I am not able to find an example of such a number (16 seems enough to me).

Can somebody clarify this?


My other answer was dead wrong.

#include <stdio.h>

int
main(int argc, char *argv[])
{
    unsigned long long n = 1ULL << 53;   /* 2^53 */
    unsigned long long a = 2*(n-1);      /* 2*(2^53 - 1), fits in 53 bits of precision */
    unsigned long long b = 2*(n-2);      /* 2*(2^53 - 2), fits in 53 bits of precision */
    /* Print both integers, then confirm they remain distinct after conversion to double. */
    printf("%llu\n%llu\n%d\n", a, b, (double)a == (double)b);
    return 0;
}

Compile and run to see:

18014398509481982
18014398509481980
0

a and b are just 2*(2^53 - 1) and 2*(2^53 - 2).

Those are 17-digit base-10 numbers. When rounded to 16 digits, they are the same. Yet a and b clearly only need 53 bits of precision to represent in base-2. So if you take a and b and cast them to double, you get your counter-example.
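
A quick way to see the collision at 16 digits directly is to re-print those two values with printf's %g conversion; this is just a small illustrative sketch, not part of the original program:

#include <stdio.h>

int main(void)
{
    double a = 18014398509481982.0;   /* 2*(2^53 - 1), exactly representable */
    double b = 18014398509481980.0;   /* 2*(2^53 - 2), exactly representable */

    /* At 16 significant digits the two distinct doubles print identically... */
    printf("%.16g\n%.16g\n", a, b);
    /* ...while 17 significant digits are enough to tell them apart. */
    printf("%.17g\n%.17g\n", a, b);
    return 0;
}

On an IEEE-754 platform this should print 1.801439850948198e+16 twice, followed by the two distinct 17-digit integers.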


The correct answer is the one by Nemo above. Here I am just pasting a simple Fortran program showing an example of two numbers that need 17 digits of precision to print, demonstrating that one does need the (es23.16) format to print double precision numbers if one doesn't want to lose any precision:

program test
implicit none
integer, parameter :: dp = kind(0.d0)
real(dp) :: a, b
a = 1.8014398509481982e+16_dp
b = 1.8014398509481980e+16_dp
print *, "First we show, that we have two different 'a' and 'b':"
print *, "a == b:", a == b, "a-b:", a-b
print *, "using (es22.15)"
print "(es22.15)", a
print "(es22.15)", b
print *, "using (es23.16)"
print "(es23.16)", a
print "(es23.16)", b
end program

It prints:

First we show, that we have two different 'a' and 'b':
a == b: F a-b:   2.0000000000000000     
using (es22.15)
1.801439850948198E+16
1.801439850948198E+16
using (es23.16)
1.8014398509481982E+16
1.8014398509481980E+16


I think the guy on that thread is wrong, and 16 base-10 digits are always enough to represent an IEEE double.

My attempt at a proof would go something like this:

Suppose otherwise. Then, necessarily, two distinct double-precision numbers must be represented by the same 16-significant-digit base-10 number.

But two distinct double-precision numbers must differ by at least one part in 2^53, which is greater than one part in 10^16. And no two numbers differing by more than one part in 10^16 could possibly round to the same 16-significant-digit base-10 number.

This is not completely rigorous and could be wrong. :-)


Dig into the single and double precision basics, wean yourself off the notion of this or that many (16-17) DECIMAL digits, and start thinking in (53) BINARY digits. The necessary examples may be found here at stackoverflow if you spend some time digging.

And I fail to see how you can award a best answer to anyone giving a DECIMAL answer without a qualified BINARY explanation. This stuff is straightforward, but it is not trivial.
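
To make the binary view concrete, here is a minimal sketch (assuming an IEEE-754 binary64 double and C99's <stdint.h>) that dumps the sign, biased exponent, and the 52 stored fraction bits of a double; the 53rd significand bit is implicit:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double x = 0.1;   /* not exactly representable in binary */
    uint64_t bits;

    /* Copy the object representation into an integer of the same size. */
    memcpy(&bits, &x, sizeof bits);

    printf("sign     = %llu\n", (unsigned long long)(bits >> 63));
    printf("exponent = %llu (biased)\n", (unsigned long long)((bits >> 52) & 0x7FF));
    printf("fraction = 0x%013llx (52 stored bits)\n",
           (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL));
    return 0;
}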


The largest continuous range of integers that can be exactly represented by a double (8-byte IEEE) is -2^53 to 2^53 (-9007199254740992. to 9007199254740992.). The numbers -2^53 - 1 and 2^53 + 1 cannot be exactly represented by a double.

Therefore, no more than 16 significant decimal digits to the left of the decimal point are needed to exactly represent any integer in that continuous range.
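
A quick sanity check of that boundary, as a small C sketch:

#include <stdio.h>

int main(void)
{
    double limit = 9007199254740992.0;   /* 2^53, exactly representable */
    double above = 9007199254740993.0;   /* 2^53 + 1, rounds back to 2^53 */

    /* Prints 1: the two literals become the same double. */
    printf("%d\n", limit == above);
    return 0;
}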

