Largest representable floating-point number / Tiny mistake in "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
I believe there is a tiny mistake in the paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
It claims that

"A less common situation is that a real number is out of range, that is, its absolute value is larger than β * β^(e_max)."
This is almost exact; the maximum representable floating-point number is slightly less than that, and the real number is out of range when it is larger than (β - β^(1-p)) * β^(e_max). Right?
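As a quick sanity check (a sketch of my own; the parameters β = 2, p = 53, e_max = 1023 are the IEEE 754 double-precision values, not something taken from the paper), the largest finite double is exactly (β - β^(1-p)) * β^(e_max), which is strictly below β * β^(e_max):

    import sys

    beta, p, e_max = 2, 53, 1023                       # IEEE 754 double-precision parameters
    largest = (beta**p - 1) * beta**(e_max - p + 1)    # (beta - beta^(1-p)) * beta^e_max, exactly
    overflow_bound = beta ** (e_max + 1)               # beta * beta^e_max

    print(largest == sys.float_info.max)   # True: this is the largest finite double
    print(largest < overflow_bound)        # True: slightly less than beta * beta^e_max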
I cannot be bothered with Greek letters, so I will write b for "beta" and m for "e_max".
So say b is the base, p is the precision, and m is the max exponent.
Then I think the expression you want is:
(1 - b^(-p)) * b^m
For example, for base-10 with 4 digits of precision and a max exponent of 12, this gives:
.9999 * 10^12
...which is correct.
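Evaluating that expression with exact rational arithmetic (a small sketch of my own, using Python's fractions module) reproduces the worked example. Note that the expression treats the significand as a fraction below 1 (.dddd * b^m); in the paper's d.ddd... * β^e notation the same largest value is (β - β^(1-p)) * β^(e_max) with m = e_max + 1.

    from fractions import Fraction

    b, p, m = 10, 4, 12                          # the base-10 toy example above
    largest = (1 - Fraction(1, b**p)) * b**m     # (1 - b^(-p)) * b^m, computed exactly
    print(largest)                               # 999900000000, i.e. .9999 * 10^12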
Note that this is not exactly right for IEEE floating point, because there the leading "1" bit is implicit. And there are oddities when the exponent field is all 1's: that pattern is reserved for infinities and NaNs rather than for finite values.
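To make that concrete, here is a small sketch of my own that inspects the raw bits of a double:

    import struct, sys

    def exponent_field(x):
        """Return the raw 11-bit exponent field of a 64-bit double."""
        bits = struct.unpack('<Q', struct.pack('<d', x))[0]
        return (bits >> 52) & 0x7FF

    print(hex(exponent_field(sys.float_info.max)))   # 0x7fe: largest field value used for finite numbers
    print(hex(exponent_field(float('inf'))))         # 0x7ff: all ones, reserved for infinities and NaNs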