
Addition vs Subtraction in loss of significance with floating-points

While learning about loss of precision in floating-point arithmetic and the different methods to avoid it (using a conjugate, a Taylor series, ...), books frequently mention the subtraction of two very similar numbers, or of one large and one small number, as the biggest cause of error. How come it is only subtraction that causes this and not addition? As I see it, you would still lose just as many significant bits while shifting.
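
For reference, here is a minimal sketch of the kind of conjugate rewrite mentioned above (Python, with values chosen only for illustration): subtracting two nearly equal square roots directly cancels most of the significant digits, while multiplying by the conjugate replaces that subtraction with a harmless addition.

    import math

    x = 1.0e10   # illustrative value

    # Direct form: two nearly equal square roots are subtracted,
    # so their common leading digits cancel.
    direct = math.sqrt(x + 1.0) - math.sqrt(x)

    # Conjugate form: sqrt(x+1) - sqrt(x) == 1 / (sqrt(x+1) + sqrt(x)),
    # which avoids the cancelling subtraction entirely.
    conjugate = 1.0 / (math.sqrt(x + 1.0) + math.sqrt(x))

    print(direct)      # only about 6 of the ~16 digits are correct here
    print(conjugate)   # accurate to full double precision, roughly 5e-06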


When subtracting two nearly equal numbers, the difference will have fewer significant digits (or bits) than the original numbers. A decimal example is:

 1.23456789    9 significant digits
-1.23456785    9 significant digits
───────────
 0.00000004    1 significant digit
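
The same thing happens with 64-bit binary doubles; here is a minimal sketch in Python reusing the decimal values above (the exact printed digits depend on how the inputs round, so the comments describe them only approximately).

    a = 1.23456789
    b = 1.23456785
    print(a - b)   # prints a value close to, but not exactly, 4e-08

    # Each stored operand carries a representation error of up to ~1e-16.
    # Relative to 1.2345... that error is invisible, but relative to the
    # tiny difference 4e-08 it is on the order of 1 part in 10**8, so
    # roughly half of the ~16 significant digits printed are meaningless.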


There is no difference between addition and subtraction; subtraction is just addition with the negated operand. You are correct that, in order to add or subtract, the number with the smaller exponent has to be shifted right, with its low-order bits falling off into the bit bucket, so that operand ends up with fewer significant bits. If the exponents differ by more than the width of the mantissa, then for either addition or subtraction the result is simply the number with the larger exponent: every bit of the smaller number has been shifted into the bit bucket, and N + 0 = N - 0 = N.
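
A small sketch of that last point in Python (doubles have a 53-bit significand; the particular values are just illustrative):

    # When the exponents differ by more than the 53-bit significand,
    # aligning the operands shifts every bit of the smaller number out,
    # so it contributes nothing to the sum or difference.
    big = 2.0 ** 60
    small = 1.0
    print(big + small == big)   # True: the 1.0 fell into the bit bucket
    print(big - small == big)   # True for subtraction as well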
