
Dealing with small numbers and accuracy

I have a program where I deal with a lot of very small numbers (towards the lower end of the Double limits).

During the execution of my application, some of these numbers progressively get smaller, meaning their "estimation" becomes less accurate.

My solution at the moment is to scale them up before I do any calculations and then scale them back down again.
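Roughly what I mean, as a sketch (Python stands in here for whatever my program actually uses; the scale factor is arbitrary):

    # Scale up, do the arithmetic at a "normal" magnitude, then scale back down.
    # SCALE is a power of two, so the scaling step itself is exact.
    SCALE = 2.0 ** 100

    def scaled_sum(values):
        return sum(v * SCALE for v in values) / SCALE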

...but it's got me thinking, am I actually gaining any more "accuracy" by doing this?

Thoughts?


Are your numbers really in the region between roughly 10^-308 (the smallest normalized double) and roughly 5 × 10^-324 (the smallest representable double, which is denormalized, i.e. it loses precision)? If so, then by scaling them up you do indeed gain accuracy, because you work around the limits of the exponent range of the double type.
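To make that concrete, here is a small sketch (Python stands in for the unspecified language, since its floats are IEEE 754 doubles; the specific values are only for illustration):

    import sys

    # IEEE 754 binary64 ("double") limits:
    print(sys.float_info.min)   # smallest positive normalized double, ~2.225e-308
    print(5e-324)               # smallest positive denormal double, ~4.94e-324

    # Below sys.float_info.min the significand progressively loses bits,
    # so relative precision degrades:
    x = 1e-320                    # denormal; only ~11 significant bits remain
    print(x * (1 + 1e-5) == x)    # True: a relative change of 1e-5 is lost

    # The same relative change on a normalized value is representable:
    y = 1e-300                    # normalized; full 53-bit significand
    print(y * (1 + 1e-5) == y)    # False

    # Scaling by a power of two only changes the exponent (it is exact), so
    # doing the arithmetic at a normalized magnitude restores full precision:
    xs = x * 2.0 ** 200           # now normalized
    print(xs * (1 + 1e-5) == xs)  # False

Note that if the final result itself has to be stored back down in the denormal range, that stored value is still limited; the scaling helps the intermediate arithmetic.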

I have to wonder though: what kind of application deals with numbers that extremely small? I know of no physical discipline that needs anything like that.


A double has a fixed number of bits for the significant digits (the significand), and another fixed number of bits for the exponent (the "power" part).

You may therefore be facing two distinct issues:

  1. Regarding the exponent part: that is what approaching the limit of small doubles is about. Scaling your numbers up (by powers of 2, which is exact) keeps them representable and away from that limit.

  2. When you write about the accuracy of the "estimation", I assume you mean the number of significant digits: that is not related to the small-number limit. A number that is very small, but not so small that it hits the lower limit for doubles, carries the same number of significant digits as any "more normal" number. Concerns about the numerical precision of a result should, generally speaking, focus on how it is computed rather than on its absolute size; a short illustration follows below.
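To illustrate the second point (again sketched in Python; the values are arbitrary):

    import math

    # 1.0, 1e-100 and 1e-300 are all normalized doubles: their relative spacing
    # (ulp / value) is on the order of 1e-16 in each case, i.e. the same
    # ~15-16 significant decimal digits regardless of magnitude.
    for v in (1.0, 1e-100, 1e-300):
        ulp = math.ulp(v)
        print(f"value={v:.0e}  ulp={ulp:.3e}  relative ulp={ulp / v:.3e}")

    # Precision is lost by *how* a value is computed, not by how small it is.
    # Classic example: cancellation when subtracting nearly equal numbers.
    a = 1e-200 * (1 + 1e-12)
    b = 1e-200
    print((a - b) / 1e-200)   # ideally 1e-12, but only ~4 significant digits survive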
