
Why are there mistakes in calculations of floats and doubles?

I have always wondered why floats aren't really accurate when computers should give a precise answer. I read in a book somewhere that it is better to compare a variable against a small range around the value we want, since the calculated value may not always be the exact number we expect. How do machines calculate these divisions? Any links to websites are welcome :)
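For example, here is a small Python sketch of the kind of surprise the question describes, and the tolerance-based comparison the book recommends (the choice of `1e-9` as the tolerance is just an illustrative assumption):

```python
# 0.1 has no exact binary representation, so the sum below is
# not exactly 0.3 -- most languages behave the same way.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# The book's advice: compare against a small tolerance instead
# of testing for exact equality.
epsilon = 1e-9
print(abs((0.1 + 0.2) - 0.3) < epsilon)  # True
```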


Jon Skeet mentions it here (scroll down till you see "double d=0.3;" drawn on a slide): http://msmvps.com/blogs/jon_skeet/archive/2009/11/02/omg-ponies-aka-humanity-epic-fail.aspx

A more detailed answer here: http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html


A simple answer is that a computer uses a limited number of digits to represent a number.

If you try to represent, e.g., the number 1/7 in decimal, you get 0.14285714... and so on, infinitely. The same happens on a computer when, e.g., trying to represent the number 1/10 (0.1 in decimal) in binary, which becomes an infinite series as well.

Therefore you sometimes don't get the most accurate number.
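You can see the truncated binary series directly. A Python sketch: `Decimal(0.1)` prints the exact value the binary double actually stores for 0.1, and 1/7 shows the analogous non-terminating expansion in decimal:

```python
from decimal import Decimal

# The exact value stored in a binary double for 0.1 --
# close to, but not exactly, one tenth.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# 1/7 is similarly non-terminating in decimal, so it too is
# cut off after a limited number of digits.
print(1 / 7)  # 0.14285714285714285
```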

