Avoiding problems with JavaScript's weird decimal calculations

I just read on MDN that one of the quirks of JS's handling of numbers due to everything being "double-precision 64-bit format IEEE 754 values" is that when you do something like .2 + .1 you get 0.30000000000000004 (that's what the article reads, but I get 0.29999999999999993 in Firefox). Therefore:

(.2 + .1) * 10 == 3

evaluates to false.

This seems like it would be very problematic. So what can be done to avoid bugs due to the imprecise decimal calculations in JS?

I've noticed that if you do 1.2 + 1.1 you get the right answer. So should you just avoid any kind of math that involves values less than 1? Because that seems very impractical. Are there any other dangers to doing math in JS?

Edit:

I understand that many decimal fractions can't be stored as binary, but the way most other languages I've encountered appear to deal with the error (like JS handles numbers greater than 1) seems more intuitive, so I'm not used to this, which is why I want to see how other programmers deal with these calculations.


1.2 + 1.1 may be ok but 0.2 + 0.1 may not be ok.

This is a problem in virtually every language that is in use today. The problem is that 1/10 cannot be accurately represented as a binary fraction just like 1/3 cannot be represented as a decimal fraction.
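You can see the stored binary approximation directly by asking for more digits than the literal contains:

(0.1).toFixed(20) // "0.10000000000000000555"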

The workaround is to round to only the number of decimal places that you need, and either work with strings, which are exact:

(0.2 + 0.1).toFixed(4) === 0.3.toFixed(4) // true

or you can convert it to numbers after that:

+(0.2 + 0.1).toFixed(4) === 0.3 // true

or using Math.round:

Math.round(0.2 * X + 0.1 * X) / X === 0.3 // true

where X is some power of 10 e.g. 100 or 10000 - depending on what precision you need.
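For example, wrapped in a small helper for two decimal places (round2 is just an illustrative name, not a standard function):

function round2(n) {
    // X = 100 for two decimal places
    return Math.round(n * 100) / 100;
}

round2(0.2 + 0.1) === 0.3 // true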

Or you can use cents instead of dollars when counting money:

cents = 1499; // $14.99

That way you only work with integers and you don't have to worry about decimal and binary fractions at all.
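A minimal sketch of the idea (the variable names are illustrative):

var priceInCents = 1499;                      // $14.99
var taxInCents = 120;                         // $1.20
var totalInCents = priceInCents + taxInCents; // 1619, exact integer math
(totalInCents / 100).toFixed(2);              // "16.19" for display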

2017 Update

The situation of representing numbers in JavaScript may be a little bit more complicated than it used to be. It used to be the case that we had only one numeric type in JavaScript:

  • 64-bit floating point (the IEEE 754 double precision floating-point number - see: ECMA-262 Edition 5.1, Section 8.5 and ECMA-262 Edition 6.0, Section 6.1.6)

This is no longer the case - not only are there more numeric types in JavaScript today, more are on the way, including a proposal to add arbitrary-precision integers to ECMAScript, and hopefully, arbitrary-precision decimals will follow - see this answer for details:

  • Difference between floats and ints in Javascript?
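(That arbitrary-precision integer proposal has since shipped as BigInt in modern engines; a quick sketch of exact integer arithmetic with it:)

// BigInt literals take an `n` suffix; arithmetic is exact at any size,
// but BigInt and Number values cannot be mixed in a single expression.
var totalCents = 1499n + 2501n; // 4000n, exactly
Number(totalCents) / 100;       // 40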

See also

Another relevant answer with some examples of how to handle the calculations:

  • Node giving strange output on sum of particular float digits


In situations like these you would typically use an epsilon comparison.

Something like (pseudo code)

if (abs(((.2 + .1) * 10) - 3) > epsilon)

where epsilon is something like 0.00000001, or whatever precision you require.
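A runnable version of the same check (approxEqual is an illustrative name), using Number.EPSILON (standard since ES2015) scaled to the magnitude of the operands:

function approxEqual(a, b) {
    // Scale the tolerance so the check also works for values much larger than 1.
    var tolerance = Number.EPSILON * Math.max(1, Math.abs(a), Math.abs(b));
    return Math.abs(a - b) <= tolerance;
}

approxEqual((0.2 + 0.1) * 10, 3) // true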

Have a quick read of Comparing floating point numbers


(Math.floor(( 0.1+0.2 )*1000))/1000

This reduces the precision of floating point numbers, but it solves the problem if you are not working with very small values. For example:

.1+.2 =
0.30000000000000004

After the proposed operation you will get 0.3. But any value between:

0.30000000000000000
0.30000000000000999

will also be considered 0.3
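Wrapped up as a reusable helper (truncateTo is an illustrative name; note that Math.floor truncates rather than rounds, so 0.2999 would become 0.299, not 0.3):

function truncateTo(value, places) {
    var factor = Math.pow(10, places);
    return Math.floor(value * factor) / factor;
}

truncateTo(0.1 + 0.2, 3) // 0.3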


There are libraries that seek to solve this problem but if you don't want to include one of those (or can't for some reason, like working inside a GTM variable) then you can use this little function I wrote:

Usage:

var a = 194.1193;
var b = 159;
a - b; // returns 35.11930000000001
doDecimalSafeMath(a, '-', b); // returns 35.1193

Here's the function:

function doDecimalSafeMath(a, operation, b, precision) {
    function decimalLength(numStr) {
        var pieces = numStr.toString().split(".");
        if (!pieces[1]) return 0;
        return pieces[1].length;
    }

    // Figure out what we need to multiply by to make everything a whole number.
    precision = precision || Math.pow(10, Math.max(decimalLength(a), decimalLength(b)));

    // Math.round strips any residual float error introduced by the scaling itself.
    a = Math.round(a * precision);
    b = Math.round(b * precision);

    // Figure out which operation to perform.
    var operator;
    switch (typeof operation === 'string' ? operation.toLowerCase() : '') {
        case '-':
            operator = function(a, b) { return a - b; };
            break;
        case '+':
            operator = function(a, b) { return a + b; };
            break;
        case '*':
        case 'x':
            // Both operands were scaled, so the product carries the factor twice.
            precision = precision * precision;
            operator = function(a, b) { return a * b; };
            break;
        case '÷':
        case '/':
            // The scale factors cancel out in a division.
            precision = 1;
            operator = function(a, b) { return a / b; };
            break;

        // Let us pass in a function to perform other operations.
        default:
            operator = operation;
    }

    var result = operator(a, b);

    // Remove our multiplier to put the decimal back.
    return result / precision;
}


Understanding rounding errors in floating point arithmetic is not for the faint-hearted! Basically, calculations are done as though there were infinitely many bits of precision available. The result is then rounded according to rules laid down in the relevant IEEE specifications.

This rounding can throw up some funky answers:

Math.floor(Math.log(1000000000) / Math.LN10) == 8 // true

This is an entire order of magnitude out. That's some rounding error!
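One way to sidestep this particular trap (Math.log10 has been standard since ES2015; engines typically return exact results for exact powers of ten, though the spec does not strictly guarantee it):

Math.log(1000000000) / Math.LN10             // 8.999999999999998
Math.round(Math.log(1000000000) / Math.LN10) // 9
Math.log10(1000000000)                       // 9 in most engines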

For any floating point architecture, there is a number that represents the smallest interval between distinguishable numbers. It is called EPSILON.

It is part of the ECMAScript standard as of ES2015, exposed as Number.EPSILON. For older environments, you can calculate it as follows:

function epsilon() {
    if ("EPSILON" in Number) {
        return Number.EPSILON;
    }
    var eps = 1.0; 
    // Halve epsilon until we can no longer distinguish
    // 1 + (eps / 2) from 1
    do {
        eps /= 2.0;
    }
    while (1.0 + (eps / 2.0) != 1.0);
    return eps;
}

You can then use it, something like this:

function numericallyEquivalent(n, m) {
    var delta = Math.abs(n - m);
    return (delta < epsilon());
}

Or, since rounding errors can accumulate alarmingly, you may want to compare delta / 2 or delta * delta against epsilon rather than delta itself, which effectively widens the tolerance.


You need a bit of error control.

Make a little double comparing method:

function compareDouble(a, b) {
    var epsilon = 0.00000001; // maximum error allowed

    if ((a < b + epsilon) && (a > b - epsilon)) {
        return 0;  // equal within tolerance
    } else if (a < b + epsilon) {
        return -1; // a is smaller
    } else {
        return 1;  // a is larger
    }
}
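For example:

compareDouble(0.1 + 0.2, 0.3) // 0 (equal within tolerance)
compareDouble(1, 2)           // -1
compareDouble(2, 1)           // 1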


I ran into this while working with monetary values, and I found a solution just by changing the values to cents, so I did the following:

result = ((value1*100) + (value2*100))/100;

Working with monetary values, we have only two decimal places; that's why I multiplied and divided by 100. If you're going to work with more decimal places, you'll have to use a power of ten matching the number of places:

  • .0 -> 10
  • .00 -> 100
  • .000 -> 1000
  • .0000 -> 10000 ...

With this, you'll always dodge working with decimal values.
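The same idea as a small helper, with Math.round added so that the scaling itself cannot reintroduce float error (addFixed is just an illustrative name):

function addFixed(a, b, places) {
    var factor = Math.pow(10, places);
    // Round after scaling to strip any residual error from the multiplication.
    return (Math.round(a * factor) + Math.round(b * factor)) / factor;
}

addFixed(0.1, 0.2, 2) // 0.3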


Convert the decimals into integers by multiplication, then at the end convert the result back by dividing by the same number.

Example in your case:

(0.2 * 100 + 0.1 * 100) / 100 * 10 === 3

