
JavaScript Integer math incorrect results

I am just trying to implement a simple RNG in JS.

What's happening is that JavaScript evaluates 119106029 * 1103515245 to 131435318772912110 rather than 131435318772912105. We know this is wrong because the product of two odd numbers cannot be even.

Anyone know what's up? I just want a reliable, repeatable RNG, and because of these incorrect values I can't get results to match up with my C implementation of the same thing.
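
Here's a minimal snippet reproducing what I see in the console:

var a = 119106029;
var b = 1103515245;
console.log(a * b);             // prints 131435318772912110, not ...105
console.log((a * b) % 2 === 0); // true, even though both factors are odd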


Per the ECMAScript standard, all numbers in JavaScript are (64-bit IEEE 754) floating-point numbers by default.

However, all 32-bit integers can be represented exactly as floating-point numbers. You can force a result to 32 bits by using the appropriate bitwise operator, like this:

x = (a * b) >>> 0;  // force to unsigned int32
x = (a * b) | 0;    // force to signed int32

Weird, but that's the standard.

(Incidentally this rounding behavior is one of the most frequently reported "bugs" against Firefox's JavaScript engine. Looks like it's been reported 3 times so far this year...)
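
If what the generator actually needs is the exact low 32 bits of the product (C-style wraparound), note that (a * b) | 0 only truncates the already-rounded double, whereas Math.imul (ES2015) performs the 32-bit multiplication exactly. A rough sketch of an LCG built on it, where the multiplier comes from the question but the increment 12345 and the 2^31 mask are assumed:

// Sketch of a glibc-style 32-bit LCG. Math.imul returns the exact low
// 32 bits of the product, so no precision is lost before the masking.
function makeRand(seed) {
  var state = seed | 0;
  return function () {
    state = (Math.imul(state, 1103515245) + 12345) & 0x7fffffff;
    return state;
  };
}

var rand = makeRand(119106029);
console.log(rand()); // same value on every run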

If you want real integer math, you can use BigInt values, a different type of number, written with an n at the end:

> 119106029n * 1103515245n
131435318772912105n

This is a relatively recent JS feature, and may not be implemented in old browsers.
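
For example, a quick sketch using the question's numbers (the 32-bit truncation at the end is an assumption about the C side; adjust it to whatever width your C code uses):

// The BigInt product keeps every digit, so the expected parity survives.
const p = 119106029n * 1103515245n;
console.log(p);             // 131435318772912105n
console.log(p % 2n === 1n); // true: odd, as a product of two odd numbers should be

// To mimic C's fixed-width wraparound, truncate explicitly:
console.log(BigInt.asUintN(32, p)); // low 32 bits, like a uint32_t product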


As for reproducible random numbers in JavaScript, the V8 benchmark uses this:

// To make the benchmark results predictable, we replace Math.random
// with a 100% deterministic alternative.
Math.random = (function() {
  var seed = 49734321;
  return function() {
    // Robert Jenkins' 32 bit integer hash function.
    seed = ((seed + 0x7ed55d16) + (seed << 12))  & 0xffffffff;
    seed = ((seed ^ 0xc761c23c) ^ (seed >>> 19)) & 0xffffffff;
    seed = ((seed + 0x165667b1) + (seed << 5))   & 0xffffffff;
    seed = ((seed + 0xd3a2646c) ^ (seed << 9))   & 0xffffffff;
    seed = ((seed + 0xfd7046c5) + (seed << 3))   & 0xffffffff;
    seed = ((seed ^ 0xb55a4f09) ^ (seed >>> 16)) & 0xffffffff;
    return (seed & 0xfffffff) / 0x10000000;
  };
})();
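
Once that assignment runs, every call to Math.random() walks the same fixed sequence (the seed is hard-coded), so results are reproducible across runs; for example:

// Same two values on every run, in any engine, because the generator
// above is purely deterministic 32-bit integer arithmetic.
console.log(Math.random());
console.log(Math.random());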


When an integer in JavaScript is too big to fit in a 32-bit value, it is held as a floating-point number. Since floating-point values are only accurate to a limited precision, rounding can occur on large values.
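
Integer values stay exact up to Number.MAX_SAFE_INTEGER (2^53 - 1); beyond that, rounding like the one in the question kicks in:

// Integers are exact up to 2^53 - 1; above that, adjacent doubles are
// more than 1 apart, so nearby integers collapse onto the same value.
console.log(Number.MAX_SAFE_INTEGER);                      // 9007199254740991
console.log(Number.isSafeInteger(119106029 * 1103515245)); // false
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1);      // true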


If the calculation is done in C/C++ with double, the last digits will be ...112 rather than ...105 (which would be correct). If it is performed with long double, the result is the expected ...105. So it looks like the JavaScript interpreter converts the numbers to 8-byte doubles internally, does the calculation, and then applies some rounding of its own, which leads to a marginally better result than the standard C/C++ double calculation.

GCC 4.5:

#include <stdio.h>

int main(int argc, char** argv)
{
    long double a = 119106029;
    long double b = 1103515245;
    long double c = a * b;
    printf("%.0Lf\n", c);

    return 0;
}

Result:

131435318772912105

Expected:

131435318772912105

So I don't see a way to do this in JavaScript without the aid of a bignum library (if one exists).
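
Incidentally, the double that JavaScript computes appears to be the very same ...112 value as the C double; the console just prints the shortest decimal string that maps back to that double, which is where the ...110 in the question comes from. A quick check:

// Convert the double to a BigInt to inspect the exact value it stores.
const product = 119106029 * 1103515245;
console.log(BigInt(product));                // 131435318772912112n
console.log(product === 131435318772912112); // true: the same double value C prints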

Regards

rbo


With the arrival of BigInt, you can now perform these calculations with accuracy:

console.log((119106029n * 1103515245n).toString());

131435318772912105
