Array of increasing floats that have imprecise binary representation
I want a function that creates an array of increasing floats. I run into problems when the increment cannot be represented precisely in binary. An example is an increment of 0.1, whose binary representation is infinitely repeating.
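For example, the rounding error is easy to see in the console:

0.1 + 0.2;         // 0.30000000000000004
0.1 + 0.2 === 0.3; // false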
var incrementalArray = function(start, end, step) {
    var arr = [];
    for (var i = start; i <= end; i += step) {
        // 0.1 cannot be represented precisely in binary
        // not rounding i will cause weird numbers like 0.99999999
        if (step === 0.1) {
            arr.push(i.toFixed(1));
        } else {
            arr.push(i);
        }
    }
    return arr;
};
There are two problems with this implementation. A) It only covers the case of an increment of 0.1. Aren't there numerous other magic numbers with infinitely repeating binary representations? B) The returned array does not include the end value when the increment is 0.1, yet it does include the end value for other increments, which makes the function unpredictable.
incrementalArray(0.0, 3.0, 0.1);
// [0.0, 0.1, 0.2, .... 2.8, 2.9]
incrementalArray(2,10,2);
// [2, 4, 6, 8, 10]
How can this function work for all special increments and be predictable?
If you limit yourself to rational numbers, you can express the steps as integers and divide the integer index by a single denominator as you convert each entry to a float (if you need it in that form).
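A minimal sketch of that idea, assuming start, end and step are supplied as integer numerators over a common denominator den (these parameter names are mine, purely illustrative):

var incrementalArrayRational = function(startNum, endNum, stepNum, den) {
    // all loop arithmetic is done on integers, so nothing drifts;
    // only the final division can round, and it rounds once per entry
    var arr = [];
    for (var n = startNum; n <= endNum; n += stepNum) {
        arr.push(n / den);
    }
    return arr;
};

// incrementalArrayRational(0, 30, 1, 10) -> [0, 0.1, 0.2, ..., 2.9, 3]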
I think this might work:
var incrementalArray = function(start, end, step) {
    var arr = [];
    // convert count to an integer to avoid rounding errors
    var count = +((end - start) / step).toFixed();
    for (var j = 0; j <= count; j++) {
        var i = start + j * step;
        arr.push(i);
    }
    return arr;
};
Output: 0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6000000000000001, 0.7000000000000001, 0.8, 0.9, 1, 1.1, 1.2000000000000001, 1.3, 1.4000000000000001, 1.5, 1.6, 1.7000000000000001, 1.8, 1.9000000000000001, 2, 2.1, 2.2, 2.3000000000000002, 2.4000000000000003, 2.5, 2.6, 2.7, 2.8000000000000002, 2.9000000000000003, 3
If you want the output truncated to the same number of decimals as the input, you can use this:
var incrementalArray = function(start, end, step) {
    var prec = ("" + step).length - ("" + step).indexOf(".") - 1;
    var arr = [];
    // convert count to an integer to avoid rounding errors
    var count = +((end - start) / step).toFixed();
    for (var j = 0; j <= count; j++) {
        var i = start + +(j * step).toFixed(prec);
        arr.push(i);
    }
    return arr;
};
Output: 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0
Just FYI, the problem is actually more complex than you may realize: in the IEEE 754 binary floating-point representation the number line is not evenly spaced. As the magnitude increases, the least-detectable difference (LDD) also increases, i.e. the representable numbers get farther apart.
What this means is fairly simple: you can easily call this routine with parameters that will result in an infinite loop if the absolute magnitude of step is less than the LDD on the interval bounded by start and end.
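A cheap guard (just a sketch, not part of any routine above) is to check that adding step actually moves the endpoints before entering the loop:

// if adding step does not change the value, the loop counter can never
// advance past that point and the loop will not terminate
if (start + step === start || end + step === end) {
    throw new RangeError("step is smaller than the float spacing on this interval");
}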
The simplest way I know to determine the least-detectable difference at an arbitrary point on the real number line is the example below (with the usual caveats about matching data type sizes); there is no significance to the value, it was simply a random number:
/* standard C types; portable code should use memcpy instead of the pointer casts */
float A = 999.111f;                      /* no significance to the value */
uint32_t B = (*(uint32_t*) &A) ^ 1;      /* create a 1-bit difference in the representation */
float LDD = fabsf(A - *(float*) &B);     /* distance to the neighbouring float */
FWIW you can easily use a variant of this mechanism to determine if two real numbers are within the LDD of each other and consider that “equal” since direct comparison of reals is problematic.
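In JavaScript the same bit-flipping trick can be played with a DataView; this is only a rough translation, and the names ldd and nearlyEqual are mine:

function ldd(x) {
    var view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, x);                    // big-endian layout by default
    view.setUint8(7, view.getUint8(7) ^ 1);   // flip the least significant bit
    return Math.abs(view.getFloat64(0) - x);  // distance to the neighbouring double
}

function nearlyEqual(a, b) {
    // treat two doubles as "equal" when they are within one LDD of each other
    return Math.abs(a - b) <= ldd(Math.max(Math.abs(a), Math.abs(b)));
}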
How about scaling them up to integers and then back down? (demo)
function incrementalArray(start, end, step) {
    // turn the step into a string to count its decimals and derive a scale factor
    step += "";
    var scale = Math.pow(10, step.length - step.indexOf(".") - 1);
    // the multiplications coerce the step string back to a number
    start *= scale;
    end *= scale;
    step *= scale;
    var values = [];
    // <= so the end value is included, as the question requires
    for (var x = start; x <= end; x += step) {
        values.push(x / scale);
    }
    return values;
}
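With the <= comparison above, the question's two examples come out roughly as:

incrementalArray(0.0, 3.0, 0.1); // [0, 0.1, 0.2, ..., 2.9, 3]
incrementalArray(2, 10, 2);      // [2, 4, 6, 8, 10]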