Compiling with gcc c99, not with ideone c99. Integer Overflow
int rng(int min, int max){
    int result = min + ( ((max - (min - 1)) * rand()) / (RAND_MAX + 1) );
    return result;
}
This won't compile on ideone in C99 mode, but it does compile with gcc on Code::Blocks using -std=c99. I figured using long int would do the trick, but apparently not. I use ideone when I'm trying things out at school, since the machines there only have VC++ and I'm writing C99. I'd still like to use this function when I need it, so I'm asking what is wrong.
Actually, on my compiler this produces: warning: integer overflow in expression [-Woverflow].
The problem is caused by RAND_MAX + 1. RAND_MAX often happens to be the same as INT_MAX, and adding 1 to INT_MAX of course yields undefined behavior (unless the compiler catches it at compile time and replaces the code with something else).
If you want to see in action why it's bad, do this:
scanf("%d", &dummy); /* So the compiler won't catch it at compile time. */
dummy += INT_MAX;
and compile with:
gcc -ftrapv -Wall -Wextra    # -ftrapv is the key point
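For completeness, a self-contained version of that experiment might look like this (the exact program below is my own sketch for illustration, not code from the original answer):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int dummy;
    if (scanf("%d", &dummy) != 1)  /* value comes from input, so it can't be folded at compile time */
        return 1;
    dummy += INT_MAX;              /* signed overflow: undefined behavior */
    printf("%d\n", dummy);
    return 0;
}

Compiled with gcc -ftrapv -Wall -Wextra and fed any positive number, the program should abort at the += line instead of silently wrapping.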
Check this out in ideone.
#include <stdlib.h>
#include <stdio.h>
int main(void) { printf("%x\n", RAND_MAX); return 0; }
You'll notice that it prints 7fffffff, which is probably also INT_MAX. What happens when you compute INT_MAX + 1? Computing INT_MAX + 1 is wrong, whether or not you get an error.
There are many ways to write a uniform random number generator given a uniform RNG with a different range, but doing so correctly is a bit tricky.
This should work whenever max - min <= RAND_MAX and max > min...
int rng(int min, int max)
{
    int range = max - min + 1;                       /* number of distinct results */
    int factor = ((unsigned) RAND_MAX + 1) / range;  /* rand() values per result */
    int x;

    do {
        x = rand() / factor;
    } while (x >= range);   /* reject values from the incomplete top bucket */

    return x + min;
}
This should be both uniform and give relatively high quality numbers (using % is known to cause severe quality problems with some PRNGs and certain ranges).
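As a quick usage sketch (the six-sided-die range here is just an example I picked, not part of the original answer):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int rng(int min, int max);   /* the function defined above */

int main(void)
{
    srand((unsigned) time(NULL));   /* seed the PRNG once */
    for (int i = 0; i < 10; i++)
        printf("%d ", rng(1, 6));   /* ten simulated rolls of a six-sided die */
    putchar('\n');
    return 0;
}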
Without the actual error text, it's hard to tell, but the presence of "Integer Overflow" in the title points to ..., well, an integer overflow :-) The most likely culprit there is your RAND_MAX + 1 bit, if RAND_MAX is actually the same as INT_MAX.
If you're willing to forego perfect distribution of your random numbers (a), you could opt for something like:
int rng (int minVal, int maxVal) {
    int result = minVal + (rand() % (maxVal + 1 - minVal));
    return result;
}
(a) The "damage" done by this is usually minimal but it does slightly skew the distribution away from the highest value. If that's important, there are other ways but, for most people other than statisticians, the damage is acceptable.
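To see how small that skew typically is, here is a counting sketch (the 15-bit RAND_MAX of 32767 and the range of 7 are assumptions chosen purely for illustration): 32768 = 7 * 4681 + 1, so one result can be produced 4682 ways while the other six can each be produced only 4681 ways.

#include <stdio.h>

int main(void)
{
    const int toy_rand_max = 32767;   /* assumed 15-bit RAND_MAX, for demonstration only */
    const int range = 7;
    int count[7] = {0};

    for (int r = 0; r <= toy_rand_max; r++)
        count[r % range]++;           /* how many raw values map to each result */

    for (int i = 0; i < range; i++)
        printf("%d -> %d\n", i, count[i]);
    return 0;
}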
This can indeed produce integer overflow, if (max - min + 1) * RAND_MAX > INT_MAX. You have a couple of options on how to avoid that:
- cast the intermediate values to a type that can store them without overflow, such as uint64_t or double (a sketch of this is shown after the modulus example below), or
- use the modulus operator % instead, like this:
int rng(int min, int max) {
    return min + rand() % (max - min + 1);
}
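Going back to the first option, a minimal sketch of the same formula with the intermediate arithmetic widened to 64 bits might look like this (the name rng_wide and this exact rewrite are my assumptions, not code from the answer):

#include <stdint.h>
#include <stdlib.h>

int rng_wide(int min, int max)
{
    /* Widen before multiplying so that neither RAND_MAX + 1 nor the product can overflow int. */
    int64_t span = (int64_t) max - min + 1;
    int64_t scaled = (span * rand()) / ((int64_t) RAND_MAX + 1);
    return (int) (min + scaled);
}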
Note that using % has a few drawbacks: if max - min + 1 is not a power of two, it will slightly bias the results towards the low end of the range (because it will not divide RAND_MAX + 1 evenly); and if it is a power of two, it may reduce the period and quality of the random number stream (because the low bits of an LCRNG are less random than the high bits).
There are ways to avoid these issues through careful coding. For an example, see the standard Random.nextInt() implementation in Java, which can be quite straightforwardly translated into C:
#include <stdint.h>   /* uint64_t */
#include <stdlib.h>   /* rand(), RAND_MAX */

int rng (int min, int max) {
    int n = max - min + 1;
    int bits, val;
    if ((n & -n) == n)  // i.e., n is a power of 2
        return min + (int)((n * (uint64_t)rand()) / ((uint64_t)RAND_MAX + 1));
    do {
        bits = rand();
        val = bits % n;
    } while (bits - val > RAND_MAX - (n - 1));   // reject to avoid modulo bias
    return min + val;
}
(I think this translation should work as intended. It turned out a bit trickier than I thought, since the original Java code relies, in effect, on the assumption that INT_MAX = RAND_MAX = (1<<31)-1, which might not hold in C.)