
Does (x == x + 1) always return false for integer x?

I saw this in an interview preparation book - Algorithms for Interviews. It did not say what the answer was.

Based on my knowledge, it does return false. Am I missing something?


JavaScript comes to mind: it has a special number value, Infinity.

So this in fact will return true:

var x = Infinity;
alert(x == x + 1);


With integers, x != x + 1 for all x. With floating-point numbers, it's not guaranteed to be true: with a large enough exponent, the 1 fades into insignificance and is lost off the end of the mantissa. The easiest case to see is the largest possible exponent, which makes the value infinity, and infinity plus one is still infinity. But it works with smaller exponents as well.
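As a rough sketch in C (assuming IEEE 754 doubles evaluated in double precision, as on typical x86-64 compilers), a value past 2^53 already loses the 1 to rounding, no infinity required:

#include <stdio.h>

int main(void) {
    /* 1e16 is past 2^53, so the 53-bit mantissa cannot hold the +1:
       the sum rounds straight back to 1e16. */
    double x = 1e16;
    printf("%d\n", x == x + 1.0);   /* prints 1 (true) */
    return 0;
}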

Then, in languages that support operator overloading, it's quite possible to break such trivial mathematical laws. For example, in Python:

>>> class X(object):
...    def __eq__(self, other):
...        return True
...    def __add__(self, other):
...        return self
...
>>> x = X()
>>> x == x + 1
True

Unless the type (and implementation) is specified in the question, it is not safe to assume that x != x + 1 always holds. But as the question specifies an integer, you can know that x != x + 1 for all x. Integers are stored as a series of bits, and adding one toggles the last bit, carrying if it was 1, and so on; that never reproduces the same value (regardless of how many bits the integer type has, provided there is at least one).
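A quick C sketch of that argument, using unsigned arithmetic (where overflow is defined to wrap modulo 2^N): even at the very top of the range, adding 1 wraps to 0, which is still a different value.

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Unsigned overflow is well defined in C: it wraps modulo 2^N,
       so even UINT_MAX + 1 (which is 0) differs from UINT_MAX. */
    unsigned int x = UINT_MAX;
    printf("%d\n", x == x + 1u);   /* prints 0 (false) */
    return 0;
}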


It depends on the language and its operator precedence. It's possible that the == operator has an equal or higher precedence than the + operator. In that case, your expression would expand to ((x == x) + 1), which could either give an error (because you're adding 1 to a boolean) or evaluate to 2 (since in many languages TRUE equals 1).

I don't know of any language where == has equal or higher precedence than +, though.
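For what it's worth, here is a C sketch of what that hypothetical grouping would evaluate to; C itself gives + higher precedence than ==, so the parentheses are written out explicitly just to illustrate the point:

#include <stdio.h>

int main(void) {
    int x = 42;
    /* == yields an int 0 or 1 in C, so the re-grouped expression
       ((x == x) + 1) evaluates to 2 rather than a plain true/false. */
    printf("%d\n", (x == x) + 1);   /* prints 2 */
    return 0;
}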


It should :)

That also holds at the edge of the range, for example:

int x = Int32.MaxValue;
bool result = x == x + 1;   // false: in C#'s default unchecked context, x + 1 wraps to Int32.MinValue


C and C++ don't define the behaviour when a signed integral type overflows. For example, the biggest legal value of type int is known as INT_MAX (available in a standard header); the languages don't require INT_MAX + 1 to be anything in particular, i.e. they don't require compilers or deployment environments to ensure it won't still be INT_MAX. But implementations are extremely unlikely (for performance reasons) to generate extra machine code testing for overflow after every arithmetic operation; rather, they'll typically let the CPU react to the overflow and produce whatever result it happens to produce.

At the CPU level, there's no real reason a CPU couldn't bail out on detecting potential overflow, ignoring the addition or raising some manner of exception; if that exception isn't propagated to the OS/application, or is ignored by them, the original value might still be seen. But most CPUs treat "+ 1" on signed types the same way as on unsigned types, which C/C++ require to simply wrap around modulo 2^#bits; how that bit pattern is then interpreted as a signed number depends on which signed-number representation is in use. For some languages and CPUs, integral operations on BCD values may be possible - how those react to overflow is even harder to speculate on.
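A minimal C sketch of the signed case; since signed overflow is undefined behaviour, what this prints (if anything) is up to the implementation, and an optimizer is even free to fold the comparison to false at compile time:

#include <limits.h>
#include <stdio.h>

int main(void) {
    int x = INT_MAX;
    /* Undefined behaviour: the standard places no requirement on x + 1 here.
       On common two's-complement hardware it wraps to INT_MIN, making the
       comparison false, but the compiler may assume overflow never happens. */
    printf("%d\n", x == x + 1);
    return 0;
}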


In Java, for instance, x == x + 1 might also hold true if the code is executed by several threads.

There is a chance that another thread modifies x between the two reads, so the condition might eventually come out true. For the change to be visible, x must be a shared field declared volatile so that its value is not cached.
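The answer above is about Java, but the same effect can be sketched in C with a shared counter (using C11 atomics and POSIX threads so the concurrent access itself is well defined); the expression reads x twice, and the writer thread may increment it between the two reads:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int x = 0;

/* Writer thread: keeps incrementing the shared counter. */
static void *bump(void *arg) {
    (void)arg;
    for (;;) atomic_fetch_add(&x, 1);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, bump, NULL);
    /* x is read twice below; if the writer increments it between the
       two reads, the comparison comes out true. */
    for (long i = 0; i < 100000000L; ++i) {
        if (atomic_load(&x) == atomic_load(&x) + 1) {
            puts("observed x == x + 1");
            return 0;
        }
    }
    return 0;
}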


Of course it must be false, unless you do what Paul says ;) x + 1 evaluates to some value that is greater by 1 than x. So, for example, 45 == 45 + 1 is false. No assigning takes place in this expression.
