
Test for overflow in integer addition [duplicate]

This question already has answers here: Closed 12 years ago.

Possible Duplicate:

Best way to detect integer overflow in C/C++

I have tried to implement a simple program that tests whether overflow occurs during integer addition:

#include <climits>
#include <iostream>
#include <string>

using namespace std;

string overflow(long a,long b){
    return ((a+b)>UINT_MAX)?"true":"false";
}

int main(){
    long a, b;
    cout << "enter a and b: ";
    cin >> a >> b;
    string m = overflow(a,b);
    cout << m;

    return 0;
}

UINT_MAX is 65535, so I entered 65535 and 20, but it printed false. Why?


Given unsigned int a, b, this expression can never evaluate to true:

(a+b) > UINT_MAX

It can never be true because UINT_MAX is the largest value an unsigned int can hold: whatever (a+b) produces (it wraps around on overflow) is itself an unsigned int, so it can never exceed UINT_MAX.

You tried to use a bigger data type, long, to get around this, but long is not guaranteed to be bigger than int in C++. You may want to try long long instead (it only became part of standard C++ with C++11, but most compilers already provide it).

Unlike, say, Java, C++ does not fix the exact size of each data type (it only specifies minimums), so the claim that UINT_MAX = 65535 is not universally true. It is only true on platforms where an unsigned int is held in 16 bits; for 32 bits, UINT_MAX is 4294967295.
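
A minimal sketch of the wider-type approach described above, assuming the compiler provides long long (standard since C++11); the helper name adds_overflow_uint is just for illustration:

#include <climits>
#include <iostream>

// Do the addition in a wider unsigned type so the true sum is preserved,
// then compare against UINT_MAX. Assumes unsigned long long is wider than unsigned int.
bool adds_overflow_uint(unsigned int a, unsigned int b) {
    return (static_cast<unsigned long long>(a) + b) > UINT_MAX;
}

int main() {
    std::cout << std::boolalpha
              << adds_overflow_uint(UINT_MAX, 20u) << '\n'   // true
              << adds_overflow_uint(1u, 2u) << '\n';         // false
    return 0;
}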

Related questions

  • Best way to detect integer overflow in C/C++

See also

  • Wikipedia/Limits.h
  • cplusplus.com/reference/library/climits


unsigned int a = UINT_MAX;
unsigned int b = 20;

unsigned int c = a + b;   // wraps around modulo UINT_MAX + 1

cout << c << endl;

// output:
19

There are already some questions on SO about overflow detection (e.g. Best way to detect integer overflow in C/C++). Maybe you'll find them helpful.
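
Building on the wrap-around behaviour shown above, here is a small self-contained sketch (my own example, not from the linked questions) of the common idiom for detecting unsigned overflow after the fact: if the sum wrapped, it is smaller than either operand.

#include <climits>
#include <iostream>

int main() {
    unsigned int a = UINT_MAX;
    unsigned int b = 20;
    unsigned int c = a + b;      // wraps modulo UINT_MAX + 1, giving 19

    if (c < a)                   // a wrapped sum is always smaller than either operand
        std::cout << "overflow: sum wrapped to " << c << std::endl;
    else
        std::cout << "no overflow: " << c << std::endl;

    return 0;
}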


What compiler are you using?

On most systems an int is a 32-bit signed type, which gives roughly +/- 2.1 billion (give or take a couple).

65535 is a 16-bit value, so adding 20 to it wouldn't do much, but I am surprised that it gives false.
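
To see why 65535 + 20 does not overflow on such a system, here is a quick check (a sketch assuming a typical 32-bit int):

#include <climits>
#include <iostream>

int main() {
    std::cout << "INT_MAX = " << INT_MAX << std::endl;   // typically 2147483647
    int sum = 65535 + 20;                                // fits easily in a 32-bit int
    std::cout << sum << std::endl;                       // 65555, no overflow
    return 0;
}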


Lots of problems:

65535 is 2^16 - 1, the largest value that fits in 2 bytes (16 bits), whereas a long is generally 4 bytes (32 bits). You could change your program to use shorts, which are generally 2 bytes long.

Some compilers now make long a 64-bit type; the standard only requires that long hold at least 32 bits, but it can be larger. Similar rules apply to shorts (at least 16 bits).

You are using a signed long, which means one bit is used for the sign, so even if you were using shorts, entering 65535 would actually end up as a value of -1 (so change to unsigned shorts).

If you add two unsigned shorts together and put the answer into a short, the total can never be more than 65535, because a 16-bit short can't represent a larger value; the result simply wraps around.
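
A short illustration of that last point, assuming a typical 16-bit unsigned short (my own sketch, not from the original answer):

#include <iostream>

int main() {
    unsigned short a = 65535;   // largest value of a 16-bit unsigned short
    unsigned short b = 20;

    // a + b is actually computed as int (65555) because of integer promotion,
    // but storing it back into an unsigned short wraps modulo 65536.
    unsigned short c = a + b;

    std::cout << c << std::endl;   // prints 19
    return 0;
}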


Two things

  1. Check the size of long: if it is the same as int, then (a+b) will always be smaller than or equal to UINT_MAX.

  2. UINT_MAX = 4294967295 for a 4-byte (32-bit) integer.

It is all platform dependent
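
A quick way to check both of these on your own platform (my own sketch, using <climits>):

#include <climits>
#include <iostream>

int main() {
    std::cout << "sizeof(int)  = " << sizeof(int)  << std::endl;
    std::cout << "sizeof(long) = " << sizeof(long) << std::endl;
    std::cout << "UINT_MAX     = " << UINT_MAX     << std::endl;
    std::cout << "LONG_MAX     = " << LONG_MAX     << std::endl;
    return 0;
}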


Try:

string overflow(unsigned int a, unsigned int b)
{
    // requires #include <limits>
    return (a > (std::numeric_limits<unsigned int>::max() - b)) ? "true" : "false";
}

Remember there is no guarantee that long is bigger than int, so any code that assumes long will hold a larger number is flawed. We cannot just do the sum and then check whether it overflowed, because by the time it overflows it is too late. So rearrange the expression: move b to the right-hand side as a subtraction (UINT_MAX - b can never overflow) and compare a against that.
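
Putting the rearranged check into a complete program (a sketch; the function body is the same as above and needs <limits> and <string>):

#include <iostream>
#include <limits>
#include <string>

std::string overflow(unsigned int a, unsigned int b) {
    // Overflow occurs exactly when a exceeds the headroom left above b.
    return (a > (std::numeric_limits<unsigned int>::max() - b)) ? "true" : "false";
}

int main() {
    std::cout << overflow(std::numeric_limits<unsigned int>::max(), 20u) << std::endl; // true
    std::cout << overflow(1u, 2u) << std::endl;                                        // false
    return 0;
}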


Try this:

return ((a+b)>(long)UINT_MAX)?"true":"false";

Also, make sure UINT_MAX really is 65535 and not something larger.

