
Super odd C++ black hole in int and float

Basically I'm trying to enter a value into the console and output its decimal part as a whole number; that's essentially what needs to happen.

I've developed a way to do this using float, int and simple maths. I'm still new to C++, but this error is not making sense.

If you enter 0.01, 0.02, 0.03, 0.04, 0.06 or 0.08 you get the wrong output. I basically want to make it as simple as 0.06 * 100 = 6.

I'm pretty sure it's a simple mistake, but why is this so, when I'm clearly entering a plain two-decimal value anyway?

#include <iostream>
using namespace std;

int main()
{
    float input = 0;

    while (input <= 0 || input > 999.99)
    {
        cout << "Please enter a number with decimal: ";
        cin >> input;
    }

    int whole_num = input;
    float to_decimal = input - whole_num;
    int decimal = to_decimal * 100;

    cout << decimal << endl;

    return 0;
}

EDIT: I found the solution to my problem


There was a problem with float accuracy. Adding 0.5f before converting to int fixes the problem. I know it works properly for input with 2 decimal places; I'm not sure about other cases.

Thanks to Frederik Slijkerman!

#include <iostream>
using namespace std;

int main()
{
    float asfloat = 0.03f;
    int asint = asfloat * 100;             // truncates: may give 2 instead of 3
    int asint_fix = 0.5f + asfloat * 100;  // rounds to nearest: gives 3
    cout << "0.03 * 100 = " << asint << endl;
    cout << "0.03 * 100 (with the +0.5f fix) = " << asint_fix << endl;
    return 0;
}

Returns:

0.03 * 100 = 2
0.03 * 100 (with the +0.5f fix) = 3


That's because floating point numbers cannot represent decimal quantities exactly.

The floating-point number format your computer uses is binary. That means it can exactly represent 1/2, 1/4, 1/8, 1/16, ..., and combinations thereof. So, you can say 0.5, or 0.25, or even 0.75 (0.5 + 0.25) and those will be exact in floating point. But, 0.01 cannot be created with combinations of those fractions; therefore, its value is approximate. Similar story with the other numbers you tested.

This is an inherent limitation with using binary floating point. It's not "super odd"; this is Floating Point 101. :-)
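
For example, printing a few literals with more digits than the default makes this visible (a minimal sketch; the exact digits you see depend on your platform):

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    // 0.25 and 0.75 are sums of powers of two, so they are stored exactly.
    // 0.01 is not, so the nearest representable float is stored instead.
    cout << setprecision(20);
    cout << "0.25f stored as: " << 0.25f << endl;
    cout << "0.75f stored as: " << 0.75f << endl;
    cout << "0.01f stored as: " << 0.01f << endl;  // close to, but not exactly, 0.01
    return 0;
}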


I would add to Chris' answer that this is a classic source of errors in scientific computing. Since reals cannot be represented exactly (the precision in a computer is finite), you accumulate rounding errors over the course of your computation. It is a very serious issue when you compute trajectories over long time spans, for satellites for instance.

Thus, there exist static analysis tools (such as Astrée) that help you detect when such problems can cause issues in your code, or guarantee that you're safe.

So all in all, it is not "very odd", but it is certainly "very unfortunate".

In your particular case, using double instead of float may help, since it increases the precision of the binary representation of your number.
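
For instance, printing the stored values with extra precision shows the difference (a minimal sketch; the exact digits depend on your platform):

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    float  f = 0.01f;  // nearest float to 0.01
    double d = 0.01;   // nearest double to 0.01 -- still inexact, but much closer
    cout << setprecision(20) << f << endl;
    cout << setprecision(20) << d << endl;
    return 0;
}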


I just want to bring to your notice that you are losing precision when you store the float result into an int at the line int decimal = to_decimal * 100;. If you declare it as float decimal = to_decimal * 100; instead, then it should work for you.
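
A minimal sketch of that change, hard-coding 0.06 from the question as the example input:

#include <iostream>
using namespace std;

int main()
{
    float input = 0.06f;                 // example value from the question
    int whole_num = input;
    float to_decimal = input - whole_num;
    float decimal = to_decimal * 100;    // kept as a float instead of truncating to int
    cout << decimal << endl;             // prints 6 at the default output precision
    return 0;
}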


The inherent problem with float precision is amplified by the default truncation toward zero when converting to int in C and C++, which turns e.g. 0.999999 into 0. You can make things more robust by rounding instead of truncating:

int decimal = 0.5f + to_decimal * 100;

You could also use a small value like 1e-6 instead of 0.5 to get a more robust truncation. It all depends on your particular situation.


As others have mentioned, the problem is that floating point numbers in C++ conform to the IEEE 754 standard which has the unfortunate inability to exactly represent common numbers like 0.01, 0.03, etc. (You can verify with a simple loop like for (float f=0.0; f<=1.0; f+=0.1) printf("%.010f\n",f); and see how quickly the error accumulates.)
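
Expanded into a complete program, that check looks like this (the exact digits you see will depend on your compiler and platform):

#include <cstdio>

int main()
{
    // Repeatedly adding 0.1f accumulates representation error,
    // because 0.1 has no exact binary representation.
    for (float f = 0.0f; f <= 1.0f; f += 0.1f)
        printf("%.10f\n", f);
    return 0;
}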

However, you can often work around such problems using integers and division/multiplication for input/output. Also, the GNU GMP Arbitrary Precision Arithmetic Library might help.


You need to decide how many significant decimal digits you are interested in (it looks like 2); after you've done that, this is the best that you can do:

#include <cmath>  // for pow and fmod

const int significantDigits = 2;
int decimal = (int)(pow(10, significantDigits) * fmod(input, 1.0));


Read the Floating Point Guide. This is what every programmer should know when doing floating-point math.
