
Casting floating point (double) to string

I'm running into something I've never seen before and would very much like to understand what is going on. I'm trying to round a double to 2 decimal places and cast that value to a string. Upon completion of the cast things go crazy on me.

Here is the code:

#include <string>
#include <sstream>
#include <stdexcept>
#include <cmath>
#include <boost/lexical_cast.hpp>

void formatPercent(std::string& percent, const std::string& value, Config& config)
{
    double number = boost::lexical_cast<double>(value);
    if (config.total == 0)
    {
        std::ostringstream err;
        err << "Cannot calculate percent from zero total.";
        throw std::runtime_error(err.str());
    }
    number = (number/config.total)*100;
    // Round the number to 2 decimal places before converting to a string
    number = floor(number*100 + .5)/100;
    percent = boost::lexical_cast<std::string>(number);

    return;
}

I wasn't getting quite what I expected, so I did some investigating and added the following:

std::cout << std::setprecision(10) << "number = " << number << std::endl;
std::cout << "percent = " << percent << std::endl;

...and got the following output:

number = 30.63
percent = 30.629999999999999

I suspect that boost is doing something funny. Does anyone have any insight here?

Seriously, how strange is this?!? I ask for 10-digit precision on a double and get 4 digits. I ask to cast those 4 digits to a string and get that mess. What is going on?


The decimal number 30.63 cannot be stored exactly in an object of type double. The closest representable double value, the one that is actually stored, is 8621578536647393 * 2^-48, which, in decimal notation, is 30.629999999999999005240169935859739780426025390625.

You can see that if you do std::cout << std::setprecision(100) << "number = " << number << std::endl;
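For what it's worth, here is a small self-contained sketch (assuming an IEEE-754 double and a standard library that prints the exact decimal expansion at high precision) that both reconstructs that value from 8621578536647393 * 2^-48 and prints the stored number at different precisions:

#include <cmath>
#include <iomanip>
#include <iostream>

int main()
{
    double literal = 30.63;

    // Rebuild the claimed stored value: 8621578536647393 * 2^-48.
    // The integer fits in a double's 53-bit significand, so ldexp is exact.
    double reconstructed = std::ldexp(8621578536647393.0, -48);

    std::cout << std::boolalpha << (literal == reconstructed) << '\n';  // true on IEEE-754 doubles

    // At 10 significant digits the value rounds to a tidy 30.63 ...
    std::cout << std::setprecision(10) << literal << '\n';

    // ... but at very high precision the full stored value appears:
    // 30.629999999999999005240... (the expansion quoted above)
    std::cout << std::setprecision(100) << literal << '\n';
    return 0;
}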


std::setprecision sets the maximum number of significant digits to display

In the default floating-point notation, the precision field specifies the maximum number of meaningful digits to display in total, counting both those before and those after the decimal point. Notice that it is not a minimum, and therefore it does not pad the displayed number with trailing zeros if the number can be displayed with fewer digits than the precision.

30.629999999999999 is the actual floating-point representation of 30.63
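If the end goal is just a string with two digits after the decimal point, one possible approach (a sketch, not necessarily what your codebase needs; the helper name is illustrative) is to do the rounding in the stream formatting instead of on the double itself. Note that with std::fixed, the precision is interpreted as digits after the decimal point rather than total significant digits:

#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

// Illustrative helper: format a number with exactly two digits after the decimal point.
std::string toTwoDecimals(double number)
{
    std::ostringstream out;
    out << std::fixed << std::setprecision(2) << number;  // 30.629999999... becomes "30.63"
    return out.str();
}

int main()
{
    std::cout << toTwoDecimals(30.629999999999999) << '\n';  // prints 30.63
    return 0;
}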


You're asking for ten digits of precision, but the actual "inaccuracy" is further down, so when the value is rounded to ten digits it prints as a neat 30.63. Your lexical_cast takes all the significant digits into account, therefore resulting in the precise floating-point value (perhaps not the kind of precision you want, but it's the most precise representation of what's in the computer's memory).
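To illustrate the difference (a sketch; the exact digit count lexical_cast uses varies by Boost version, but it is typically enough digits to round-trip the value):

#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    double number = 30.63;

    // Default stream precision is 6 significant digits, so the value rounds neatly.
    std::cout << number << '\n';  // prints 30.63

    // max_digits10 (17 for an IEEE-754 double) shows every digit needed to
    // round-trip the value, which is roughly what lexical_cast emits.
    std::cout << std::setprecision(std::numeric_limits<double>::max_digits10)
              << number << '\n';  // prints 30.629999999999999
    return 0;
}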


The problem is that floating point numbers cannot accurately represent arbitrary decimal values. What you're seeing is floating point error. Boost is accurately displaying the value you provide to it.
