
StringBuilder.Append with float

StringBuilder.Append using float is truncating the value. What is it converting to and how can I stop it from truncating?

AttributeOrder is type float and I am losing precision when building the string.

if ( AttributeOrder != 0 ) 
{ 
    if ( Result.Length > 0 ) 
    { 
        Result.Append( " AND " ); 
    } 
    Result.Append( COLUMN_ATTRIBUTE_ORDER ); 
    Result.Append( "=" ); 
    Result.Append( AttributeOrder ); 
} 

EDIT: This is legacy code; I cannot change the underlying datatypes. The column in SQL Server is a real and the datatype is float. I need to present the float exactly as it is filled, as a string for other purposes, without losing precision.


You can convert the float to a string using a format string, like this:

Result.Append(AttributeOrder.ToString("G9"));

Or, alternatively,

Result.AppendFormat("{0}={1:G9}", COLUMN_ATTRIBUTE_ORDER, AttributeOrder);

However, a float only guarantees about 7 significant decimal digits, so you will not be able to get more than 9 digits out of it, and digits 8 and 9 may be inaccurate.
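As a quick illustration (a sketch, where `ATTRIBUTE_ORDER` stands in for your `COLUMN_ATTRIBUTE_ORDER` constant, and the literal deliberately has more digits than a float can hold):

```csharp
using System;
using System.Text;

class G9Demo
{
    static void Main()
    {
        // Extra digits are discarded the moment the float is assigned.
        float attributeOrder = 1.23456789012345678901234f;

        var result = new StringBuilder();
        result.Append("ATTRIBUTE_ORDER");
        result.Append("=");
        result.Append(attributeOrder.ToString("G9")); // up to 9 significant digits

        Console.WriteLine(result);
    }
}
```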


The float datatype can only hold about seven decimal digits of precision. As soon as you put the value into a float variable, any precision beyond that is irrecoverably lost. To keep more digits end to end, you need to use the decimal datatype instead.

Once you change the variable to a decimal, Append will give you as many digits as you put in.
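A minimal sketch of that change (assuming the value can be read into a decimal in the first place, before it is ever narrowed to a float):

```csharp
using System;
using System.Text;

class DecimalDemo
{
    static void Main()
    {
        // A decimal holds up to 28-29 significant digits, so nothing is
        // discarded at assignment the way it is with a float.
        decimal attributeOrder = 1.23456789012345678901234m;

        var result = new StringBuilder();
        result.Append(attributeOrder); // Append calls ToString(), which keeps every stored digit

        Console.WriteLine(result); // prints 1.23456789012345678901234
    }
}
```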


If you need to provide a representation of a floating-point number then you should use System.Decimal - System.Single cannot accurately represent the precision you are looking to display.

This simple example shows the difference:

using System;   

class Test
{
    static void Main()
    {
        Single s = 1.23456789f;
        Decimal d = 1.23456789m;

        Console.WriteLine(s);
        Console.WriteLine(d);
    }
}


Let's have some fun with floats, and see why this isn't working.

Say the 24-digit number 1.23456789012345678901234 is read from a SQL real into a .NET float.

The float value looks like this in binary: 0 01111111 00111100000011001010010

The first 0 is the sign bit, indicating the number is positive. 01111111 is the biased exponent, indicating that the significand is multiplied by 2^0 (or 1). 00111100000011001010010 is the stored significand; the leading 1 bit is implicit and not stored.

So the float variable now encodes the number 1.00111100000011001010010 in binary.
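You can check this bit pattern yourself. One way (a sketch that reinterprets the float's 32 bits as an int via `BitConverter`) is:

```csharp
using System;

class BitsDemo
{
    static void Main()
    {
        float f = 1.23456789012345678901234f;

        // Reinterpret the float's 32 bits as an int, then render them in binary.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
        string binary = Convert.ToString(bits, 2).PadLeft(32, '0');

        Console.WriteLine(binary.Substring(0, 1));   // sign bit
        Console.WriteLine(binary.Substring(1, 8));   // biased exponent
        Console.WriteLine(binary.Substring(9, 23));  // stored significand
    }
}
```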

Let's see what happens when we convert that float into decimal.

1 * 1 =                         1
.
0 * 0.5 =                       0
0 * 0.25 =                      0
1 * 0.125 =                     0.125
1 * 0.0625 =                    0.0625
1 * 0.03125 =                   0.03125
1 * 0.015625 =                  0.015625
0 * 0.0078125 =                 0
0 * 0.00390625 =                0
0 * 0.001953125 =               0
0 * 0.0009765625 =              0
0 * 0.00048828125 =             0
0 * 0.000244140625 =            0
1 * 0.0001220703125 =           0.0001220703125
1 * 0.00006103515625 =          0.00006103515625
0 * 0.000030517578125 =         0
0 * 0.0000152587890625 =        0
1 * 0.00000762939453125 =       0.00000762939453125
0 * 0.000003814697265625 =      0
1 * 0.0000019073486328125 =     0.0000019073486328125
0 * 0.00000095367431640625 =    0
0 * 0.000000476837158203125 =   0
1 * 0.0000002384185791015625 =  0.0000002384185791015625
0 * 0.00000011920928955078125 = 0
                                ------------------------
                                1.2345678806304931640625

So if we display all of the digits of the floating point number we get 1.2345678806304931640625. So isn't this the number that we want to display? Why is it rounding this number? And why is this value different from the number we started with?

To see why, let's step through a few adjacent floating point values:

binary floating point representation   decimal representation
------------------------------------   ----------------------
0 01111111 00111100000011001010000   = 1.2345676422119140625
0 01111111 00111100000011001010001   = 1.23456776142120361328125
0 01111111 00111100000011001010010   = 1.2345678806304931640625
0 01111111 00111100000011001010011   = 1.23456799983978271484375
0 01111111 00111100000011001010100   = 1.234568119049072265625

As you can see, the exact same floating point number is used to represent all values in this range: [ 1.234567821025848388671875 , 1.234567940235137939453125 )

Therefore, any decimal digits after the eighth are lost during the conversion to a float. Everything beyond that point in the long decimal expansion above is just an artifact of rounding to a 24-bit binary significand and converting back to decimal; those digits are completely meaningless, and unrelated to the actual value you started with.
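The clustering in the table above can also be reproduced in code. This sketch steps through adjacent floats by nudging the raw bits up and down (avoiding any dependence on newer helpers like `MathF.BitIncrement`):

```csharp
using System;

class NeighboursDemo
{
    static int ToBits(float f) =>
        BitConverter.ToInt32(BitConverter.GetBytes(f), 0);

    static float FromBits(int bits) =>
        BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);

    static void Main()
    {
        float f = 1.23456789012345678901234f; // rounds to the nearest representable float

        // Print the float and its two neighbours on each side. Every decimal
        // value in between rounds to one of these few representable floats.
        int bits = ToBits(f);
        for (int offset = -2; offset <= 2; offset++)
        {
            float neighbour = FromBits(bits + offset);
            Console.WriteLine("{0} -> {1}",
                Convert.ToString(bits + offset, 2), neighbour.ToString("G9"));
        }
    }
}
```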


You could always use the ToString() method on your float, like this:

if ( AttributeOrder != 0 ) 
{ 
    if ( Result.Length > 0 ) 
    { 
        Result.Append( " AND " ); 
    } 
    Result.Append( COLUMN_ATTRIBUTE_ORDER ); 
    Result.Append( "=" ); 
    Result.Append( AttributeOrder.ToString("0.000000") ); 
} 

For example, the format specifier above shows 6 decimal places. Replace the 0 with # if you don't want to show trailing zeros.
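For instance, a small sketch showing the two specifiers side by side:

```csharp
using System;

class FormatDemo
{
    static void Main()
    {
        float value = 1.5f; // exactly representable, so formatting is unambiguous

        Console.WriteLine(value.ToString("0.000000")); // prints 1.500000 (padded with zeros)
        Console.WriteLine(value.ToString("0.######")); // prints 1.5 (trailing zeros dropped)
    }
}
```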


You can kill two birds with one stone using string.Format(): fewer lines of code, and the number displayed to whatever precision you choose.

Result.Append(string.Format("{0}={1:0.000000}", COLUMN_ATTRIBUTE_ORDER, AttributeOrder)); 
