
Why can't C# calculate exact values of mathematical functions?

Why can't C# do any exact operations?

Math.Pow(Math.Sqrt(2.0),2) == 2.0000000000000004

I know how doubles work, I know where the rounding error comes from, I know that it's almost the correct value, and I know that you can't store infinite numbers in a finite double. But why isn't there a way that C# can calculate it exactly, while my calculator can?

Edit

It's not about my calculator, I was just giving an example:

http://www.wolframalpha.com/input/?i=Sqrt%282.000000000000000000000000000000000000000000000000000000000000000000000000000000001%29%5E2

Cheers


Chances are your calculator can't do it exactly - but it's probably storing more information than it's displaying, so the error after squaring ends up outside the bounds of what's displayed. Either that, or its errors happen to cancel out in this case - but that's not the same as getting it exactly right in a deliberate way.

Another option is that the calculator is remembering the operations that resulted in the previous results, and applying algebra to cancel out the operations... that seems pretty unlikely though. .NET certainly won't try to do that - it will calculate the intermediate value (the root of two) and then square it.

If you think you can do any better, I suggest you try writing out the square root of two to (say) 50 decimal places, and then square it exactly. See whether you come out with exactly 2...
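
That experiment is easy to run with exact integer arithmetic; here's a rough sketch in C# using System.Numerics.BigInteger (the 50-digit constant is the square root of two truncated after 50 decimal places, scaled up to an integer):

using System;
using System.Numerics;

class SqrtTwoChallenge {
    static void Main() {
        // sqrt(2) truncated to 50 decimal places, stored as an integer scaled by 10^50.
        BigInteger root = BigInteger.Parse("141421356237309504880168872420969807856967187537694");
        BigInteger scale = BigInteger.Pow(10, 50);

        BigInteger square = root * root;      // the approximation squared, scaled by 10^100
        BigInteger two = 2 * scale * scale;   // exactly 2 on the same scale

        Console.WriteLine(square == two);     // False: the approximation falls just short of 2
        Console.WriteLine(two - square);      // the exact, non-zero shortfall
    }
}

Since the square root of two is irrational, the comparison stays false no matter how many digits you keep.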


Your calculator is not calculating it exactly, it's just that the rounding error is so small that it's not displayed.
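
For what it's worth, you can reproduce that display behaviour in C# itself; a rough sketch showing that the error is present, but printing with fewer digits rounds it away:

using System;

class CalculatorDisplay {
    static void Main() {
        double result = Math.Pow(Math.Sqrt(2.0), 2);

        Console.WriteLine(result.ToString("R"));   // round-trip format: 2.0000000000000004
        Console.WriteLine(result.ToString("F10")); // 10 decimal places: 2.0000000000
    }
}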


I believe most calculators use binary-coded decimal, which is the equivalent of C#'s decimal type (and thus exact for decimal fractions). That is, each byte contains two digits of the number and the maths is done via logarithms.
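
As a rough illustration of the difference (bearing in mind that decimal is exact only for decimal fractions within its 28-29 significant digits, and still cannot hold an irrational value like the square root of 2):

using System;

class DecimalVsDouble {
    static void Main() {
        Console.WriteLine(0.1 + 0.2 == 0.3);      // False: binary doubles can't store 0.1 exactly
        Console.WriteLine(0.1m + 0.2m == 0.3m);   // True: decimal stores decimal fractions exactly
    }
}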


What makes you think your calculator can do it? It's almost certainly displaying fewer digits than it calculates with, and you'd get the 'correct' result if you printed out your 2.0000000000000004 with only five fractional digits (for example).

I think you'll probably find that it can't. When I do the square root of 2 and then multiply that by itself, I get 1.999999998.

The square root of 2 is one of those annoying irrational numbers, like pi, and therefore can't be represented exactly with normal IEEE 754 doubles or even decimal types. To represent it exactly, you need a system capable of symbolic math where the value is stored as "the square root of two" so that subsequent calculations can deliver correct results.


The way calculators round numbers varies from model to model. My TI Voyage 200 does algebra to simplify equations (among other things), but most calculators will display only a portion of the real value calculated, after applying a rounding function to the result. For example, you may take the square root of 2 and the calculator would store (let's say) 54 decimals, but only display 12 rounded decimals. Thus taking the square root of 2 and then squaring that result would return the original value, since the result is rounded. In any case, unless the calculator can keep an infinite number of decimals, you'll always get a best-approximation result from complex operations.

By the way, try to represent 0.1 in binary and you'll realize that you can't represent it exactly; you end up with an infinitely repeating pattern (0.000110011001100...).
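
You can see the value that actually gets stored by asking C# for more digits than the default display shows:

using System;

class TenthStored {
    static void Main() {
        double tenth = 0.1;
        Console.WriteLine(tenth.ToString("G17"));  // 0.10000000000000001 -- the nearest double to 0.1
    }
}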


Your calculator has methods which recognize and manipulate irrational input values.

For example: 2^(1/2) is likely not evaluated to a number in the calculator unless you explicitly tell it to do so (as on the TI-89/92).

Additionally, the calculator has logic it can use to manipulate them, such as x^(1/2) * y^(1/2) = (x*y)^(1/2), where it can then wash, rinse, and repeat the method for working with irrational values.

If you were to give C# some method to do this, I suppose it could as well. After all, algebraic solvers such as Mathematica are not magical.
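
Even a toy version of that idea works. The SymbolicSqrt type below is purely hypothetical and nowhere near a real computer algebra system, but it sketches how keeping the value symbolic avoids the rounding entirely:

using System;

// Toy symbolic value representing sqrt(n): the radicand is stored exactly,
// so the rule sqrt(x) * sqrt(y) = sqrt(x*y) can be applied without rounding.
readonly struct SymbolicSqrt {
    public readonly long Radicand;
    public SymbolicSqrt(long radicand) => Radicand = radicand;

    public static SymbolicSqrt operator *(SymbolicSqrt a, SymbolicSqrt b)
        => new SymbolicSqrt(a.Radicand * b.Radicand);

    public override string ToString() {
        // If the radicand is a perfect square, report the exact integer root.
        long root = (long)Math.Round(Math.Sqrt(Radicand));
        return root * root == Radicand ? root.ToString() : $"sqrt({Radicand})";
    }
}

class Demo {
    static void Main() {
        var root2 = new SymbolicSqrt(2);
        Console.WriteLine(root2 * root2);  // prints 2 exactly, not 2.0000000000000004
    }
}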


It has been mentioned before, but I think what you are looking for is a computer algebra system. Examples of these are Maxima and Mathematica, and they are designed solely to provide exact values for mathematical calculations, something the CPU's floating-point hardware doesn't give you.

The mathematical routines in languages like C# are designed for numerical calculations: it is expected that if you are doing calculations in a program, you will either have simplified them already or only need a numerical result.


2.0000000000000004 and 2 are both represented as the same value (binary 10.0, i.e. exactly 2) in single precision. In your case, using single precision in C# should give the exact answer.
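
You can check that in C# by rounding the double result down to single precision; a quick sketch:

using System;

class SinglePrecisionCheck {
    static void Main() {
        double d = Math.Pow(Math.Sqrt(2.0), 2);  // 2.0000000000000004
        float f = (float)d;                      // the nearest float is exactly 2

        Console.WriteLine(d == 2.0);             // False
        Console.WriteLine(f == 2.0f);            // True: the error is below float's resolution near 2
    }
}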

For your other example, Wolfram Alpha may use higher precision than machine precision for the calculation. This adds a big performance penalty. For instance, in Mathematica, going to higher precision makes calculations about 300 times slower:

k = 1000000;
vec1 = RandomReal[1, k];
vec2 = SetPrecision[vec1, 20];
AbsoluteTiming[vec1^2;]
AbsoluteTiming[vec2^2;]

It's 0.01 seconds vs 3 seconds on my machine.

You can see the difference between the single-precision and double-precision results by doing something like the following in Java:

public class Bits {
    public static void main(String[] args) {
        double a1 = 2.0;
        float a2 = (float) 2.0;

        // Square the square root in each precision.
        double b1 = Math.pow(Math.sqrt(a1), 2);
        float b2 = (float) Math.pow(Math.sqrt(a2), 2);

        // Dump the raw bit patterns so the results can be compared exactly.
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(a1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(a2)));
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(b1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(b2)));
    }
}

You can see that the single-precision result is exact, whereas the double-precision result is off by one bit.

