I'm running into an issue with floating point exceptions turned on in Visual Studio 2005. If I have code like this:
When I try to take the Nth root of a small number in C#, I get the wrong answer. For example, when I take the third root of 1.07, I get 1, which is clearly not true.
I'm on Linux, x86-64, compiling with GCC (11.1), and I'd like to use its 128-bit decimal type: https://gcc.gnu.org/onlinedocs/gccint/Decimal-float-library-routines.html
Consider the following code:

0.1 + 0.2 == 0.3  ->  false
0.1 + 0.2         ->  0.30000000000000004

Why do these inaccuracies happen?

Binary floating point math is like this. In most progr