Will Decimal or Double work better for translations that need to be accurate to 0.00001?
I'm an inspector at a machine shop. I have an HTML report generated by another inspector that has some problems I need to fix. This isn't the first time, so I need something better than PowerShell and RegEx. (Fear not, internet warriors: I know I shouldn't use RegEx for HTML. I'm using HtmlAgilityPack now.)
I'm aware there are a lot of similar discussions on SO and on the internet in general, but I didn't find anything quite this specific. I can write some small experiment apps to test some of this (and I plan to), but I want some idea of whether it will be future-safe before I implement all of it. Even though I'm not a programmer by trade, I have a good grasp of the concepts we're talking about; don't worry about talking over my head.
Over a series of transformations, is it likely I will accumulate more than 0.0001 of error? What about 0.00001?
- If a report's alignment is off, I may need to rotate and translate it multiple times.
- I've only implemented rotation and translation at this time, but I plan on adding more transformations that may increase the number and complexity of operations.
- The integer component can go into the thousands.
- Our instruments are typically certified to 0.0001. Normal significant-digit rules for scientific measurements apply.
- Each data point is two sets of values: Nominal (as modeled) and Actual (as measured).

Will the overhead of Decimal and writing the trig functions manually be incredibly time consuming (edit: at runtime)?

- Easiest to test, but I want to know before implementing the math functions for Decimal. (A sketch of such an experiment follows below.)
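Since small experiment apps are already planned, here is a minimal sketch of what one could look like: apply many rotate/translate round trips in double precision and see how far the point drifts. All names, values, and iteration counts here are made up for illustration, not taken from the actual reports.

    using System;

    class DriftExperiment
    {
        // A perfect number system would show zero drift after the loop,
        // since every operation is exactly undone.
        static void Main()
        {
            double x = 1234.5678, y = 987.6543;   // magnitudes like the report data
            double x0 = x, y0 = y;
            double angle = 0.0137;                // arbitrary rotation in radians

            for (int i = 0; i < 1000; i++)
            {
                (x, y) = Rotate(x, y, angle);
                x += 12.3456; y -= 7.8901;        // translate...
                x -= 12.3456; y += 7.8901;        // ...and undo it
                (x, y) = Rotate(x, y, -angle);
            }

            Console.WriteLine($"x drift: {Math.Abs(x - x0):E2}");
            Console.WriteLine($"y drift: {Math.Abs(y - y0):E2}");
        }

        static (double X, double Y) Rotate(double x, double y, double a)
        {
            double c = Math.Cos(a), s = Math.Sin(a);
            return (x * c - y * s, x * s + y * c);
        }
    }

With doubles, the drift after a few thousand passes like this typically lands many orders of magnitude below 0.0001, which speaks directly to the first question.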
Side question:
I have a point class, Point3D, that holds x, y and z. Since each data point is two of these (the Nominal and the Actual), I have a class, MeasuredPoint, with two Point3D instances. There has to be a better name than MeasuredPoint that isn't annoyingly long.
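For concreteness, here's a minimal sketch of the two classes as described; the exact members are an assumption based on the question, not the poster's actual code.

    public readonly struct Point3D
    {
        public double X { get; }
        public double Y { get; }
        public double Z { get; }

        public Point3D(double x, double y, double z) => (X, Y, Z) = (x, y, z);
    }

    // One inspected feature: where it should be (Nominal) versus where it
    // actually measured (Actual).
    public readonly struct MeasuredPoint
    {
        public Point3D Nominal { get; }
        public Point3D Actual { get; }

        public MeasuredPoint(Point3D nominal, Point3D actual) =>
            (Nominal, Actual) = (nominal, actual);
    }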
Oh yeah, this is C#/.NET. Thanks!
Don't implement trig functions with Decimal! There's a reason why the standard library doesn't provide them, which is that if you're doing trig, Decimal doesn't provide any added benefit.
Since you're going to be working in radians anyway, your values are defined as multiples/ratios of π, which isn't exactly representable in any base. Forcing the representation to base ten is more likely to increase error than to decrease it.
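For what it's worth, System.Math only exposes trig for double, so a Decimal-based pipeline would have to round-trip through double anyway. A short illustration:

    using System;

    class DecimalTrig
    {
        static void Main()
        {
            decimal angle = 0.5m;

            // There is no Math.Sin(decimal): the only way to do trig on a
            // decimal is to cast through double, so storing the angle as
            // decimal buys nothing here.
            decimal s = (decimal)Math.Sin((double)angle);
            Console.WriteLine(s);
        }
    }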
If precision (minimum error in ulps) is important for your application, then you must read What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg. That article does a much better job of explaining it than I could.
The upshot, however, is that if your desired precision is only 5 decimal places, even a 32-bit float (IEEE-754 single-precision) is going to be plenty. A 64-bit double (IEEE-754 double-precision) will help you be more precise with your error term, but a 128-bit base-10 floating-point value is just performance-killing overkill, and almost certainly won't improve the precision of your results one iota.
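One way to check that against the magnitudes in the question (integer parts in the thousands) is to print the gap between adjacent representable values near 1000. A sketch, assuming .NET Core 3.0 or later for Math.BitIncrement and MathF.BitIncrement:

    using System;

    class UlpCheck
    {
        static void Main()
        {
            float  f = 1000.0f;
            double d = 1000.0;

            // Gap to the next representable value near 1000:
            // float  : ~6.1E-05 (tight against 0.00001, comfortable for 0.0001)
            // double : ~1.1E-13 (far below either tolerance)
            Console.WriteLine(MathF.BitIncrement(f) - f);
            Console.WriteLine(Math.BitIncrement(d) - d);
        }
    }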
If you need accuracy to be maintained over multiple operations, then you really ought to consider using Decimal. While a float may be okay for holding a number for a short time, no IEEE 754 float format can sustain its value indefinitely as the number of operations applied increases.
Try looking for a library that would handle your needs. I stumbled across W3b.sine in a half-hearted search. I've definitely encountered others in the past.
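Even without a third-party library, plain decimal illustrates the point for additive steps. A sketch:

    using System;

    class AccumulateDemo
    {
        static void Main()
        {
            decimal dm = 0m;
            double  db = 0.0;

            // Add 0.0001 ten thousand times; the exact answer is 1.
            for (int i = 0; i < 10_000; i++)
            {
                dm += 0.0001m;
                db += 0.0001;
            }

            Console.WriteLine(dm);   // exactly 1.0000
            Console.WriteLine(db);   // slightly off 1.0: binary 0.0001 is inexact
        }
    }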
Since you are talking about rotations as well as translations, as well as trigonometry functions, it seems safe to assume the values you are talking about are not exact multiples of 0.0001.
Based on this assumption:
With decimals, you will essentially be rounding to 0.0001 (or your chosen precision) after each step, and these rounding errors will accumulate.
Double values would generally be more accurate: you would store internally with all available precision, and round to four decimals when displaying results.
For example, suppose that as the result of a rotation or transformation you want to move by a distance of 1/3 (0.3333...), and you want to repeat this movement three times.
If you store the distances as decimal with four decimal places (0.3333), the sum will be 0.9999, an error of 0.0001.
If you store as doubles, you can achieve much higher precision, and as a bonus performance will be better.
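That worked example, as a runnable sketch:

    using System;

    class ThirdDemo
    {
        static void Main()
        {
            // decimal, pre-rounded to four places as in the example above
            decimal dm = 0.3333m;
            Console.WriteLine(dm + dm + dm);   // 0.9999: an error of 0.0001

            // double, keeping full precision until display
            double db = 1.0 / 3.0;
            double sum = db + db + db;
            Console.WriteLine(sum);            // ~1.0; any residue is around 1E-16
            Console.WriteLine($"{sum:F4}");    // 1.0000 once rounded for display
        }
    }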
Really, decimals are usually only used for financial calculations, where results need to be exactly rounded to a fixed number of base-ten decimal places.
Floats and Doubles are fast approximations, that's it.
Apart from 0.0 and 1.0, you won't even get exact representations for most constants (0.1, for instance). So if you have to guarantee a certain precision, using floating-point arithmetic is not an option.
But if the goal is to achieve a certain precision, give or take a bit, then double might do. Just watch out for Loss of significance.
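A short sketch of both effects (the printed values are approximate):

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            // 0.1 has no exact binary representation, so errors surface
            // as soon as you compute with it:
            Console.WriteLine(0.1 + 0.1 + 0.1 == 0.3);   // False

            // Loss of significance: subtracting nearly equal values wipes
            // out most of the significant digits you started with.
            double a = 1000.00001;
            double b = 1000.00000;
            Console.WriteLine(a - b);   // close to 1E-05, but not exactly
        }
    }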
Frankly, I think FLOAT was a wrong turn in data processing. Because people work in decimal, and output is always translated into decimal, float just causes continual problems. I used to use float when a variable could hold a wide range of values, in the 1E-9 to 1E9 kind of range, and use integers with a fixed number of decimal places, managed in code, otherwise. These days, with the Java BigDecimal class and similar functionality in other languages, there's almost no reason to use float. Perhaps in an environment where you are doing a lot of calculations and performance is an issue, you'd accept the rounding problems. I don't think I've used a float in a program in at least ten years.
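The scaled-integer approach mentioned above might look like this in C#. This is a sketch only, and the 0.00001 unit is an assumption based on the tolerances in the question:

    using System;

    // Fixed-point sketch: store lengths as integer counts of 0.00001 units.
    // Addition and subtraction are then exact; multiplication, division and
    // trig still need explicit rounding rules.
    readonly struct Fixed5
    {
        private const long Scale = 100_000;   // 1 unit = 0.00001
        private readonly long _units;

        private Fixed5(long units) => _units = units;

        public static Fixed5 FromDouble(double value) =>
            new Fixed5((long)Math.Round(value * Scale));

        public double ToDouble() => (double)_units / Scale;

        public static Fixed5 operator +(Fixed5 a, Fixed5 b) => new Fixed5(a._units + b._units);
        public static Fixed5 operator -(Fixed5 a, Fixed5 b) => new Fixed5(a._units - b._units);

        public override string ToString() => ToDouble().ToString("F5");
    }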