
Unit Testing with expected errors

For my ColorJizz library, I'm expecting slight errors when you do multiple conversions between different color formats. The errors are all very small (e.g. 0.0001 out).

What do you think I should do about these?


I feel like there are 2 real options:

  1. Leave them as they are, with almost 30% of tests failing
  2. Put some kind of 'error range' in my unit tests and pass them if they're within that range. But how do I judge what level of error I should have?

Here's an example of the kind of failures I'm getting:

http://www.mikeefranklin.co.uk/tests/test/

What would be the best solution?


It seems you are using floating point values, for which rounding errors are a fact of life. I recommend applying an error margin for comparison checks in your unit tests.

Leaving even some of your unit tests failing is not a realistic option - unit tests should pass 100% under normal circumstances. If you let some of them fail regularly, you won't easily notice when there is a new failure, signifying a real bug in your code.
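
As a rough sketch of that idea in C# (a hypothetical helper, not part of ColorJizz), the check boils down to comparing the absolute difference of each channel against a chosen margin:

using System;

static class ColorComparison
{
    // True if every channel of the two colors differs by no more than 'tolerance'.
    public static bool ApproximatelyEqual(double[] a, double[] b, double tolerance)
    {
        if (a.Length != b.Length) return false;
        for (int i = 0; i < a.Length; i++)
        {
            if (Math.Abs(a[i] - b[i]) > tolerance) return false;
        }
        return true;
    }
}

// In a test, instead of asserting exact equality after a round-trip conversion:
//   Assert.IsTrue(ColorComparison.ApproximatelyEqual(original, roundTripped, 0.0001));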


The error range is the standard approach for floating point "equality" tests.

NUnit uses "within":

Assert.That( 2.1 + 1.2, Is.EqualTo( 3.3 ).Within( .0005 ) );

Ruby's test/unit uses assert_in_delta:

assert_in_delta 0.05, (50000.0 / 10**6), 0.00001

And most other test frameworks have something similar. QUnit apparently does not, but it would be easy enough to extend it with one of your own design.
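
If your framework really has nothing built in, rolling your own takes only a few lines. Here is a minimal sketch in C# (ApproxAssert is a made-up name; for QUnit you would express the same idea in JavaScript):

using System;

public static class ApproxAssert
{
    // Fails (by throwing) unless 'actual' is within 'delta' of 'expected'.
    public static void AreApproximatelyEqual(double expected, double actual, double delta)
    {
        double difference = Math.Abs(expected - actual);
        if (difference > delta)
        {
            throw new Exception(
                string.Format("Expected {0} ± {1} but was {2} (off by {3}).",
                              expected, delta, actual, difference));
        }
    }
}

// Usage:
//   ApproxAssert.AreApproximatelyEqual(3.3, 2.1 + 1.2, 0.0005); // passes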

As for the actual delta to use, that depends on your application. I would think that a delta of 0.01 is already small enough that a human couldn't visually identify the color difference, yet it is still a fairly lax requirement mathematically.
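
To put that in perspective, assuming channel values are normalised to a 0-1 range (an assumption about the representation, not necessarily how ColorJizz stores them), a delta of 0.01 is only about two and a half levels on an 8-bit 0-255 scale:

using System;

class DeltaScale
{
    static void Main()
    {
        double delta = 0.01;
        // Express a 0.01 tolerance on a normalised channel in 8-bit levels.
        Console.WriteLine(delta * 255); // prints 2.55
    }
}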


Gallio/MbUnit has a dedicated assertion for that specific test case (Assert.AreApproximatelyEqual):

double a = 5;
double b = 4.999;
Assert.AreApproximatelyEqual(a, b, 0.001); // Pass!