Testing functions with random output
- I am working on a test project to test a neural networks library...
- The problem is that this library sometimes uses random numbers...
- I need to derive test cases (input, expected output, actual output)...
Does anybody have an idea how to derive test cases (input, expected output, actual output) for a function that uses random numbers when taking actions and evaluating outputs?
Yes, you either have to run a large enough number of cases so that the randomness averages out, or you make the random source another input to your function or method so you can test it independently.
An example of the first kind (this is Python, but the principle applies in any language):
import random

def test_random_number():
    # 1000 uniform(0, 1) draws sum to ~500 on average
    total = sum(random.uniform(0, 1) for _ in range(1000))
    assert 100 < total < 900
So this test can fail if you're very unlucky, but it's still a reasonable test: it will pass nearly all the time, and it's simple to write.
To do things 'properly', you need to inject the random source:
import random

class DefaultRandomBehavior(object):
    # Production source of randomness; tests substitute a fake.
    def pick_left_or_right(self):
        return random.choice(['left', 'right'])

class AardvarkModeller(object):
    def __init__(self, random_source=None):
        # Accept an injected random source, defaulting to the real one.
        self.random_source = random_source or DefaultRandomBehavior()

    def aardvark_direction(self):
        r = self.random_source.pick_left_or_right()
        return 'The aardvark faces ' + r
Now you can unit test this by either mocking out or faking the DefaultRandomBehavior class, thus completely side-stepping the non-determinism.
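For example, a minimal test with a hand-rolled fake (FixedRandomBehavior is a name invented here, not part of any library) might look like this:

class FixedRandomBehavior(object):
    # Test double: always 'chooses' the same direction.
    def pick_left_or_right(self):
        return 'left'

def test_aardvark_direction():
    modeller = AardvarkModeller(random_source=FixedRandomBehavior())
    assert modeller.aardvark_direction() == 'The aardvark faces left'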
It's unlikely that the library is using truly random numbers, as computers aren't very good at generating those. Instead it's probably using a pseudo-random number generator (PRNG) seeded in some way, possibly from a 'real' random source or from the current time. One way to make your results reproducible is to extend the library to accept a user-supplied PRNG seed and set it to a constant in your test cases. The internal sequence of random numbers will then be the same on every run.
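For instance, if the library draws from Python's standard random module, fixing the seed replays an identical sequence (a sketch; your library may need its own seeding hook):

import random

random.seed(42)  # fixed seed for a reproducible test run
first = [random.random() for _ in range(3)]

random.seed(42)  # re-seeding replays the exact same sequence
second = [random.random() for _ in range(3)]

assert first == second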
The second (and perhaps more useful) approach is to compare the expected and actual output approximately. If the use of random numbers makes such a big difference to your calculation that the results are not even approximately reproducible, it's worth questioning the usefulness of the calculation itself. The trick is to find properties of the library's output that can be compared numerically with an allowable error; you will probably want to compare the results of running the neural network on some inputs rather than compare the networks directly.
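A sketch of that idea (network.predict is a hypothetical method standing in for whatever your library exposes):

import math

def assert_outputs_close(expected, actual, rel_tol=0.05):
    # Compare outputs element-wise within a relative tolerance
    # instead of demanding bit-for-bit identical results.
    assert len(expected) == len(actual)
    for e, a in zip(expected, actual):
        assert math.isclose(e, a, rel_tol=rel_tol), (e, a)

# Compare what the trained network computes on a sample,
# not the network's internal weights:
# assert_outputs_close([0.9, 0.1], network.predict(sample))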