
How to write a unit test where each test case has different input but does the same thing?

I need to create a unit test for a Python class. I have a database of inputs and the expected results that the UUT should generate for those inputs.

Here is the pseudo-code of what I want to do:

for i=1 to NUM_TEST_CASES:
    Load input for test case i
    execute UUT on the input and save output of run
    Load expected result for test case i
    Compare output of run with the expected result

Can I achieve this using the unittest package, or is there some better testing package for this purpose?


The way you describe testing is an odd match for unit testing in general. Unit testing does not -- typically -- load test data or test results from external files. Generally, it's simply hard-coded in the unit test.

That's not to say that your plan won't work. It's just to say that it's atypical.

You have two choices.

  1. (What we do). Write a little script that does the "Load input for test case i", and "Load expected result for test case i". Use this to generate the required unittest code. (We use Jinja2 templates to write Python code from source files.)

    Then delete the source files. Yes, delete them. They'll only confuse you.

    What you have left is proper Unittest files in the "typical" form with static data for the test case and expected results.

  2. Write your setUp method to do the "Load input for test case i", and "Load expected result for test case i". Write your test method to exercise the UUT.

It might look like this.

import unittest

class OurTest( unittest.TestCase ):
    def setUp( self ):
        self.load_data()
        self.load_results()
        self.uut = ... UUT ...
    def runTest( self ):
        ... exercise UUT with source data ...
        ... check results, using self.assertXXX methods ...

Want to run this many times? One way is to do something like this.

class Test1( OurTest ):
    source_file = 'this'
    result_file = 'that'

class Test2( OurTest ):
    source_file = 'foo'
    result_file = 'bar'

This will allow the unittest main program to find and run your tests.
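
Putting the two snippets together, a minimal self-contained sketch might look like this; the plain-text file format, the skip guard on the base class, and the make_uut factory are assumptions added for illustration, not part of the original answer.

import unittest

class OurTest( unittest.TestCase ):
    # Concrete subclasses override these with the per-case file names.
    source_file = None
    result_file = None

    def setUp( self ):
        if self.source_file is None:
            self.skipTest( "abstract base case" )
        with open( self.source_file ) as f:
            self.data = f.read()
        with open( self.result_file ) as f:
            self.expected = f.read()
        self.uut = make_uut()  # hypothetical factory for the unit under test

    def runTest( self ):
        self.assertEqual( self.expected, self.uut( self.data ) )

class Test1( OurTest ):
    source_file = 'this'
    result_file = 'that'

if __name__ == '__main__':
    unittest.main()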


We do something like this in order to run what are actually integration (regression) tests within the unittest framework (actually an in-house customization thereof which gives us enormous benefits such as running the tests in parallel on a cluster of machines, etc, etc -- the great added value of that customization is why we're so keen to use the unittest framework).

Each test is represented in a file (the parameters to use in that test, followed by the expected results). Our integration_test reads all such files from a directory, parses each of them, and then calls:

def addtestmethod(testcase, uut, testname, parameters, expresults):
  def testmethod(self):
    results = uut(parameters)
    self.assertEqual(expresults, results)
  testmethod.__name__ = testname
  setattr(testcase, testname, testmethod)

We start with an empty test case class:

class IntegrationTest(unittest.TestCase): pass

and then call addtestmethod(IntegrationTest, ...) in a loop in which we're reading all the relevant files and parsing them to get testname, parameters, and expresults.
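
A rough sketch of that loop, where the cases/*.txt layout, the parse_case_file helper, and the uut callable are placeholder assumptions standing in for the in-house pieces:

import glob

# Hypothetical discovery loop: read every case file, parse it, and attach
# one generated test method per file to the empty IntegrationTest class.
for path in glob.glob('cases/*.txt'):
    testname, parameters, expresults = parse_case_file(path)  # hypothetical parser
    addtestmethod(IntegrationTest, uut, testname, parameters, expresults)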

Finally, we call our in-house specialized test runner which does the heavy lifting (distributing the tests over available machines in a cluster, collecting results, etc). We didn't want to reinvent that rich-value-added wheel, so we're making a test case as close to a typical "hand-coded" one as needed to "fool" the test runner into working right for us;-).

Unless you have specific reasons (good test runners or the like) to use unittest's approach for your (integration?) tests, you may find your life is simpler with a different approach. However, this one is quite viable and we're quite happy with its results (which mostly include blazingly-fast runs of large suites of integration/regression tests!-).


To me it seems like pytest has just the thing you need.

You can parametrise tests so that the same test is run as many times as you have inputs, and all it takes is a decorator (no loops etc.).

Here's a plain example:

import pytest
@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Here parametrize takes two arguments - the names of the parameters as a string, and the values of those parameters as an iterable.

test_eval will then be called once for each element of the list.
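
Because the question's cases live in external files, the parameter list can be built from those files before the decorator is applied. A hedged sketch, where load_case and run_uut are placeholder helpers for your own loading and execution code, not pytest API:

import glob
import pytest

def load_cases():
    # Build (input, expected) pairs from the external test-case files.
    cases = []
    for path in sorted(glob.glob('cases/*.txt')):
        test_input, expected = load_case(path)  # hypothetical loader
        cases.append((test_input, expected))
    return cases

@pytest.mark.parametrize("test_input,expected", load_cases())
def test_uut(test_input, expected):
    assert run_uut(test_input) == expected  # run_uut stands in for the UUT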


Maybe you could use doctest for this. Knowing your inputs and outputs (and being able to map the case number to a function name) you should be able to produce a text file like this:

>>> from XXX import function_name1
>>> function_name1(input1)
output1
>>> from XXX import function_name2
>>> function_name2(input2)
output2
...

And then just use doctest.testfile('cases.txt'). It could be worth trying.
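
A small generator script along these lines could write and then run such a file; the cases list of (function name, input, expected) tuples is an assumption standing in for your database of inputs and results:

import doctest

# cases is a hypothetical iterable of (function_name, test_input, expected) tuples.
with open('cases.txt', 'w') as f:
    for function_name, test_input, expected in cases:
        f.write('>>> from XXX import {}\n'.format(function_name))
        f.write('>>> {}({!r})\n'.format(function_name, test_input))
        f.write('{!r}\n'.format(expected))

# module_relative=False so the path is taken relative to the working directory.
doctest.testfile('cases.txt', module_relative=False)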


You might also want to take a look at my answer to this question. Again I'm trying to do regression testing rather than unit testing per se, but the unittest framework is good for both.

In my case, I had about a dozen input files, covering a fair spread of different use cases, and I had about half a dozen test functions I wanted to call on each.

Instead of writing 72 different tests, most of which were identical apart from the input parameters and results data, I created a dictionary of results (with the key being the input parameters and the value being a dictionary of results for each function under test). I then wrote a single TestCase class to test each of the 6 functions and replicated that over the 12 test files by adding the TestCase to the test suite multiple times.
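
A rough sketch of that arrangement, with invented names (RESULTS, run_function, and the two test methods) standing in for the real dozen input files and half-dozen functions:

import unittest

# Hypothetical results dictionary: input file -> expected result per function.
RESULTS = {
    'case01.dat': {'function_a': 1, 'function_b': 2},
    'case02.dat': {'function_a': 3, 'function_b': 4},
}

class FileCase(unittest.TestCase):
    def __init__(self, method_name, input_file=None):
        super(FileCase, self).__init__(method_name)
        self.input_file = input_file

    def setUp(self):
        if self.input_file is None:
            self.skipTest('no input file bound')

    def test_function_a(self):
        expected = RESULTS[self.input_file]['function_a']
        self.assertEqual(expected, run_function('function_a', self.input_file))  # placeholder

    def test_function_b(self):
        expected = RESULTS[self.input_file]['function_b']
        self.assertEqual(expected, run_function('function_b', self.input_file))  # placeholder

def suite():
    # Add the same TestCase once per input file, as described above.
    s = unittest.TestSuite()
    for input_file in RESULTS:
        for name in ('test_function_a', 'test_function_b'):
            s.addTest(FileCase(name, input_file))
    return s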
