Test framework allowing tests to depend on other tests

I'm wondering if there is a test framework that allows for tests to be declared as being dependent on other tests. This would imply that they should not be run, or that their results should not be prominently displayed, if the tests that they depend on do not pass.

The point of such a setup would be to allow the root cause to be more readily determined in a situation where there are many test failures.

As a bonus, it would be great if there were some way to use an object created by one test as a fixture for other tests.

Is this feature set provided by any of the Python testing frameworks? Or would such an approach be antithetical to unit testing's underlying philosophy?


Or would such an approach be antithetical to unit testing's underlying philosophy?

Yep... if it is a unit test, it should be able to run on its own. Whenever I have found someone wanting to create dependencies between tests, it has been because the code was structured poorly. I'm not saying that's the case here, but it can often be a sign of a code smell.


Proboscis is a Python test framework that extends Python’s built-in unittest module and Nose with features from TestNG.

Sounds like what you're looking for. Note that it works a bit differently from unittest and Nose, but its documentation explains how it works pretty well.
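
For illustration, here is a minimal sketch of how dependencies look in Proboscis, based on its documented @test decorator; the test names, group name, and assertions are made up for the example:

    from proboscis import test, TestProgram
    from proboscis.asserts import assert_equal

    @test(groups=["setup"])
    def create_account():
        # Runs first; tests in the "setup" group gate the test below.
        assert_equal(1 + 1, 2)

    @test(depends_on_groups=["setup"])
    def use_account():
        # Reported as skipped, not failed, if anything in "setup" fails.
        assert_equal(2 * 2, 4)

    if __name__ == "__main__":
        TestProgram().run_and_exit()

When a dependency fails, the dependent tests show up as skips rather than failures, which addresses the "results should not be prominently displayed" part of the question.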


This seems to be a recurring question; see e.g. #3396055.

What you describe most probably isn't a unit test, because unit tests should be fast and independent, so running all of them isn't a big drag. I can see where this might help in short-circuiting integration/regression runs to save time. If this is a major need for you, I'd tag the setup tests with a [Core] attribute or some such.

I would then write a build script with two tasks:

  • Task n: run all tests in the X, Y, Z DLLs that are marked with the [Core] tag
  • Task n+1 (depends on Task n): run all tests in the X, Y, Z DLLs excluding those marked with [Core]

(Task n+1 shouldn't run if Task n didn't succeed.) It isn't a perfect solution, e.g. it bails out entirely if even one [Core] test fails, but arguably you should be fixing the Core tests before proceeding to the non-Core ones anyway.
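
A rough Python equivalent of those two tasks, using pytest markers in place of the [Core] attribute (the "core" marker name and the two-stage runner are assumptions for the sketch, not part of the original answer):

    # run_tests.py: two-stage runner mirroring Task n / Task n+1.
    import sys
    import pytest

    # Stage 1: run only the tests decorated with @pytest.mark.core.
    if pytest.main(["-m", "core"]) != 0:
        sys.exit("Core tests failed; skipping the remaining tests.")

    # Stage 2: core tests passed, so run everything else.
    sys.exit(pytest.main(["-m", "not core"]))

pytest.main() returns the same exit code the command-line runner would, so the second stage only runs when the first succeeds.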


It sounds like what you need is not to prevent your dependent tests from running, but to report your unit test results in a more structured way, one that lets you identify when a single error cascades into a series of other failures.


The test runners py.test, Nosetests and unit2/unittest2 all support the notion of "exiting after the first failure". py.test, more generally, lets you pass --maxfail=NUM to stop running and reporting after NUM failures. That alone may cover your case, especially since maintaining and updating dependency declarations between tests is not a very interesting task.
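
As a small illustration, the same behaviour is reachable through pytest's public pytest.main() entry point as well as the command line; -x and --maxfail are documented pytest options:

    import pytest

    # Equivalent to "py.test -x": stop after the first failure.
    pytest.main(["--maxfail=1"])

    # Stop after three failures, then report whatever ran.
    pytest.main(["--maxfail=3"])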
