Framework for automatically planning the execution order of automated regression tests?

I'm currently working on the implementation of a test suite for a relatively complex application. The application is Java & Spring based with a web frontend. The frontend tests can be written in Java too (using Silk4J and their automation client). Actually writing the tests is not the issue; that is the easy part. Where it starts getting tricky is the order in which individual tests can be executed.

Currently we are writing our tests using JUnit. As JUnit is a unit-testing tool, the order in which the tests are executed is not fixed. If we simply create the tests for every module of the application, we quickly run into trouble: some tests rely on other parts working correctly and on certain data from other modules being available. I could write the tests for every module so that each one initializes the application to a pre-defined state and then executes its tests, but having to clean and prepare the state would be quite an effort. The more complex tests require a vast amount of preparation and test scenarios that go across multiple modules.

What I'm looking for is a testing framework in which each test can somehow define its requirements and what service it tests/provides (a test of the create-user feature can actually create users ... at least it should). I don't want to hard-code which test is run with which data and in which order, because it is extremely complex to determine the order, and changes to the application would make it necessary to completely refactor the tests.

For example, my "create-user-test" creates users as a side effect of actually checking that users are correctly created. To me it doesn't matter whether this functionality is tested using userA, userB or userC, just as long as it is tested. If I now have another test, "create-account-test", that requires a user that only userC satisfies, then the test system should know: "Oh ... create-account-test needs userC, which has not been created yet, but passing userC to create-user-test would create it." So in the final execution it runs "create-user-test" with userC before "create-account-test", and thereby uses the side effect of "create-user-test" to create the state needed by "create-account-test".

By inspecting the requirements and the provided services of my tests, such a system should be able to create an acyclic graph containing each test at least once (thereby testing the entire functionality), without having to prepare/tear down the application's state for every test, or fire an error if it is somehow not possible to create such a graph. At least this way I could create huge test scenarios that would still stay maintainable.
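To make the idea concrete, here is a minimal sketch of how such declarations and a planner could work. Everything in it is hypothetical: the @Requires/@Provides annotations, the UserAccountTests class and the TestPlanner are made up for illustration, and the ordering is just a simple topological sort over the declared state.

    import java.lang.annotation.*;
    import java.lang.reflect.Method;
    import java.util.*;

    // Hypothetical annotations: a test declares which state it needs and which state it creates.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Requires { String[] value() default {}; }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Provides { String[] value() default {}; }

    class UserAccountTests {
        @Provides("userC")
        public void createUserTest() { /* creates userC as a side effect of testing user creation */ }

        @Requires("userC")
        public void createAccountTest() { /* needs userC to exist before it can run */ }
    }

    public class TestPlanner {

        // Orders the test methods so that every @Requires is satisfied by an earlier @Provides,
        // or fails if no such order exists (missing provider or dependency cycle).
        public static List<Method> plan(Class<?> testClass) {
            List<Method> remaining = new ArrayList<>();
            for (Method m : testClass.getDeclaredMethods()) {
                if (m.isAnnotationPresent(Requires.class) || m.isAnnotationPresent(Provides.class)) {
                    remaining.add(m);
                }
            }
            List<Method> order = new ArrayList<>();
            Set<String> available = new HashSet<>();
            while (!remaining.isEmpty()) {
                Method next = null;
                for (Method m : remaining) {
                    Requires req = m.getAnnotation(Requires.class);
                    if (req == null || available.containsAll(Arrays.asList(req.value()))) {
                        next = m;
                        break;
                    }
                }
                if (next == null) {
                    throw new IllegalStateException("No executable order: unmet requirement or dependency cycle");
                }
                remaining.remove(next);
                order.add(next);
                Provides prov = next.getAnnotation(Provides.class);
                if (prov != null) {
                    available.addAll(Arrays.asList(prov.value()));
                }
            }
            return order;
        }

        public static void main(String[] args) {
            // Prints createUserTest before createAccountTest, regardless of declaration order.
            plan(UserAccountTests.class).forEach(m -> System.out.println(m.getName()));
        }
    }

A real implementation would of course also have to decide which concrete test data (userA, userB or userC) to pass to each test, which is the genuinely hard part.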

I know this is somewhat complex. I googled for a while to see whether somebody has already worked on such a framework, but unfortunately I couldn't find anything similar.

Now I'm hoping for someone here to guide me to a tool OR tell me why this is a totally bad idea. A response of "Hey ... great idea ... nobody has created such a thing yet" would certainly kill my after-work leisure time dramatically, because in that case I would probably start developing such a tool ;-)

Chris


Tools like JUnit don't typically support ordering the running of tests because it's generally considered bad practice for unit testing. In unit testing, you want to ensure each test is completely independent of other tests and has no external dependencies.

But you're not doing unit testing, so attempting to use JUnit is going to cause conflicts between what you are trying to achieve and what the tool implementers designed JUnit to do...

You seem to be trying to do an awful lot, though. You want to be able to run a test and have a test tool figure out and create any data you need. That's a really tall order... I don't know of any tool that does everything you are asking, but there are plenty of tools that can provide most of what you want, with some effort.

A test framework like Robot Framework allows you to specify the order in which tests run. It's probably more suited to what you are trying to achieve.

But there will always be some work to do in order to set up your environment before testing. I generally collect tests together that require a particular configuration or set of data, and run those setup steps just before running that set of tests. This cuts down on the need to do configuration and data setup before every test, and it keeps the complexity manageable.
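In JUnit 4 that grouping typically looks like the following sketch. The test class and the seeding helpers are made up, but @BeforeClass/@AfterClass are the standard way to run the expensive preparation once per group of tests instead of before every single test.

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    // All tests that need the same pre-configured data live in one class, so the
    // expensive setup and teardown run once for the whole group.
    public class AccountModuleTests {

        // Stand-ins for whatever really seeds and cleans the application state.
        static void seedUsers(String... names) { /* create users through the application's API */ }
        static void removeUsers(String... names) { /* remove them again */ }

        @BeforeClass
        public static void prepareSharedState() {
            seedUsers("userA", "userB", "userC");
        }

        @AfterClass
        public static void cleanUpSharedState() {
            removeUsers("userA", "userB", "userC");
        }

        @Test
        public void createAccountForExistingUser() {
            // exercises the create-account feature against the prepared users
        }

        @Test
        public void listAccountsForExistingUser() {
            // another test reusing the same prepared data
        }
    }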


We use TestNG with Silk4J to create sequenced regression tests (some tests are as short as 10 minutes and some are over 12 hours long). All of the tests execute in a specific order, and some tests trigger a "skip all remaining tests" when a super-critical error occurs.
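For reference, such a sequence can be expressed roughly as follows. The test names are made up, but dependsOnMethods, SkipException and the skip-dependents behaviour are standard TestNG.

    import org.testng.SkipException;
    import org.testng.annotations.Test;

    // dependsOnMethods fixes the execution order; when a prerequisite fails or is
    // skipped, TestNG automatically skips everything that depends on it.
    public class RegressionSequence {

        @Test
        public void environmentIsReachable() {
            if (!checkEnvironment()) {
                // Super-critical failure: skipping (or failing) here skips all dependent tests.
                throw new SkipException("Environment not reachable, skipping the remaining sequence");
            }
        }

        @Test(dependsOnMethods = "environmentIsReachable")
        public void createUser() {
            // runs only after environmentIsReachable has passed
        }

        @Test(dependsOnMethods = "createUser")
        public void createAccountForUser() {
            // runs only after createUser has passed, reusing the user it created
        }

        private boolean checkEnvironment() {
            return true; // stand-in for a real health check
        }
    }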
