
How to organize integration tests?

When writing unit tests, I usually have one test class per production class, so my hierarchy will look something like that:

src/main
  -package1
    -classA
    -classB
  -package2
    -classC
src/test
  -package1
    -classATests
    -classBTests
  -package2
    -classCTests

However, when doing integration tests the organization becomes less rigid. For example, I may have a test class that tests classA and classB in conjunction. Where would you put it? What about a test class that tests classA, classB and classC together?

Also, integration tests usually require external properties or configuration files. Where do you place them and do you use any naming convention for them?


Our integration tests tend to be organised the same way our specifications are, grouped by category and/or feature.


I'd concur with f4's answer. Tests at a level higher than unit tests usually have no correlation with particular classes. Your tests should stick to the project's requirements and specifications.

If you really do need a testing project tailored to your test requirements, I'd recommend the following approach: a separate project with one package per requirement or user story (depending on how you manage requirements). For example:

src/itest
  -package1 - corresponds to story#1
    -classA - test case1
    -classB - test case2
  -package2 - corresponds to story#2
    -classC - test case2
  -packageData - your common test data and utilities
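A story-oriented test class in that layout might look like the sketch below. The story (placing an order) and the collaborators (OrderService, InventoryService) are invented stand-ins for your classA/classB; a real project would use a test framework such as JUnit rather than a main method.

```java
// Hypothetical sketch: one test class per user story, named after the story.
// OrderService and InventoryService stand in for your classA/classB.
public class Story1PlaceOrderIT {

    // Minimal stand-in collaborators so the sketch is self-contained.
    static class InventoryService {
        private int stock = 5;
        boolean reserve(int qty) {
            if (qty > stock) return false;
            stock -= qty;
            return true;
        }
    }

    static class OrderService {
        private final InventoryService inventory;
        OrderService(InventoryService inventory) { this.inventory = inventory; }
        String placeOrder(int qty) {
            return inventory.reserve(qty) ? "PLACED" : "REJECTED";
        }
    }

    public static void main(String[] args) {
        OrderService orders = new OrderService(new InventoryService());
        // story #1, test case 1: an order within stock is placed
        if (!"PLACED".equals(orders.placeOrder(3)))
            throw new AssertionError("case 1 failed");
        // story #1, test case 2: an order exceeding remaining stock is rejected
        if (!"REJECTED".equals(orders.placeOrder(10)))
            throw new AssertionError("case 2 failed");
        System.out.println("story #1 passed");
    }
}
```

Naming the class after the story rather than after any production class keeps the package structure aligned with requirements instead of implementation.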

However, keep in mind that integration or system-level testing is usually a complicated task, and its scope can easily grow beyond what a dedicated testing project can cover. Be ready to consider third-party test automation tools: at the integration or system level they are often more efficient than developing a tailored testing package.


Maybe create an integration-tests directory under src/test? Sure, for integration tests the separation becomes less clear, but there's something that groups A, B and C together, isn't there? I'd start with this and see how things go. It's tough to come up with a perfect solution right away, and an "OK" solution is better than no solution.
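One conventional way to keep integration tests under the same source tree yet run them separately: if the build uses Maven, the Failsafe plugin (as opposed to Surefire, which runs unit tests) picks up classes whose names match *IT.java and runs them in the integration-test phase. A minimal sketch:

```xml
<!-- Sketch, assuming a Maven build: Failsafe runs *IT.java classes in the
     integration-test phase, so they can live under src/test alongside
     unit tests yet execute in a separate, later phase. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The naming convention (*Tests for unit tests, *IT for integration tests) then does the grouping for you, with no extra directory needed.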


It seems that your integration tests are really higher-level unit tests, since you still bind them to one or more classes. Try to pick the class that depends (transitively) on all the others in the group, and associate the test with that class.

If you have true integration tests, then association with concrete classes is of little value. Such tests are classified by application subject area (domain) and by type of functionality. For example, domains might be orders, shipments, invoices, entitlements, etc., and functionality types might be transactional, web, messaging, batch, etc. Their permutations give you a nice first cut of how to organize integration tests.
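The domain-by-functionality cross product can be made concrete with a small sketch. The domain and functionality names are the examples from the paragraph above, and the itest/ path prefix is an invented convention:

```java
import java.util.ArrayList;
import java.util.List;

public class TestPackageLayout {

    // Each domain x functionality pair becomes one candidate test package.
    static List<String> packages(List<String> domains, List<String> types) {
        List<String> result = new ArrayList<>();
        for (String domain : domains)
            for (String type : types)
                result.add("itest/" + domain + "/" + type);
        return result;
    }

    public static void main(String[] args) {
        List<String> pkgs = packages(
                List.of("orders", "shipments", "invoices", "entitlements"),
                List.of("transactional", "web", "messaging", "batch"));
        pkgs.forEach(System.out::println); // 16 candidate packages
    }
}
```

In practice you would only create the packages whose combinations actually exist in your system, but enumerating the full grid first makes the gaps visible.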


I have found that when doing TDD there is not always a 1:1 relationship between classes and unit tests. If you enforce one, you will have a hard time refactoring. In fact, after some refactoring I usually end up with about 50% 1:1 couplings, and 50% tests that could be linked to several classes, or clusters of tests that link to a single class.

Integration tests happen if you try to prove that something is or isn't working. This happens either when you're worried because you need to deliver something, or if you find a bug. Trying to get full coverage from integration tests is a bad idea (to put it mildly).

The most important thing is that a test needs to tell a story. In BDD'ish terms: given such a context, when doing this, that should happen. The tests should be examples of how you intend people to use the unit, API, application, service, ...
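The given/when/then storyline can be encoded directly in the test name and body. The Cart and its checkout story below are invented for illustration; a real project would use a test framework rather than a main method:

```java
// Hedged sketch: a BDD-style test whose name tells the story.
public class CheckoutStoryTest {

    // Minimal stand-in so the sketch is self-contained.
    static class Cart {
        private int items;
        void add(int n) { items += n; }
        boolean isEmpty() { return items == 0; }
    }

    // given an empty cart, when an item is added, then the cart is not empty
    static void givenEmptyCart_whenItemAdded_thenCartNotEmpty() {
        Cart cart = new Cart();   // given: an empty cart
        cart.add(1);              // when: an item is added
        if (cart.isEmpty())       // then: the cart is no longer empty
            throw new AssertionError("cart should not be empty");
    }

    public static void main(String[] args) {
        givenEmptyCart_whenItemAdded_thenCartNotEmpty();
        System.out.println("story passed");
    }
}
```

A reader who only sees the method name still learns how the cart is meant to be used, which is the point: the test doubles as an example.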

The granularity and organisation of your tests will follow from your storyline. It should not be designed with simplistic rules up front.

