Efficiency pitfalls of doing both Integration and Acceptance testing (automated) [closed]

The advantages of unit testing are obvious to me: they are done by developers themselves (either test-first or code-first) and are automated.

What I am a bit unsure about is whether developers should also do integration testing when the team already includes a dedicated tester, who automates as much as possible and does black-box testing of the whole system (End-to-End testing, or more commonly termed Acceptance testing).

For a short background, some more details:

Example Integration Test (MVC webapp)

  • Setup: Only the controller itself and the layers below it are bootstrapped during test setup. Nothing is mocked or stubbed.
  • Test Entry: The bare controller. Most often, controller entry points are methods with parameters (e.g. Spring MVC), so they can be invoked natively. No browser is involved in the test fixture.
  • Assert Targets: Model data and view name are asserted as direct outputs. Indirect outputs (e.g. data written to the database) could be asserted as well. The rendered payload (most often HTML) is ignored completely. (A sketch of this test style follows the list.)
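For illustration, here is a minimal sketch of this test style in Java. All class names (LoginController, LoginForm, UserService, UserRepository) are hypothetical stand-ins, not from the question; in a real Spring setup the controller and the layers below it would be wired from the application context rather than constructed by hand.

```java
import org.junit.Test;
import org.springframework.ui.ExtendedModelMap;
import org.springframework.ui.Model;
import static org.junit.Assert.*;

public class LoginControllerIntegrationTest {

    // Hypothetical controller wired against real (unmocked) lower layers.
    private final LoginController controller =
            new LoginController(new UserService(new UserRepository()));

    @Test
    public void successfulLoginYieldsWelcomeViewAndUserInModel() {
        Model model = new ExtendedModelMap();

        // Test entry: the bare controller method, no HTTP, no browser.
        String viewName = controller.login(new LoginForm("alice", "secret"), model);

        // Assert targets: view name and model data, not the rendered HTML.
        assertEquals("welcome", viewName);
        assertTrue(model.containsAttribute("currentUser"));
    }
}
```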

Example Acceptance Test (MVC webapp)

  • Setup: The whole webapp is bootstrapped (just as an end user would see it).
  • Test Entry: The HTTP call itself. A browser can be involved as the test executor (e.g. Selenium).
  • Assert Targets: The test output is the complete rendered response (HTML and other artifacts such as JavaScript). Asserts against the database (e.g. that data got inserted) can also be included. (A sketch of this test style follows the list.)
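Again for illustration, a minimal Selenium sketch of this style; the URL, form field names, and expected page text are assumptions, not taken from the question.

```java
import org.junit.After;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

public class LoginAcceptanceTest {

    private final WebDriver driver = new FirefoxDriver();

    @Test
    public void userCanLogInThroughTheBrowser() {
        // Test entry: a real HTTP call, issued by a real browser.
        driver.get("http://localhost:8080/myapp/login"); // hypothetical URL

        driver.findElement(By.name("username")).sendKeys("alice");
        driver.findElement(By.name("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).submit();

        // Assert target: the fully rendered response, as the end user sees it.
        assertTrue(driver.getPageSource().contains("Welcome, alice"));
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```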

Pitfalls of double testing (both Integration + Acceptance)

I see major problems when including both test styles:

  • Controller tests are close to general system behaviour (e.g. submitting a login form, password validation, successful login). This is very close to what an Acceptance test would do. In the end, "double testing" could happen, which is highly inefficient.
  • Controller tests are more white-box and tend to be brittle because they rely on many dependencies from lower layers (in contrast to very fine-grained unit tests). Because of this, setting up and maintaining controller tests takes high effort; Acceptance tests, where the whole application is started as a black box, are more straightforward and have the advantage of being closer to production.

The above two points lead me to conclude that if your tester has a good automation strategy, you should skip integration tests done by developers. They should focus more on unit tests.

What do you think? Can you explain your test strategy? Do you have good or bad experiences with including both test styles?

Thanks for reading my long question ;)

EDIT: "Acceptance testing" seems to be more common jargon than "End-to-End", so I switched the terms.


We do Acceptance TDD at my work.

When I first started, I was told I could implement whatever policies I wanted as long as the work was completed in a timely and predictable fashion. Having done unit testing in the past, I realized that one of the problems we always ran into was integration bugs. Some could take quite a long time to fix and were often a surprise. We would run into subtle bugs we had introduced while extending the app's functionality.

I decided to avoid those issues I had run into in the past by focusing more on the end-result features that we were supposed to deliver. We would write tests that tested the acceptance behavior, not just at the unit level but at the whole-system level. I wanted to do that because, at the end of the day, I don't care whether the unit works correctly; I care that the entire system works correctly. We found the following benefits to doing automated acceptance tests:

  • We NEVER regress end user functionality because it is explicitly tested for.
  • Refactors are easier because we don't have to update a bunch of unit tests. We just have to make sure our acceptance test still passes.
  • The integration of the "units" are implicitly covered.
  • The tests become a very clear definition of required end user functionality.
  • Integration issues are exposed earlier and are less of a surprise.

Some of the trade-offs to doing it this way:

  • Tests can be more complex in terms of usage of mocks, stubs, fixtures, etc.
  • Tests are less useful for narrowing down which "unit" has the defect.

We also make our test suite runnable via a Continuous Integration server, which tags and packages the build for deployment. It runs with every commit, as with most CI setups.

With regard to your points/concerns:

Setup: The whole webapp is bootstrapped (just as an end user would see it).

One compromise we do tend to make is to run the tests in the same process space, a la unit tests. Our entry point is the top of the app stack. We don't bother trying to run the app as a server because that adds complexity and doesn't add much in terms of coverage.

Test Entry: The HTTP call itself. A browser can be involved as the test executor (e.g. Selenium).

All of our automated tests are driven by simulating an HTTP GET, POST, PUT, or DELETE. We don't actually use a browser for this, though; a call into the top of the app stack, routed the way that particular HTTP call gets mapped, works just fine (a sketch of this approach is given below).
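In a Spring MVC app, one way to get this in-process, top-of-stack entry point is Spring's MockMvc run against the full application context. This is a sketch under that assumption, not necessarily the poster's actual setup; the context location and URL are hypothetical.

```java
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

import static org.hamcrest.Matchers.containsString;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration("classpath:app-context.xml") // hypothetical config location
public class LoginHttpAcceptanceTest {

    @Autowired
    private WebApplicationContext context; // the fully bootstrapped webapp

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // No server and no browser: requests are dispatched in-process
        // through the same mappings a real HTTP call would hit.
        mockMvc = MockMvcBuilders.webAppContextSetup(context).build();
    }

    @Test
    public void postingValidLoginRendersWelcomePage() throws Exception {
        mockMvc.perform(post("/login")
                        .param("username", "alice")
                        .param("password", "secret"))
               .andExpect(status().isOk())
               .andExpect(content().string(containsString("Welcome")));
    }
}
```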

Assert Targets: The test output is the complete rendered response (HTML and other artifacts such as JavaScript). Asserts against the database (e.g. that data got inserted) can also be included.

I think this is where automated acceptance tests really shine: what you assert is the end-user functionality you want to guarantee you are implementing.

Controller tests are close to general system behaviour (e.g. submitting a login form, password validation, successful login). This is very close to what an End-to-End test would do. In the end, "double testing" could happen, which is highly inefficient.

We actually do very little unit testing and rely almost solely on our automated acceptance tests. As a result, we don't have much in the way of double testing.

Controller tests are more white-box and tend to be brittle because they rely on many dependencies from lower layers (in contrast to very fine-grained unit tests). Because of this, setting up and maintaining controller tests takes high effort; End-to-End tests, where the whole application is started as a black box, are more straightforward and have the advantage of being closer to production.

They may have more dependencies, but those can be mitigated through the use of mocks and fixtures. We also usually implement our tests with two modes of execution: unmanaged mode, where the tests run fully wired to the network, databases, etc., and managed mode, where the unmanaged resources are mocked out (a sketch of the two-mode idea follows). You are correct, though, in your assertion that the tests can be a lot more effort to create and maintain.
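As a rough illustration of the managed/unmanaged split, here is a sketch; the resource names and the test.mode system property are hypothetical, not from the original answer.

```java
// Hypothetical external resource used by the acceptance tests.
interface PaymentGateway {
    boolean charge(String account, long cents);
}

// Managed mode: a stub that never leaves the process.
class StubPaymentGateway implements PaymentGateway {
    public boolean charge(String account, long cents) {
        return true; // canned success, no network involved
    }
}

// Unmanaged mode: the real, network-bound implementation (elided here).
class HttpPaymentGateway implements PaymentGateway {
    public boolean charge(String account, long cents) {
        throw new UnsupportedOperationException("talks to the real service");
    }
}

public abstract class AcceptanceTestBase {

    // e.g. -Dtest.mode=unmanaged to run fully wired; managed is the default.
    private static final boolean MANAGED =
            !"unmanaged".equals(System.getProperty("test.mode", "managed"));

    protected PaymentGateway paymentGateway() {
        return MANAGED ? new StubPaymentGateway() : new HttpPaymentGateway();
    }
}
```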


Developers should do integration tests of the parts they changed or implemented. By integration tests, I mean that they should see whether the functionality they implemented really works as expected. If you don't do this, how do you know that what you just finished really works? Unit tests by themselves are not the final goal; it is the product that matters.

This should be done in order to speed up bug finding. After all, integration tests take a long time to execute (at least in my company; because of the complexity, it takes 1-2 days to execute all integration tests). Finding bugs earlier is better than later.


Having integration tests (and, indeed, unit tests) that test behaviour that is also tested by a system test helps debugging by narrowing down the location of a defect. If your system has components A-B-C and fails a system test case, but the assembly A-B passes a similar integration test case, the defect is probably in component C.


Considering that this post is dealing with testing pitfalls, I would like to make you aware of my most recent book, Common System and Software Testing Pitfalls, published last month by Addison-Wesley. It documents 92 testing pitfalls organized into 14 categories. Each pitfall includes a description, potential applicability, characteristic symptoms, potential negative consequences, potential causes, and recommendations for avoiding the pitfall and climbing out if you have already fallen in. Check it out on Amazon.com at: http://www.amazon.com/Common-System-Software-Testing-Pitfalls/dp/0133748553/ref=la_B001HQ006A_1_1?s=books&ie=UTF8&qid=1389613893&sr=1-1
