
At what point do you reach Unit Testing overkill?

I'm currently working on a project where I'm unit testing with NUnit, mocking with Moq, writing specifications with MSpec and playing around with testing the UI with WebAii.

While I'm enjoying the experience on the whole and learning plenty about what and how to test, I can't help wondering if using all four of these tools is going a bit overboard.

Is there a point at which unit testing becomes a bit absurd? Is it possible to overdo it? What are reasonable tests to write and what - in your view - is just unnecessary detail?

Edit:

To be clear, it's not so much the quantity of tests I'm writing as the breadth of tools I'm using. Four seems like a lot, but if other people are using this sort of line-up to good effect, I want to hear about it.


Is it okay to use many testing frameworks at once?

Some open-source software projects do use several testing frameworks. A common setup is a unit-testing framework paired with a mocking framework, used when the project's developers don't want to roll their own mocks.

So when do you reach unit-testing overkill?

You reach unit testing "overkill" quickly, and you might have reached it already. There are several ways to overdo testing in general that defeat the purpose of TDD, BDD, ADD and whatever driven approach you use. Here is one of them:

Unit testing overkill is reached when you start writing other types of tests as if they were unit tests. This is supposed to be fixed by using mocking frameworks (to test interactions isolated to one class only) and specification frameworks (to test features and specified requirements). There is confusion among a lot of developers who seem to think it is a good idea to treat all the different types of tests the same way, which leads to some dirty hybrids.
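To make the distinction concrete, here is a rough sketch of what an isolated, interaction-style unit test can look like with NUnit and Moq; the OrderService and IEmailSender types are made up purely for illustration:

    using Moq;
    using NUnit.Framework;

    // Hypothetical types, defined here only so the example is self-contained.
    public interface IEmailSender
    {
        void Send(string address, string body);
    }

    public class OrderService
    {
        private readonly IEmailSender _emailSender;

        public OrderService(IEmailSender emailSender)
        {
            _emailSender = emailSender;
        }

        public void PlaceOrder(string customerEmail)
        {
            // ...domain logic would live here...
            _emailSender.Send(customerEmail, "Thanks for your order");
        }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void PlaceOrder_sends_a_confirmation_email()
        {
            // The collaborator is mocked, so only OrderService itself is exercised.
            var emailSender = new Mock<IEmailSender>();
            var service = new OrderService(emailSender.Object);

            service.PlaceOrder("customer@example.com");

            // Verify the interaction instead of touching a real mail server.
            emailSender.Verify(
                m => m.Send("customer@example.com", It.IsAny<string>()),
                Times.Once());
        }
    }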

Even though TDD focuses on unit testing, you will still find yourself writing functional, integration and performance tests. However, you have to remind yourself that their scope is vastly different from that of unit tests. This is why there are so many testing tools available: there are different types of tests. There is nothing wrong with using several testing frameworks, and most of them are compatible with each other.
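For comparison, a specification framework such as MSpec describes a behaviour rather than a single method call. A minimal sketch, assuming MSpec's ShouldEqual assertion extension is available and using a made-up ShoppingCart class:

    using Machine.Specifications;

    // Hypothetical subject, defined here only so the example is self-contained.
    public class ShoppingCart
    {
        public decimal Total { get; private set; }
        public void Add(decimal price) { Total += price; }
    }

    [Subject(typeof(ShoppingCart), "adding items")]
    public class When_two_items_are_added_to_the_cart
    {
        static ShoppingCart cart;

        Establish context = () => cart = new ShoppingCart();

        Because of = () =>
        {
            cart.Add(10m);
            cart.Add(5m);
        };

        It should_report_the_combined_total = () => cart.Total.ShouldEqual(15m);
    }

The point is not the syntax but the scope: the MSpec class reads as a specified behaviour, while the NUnit/Moq test above pins down a single interaction.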

So when writing tests, there are a couple of sweet spots to keep in mind:

unit test                 dirty hybrids               integration test
---------                 -------------               ----------------
* isolated                                            * using many classes 
* well defined                  |                     * tests a larger feature
* repeatable                    |                     * tests a data set
                                |
    |                           |                              |
    |                           |                              |
    v                           v                              v

    O  <-----------------------------------------------------> O 

    ^                           ^                              ^
    |                           |                              |

sweet spot              world full of pain                sweet spot

Unit tests are easy to write, and you want to write a lot of them. But if you write a test that has too many dependencies, you'll end up with a lot of work once requirements start to change. When a test with too many dependencies breaks, you have to check through the code of many classes rather than one and only one class to see where the problem is, which defeats the purpose of unit testing in the TDD sense. In a large project this is incredibly time-consuming.

The moral of this story is: do not mix up unit tests with integration tests, because, simply put, they are different. This is not to say that the other types of tests are bad, but they should be treated more as specifications or sanity checks. When one of them breaks, that is not necessarily an indication that the code is wrong. For example:

  • If an integration test breaks, there may be a problem with one of your requirements; you may need to revise the requirement and remove, replace or modify the test.
  • If a performance test breaks, then depending on how it was implemented, the stochastic nature of such a test may simply mean it ran slowly on that particular occasion.

The only thing to keep in mind is to organize the tests so that they are easy to distinguish and find.
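With NUnit, for example, one lightweight way to keep them distinguishable is to tag tests with categories so a runner can include or exclude them separately; the fixture below is a made-up sketch:

    using NUnit.Framework;

    [TestFixture]
    public class AccountTests
    {
        [Test, Category("Unit")]
        public void Deposit_increases_the_balance()
        {
            // fast, isolated test with no external dependencies...
        }

        [Test, Category("Integration")]
        public void Deposits_are_persisted_to_the_database()
        {
            // slower test that touches real infrastructure...
        }
    }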

Do you need to write tests all the time?

There are times when it is okay to omit test cases, usually because verification through manual smoke testing is simply easier and doesn't take much time. A manual smoke test in this sense means starting up the application and exercising the functionality yourself, or having someone who didn't write the code do it. Skip the automated test if the test you were going to write is all of the following:

  • way too complicated and convoluted
  • will take a lot of your work time to write
  • there is no ready, easy-to-use testing framework to handle it
  • won't give much payoff, e.g. there is little chance of regression
  • can be done manually with far less effort than writing an automated test

…then write it up and run it as a manual test case instead. It's not worth it if the automated test would take several days to write when smoke testing it manually only takes a minute.


You are overdoing it, if you are testing the same input over and over again. 

As long as each new test case tests something different, you are fine. 

Of course, there are bugs that you will find quickly. And there are some rare and weird cases that are hard to detect. You have to ask yourself how expensive it would be if that bug showed up in production, and compare that to what it costs you to find it before production.

I usually test the boundaries. If I wrote a Fibonacci function, I'd test it for the values -1, 0, 1, 10 and the maximum value of an integer. Testing it for 20 or 509 would not cover anything that isn't already covered.
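With NUnit's parameterized tests those boundary cases are cheap to express. A sketch, using a hypothetical Fibonacci.Of(int n) that is included here only to make the example self-contained:

    using System;
    using NUnit.Framework;

    // Hypothetical implementation under test, included only to make the sketch complete.
    public static class Fibonacci
    {
        public static long Of(int n)
        {
            if (n < 0)
                throw new ArgumentOutOfRangeException("n", "n must be non-negative");

            long previous = 0, current = 1;
            for (int i = 0; i < n; i++)
            {
                long next = checked(previous + current); // fail loudly on overflow
                previous = current;
                current = next;
            }
            return previous;
        }
    }

    [TestFixture]
    public class FibonacciTests
    {
        // Boundary and representative cases only; adding 20 or 509 would not
        // exercise any behaviour these cases don't already cover.
        [TestCase(0, 0)]
        [TestCase(1, 1)]
        [TestCase(10, 55)]
        public void Known_values_are_returned(int n, long expected)
        {
            Assert.AreEqual(expected, Fibonacci.Of(n));
        }

        [Test]
        public void Negative_input_is_rejected()
        {
            Assert.Throws<ArgumentOutOfRangeException>(() => Fibonacci.Of(-1));
        }

        [Test]
        public void Int_MaxValue_does_not_overflow_silently()
        {
            Assert.Throws<OverflowException>(() => Fibonacci.Of(int.MaxValue));
        }
    }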


If you've spent more time on tests than on the code, maybe you're overdoing it. But that's my personal opinion. It might be interesting for you to have a look at Test-Driven Development, a nice approach where you start with the tests, ensuring that you write only the code you need and that it works as it should (http://en.wikipedia.org/wiki/Test-driven_development).

Good luck!


There is no such thing as overtesting! Of course, you don't want to do this. Given the method

public int tripleInt(int i);

you do not want to test it with an infinite number of ints; that would not be practical. You probably want to test a positive value, a negative value, the maximum integer value, and so on.


Writing highly granular unit tests is sometimes overkill.

The point of unit tests is:

  1. Test a representative set of inputs for the given unit A (if one instead tests a larger unit B containing unit A, the set of inputs thought of as representative for B may not produce a representative set of inputs for A).
  2. Determine with maximum precision where the code breaks.

But if you have a larger unit containing several smaller units, a set of inputs for the larger unit that also yields representative inputs for the smaller units, and the property that when the larger unit breaks it is easy to determine exactly where the breaking point is, then there is little reason to write unit tests for each of the smaller units.
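A made-up example of that situation: a ReceiptFormatter composed of two small private helpers. One test of the composite unit exercises both helpers, and a failed string comparison points straight at whichever part misbehaved, so separate tests for the helpers would add little:

    using System;
    using System.Globalization;
    using NUnit.Framework;

    // Hypothetical larger unit composed of two small formatting helpers.
    public class ReceiptFormatter
    {
        public string Format(decimal amount, DateTime date)
        {
            return FormatAmount(amount) + " on " + FormatDate(date);
        }

        private static string FormatAmount(decimal amount)
        {
            return amount.ToString("0.00", CultureInfo.InvariantCulture) + " EUR";
        }

        private static string FormatDate(DateTime date)
        {
            return date.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture);
        }
    }

    [TestFixture]
    public class ReceiptFormatterTests
    {
        // A single test of the larger unit gives representative inputs to both
        // helpers; the expected string makes it obvious which half broke.
        [Test]
        public void Formats_amount_and_date_into_one_line()
        {
            var formatter = new ReceiptFormatter();

            string line = formatter.Format(12.5m, new DateTime(2010, 3, 14));

            Assert.AreEqual("12.50 EUR on 2010-03-14", line);
        }
    }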
