To what extent do you unit test functionality?
I have a question regarding unit testing. I'll outline it using an example.
Say I'm writing a command line application which has a variety of acceptable commands, and each command has its acceptable arguments. For example: myapp create -DprojectName=newProject -DprojectVersion=1.0.
If I have a class called Command with a validateArguments method, which takes in a list of valid argument names and compares it against the list of arguments specified by the user, how would you go about unit testing this?
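For concreteness, assume the class under test looks roughly like this (the Command and validateArguments names come from the question; the exact signature and the choice of IllegalArgumentException are just assumptions for illustration):

import java.util.List;

public class Command {

    // Hypothetical sketch: rejects any user-supplied argument whose name
    // is not in the list of accepted argument names.
    public void validateArguments(List<String> acceptedNames, List<String> userArgs) {
        for (String arg : userArgs) {
            if (!acceptedNames.contains(arg)) {
                throw new IllegalArgumentException("Unknown argument: " + arg);
            }
        }
    }
}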
The possibilities I see are:
You write two unit tests - one that ensures no error is thrown when a valid argument is passed in, and one that ensures an error is thrown when an invalid argument is passed.
You write unit tests that ensure no acceptable argument is rejected. I would end up having a unit test for every acceptable argument:
@Test public void validateArguments_ProjectNameArgPassed_NoExceptionThrown() { ... }
@Test public void validateArguments_ProjectVersionArgPassed_NoExceptionThrown() { ... }
and so on.
To me, the first approach makes sense. But it doesn't ensure that every argument that should be accepted, is.
It's hard to suggest without knowing the logic of the underlying code (there's a reason unit tests are white-box tests and not black-box), but my approach to that code would be a suite of unit tests along the lines of the following (see the sketch after this list):
- All parameters are invalid
- All parameters are valid
- Different combinations of valid and invalid parameters (to test untested code paths from the above)
- Different types of invalid parameters, e.g. not specified, incorrect format (-projectVersion=Hippopotamus), incorrect values (-projectVersion=99.0), etc.
- Any other failure conditions I might think of.
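As a sketch of what that suite might look like in JUnit 5, against the hypothetical Command/validateArguments signature assumed in the question (the scenarios and the expected exception type are illustrative assumptions, not a definitive implementation):

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.List;
import org.junit.jupiter.api.Test;

class CommandValidationTest {

    private final Command command = new Command();
    private final List<String> accepted = List.of("projectName", "projectVersion");

    @Test
    void allParametersValid_noExceptionThrown() {
        assertDoesNotThrow(() ->
            command.validateArguments(accepted, List.of("projectName", "projectVersion")));
    }

    @Test
    void allParametersInvalid_throws() {
        assertThrows(IllegalArgumentException.class, () ->
            command.validateArguments(accepted, List.of("bogus", "alsoBogus")));
    }

    @Test
    void mixOfValidAndInvalid_throws() {
        assertThrows(IllegalArgumentException.class, () ->
            command.validateArguments(accepted, List.of("projectName", "bogus")));
    }
}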
I find the real benefit of unit testing isn't in testing the success scenario, as some simple integration testing can also provide that. The real benefit is in testing the numerous erroneous scenarios, because it is often the code that is rarely run (i.e. error handling) that contains the bugs that slip through the other levels of testing unnoticed.
It depends a little bit on what testing framework and language you are using. Most of them (at least in C#) allow you to write so-called data-driven tests. This allows you to feed a test method with the arguments you want to test, while at the same time specifying the expected outcome.
For example, such a test with Gallio would look like this:
[Row("prj1", "1.0", true)]
[Row("blah", "Hippopotamus", false)]
[Row(null, "1.0", false, ExpectedException = typeof(NullReferenceException))]
public void TestArguments(string arg1, string arg2, bool expectedResult)
{
var result = myApp.ValidateArguments(args);
Assert.AreEqual(expectedResult, result);
}
That way, you can easily test for all the argument combinations that need to be tested, and it doesn't require too much code.
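For the Java-flavoured example in the question, JUnit 5's @ParameterizedTest offers a similar data-driven style. Here is a minimal sketch, again assuming the hypothetical Command/validateArguments signature from the question; the rows are purely illustrative:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DataDrivenValidationTest {

    // Each row plays the same role as a Gallio [Row]: an input plus the expected outcome.
    @ParameterizedTest
    @CsvSource({
        "projectName, true",
        "projectVersion, true",
        "hippopotamus, false"
    })
    void validateSingleArgument(String argName, boolean expectedValid) {
        Command command = new Command();
        List<String> accepted = List.of("projectName", "projectVersion");

        boolean valid;
        try {
            command.validateArguments(accepted, List.of(argName));
            valid = true;
        } catch (IllegalArgumentException e) {
            valid = false;
        }
        assertEquals(expectedValid, valid);
    }
}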
It's impossible by design to test for all invalid arguments, and this is often (mostly) also true for the valid ones - they might be theoretically finite, but in practice there are far too many possible combinations. All you can do in such a case is test for the likely, important, and meaningful combinations that you can think of.
It helps a lot to have input from the end-user perspective/business side for this - they will often come up with use cases that are far beyond the imagination of the developer...
HTH.
Thomas
I agree with dlanod: the scope of unit testing should be confined to testing the smallest compilable component of a system. Your test approach is more towards grey-box testing - not that that is the wrong approach. Again, it's hard to determine which method should be used; it depends on the size and complexity of the code/class.
Another major objective of unit testing is to determine the level of code coverage, not functional coverage. In reality, less than 50% of the code often services 90% of the use-case scenarios. If your application is small, use the grey-box approach where you can mesh unit and functional testing; otherwise it's a good idea to have a clear separation. Hope this helps.
I tend to break it down by importance of the code.
If it's code that deals with money, or some other function where it's unacceptable that your code fails, then I try to test every possible code path and result.
A step down from that is code that is widely used, prone to refactoring or additional features, or possibly brittle; there I test what I think are the common use cases. If I find out later that something is broken because of a use case I missed, I'll add a test for that case.
At the bottom is low-impact code - formatting/display code, straightforward value comparisons, etc. - where the cost of maintaining those tests outweighs the benefit of having 100% correct code.