
Unit test documentation [closed]

Closed. This question is opinion-based. It is not currently accepting answers.


Closed 4 years ago.


I would like to know, from those who document their unit tests, how they do it. I understand that most TDD followers claim "the code speaks" and thus test documentation is not very important, because code should be self-descriptive. Fair enough, but I would like to know how to document unit tests, not whether to document them at all.

My experience as a developer tells me that understanding old code (this includes unit tests) is difficult.

So what is important in a test documentation? When is the test method name not descriptive enough so that documentation is justified?


As requested by Thorsten79, I'll elaborate on my comments as an answer. My original comment was:

"The code speaks" is unfortunately completely wrong, because a non-developer cannot read the code, while he can at least partially read and understand generated documentation, and that way he can know what the tests test. This is especially important when the customer fully understands the domain but simply cannot read code, and it becomes even more important when the unit tests also exercise hardware, as in the embedded world, because then you are testing things that can be seen.

When you're doing unit tests, you have to know whether you're writing them just for you (or for your co-workers), or if you're also writing them for other people. Many times, you should be writing code for your readers, rather than for your convenience.

In mixed hardware/software development like in my company, the customers know what they want. If their field device has to do a reset when receiving a certain bus command, there must be a unit test that sends that command and checks whether the device was reset. We're doing this here right now with NUnit as the unit test framework, and some custom software and hardware that makes sending and receiving commands (and even pressing buttons) possible. It's great, because the only alternative would be to do all that manually.
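A test for such a scenario might look like the following sketch. The FakeBus, FieldDevice, and BusCommand names are hypothetical stand-ins invented for illustration (in the real setup the bus and device would be actual hardware interfaces); the XML doc comment is what a documentation generator would pick up and show to the customer:

```csharp
using NUnit.Framework;

// Minimal stand-ins so the sketch is self-contained; in a real
// project these would be the hardware bus interface and the
// device driver under test.
public enum BusCommand { Reset }

public class FakeBus
{
    public event System.Action<BusCommand> CommandSent;
    public void Send(BusCommand cmd) => CommandSent?.Invoke(cmd);
}

public class FieldDevice
{
    public bool WasReset { get; private set; }

    public FieldDevice(FakeBus bus) =>
        bus.CommandSent += cmd => { if (cmd == BusCommand.Reset) WasReset = true; };
}

[TestFixture]
public class FieldDeviceResetTests
{
    /// <summary>
    /// Customer-readable summary: sending the RESET command over the
    /// bus must cause the field device to perform a reset.
    /// </summary>
    [Test]
    public void Device_resets_when_reset_command_is_received()
    {
        var bus = new FakeBus();
        var device = new FieldDevice(bus);

        bus.Send(BusCommand.Reset);

        Assert.IsTrue(device.WasReset, "Device did not reset after RESET command");
    }
}
```

The doc comment deliberately repeats what the test name says, but in plain language a non-developer can read in the generated documentation.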

The customer absolutely wants to know which tests are there, and he even wants to run the tests himself. If the tests are not properly documented, he doesn't know what a test does and can't check whether all the tests he thinks he needs are there; when running a test, he doesn't know what it will do, because he can't read the code. He knows the bus system in use better than our developers, but he just can't read code. If a test fails, he does not know why and cannot even say what he thinks the test should do. That's not a good thing.

Having documented the unit tests properly, we have

  • code documentation for the developers
  • test documentation for the customer, which can be used to prove that the device does what it should do, i.e. what the customer ordered
  • the ability to generate the documentation in any format, which can even be passed to other involved parties, like the manufacturer

Properly in this context means: Write clear language that can be understood by non-developers. You can stay technical, but don't write things only you can understand. The latter is of course also important for any other comments and any code.

Independent of our exact situation, I think that's what I would want in unit tests all the time, even for pure software. A customer can ignore a unit test he doesn't care about, like basic function tests. But just having the docs there never hurts.

As I've written in a comment to another answer: In addition, the generated documentation is also a good starting point if you (or your boss, or co-worker, or the testing department) wants to examine which tests are there and what they do, because you can browse it without digging through the code.


In the test code itself:

  • With method level comments explaining what the test is testing / covering.

  • At the class level, a comment indicating the actual class under test (which can usually be inferred from the test class name, so it's less important than the method-level comments).

With test coverage reports

  • Such as Cobertura. That's also documentation, since it indicates what your tests are covering and what they're not.


Comment complex tests or scenarios if required but favour readable tests in the first place.

On the other hand, I try to make my tests speak for themselves. In other words:

[Test]
public void person_should_say_hello() {
    // Arrange.
    var person = new Person();

    // Act.
    string result = person.SayHello();

    // Assert.
    Assert.AreEqual("Hello", result, "Person did not say hello");
}

If I were to look at this test I'd see that it uses Person (the file name PersonTest.cs is a clue too ;)) and that if anything breaks it will be in the SayHello method. The assert message is useful as well, not only when reading tests but also when running them, since failures are easier to spot in GUIs.

Following the AAA style of Arrange, Act and Assert makes the test essentially document itself. If this test was more complex, you could add comments above the test function explaining what's going on. As always, you should ensure these are kept up to date.

As a side note, using underscore notation for test names makes them much more readable; compare the above to:

public void PersonShouldSayHello()

which, for long method names, can make the test harder to read. This point is often subjective, though.


When I come back to an old test and don't understand it right away,

  1. I refactor if possible
  2. or write that comment that would have made me understand it right away

When you are writing your test cases, it is the same as when you are writing your code: everything is crystal clear to you. That makes it difficult to envision what you should write to make the code clearer to others.

Note that this does not mean I never write any comments. There are still plenty of situations where I just know that I am going to have a hard time figuring out what a particular piece of code does.

I usually start with point 1 in these situations...


Improving unit tests into an executable specification is the point of Behaviour-Driven Development: BDD is an evolution of TDD in which unit tests use a Ubiquitous Language (a language based on the business domain and shared by the developers and the stakeholders) and expressive names (testCannotCreateDuplicateEntry) to describe what the code is supposed to do. Some BDD frameworks push the idea very far and show executable specifications written in almost natural language.
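Translated into the NUnit style used elsewhere on this page, the testCannotCreateDuplicateEntry idea might look like this sketch. The EntryRegistry class and its TryAdd method are hypothetical, invented only for illustration:

```csharp
using NUnit.Framework;
using System.Collections.Generic;

// Hypothetical domain object used only for illustration.
public class EntryRegistry
{
    private readonly HashSet<string> _entries = new HashSet<string>();

    // Returns false instead of adding a second entry with the same key.
    public bool TryAdd(string key) => _entries.Add(key);
}

[TestFixture]
public class EntryRegistrySpecs
{
    [Test]
    public void Cannot_create_duplicate_entry()
    {
        var registry = new EntryRegistry();
        registry.TryAdd("invoice-42");

        bool added = registry.TryAdd("invoice-42");

        Assert.IsFalse(added, "A duplicate entry must be rejected");
    }
}
```

The test name reads as a sentence from the domain ("cannot create duplicate entry"), which is exactly what makes it usable as a specification by non-developers.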


I would advise against any detailed documentation separate from the code. Why? Because whenever you need it, it will most likely be badly outdated. The best place for detailed documentation is the code itself (including comments). BTW, anything you need to say about a specific unit test is very detailed documentation.

A few pointers on how to achieve well self-documented tests:

  • Follow a standard way to write all tests, like AAA pattern. Use a blank line to separate each part. That makes it much easier for the reader to identify the important bits.
  • In every test name, include what is being tested, the situation under test, and the expected behavior. For example: test__getAccountBalance__NullAccount__raisesNullArgumentException()

  • Extract out common logic into set up/teardown or helper methods with descriptive names.

  • Whenever possible use samples from real data for input values. This is much more informative than blank objects or made up JSON.
  • Use variables with descriptive names.
  • Think about your future self or a teammate: if you remembered nothing about this code, what additional information would you want when the test fails? Write that down as comments.
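Putting several of these pointers together (naming that states unit, situation, and expectation; AAA separated by blank lines; descriptive variable names), a test might look like the following sketch. The AccountService class is hypothetical, invented for illustration:

```csharp
using System;
using NUnit.Framework;

// Hypothetical service used only to illustrate the pointers above.
public class AccountService
{
    public decimal GetAccountBalance(string accountId)
    {
        if (accountId == null)
            throw new ArgumentNullException(nameof(accountId));
        return 0m; // real lookup elided
    }
}

[TestFixture]
public class AccountServiceTests
{
    // Name pattern: <unit>_<situation>_<expected behavior>
    [Test]
    public void GetAccountBalance_NullAccount_RaisesArgumentNullException()
    {
        // Arrange: a service and the invalid input under test.
        var service = new AccountService();
        string missingAccountId = null;

        // Act: capture the call as a delegate so the assert owns the throw.
        TestDelegate act = () => service.GetAccountBalance(missingAccountId);

        // Assert: the documented failure mode.
        Assert.Throws<ArgumentNullException>(act);
    }
}
```

With this naming, a red test in the runner already tells you which unit failed, under which input, and which behavior was violated, before you open the code.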

And to complement what other answers have said:

  • It's great if your customer/Product Owner/boss has a very good idea as to what should be tested and is eager to help, but unit tests are not the best place to do it. You should use acceptance tests for this.

  • Unit tests should cover specific units of code (methods/functions within classes/modules). If you cover more ground, they quickly turn into integration tests, which are fine and needed too, but if you do not separate them explicitly, people will get them confused and you will lose some of the benefits of unit testing. For example, when a unit test fails you get instant bug detection (especially if you follow the naming convention above). When an integration test fails, you know there is a problem and you know some of its effects, but you might need to debug, sometimes for a long time, to find what it is.

  • You can use unit testing frameworks for integration tests if you want, but you should know you are not doing unit testing, and you should keep them in separate files/directories.

  • There are good acceptance/behavior testing frameworks (FitNesse, Robot, Selenium, Cucumber, etc.) that can help business/domain people not just read but also write the tests themselves. Sure, they will need help from coders to get them working (especially when starting out), but they will be able to do it, and they do not need to know anything about your modules, classes, or functions.
