
Am I allowed to check in a failing test [closed]

Closed. This question is opinion-based. It is not currently accepting answers.


Closed 3 years ago.


Our team is having a heated debate about whether failing unit tests may be checked in to source control.

On one side, the argument is that yes, you can, as long as it is temporary and resolved within the current sprint. Some even say that for bugs that may not be corrected within the current sprint, we can check in a corresponding failing test.

The other side argues that if such tests are checked in at all, they must be marked with the Ignore attribute, the reasoning being that the nightly build should not serve as a developer's TODO list.

The problem with the Ignore attribute, however, is that we tend to forget about those tests.

Does the community have any advice for us?

We are a team of 8 developers with a nightly build. Personally, I am trying to practice TDD, but the team tends to write unit tests after the code is written.


I'd say that not only should you not check in new failing tests, but you should also disable your "10 or so long-term failing tests". Better to fix them, of course, but between having a failing build every night and having every included test pass (with some excluded), you're better off green. As things stand now, when you change something that causes a new failure in your existing suite of tests, you're pretty likely to miss it.

Disable the failing tests and enter a ticket for each of them; that way you'll get to them. You'll feel a lot better about your automated build system, too.
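If the tests are NUnit-based (the question mentions the Ignore attribute), one way to keep disabled tests from being forgotten is to make the Ignore reason name the tracking ticket. A minimal sketch; the test and the ticket number are hypothetical:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class DateParsingTests
{
    // Hypothetical example: the failing test stays in source control but is
    // excluded from the nightly run, and the Ignore reason points at the
    // ticket that tracks the bug, so the test is not silently forgotten.
    [Test]
    [Ignore("Known bug, tracked as ticket #1234; re-enable when fixed")]
    public void ParsesIsoDatesWithOffset()
    {
        var parsed = DateTimeOffset.Parse("2011-06-01T10:00:00+02:00");

        Assert.AreEqual(2, parsed.Offset.Hours);
    }
}
```

A code review, or a periodic search for Ignore attributes, can then cross-check that every ignored test still has an open ticket behind it.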


I discussed this with a friend and our first comment was a recent geek&poke :) (attached)

To be more specific: all tests should be written up front (as long as it's supposed to be TDD), but tests that check unimplemented functionality should have their expected value prepended with a negation. If the functionality is not implemented, it shouldn't work. [If it works, the test is bad.] Once you implement the functionality, you remove the ! and the test passes [or fails, but then it's there to do so :) ].
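A minimal sketch of that convention, assuming NUnit and a hypothetical RomanNumeral parser: while the functionality is missing, the expectation is negated so the committed test still runs green; removing the ! later turns it into a normal regression test.

```csharp
using NUnit.Framework;

// Hypothetical class under test; parsing is not implemented yet.
public static class RomanNumeral
{
    public static bool TryParse(string text, out int value)
    {
        value = 0;
        return false; // unimplemented: always reports failure for now
    }
}

[TestFixture]
public class RomanNumeralTests
{
    [Test]
    public void ParsesSingleNumerals()
    {
        // While the functionality is unimplemented, the expected value is
        // negated so the committed test stays green; once TryParse works,
        // remove the '!' and also assert the parsed value.
        Assert.IsTrue(!RomanNumeral.TryParse("V", out _));
    }
}
```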

You shouldn't think that tests are something written once and always right. Tests can have bugs too. So editing a test should be normal.



I'd say that checking in (known) failing tests should, if done at all, only be temporary. If the build is always failing, it loses its meaning (we've been there and it's not pretty).

I guess it would be OK to check in a failing test if you found a bug and could reproduce it quickly with a test, but the offending code is not "yours" and you don't have the time or responsibility to dig into it enough to fix the code. Then file a ticket for someone who knows their way around that code and point them to the test.

Otherwise I'd say use your ticket system as a TODO list, not the build.


It depends on how you use tests. In my group, running the tests is what you do before a commit in order to check that you (most likely) have not broken anything.

When you are about to commit, it is painful to find failing tests that seem vaguely related to your changes yet still strange, to investigate for a couple of hours, then to realize the failures cannot possibly be caused by your changes, do a clean checkout, compile, and find that the failures indeed come from the trunk.

Obviously you do not use tests in the same way, otherwise you wouldn't even be asking.


If you were using a DVCS (e.g., git) this wouldn't be an issue as you'd commit the failing test to your local repository, make it work, and then push the whole lot (test and fix) back to the team's master repository. Job done, everyone happy.

As it seems you can't do that, the best you can do is to first make sure that the test is written and fails in your sandbox, and then fix that before committing. This might or might not be great TDD, but it's a reasonable compromise with the working practices of everyone else; working with your co-workers is more important than following some ivory tower principle in every aspect, since the author of the principle isn't located in the cubicle next door…


The purpose of your nightly build is to give you confidence that the code written the day before hasn't broken anything. If tests are often failing then you can't have that confidence.

You should first fix any failing tests you can, and delete or comment out/ignore the other failing ones. Every nightly build should be green. If it's not, there is a problem, and that is immediately obvious precisely because the build should have run green.

Secondly, you should never check in failing tests. You should never knowingly break the build. Ever. It causes unnecessary noise and distraction and lowers confidence. It also creates an atmosphere of laziness around quality and discipline. As for ignored tests that are checked in, these should be caught and addressed in your code reviews, which should cover your test code as well.

If people want to write their code first and tests later, this is OK (though I prefer TDD), but only tested code which runs green should be checked in.

Finally, I would recommend changing the nightly build to a continuous integration build (run on each code check in) which might just change people's habits around code check-in.


I can see that you have a number of problems.

1) You are writing failing tests.

This is great! However, someone is intending to check those in "to be fixed within the current sprint". That tells me it is taking too long to make those unit tests pass. Either the tests cover more than one or two aspects of behaviour, or the underlying code is far too complex. Refactor the code to break it up, and use mocks to separate the responsibilities of the class under test from its collaborators (see the sketch after these points).

2) You tend to forget about tests with [Ignore] attributes.

If you're delivering code that people care about, either it works, or it has bugs which require the behaviour of the system to be changed. If your unit tests describe that behaviour but the behaviour doesn't work yet, then you won't forget because someone will have registered a bug. However, see point 1).

3) Your team is writing tests after the code is written.

This is fairly common for teams learning TDD. They might find it easier if they thought of the tests not as checks that their code is broken, but as examples of how another developer might want to use their code, together with a description of the value that the code provides. Perhaps you could pair with them and help them build on what they already know from writing tests afterwards?

4) You're trying to practice TDD.

Do or do not. There is no try. Write a test first, or don't. Learning never stops even when you think you're doing TDD well.
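As referenced in point 1, here is a sketch, with hypothetical names, of what separating the class under test from its collaborators can look like: the collaborator sits behind an interface, so a hand-rolled fake (or a mocking library) can replace the slow or complex real implementation in the unit test.

```csharp
using NUnit.Framework;

// Hypothetical collaborator boundary: the real implementation might hit a
// database or web service, which is what makes the test slow and brittle.
public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

// The class under test depends only on the interface, not on the real service.
public class PriceConverter
{
    private readonly IExchangeRateProvider _rates;

    public PriceConverter(IExchangeRateProvider rates) => _rates = rates;

    public decimal Convert(decimal amount, string from, string to)
        => amount * _rates.GetRate(from, to);
}

// Hand-rolled fake standing in for the collaborator in the unit test.
public class FixedRateProvider : IExchangeRateProvider
{
    public decimal GetRate(string fromCurrency, string toCurrency) => 1.5m;
}

[TestFixture]
public class PriceConverterTests
{
    [Test]
    public void ConvertsUsingTheProvidedRate()
    {
        var converter = new PriceConverter(new FixedRateProvider());

        Assert.AreEqual(15m, converter.Convert(10m, "EUR", "USD"));
    }
}
```

With the boundary in place, the test exercises only PriceConverter's own logic, so it can go green quickly, which is what makes it safe to check in.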
