
How to overcome unit test regression problems?

I was looking for some kind of solution for software development teams that spend too much time handling unit test regression problems (about 30% of the time in my case!), i.e., dealing with unit tests that fail on a day-to-day basis.

Following is one solution I'm familiar with, which analyzes which of the latest code changes caused a certain unit test to fail:

Unit Test Regression Analysis Tool

I wanted to know if anyone knows of similar tools so I can benchmark them. Also, can anyone recommend another approach to handle this annoying problem?

Thanks in advance


You have our sympathy. It sounds like you have brittle test syndrome. Ideally, a single change should break only a single test, and that failure should point to a real problem. Like I said, "ideally". But this type of behavior is common and treatable.

I would recommend spending some time with the team doing root-cause analysis of why all these tests are breaking. Yes, there are fancy tools that keep track of which tests fail most often and which ones fail together, and some continuous integration servers have this built in. That's great. But I suspect that if you just ask each other, you'll know. I've been through this, and the team always just knows from experience.
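If you do want a rough, do-it-yourself version of that "which tests fail together" analysis, a small script over your CI test reports can go a long way. The sketch below is hypothetical: it assumes a directory of per-build text files, each listing the names of the tests that failed in that build (the directory name and file layout are invented for illustration).

```python
# Hypothetical sketch: count how often tests fail, and how often pairs fail together.
# Assumes ./failures/ holds one text file per CI build, one failed test name per line.
from collections import Counter
from itertools import combinations
from pathlib import Path

single_counts = Counter()
pair_counts = Counter()

for report in Path("failures").glob("*.txt"):
    failed = sorted(set(report.read_text().split()))
    single_counts.update(failed)
    pair_counts.update(combinations(failed, 2))  # every pair that failed in the same build

print("Most frequently failing tests:")
for test, n in single_counts.most_common(5):
    print(f"  {n:4d}  {test}")

print("Tests that most often fail together:")
for (a, b), n in pair_counts.most_common(5):
    print(f"  {n:4d}  {a} + {b}")
```

Groups of tests that always fail together are good candidates for the "remove all but one" and "shared set-up" points below.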

Anywho, a few other things I've seen that cause this:

  • Unit tests generally shouldn't depend on anything beyond the class and method they are testing. Look for dependencies that have crept in. Make sure you're using dependency injection to make testing easier (see the sketch after this list).
  • Are these truly unique tests? Or are they testing the same thing over and over? If they are always going to fail together, why not just remove all but one?
  • Many people favor integration tests over unit tests, since they get more coverage for their buck. But with these, a single change can break lots of tests. Maybe you're actually writing integration tests?
  • Perhaps lots of tests run through some common set-up code, causing them to break in unison. Maybe that shared code can be mocked out to isolate behaviors.
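To illustrate the dependency-injection and mocking points above, here is a minimal sketch in Python; the class names (`ReportService`, the injected clock) are invented for the example. Because the collaborator is injected rather than created inside the class, the test can replace it with a stub and no longer depends on a real clock, database, or shared set-up code.

```python
import unittest
from unittest.mock import Mock


class ReportService:
    """Hypothetical class under test; the clock is injected rather than created internally."""

    def __init__(self, clock):
        self.clock = clock

    def header(self):
        return f"Report generated at {self.clock.now()}"


class ReportServiceTest(unittest.TestCase):
    def test_header_contains_timestamp(self):
        fake_clock = Mock()
        fake_clock.now.return_value = "2010-09-30 23:18"

        service = ReportService(fake_clock)  # inject the stub, not a real dependency

        self.assertEqual("Report generated at 2010-09-30 23:18", service.header())


if __name__ == "__main__":
    unittest.main()
```

The test now breaks only when the behavior of `ReportService` itself changes, not when some unrelated collaborator does.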


Test often, commit often.

If you don't do so already, I suggest using a Continuous Integration tool, and asking/requiring the developers to run the automated tests before committing, at least a subset of them. If running all the tests takes too long, use a CI tool that spawns a build (which includes running all automated tests) for each commit, so you can easily see which commit broke the build. A minimal pre-commit helper is sketched below.
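One lightweight way to enforce "run at least a subset before committing" is a small wrapper script developers invoke before they commit (or wire into a pre-commit hook). The following is only an assumption about how you might do it with Python's built-in unittest runner; the "fast" test-module naming pattern is invented for illustration.

```python
# Hypothetical pre-commit helper: run only the fast test modules before committing.
# Assumes fast tests live in files matching test_fast_*.py under ./tests; adjust to taste.
import sys
import unittest


def run_fast_subset():
    suite = unittest.defaultTestLoader.discover("tests", pattern="test_fast_*.py")
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return result.wasSuccessful()


if __name__ == "__main__":
    # A non-zero exit code blocks the commit when this is called from a pre-commit hook.
    sys.exit(0 if run_fast_subset() else 1)
```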

If the automated tests are too fragile, maybe they test implementation details rather than functionality? Testing implementation details is sometimes a good idea, but it can be problematic.
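As a rough illustration of the difference (the class and test names here are invented): the first test below breaks as soon as the internal caching strategy changes, even though the observable result stays correct; the second asserts only the behavior and survives such refactoring.

```python
import unittest


class PriceCatalog:
    """Hypothetical class: caches prices in an internal dict."""

    def __init__(self):
        self._cache = {}

    def price_of(self, item):
        if item not in self._cache:
            self._cache[item] = len(item) * 10  # stand-in for an expensive lookup
        return self._cache[item]


class FragileTest(unittest.TestCase):
    def test_price_is_stored_in_cache_dict(self):
        catalog = PriceCatalog()
        catalog.price_of("apple")
        # Breaks if the cache is renamed or replaced, e.g. by functools.lru_cache.
        self.assertIn("apple", catalog._cache)


class RobustTest(unittest.TestCase):
    def test_price_is_stable_across_calls(self):
        catalog = PriceCatalog()
        # Only observable behavior is asserted, so refactoring the cache is safe.
        self.assertEqual(catalog.price_of("apple"), catalog.price_of("apple"))


if __name__ == "__main__":
    unittest.main()
```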


  1. Regarding running a subset of the tests most likely to fail: since my tests usually fail because of other team members' changes (at least in my case), I would need to ask others to run my tests, which might be 'politically problematic' in some development environments ;). Any other suggestions would be appreciated. Thanks a lot – SpeeDev Sep 30 '10 at 23:18

If you have to "ask others" to run your test, that suggests a serious problem with your test infrastructure. All tests (regardless of who wrote them) should be run automatically. The responsibility for fixing a failing test should lie with the person who committed the change, not with the test author.

