
How should I manage Test Harnesses in Git - Should they be in a separate repo?

I have a large(ish) project [90 files, 650 KB of code] that I now manage in Git. I have a few independent test harnesses used to try out and test new low-level bits of computation, which are later merged into the main code and its branches (currently via copy-paste!).

What is the recommended best practice for managing the Test Harnesses?

Should they be in a separate repository, or should I create an empty branch in the main repo to start it, or just create a "Test Harness" branch and overwrite the old code?

The hoped-for benefit is that the tested code in the main branch would be demonstrably 'the same' as the code that was tested.

I'm on Windows (msysgit) and I'm the lead 'explorer' for using Git in the company.


The usual structure I've seen in most projects is to include a test/ directory hierarchy parallel to src/, and store them there (in the same repo).
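For example, a layout along these lines (the directory and file names here are just placeholders to illustrate the idea; use whatever fits your project):

    project/
        src/                  <- production code, merged into main and its branches
            compute.c
        test/                 <- test harnesses, mirroring the src/ layout
            test_compute.c
        README

Because the harnesses live next to the code in the same repo, every commit records both together, which is exactly the "demonstrably the same" property you are after.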


90 files and 650 KB of source code is definitely not large. It is better to keep the test harness, test suite, etc. together with your source code in the same repository. Look at some repositories on GitHub (for example, PLY) and decide how to organize your source code and test suite.


The number and size of your files is well within Git's ability to keep all of that as one repo, even if you bumped them up by an order of magnitude or two. So the real reasons to split them into two repos, or keep them in a single repo, have to do with ease of use, not the technical limits of Git.

I like to keep tests in the same repo as the code they test. When I update the code to include new functionality I update the unit tests, and it is nice to have the two in sync.

When I add code to fix a flaw and add a regression test, having the two in sync is again nice; a sketch of that workflow follows.
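As a minimal sketch (the file names and commit message are hypothetical), the fix and its regression test land in the same commit, so they travel together through history:

    git add src/solver.c test/test_solver.c
    git commit -m "Fix rounding error in solver; add regression test"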

With unit and regression tests in sync with the code, when I check out an old revision I know the bundled tests should all pass. Any failures I can attribute to some other component in the system (say, an OS or tool change), which helps me pinpoint that sort of thing without having to guess which tests might be "expected failures".
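For instance, something along these lines (the tag name and the test command are placeholders; your project will have its own way of driving the harness):

    git checkout v1.4      # or a commit hash for the old revision you care about
    make test              # run whatever drives the bundled test harness
    git checkout master    # return to your current branch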

The downside is that if I notice something missing from my unit tests, it isn't easy to add it retroactively to where it "should have been". However, I find that a smaller downside than having lots of "guess it might be OK" failures when checking whether last April's code works with some new subsystem or other.

Your tradeoffs might be different though. Maybe your management chain doesn't give enough support for extensive unit tests to be added as new functionality is added, so you might have a higher percentage of tests you want to apply retroactively. Maybe you are better at exporting functionality changes via some readable attribute, so your test sets can simply skip expected failures. Maybe your tests are managed by a different group than the code. Any of those might shift the balance.

