
Best Option for Retrospective Application of TDD into a C# Codebase

I have an existing framework consisting of 5 C# libraries. The framework has been in heavy use since 2006 and is the main code base for the majority of my projects. My company wishes to roll out TDD for reasons of software quality; having worked through many tutorials and read the theory, I understand the benefits of TDD.

Time is not unlimited, so I need a plan for a pragmatic approach to this. From what I know already, the options as I see them are:

A) One test project that spans objects from all 5 library components. A range of high-level tests would be a starting point for what at first looks like a very large software library.

B) A test project for each of the 5 library components. Each project would test functions at the lowest level, in isolation from the other library components.

C) As the code is widely regarded as working, only add unit tests for bug fixes or new features. Write a test that fails on the buggy logic, capturing the steps to reproduce the bug, then fix the code until the test passes. You can then be confident that the bug is fixed and will not be reintroduced later in the cycle.

Whichever option is chosen, "Mocking" may be needed to replace external dependencies such as:

If anybody has any more input, that would be very helpful. I plan to use Microsoft's built-in MSTest in Visual Studio 2010.
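To make the mocking point concrete, here is a minimal MSTest sketch (every type, member and value below is invented for illustration, not taken from the real framework): the external dependency is hidden behind an interface so that a hand-rolled fake can stand in for it during the test.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical seam: the real implementation would talk to a database or file system.
    public interface IOrderStore
    {
        decimal GetOrderTotal(int orderId);
    }

    // Production code under test, written against the interface rather than the concrete dependency.
    public class InvoiceCalculator
    {
        private readonly IOrderStore _store;

        public InvoiceCalculator(IOrderStore store)
        {
            _store = store;
        }

        public decimal TotalWithTax(int orderId, decimal taxRate)
        {
            return _store.GetOrderTotal(orderId) * (1 + taxRate);
        }
    }

    // Hand-rolled fake used only by the tests; no external resources are touched.
    public class FakeOrderStore : IOrderStore
    {
        public decimal GetOrderTotal(int orderId)
        {
            return 100m;
        }
    }

    [TestClass]
    public class InvoiceCalculatorTests
    {
        [TestMethod]
        public void TotalWithTax_AppliesTaxToOrderTotal()
        {
            var calculator = new InvoiceCalculator(new FakeOrderStore());

            decimal total = calculator.TotalWithTax(orderId: 42, taxRate: 0.2m);

            Assert.AreEqual(120m, total);
        }
    }

A mocking library such as Moq or Rhino Mocks could generate the fake instead of writing FakeOrderStore by hand, but the seam (the interface) is the part that matters.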


We have a million-and-a-half line code base. Our approach was to start by writing some integration tests (your option A). These tests exercise almost the whole system end-to-end: they copy database files from a repository, connect to that database, perform some operations on the data, and then output reports to CSV and compare them against known-good output. They're nowhere near comprehensive, but they exercise a large number of the things that our clients rely on our software to do.
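As a rough illustration of that kind of end-to-end test (the paths and the ReportRunner class below are invented stand-ins, not the answerer's actual system), the pattern is: start from a known database, run the operation, write the report, and compare it line by line with a known-good file.

    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class MonthlyReportIntegrationTests
    {
        [TestMethod]
        public void MonthlyReport_MatchesKnownGoodOutput()
        {
            // Arrange: copy a baseline database file into a working location
            // so the test always starts from the same known state.
            string workingDb = Path.Combine(Path.GetTempPath(), "orders_test.db");
            File.Copy(@"TestData\orders_baseline.db", workingDb, overwrite: true);

            string actualCsv = Path.Combine(Path.GetTempPath(), "monthly_report.csv");

            // Act: run the operation end-to-end against the copied database.
            // ReportRunner is a hypothetical stand-in for whatever drives the real system.
            var runner = new ReportRunner(workingDb);
            runner.GenerateMonthlyReport(actualCsv);

            // Assert: compare the generated report with a known-good file, line by line.
            string[] expected = File.ReadAllLines(@"TestData\monthly_report_expected.csv");
            string[] actual = File.ReadAllLines(actualCsv);

            Assert.AreEqual(expected.Length, actual.Length, "Report has an unexpected number of lines.");
            for (int i = 0; i < expected.Length; i++)
            {
                Assert.AreEqual(expected[i], actual[i], "Mismatch at line " + i);
            }
        }
    }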

These tests run very slowly, of course; but we still run all of them continuously, six years later (and now spread across eight different machines), because they catch things that we still don't have unit tests for.

Once we had a decent base of integration tests, we spent some time adding finer-grained tests around the high-traffic parts of the system (your option B). We were given time to do this because there was a perception of poor quality in our code.

Once we had improved the quality to a certain threshold, they started asking us to do real work again. So we settled into a rhythm of writing tests for new code (your option C). In addition, if we need to make changes to an existing piece of code that doesn't yet have unit tests, we might spend some time covering existing functionality with tests before we start making changes.

All of your approaches have their merits, but as you gain test coverage over time, the relative payoffs will change. For our code base, I think our strategy was a good one; integration tests will help catch any errors you make when trying to break dependencies to add unit tests.


Neither (A) nor (B) can properly be considered TDD. The code is already written; new tests will not drive its design. That does not mean there is no value in pursuing either of those paths, but it would be a mistake to consider them TDD. With respect to "the code is widely regarded as working," I suspect that if you were to start (B) you would discover some holes in it. Untested code almost invariably contains bugs.

My advice would be to pursue (B), because I find greater value in unit tests than in integration tests (although much of that greater value lies in the design advantages for which you are too late). Integration tests are valuable too, and can tell you different important things about your code, but I like to start with unit tests. Pick one of the 5 components and start writing what we call characterization tests. Begin to discover the behaviors, build your experience at writing unit tests. Pick the easiest things to test first; build on what you learn with the easy methods to gradually ramp up to test the trickier bits. In writing these characterization tests you are almost certain to discover surprising behavior. Note it, for sure, and give some thought to whether it should be fixed (or whether the fixes are likely to break code that relies on the surprising behavior).
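A characterization test simply records what the code does today, so that later changes can be checked against it. A minimal sketch (PriceFormatter and its rounding behaviour are invented for illustration):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PriceFormatterCharacterizationTests
    {
        // PriceFormatter stands in for one of the existing library classes.
        // The expected values are whatever the current code actually returns,
        // discovered by running it -- not what we think it *should* return.

        [TestMethod]
        public void Format_RoundsHalfValuesDown_CurrentBehaviour()
        {
            var formatter = new PriceFormatter();

            // Surprising but existing behaviour: callers may already depend on it,
            // so we pin it down rather than "fix" it straight away.
            Assert.AreEqual("2.34", formatter.Format(2.345m));
        }

        [TestMethod]
        public void Format_UsesTwoDecimalPlaces()
        {
            var formatter = new PriceFormatter();

            Assert.AreEqual("10.00", formatter.Format(10m));
        }
    }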

And of course, write tests for any new features or bug fixes before the code that implements them. Good luck!


By definition, if you are creating tests for an existing code base, this is not TDD.

I would take C) as a given: whenever you have a bug, write a test that "proves" the bug, and quash it, forever.
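In MSTest terms that might look like the sketch below (the bug and the names are made up): the test is written first, fails against the current code, and keeps passing after the fix as a permanent regression guard.

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class DateRangeBugTests
    {
        // Hypothetical bug report: ranges that end on the 31st of a month
        // were being reported as one day too short.
        [TestMethod]
        public void LengthInDays_IncludesTheLastDayOfTheMonth()
        {
            var range = new DateRange(new DateTime(2011, 1, 1), new DateTime(2011, 1, 31));

            // Fails before the fix (returns 30), passes after it.
            Assert.AreEqual(31, range.LengthInDays());
        }
    }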

I agree with Carl Manaster's advice. Another angle on the question is "economics": writing tests for a legacy app can be expensive, so where will you get the most bang for the buck? Think about a) the classes and methods that are used the most, and b) the classes and methods that are most likely to contain a bug (usually the ones with the highest code complexity).

Also consider using tools like Pex and Code Contracts, which together can help you discover tests you haven't thought of and problems that may already exist in your code.
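For reference, Code Contracts (System.Diagnostics.Contracts, shipped with .NET 4.0) let you state preconditions and postconditions directly in the code, and Pex can use them while it explores inputs and generates parameterized tests. A small example of the contracts side (AccountService is an invented class):

    using System.Diagnostics.Contracts;

    public class AccountService
    {
        // Hypothetical method used only to illustrate Code Contracts.
        public decimal Withdraw(decimal balance, decimal amount)
        {
            // Preconditions: callers must pass sensible arguments.
            Contract.Requires(amount > 0);
            Contract.Requires(amount <= balance);

            // Postcondition: the result can never be negative.
            Contract.Ensures(Contract.Result<decimal>() >= 0);

            return balance - amount;
        }
    }

Note that enforcing these checks at runtime requires the Code Contracts rewriter to be enabled for the project; otherwise the Requires/Ensures calls are compiled away.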


I would go with option C. Trying to fit unit tests around code that wasn't designed for unit testing can be a major time suck. I would recommend only adding tests when you revisit parts of the code, and even then you may have to refactor that code to allow it to be unit tested.
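One common low-risk refactoring for that situation is "extract and override": pull the hard-to-test dependency into a virtual method so that a test subclass can replace it. A rough sketch with invented names:

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Legacy class that originally read the clock directly, making it hard to test.
    public class DiscountCalculator
    {
        public decimal DiscountFor(decimal price)
        {
            // The date dependency is now isolated in a virtual method (the "seam").
            return Today().DayOfWeek == DayOfWeek.Friday ? price * 0.1m : 0m;
        }

        protected virtual DateTime Today()
        {
            return DateTime.Today;
        }
    }

    // Test subclass overrides the seam so the test is deterministic.
    public class DiscountCalculatorWithFixedDate : DiscountCalculator
    {
        private readonly DateTime _today;

        public DiscountCalculatorWithFixedDate(DateTime today)
        {
            _today = today;
        }

        protected override DateTime Today()
        {
            return _today;
        }
    }

    [TestClass]
    public class DiscountCalculatorTests
    {
        [TestMethod]
        public void DiscountFor_GivesTenPercentOnFridays()
        {
            // 7 January 2011 was a Friday.
            var calculator = new DiscountCalculatorWithFixedDate(new DateTime(2011, 1, 7));

            Assert.AreEqual(10m, calculator.DiscountFor(100m));
        }
    }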

Integration tests might be something to consider as well for legacy code, as I assume they would be easier to put in place than unit tests.


Options A and B don't fit the definition of TDD, and are both quite time-consuming. I would choose option C, because it's the most pragmatic solution.
