How to adopt TDD and ensure adherence?
I'm a senior engineer working in a team of four others on a home-grown content management application that drives a large US pro sports web site. We embarked upon this project some two years ago and chose Java as our platform, though my question is not Java-specific. Since we started, there has been some churn in our ranks. Each one of us has a significant degree of latitude in deciding on implementation details, although important decisions are made by consensus.
Ours is a relatively young project, but we are already at a point where no single developer knows everything about the app. The primary reasons for that are our quick pace of development, most of which occurs in a crunch leading up to our sport's season opener, and the fact that our test coverage is essentially 0.
We all understand the theoretical benefits of TDD and agree in principle that the methodology would have improved our lives and code quality if we had started out and stuck with it through the years. This never took hold, and now we're in charge of an untested codebase that still requires a lot of expansion and is actively used in production and relied upon by the corporate structure.
Faced with this situation, I see only two possible solutions: (1) retroactively write tests for existing code, or (2) re-write as much of the app as is practical while fanatically adhering to TDD principles. I perceive (1) as by and large not practical because we have a hellish dependency graph within the project. Almost none of our components can be tested in isolation; we don't know all the use cases; and the use cases will likely change during the testing push due to business requirements or as a reaction to unforeseen issues. For these reasons, we can't really be sure that our tests will turn out to be high quality once we're done. There's a risk of leading the team into a false sense of security whereby subtle bugs creep in without anyone noticing. Given the bleak prospects with regard to ROI, it would be hard for me or our team lead to justify this endeavor to management.
Method (2) is more attractive as we'll be following the test-first principle, thus producing code that's almost 100% covered right off the bat. Even if the initial effort results in islands of covered code at first, this will provide us with a significant beachhead on the way to project-wide coverage and help decouple and isolate the various components.
The downside in both cases is that our team's business-wise productivity could either slow down significantly or evaporate entirely during any testing push. We cannot afford to do this during the business-driven crunch, although it is followed by a relative lull that we could exploit for our purposes.
In addition to choosing the right approach (either (1), (2), or another as-yet-unknown solution), I need help answering the following question: How can my team ensure that our effort isn't wasted in the long run by unmaintained tests and/or failure to write new ones as business requirements roll on? I'm open to a wide range of suggestions here, whether they involve carrots or sticks.
In any event, thanks for reading about this self-inflicted plight.
"The downside in both cases is that our team's business-wise productivity could either slow down significantly or evaporate entirely during any testing push."
This is a common misinterpretation of the facts. Right now you have code you don't like and struggle to maintain. "hellish dependency graph", etc.
So, the "crunch" development you've been doing has led to expensive rework. Rework so expensive you don't dare attempt it. That says that your crunch development isn't very effective. It appears cheap at the time, but in retrospect, you note that you're really throwing development money away because you've created problematic, expensive software instead of creating good software.
TDD can change this so that you aren't producing crunch software that's expensive to maintain. It can't fix everything, but it can make it clear that changing your focus from "crunch" can produce better software that's less expensive in the long run.
From your description, some (or all) of your current code base is a liability, not an asset. Now think what TDD (or any discipline) will do to reduce the cost of that liability. The question of "productivity" doesn't apply when you're producing a liability.
The Golden Rule of TDD: If you stop creating code that's a liability, the organization has a positive ROI.
Be careful of asking how to keep up your current pace of productivity. Some of that "productivity" is producing cost with no value.
"Almost none of our components can be tested in isolation; we don't know all the use cases"
Correct. Retro-fitting unit tests to an existing code base is really hard.
"There's a risk of leading the team into a false sense of security whereby subtle bugs will creep in without anyone noticing"
False. There's no "false sense of security". Everyone knows the testing is rocky at best.
Further, now you have horrifying bugs. You have problems so bad you don't even know what they are, because you have no test coverage.
Trading up to a few subtle bugs is still a huge improvement over code you cannot test. I'll take subtle bugs over unknown bugs any day.
"Method (2) is more attractive"
Yes. But.
Your previous testing efforts were subverted by a culture that rewards crunch programming.
Has anything changed? I doubt it. Your culture still rewards crunch programming. Your testing initiative may still get subverted.
You should look at a middle ground. You can't be expected to start "fanatically adhering to TDD principles" overnight. That takes time and a significant cultural change.
What you need to do is break your applications into pieces.
Consider, for example, the Model - Services - View tiers.
You have a core application model (persistent things, core classes, etc.) that requires extensive, rigorous, trustworthy testing.
You have application services that require some testing, but are subject to "the use cases will likely change during the testing push due to business requirements or as a reaction to unforeseen issues". Test as much as you can, but don't run afoul of the imperative to ship stuff on time for the next season.
You have view/presentation stuff that needs some testing, but isn't core processing. It's just presentation. It will change constantly as people want different options, views, reports, analysis, RIA, GUI, glitz, and sizzle.
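For the core model tier, the kind of isolated, rigorous test this implies might look like the following sketch. The Article class and its publish-date rule are invented for illustration, not taken from the actual codebase; JUnit would be the natural harness, but the example uses a plain main so it stands alone.

```java
import java.time.LocalDate;

// Hypothetical core-model class; the names and the business rule are
// invented for illustration only.
class Article {
    private final LocalDate publishDate;
    private boolean archived;

    Article(LocalDate publishDate) { this.publishDate = publishDate; }

    void archive() { archived = true; }

    // Business rule: live once the publish date arrives, unless archived.
    boolean isLive(LocalDate today) {
        return !archived && !today.isBefore(publishDate);
    }
}

public class ArticleTest {
    static void check(boolean ok, String msg) {
        if (!ok) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        Article a = new Article(LocalDate.of(2009, 4, 1));
        check(!a.isLive(LocalDate.of(2009, 3, 31)), "not live before publish date");
        check(a.isLive(LocalDate.of(2009, 4, 1)), "live on publish date");
        a.archive();
        check(!a.isLive(LocalDate.of(2009, 4, 2)), "archived is never live");
        System.out.println("model tests passed");
    }
}
```

Note that a model class like this tests in isolation precisely because it has no dependency on the services or view tiers; that property is what the rigorous-testing effort buys you.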
I need help answering the following question: How can my team ensure that our effort isn't wasted in the long run by unmaintained tests and/or failure to write new ones as business requirements roll on?
Make sure that your build process executes the tests on every build and fails the build if there are failures.
Do you use Continuous Integration? Hudson is a great tool for this. It can keep a graph of # of tests, # of failures, test coverage, etc., for every build over the lifetime of your project. This will help you keep an easy eye on when your coverage % is declining.
As you mentioned, it can be pretty hard to retrofit unit testing into an existing project, let alone TDD. I wish you luck with this effort!
Update: I also want to point out that 100% test coverage isn't really a great goal; it has diminishing returns as you try to go beyond ~80% or ~90%. To get those last few percentage points you need to start simulating every possible branch in your code. Your team will start spending time simulating scenarios that either can't happen in real life ("this stream won't actually throw an IOException when I close it, but I need to get this branch covered!") or have no real value in your testing. I caught someone on my team verifying that "if (foo == null) throw new NullPointerException(...);" as the first line of a method actually threw an exception when the value was null.
Much better to spend your time testing the code that actually matters than becoming obsessive-compulsive about making every last line show up as green in Emma or Cobertura.
I would recommend that all new code and bug fixes require a unit test. Period.
Then, I would go back (before refactoring the older code) and write unit tests for the pieces of code that are the most business-critical, have the most bugs, and are the least understood.
I perceive (1) as by and large not practical because we have a hellish dependency graph within the project. Almost none of our components can be tested in isolation; we don't know all the use cases;
This is your real problem.
You can start by writing integration tests that essentially automate your testing process. You get value out of this right out of the gate. What you need is a safety net for refactoring that will give you a hint whenever you've broken the code. Take the widest swath of transactions you can to exercise the app, automate the process of pumping them through and comparing expected to actual.
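A minimal sketch of that safety net, sometimes called a characterization or "golden master" test: record what the app produces today, then assert against it while refactoring. LegacyRenderer and its output are invented placeholders for whatever your real entry points and transactions are.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for an existing, untested entry point into the app.
class LegacyRenderer {
    String render(String teamCode) {
        return "<h1>" + teamCode.toUpperCase() + " Home</h1>";
    }
}

public class GoldenMasterTest {
    public static void main(String[] args) {
        LegacyRenderer app = new LegacyRenderer();

        // Record the current behaviour once, then assert against it on
        // every build. The point is not that the output is "right", only
        // that refactoring does not change it unintentionally.
        Map<String, String> goldenMaster = new LinkedHashMap<>();
        goldenMaster.put("nyy", "<h1>NYY Home</h1>");
        goldenMaster.put("bos", "<h1>BOS Home</h1>");

        for (Map.Entry<String, String> e : goldenMaster.entrySet()) {
            String actual = app.render(e.getKey());
            if (!actual.equals(e.getValue())) {
                throw new AssertionError("output changed for " + e.getKey()
                        + ": expected " + e.getValue() + " but got " + actual);
            }
        }
        System.out.println("golden master unchanged");
    }
}
```

In practice the golden master would be a recorded file of real transactions and their outputs rather than a hand-written map, but the comparison loop is the whole idea.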
Once you have that in place, start trying to break some of that dependency graph so you can isolate pieces to test.
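One common way to break such an edge, sketched here with invented names: extract an interface over a hard-wired dependency (a hypothetical database-backed schedule lookup) so a test can supply an in-memory fake.

```java
// Extracted interface; the production implementation would wrap the real
// database access and is omitted here.
interface ScheduleStore {
    String gameOn(String date);
}

class GameService {
    private final ScheduleStore store; // injected, no longer hard-wired

    GameService(ScheduleStore store) { this.store = store; }

    String headlineFor(String date) {
        String game = store.gameOn(date);
        return game == null ? "No game today" : "Tonight: " + game;
    }
}

public class GameServiceTest {
    public static void main(String[] args) {
        // In-memory fake: no database, no fixtures, runs in milliseconds.
        ScheduleStore fake = date -> "2009-04-05".equals(date) ? "NYY at BOS" : null;
        GameService service = new GameService(fake);

        if (!service.headlineFor("2009-04-05").equals("Tonight: NYY at BOS"))
            throw new AssertionError("headline for game day");
        if (!service.headlineFor("2009-04-06").equals("No game today"))
            throw new AssertionError("headline for off day");
        System.out.println("service tests passed");
    }
}
```

Each interface you extract this way removes one edge from the dependency graph and makes one more component testable in isolation.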
Whenever you have a bug to fix, write a unit test for it that replicates the error. Then fix the code and re-run the test. You should see the test pass successfully. Make this your standard bug tracking and fixing procedure. That list of tests will grow like interest in a bank account.
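As a sketch of that workflow, with an entirely invented bug: suppose a standings formatter blew up on a 0-0 record. The regression test reproduces the report first; the guard that makes it pass is marked in the code.

```java
import java.util.Locale;

class StandingsFormatter {
    // Fixed version: the hypothetical buggy original divided by
    // (wins + losses) without guarding against a 0-0 record.
    static String winPct(int wins, int losses) {
        int games = wins + losses;
        if (games == 0) return ".000"; // the fix
        double pct = (double) wins / games;
        String s = String.format(Locale.US, "%.3f", pct);
        return s.startsWith("0") ? s.substring(1) : s; // ".500"-style display
    }
}

public class WinPctBugTest {
    public static void main(String[] args) {
        // This test replicates the bug report; it failed with an
        // ArithmeticException before the guard was added.
        if (!StandingsFormatter.winPct(0, 0).equals(".000"))
            throw new AssertionError("0-0 record");
        // A couple of surrounding cases to pin the normal behaviour.
        if (!StandingsFormatter.winPct(1, 1).equals(".500"))
            throw new AssertionError("even record");
        if (!StandingsFormatter.winPct(2, 0).equals("1.000"))
            throw new AssertionError("perfect record");
        System.out.println("bug regression tests passed");
    }
}
```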
I agree with the CI recommendation. Add code coverage metrics to either Hudson or Cruise Control.
It goes without saying that you're using Subversion, Git, or another source code management system, right?
It's a long process, but worth it in the end.
I think that you need to flip around part of the question and the related arguments. How can you ensure that adhering to TDD will not result in wasted effort?
You can't, but the same is true for any development process.
I've always found it slightly paradoxical that we are always challenged to prove the cash benefit of TDD and related agile disciplines, when at the same time traditional waterfall processes more often than not result in
- missed deadlines
- blown budgets
- death marches
- developer burn-out
- buggy software
- unsatisfied customers
TDD and other agile methodologies attempt to address these issues, but obviously introduce some new concerns.
In any case I'd like to recommend the following books that may answer some of your questions in greater detail:
- Working Effectively with Legacy Code
- Lean Software Development: From Concept to Cash
Retro-fitting tests will allow you to improve your design and find bugs. Trying to rewrite big swaths of your application will just result in you missing your deadline and ending up with a bunch of half-working pieces of functionality. Major rewrites almost never work. Go for option (1).
Oh, and just having Hudson running and making it complain when you break the tests (through e-mail) is probably enough to get you in the spirit of things.
I've run into this in the past with a few teams. When I've been the tech lead on the project this is the way I approached it. First, I had the developer write out what the test case would be (via a design document or such). That way not only do we have the test case but it parrots back to me their understanding of the change so I can verify that they know what they're doing.
After that for non-GUI tests I have the developer use JUnit or something like that to make sure it's good. Since the test case is already defined this doesn't take too long.
Depending on the complexity of the regression test case, it can take a little longer, but after explaining the better maintainability, and the fewer bugs introduced when changing existing code that has these kinds of tests, I've usually been able to push it through.
Hope this helps.
Along with some of the excellent suggestions already, I will chime in with two points.
- Start writing test cases for all new code. This is a must and you need to make it part of your culture.
- Start writing tests to replicate any bugs that you find in existing code. Before fixing any bug that you find in your existing code base, write a reproducible test case for that bug. This will at the very least allow you to start introducing test cases against areas that are known issues in your code. Although the ideal would be to write tests against all your existing code, this is seldom feasible, so at least address known issues.
I definitely agree with the Hudson-for-CI suggestions as well. Also, if you aren't doing it already, do peer reviews for all checked-in code. This does not have to be a drawn-out formal process.
For example, we simply have to assign any completed task (via JIRA) to a Code Review status when the developer is done. The developer will choose another developer to assign this task to, and we shoot for a 48 hour turnaround time on code reviews. The task is then marked resolved by the reviewer, not the developer. This gives some extra confidence to the quality of the code, the test coverage and design. Tasks are never resolved by the person who implemented them. An added benefit is that it exposes the code to others so there is at least some knowledge transfer inherent in the process.
This may sound odd, but what actual problems do you have now? You don't say what problems the business is experiencing.
Sure, if I joined your team I'd be doing TDD as much as I could; I've been committed to practising TDD for many years now. But I've also seen ugly code bases I wanted to clean up, without understanding enough to be sure that the source code changes I wanted to make to improve testability were actual refactorings.
You sound like you have a good grasp of the reality of the situation, but can you prove that doing TDD would improve the situation? Have you got any metrics that highlight your problems? If you have, then you can use better practices to improve them.
OK, after all that, my practical advice would be to start by using your relative lull to do whatever you can, including: documenting test cases, writing automated tests, refactoring to reduce dependencies, writing new code with TDD, and training up your programmers to be effective with TDD.