Any good arguments for not writing unit tests? [closed]
I have read a lot of threads about the joys and awesome points of unit testing. Is there a good argument against unit testing?
In the places I have previously worked, unit testing is usually used as a reason to run with a smaller testing department; the logic is "we have UNIT TESTS!! Our code can't possibly fail!! Because we have unit tests, we don't need real testers!!"
Of course that logic is flawed. I have seen many cases where you cannot trust the tests. I have also seen many cases where the tests become out of date due to tight time schedules - when you have a week to do a big job, most developers would spend the week doing the real code and shipping the product, rather than refactoring the unit tests for that first week, then pleading for at least another week to do the real code, and then spending a final week bringing the unit tests up to date with what they actually wrote.
I have also seen cases where the business logic involved in the unit test was more monstrous and hard to understand than the logic buried in the application. When these tests fail, you have to spend twice as long trying to work out the problem - is the test flawed, or the real code?
Now the zealots are not going to like this bit: the place where I work has largely escaped using unit tests because the developers are of a high enough calibre that it is hard to justify the time and resources to write unit tests (not to mention we use REAL testers). For real. Writing unit tests would have only given us minimal value, and the return on investment is just not there. Sure, it would give people a nice warm fuzzy feeling - "I can sleep at night because my code is PROTECTED by unit tests and consequently the universe is at a nice equilibrium" - but the reality is we are in the business of writing software, not giving managers warm fuzzy feelings.
Sure, there absolutely are good reasons for having unit tests. The trouble with unit testing and TDD is:
- Too many people bet the family farm on it.
- Too many people use it as a religion rather than just a tool or another methodology.
- Too many people have tried to make money out of it, which has skewed how it should be used.
In reality, it should be used as one of the tools or methodologies that you use on a day to day basis, and it should never become the single methodology.
It's important to understand that it's not free. Tests require effort to write - and, more importantly, maintain.
Project managers and development teams need to be aware of this.
Virtually none of my bugs would have been found by unit testing. My bugs are mostly integration or unexpected-use-case bugs, and the best bet for finding them earlier would have been more extensive (and ideally automated) system tests.
I'm waiting for more evidence-based and less religion-based arguments for unit testing, as dummymo said. And I don't mean some experiment in some academic setting; I mean an argument that for my development scenario and programming ability, cost-benefit would be positive.
So, to agree with other answers to the OP: because they cost time and cost-benefit is not shown.
You have a data access layer that isn't easily adapted for mocking.
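To make that concrete, here's a rough sketch of the shape a data layer needs before mocking is cheap - the business logic takes the repository as a dependency, so a test can hand it a fake. The OrderService and find_orders names are made up for illustration; the problem is that plenty of real data access layers were never built this way.

```python
# Hypothetical sketch: business logic that accepts its data access layer
# as a dependency, so a test can substitute a fake instead of a database.
from unittest.mock import Mock

class OrderService:
    def __init__(self, repository):
        self.repository = repository  # the data access layer, injected

    def total_for_customer(self, customer_id):
        orders = self.repository.find_orders(customer_id)
        return sum(order["amount"] for order in orders)

def test_total_for_customer():
    fake_repo = Mock()
    fake_repo.find_orders.return_value = [{"amount": 10}, {"amount": 5}]
    assert OrderService(fake_repo).total_for_customer(42) == 15
```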
The simple truth is, when you write some code, you have to make sure it works before you say it's done. Which means you exercise it - build some scaffolding to call the function, passing some arguments, checking to make sure it returns what you expect. Is it so much extra work, to keep the scaffolding around, so you can run the tests again?
Well yes, actually, it can be. More often than not the tests will fail, even when the code is right, because the data you were using is no longer consistent, etc.
But if you have a unit testing framework in place, the cost of keeping the test code around can be only marginally more work than throwing it away. And while yes, you'll find that many of your test cases will fail because of problems with the data you are using, instead of problems with the code, that will happen less as you learn how to structure your tests so as to minimize the problem.
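For what it's worth, here's a minimal sketch of that "keep the scaffolding" idea using Python's built-in unittest; parse_price is just a made-up stand-in for whatever function you just wrote.

```python
# Minimal sketch: the throwaway scaffolding, kept around as a permanent test.
import unittest

def parse_price(text):
    """Convert a string like '$1,234.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceTest(unittest.TestCase):
    def test_strips_currency_symbol_and_commas(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

    def test_plain_number_passes_through(self):
        self.assertEqual(parse_price("99"), 99.0)

if __name__ == "__main__":
    unittest.main()
```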
True, passing your unit tests does not guarantee that your system works. But it does provide some assurance that certain subsystems are working, which isn't nothing. And the test cases provide useful examples of how the functions were meant to be called.
It's not a panacea, but it's a useful practice.
Formal verification.
If you can formally prove the correctness of code, there is no reason to unit test it unless the test condition brings in new variables - in which case you'd still only need a small number of unit tests (or you'd extend the proof to cover the new variables).
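As a tiny illustration (a Lean 4 sketch, purely hypothetical), a property proven once for all inputs leaves nothing for an example-based unit test to check:

```lean
-- Once this compiles, commutativity holds for every pair of naturals;
-- no finite set of example-based unit tests could say more.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```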
Unit tests will tell you whether one specific class method sets a variable correctly (or some variation on that). That does not, in any way, shape, or form, indicate that your application will behave properly or that it will handle the circumstances it will need to handle.
Any problem you can think of to write a test for, you are also going to handle in your code, so that problem is never going to show up. So then you have 300 passing tests but miss the real-world scenarios you just didn't think to test for. The effort required to create and maintain the tests, then, isn't necessarily worth it.
It's the usual cost/benefit analysis.
Cost: You need to spend time developing and maintaining the tests, and put resources into actually running them.
The benefits are well known (mostly cheaper maintenance/refactoring with fewer bugs).
So, you balance one against the other in the context of the project.
If it's a throwaway quick hack you know will never be re-used, unit tests might not make sense. Although to be honest, if I had a dollar for every throwaway quick hack that I saw running years later - or worse, had to maintain/refactor years later - I'd probably be one of the venture capitalists investing in SO :)
Non-deterministic outcomes.
In simple cases you can seed the random generator(s) (or mock them somehow) to get reproducible results, but when the algorithm is complex this becomes impossible, as code changes will alter the demand for random numbers and thus alter the results.
This would rarely be encountered in a business situation but it's quite possible in games.
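For the simple case, pinning the seed can look like this sketch (shuffle_deck is a made-up example); the point above stands, though - once code changes alter how many random numbers are drawn, a fixed seed no longer gives you stable expected outputs.

```python
# Sketch: making a randomized routine reproducible by injecting a seeded
# generator. shuffle_deck is a hypothetical function used for illustration.
import random

def shuffle_deck(rng):
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

def test_shuffle_is_reproducible_with_fixed_seed():
    first = shuffle_deck(random.Random(1234))
    second = shuffle_deck(random.Random(1234))
    assert first == second                    # same seed, same order
    assert sorted(first) == list(range(52))   # still a full permutation
```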
- Testing is like insurance.
- You don't put all your money into it.
- But you don't skip your life insurance either. (People from the US should still remember the Health Insurance Bill.)
- Insurance is a NECESSARY evil.
- BUT BUT BUT...
- You don't get insured expecting a fatal accident just so you can recover all the money you put into your insurance plan.
In summary,
- There is SOME reason to write tests.
- Unit tests are sometimes one of the many ways to go forward.
- BUT there is NO REASON to focus entirely on writing (unit) tests.
It can discourage experimenting with several variations, especially in early stages of a project. (But it can also encourage experimenting in later stages!)
It can't replace system testing, because it doesn't cover the relationship between components. So if you have to split up the available testing time between system testing and unit testing, then too much unit testing can have a negative impact on the amount of system tests.
I want to add that I usually encourage unit testing!
It is impossible to generalize where unit tests are going to provide cost-benefit and where they are not. I see a lot of people arguing strongly in favor of unit testing and blaming people who don't use it for not embracing TDD enough, while completely ignoring the fact that applications can differ as much as the real world does.
For instance, it is incredibly hard to get anything useful out of unit tests when you have a lot of integration points, either between systems or between processes and threads of your own application.
If all you ever did were websites like Stackoverflow, where the problem domain is well understood and most solutions are fairly trivial, then yes, writing unit tests has a lot of benefits, but there are lots of applications out there that simply can't be unit tested properly, as they lack, well, "units".
Laziness; sometimes I'm lazy and just don't want to do it!
But seriously, unit testing is great, but if I'm just coding for my own enjoyment I generally don't do it, because the projects are short lived, I'm the only one working on it, and they're not that big.
There is never a reason to never write unit tests.
There are good reasons to not write specific unit tests. (Especially if you use code generation. Of course you could code generate the unit tests to make sure nobody has mucked with the generated code. But that is dependent upon trusting the team.)
*Edit
Oh, and from what I understand, some things in functional programming either compile and thus work, or don't compile.
Would those things need unit tests?
I agree with the notion that there are no good arguments against unit testing in general. There are some specific situations, however, where unit testing may not be a viable option or is at least problematic and/or poses a difficult return-on-investment proposition for the level of effort involved to create and maintain tests.
Here are some examples:
Real-time-dependent behavior in response to external conditions. Some purists may argue that this is not unit testing but rather involves scenarios at an integration or system testing level. However, I've written code for simple, low-level functionality for quasi-embedded applications where it would be useful to at least partially test real-time response via a unit-testing framework for build/regression testing purposes.
Testing behavioral and/or policy-level functionality that requires a complex data description of the environmental state to which the tested code module is responding. This is related to an earlier poster's comment regarding the difficulty of doing unit testing involving a data access layer that isn't easily adapted for mocking. Although the behavior/policy being tested may be relatively simple, it needs to be tested across a complex state description. The value of doing unit testing here is in assuring that rare yet key conditions are handled correctly for a mission-critical application. One wants to mock the latter and create a simulated environment/state, but the cost of doing so may be high.
There are at least two alternatives to unit testing for the above scenarios:
For real-time or quasi-real-time functionality testing, extensive system testing can be done to try to compensate for the lack of good unit testing. For some applications this may be the only option, e.g., embedded systems involving hardware and/or physical systems.
Create a test harness or system-level simulator that facilitates extensive testing across a range of randomly simulated conditions. This can be useful for testing the policy/behavior scenarios described earlier involving a complex environmental state. Although significant work may be involved in creating the test harness or simulator, the return-on-investment may be a much better value than for isolated unit tests since a much broader range of conditions can be tested.
Since the test environment involves random rather than specific conditions, this approach may not offer quite the level of assurance desired for some mission-critical scenarios. Conducting extensive tests may help make up for this. Alternatively, creating a test harness or system simulator for random conditions may also help with reducing the overall cost of testing specific complex state scenarios since the development cost is now shared across a broader range of testing needs.
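As a hedged sketch of what such a harness might look like (apply_discount and its invariant are invented for illustration): generate many random environment states and check that a policy-level invariant holds in every one of them, rather than hand-writing a unit test per state.

```python
# Sketch of a randomized harness: many generated states, one invariant check.
import random

def apply_discount(price, loyalty_years, is_vip):
    # Hypothetical policy under test.
    rate = 0.05 * min(loyalty_years, 4) + (0.10 if is_vip else 0.0)
    return round(price * (1 - rate), 2)

def run_harness(iterations=10_000, seed=42):
    rng = random.Random(seed)
    for _ in range(iterations):
        price = round(rng.uniform(0, 10_000), 2)
        years = rng.randint(0, 50)
        vip = rng.random() < 0.1
        result = apply_discount(price, years, vip)
        # Invariant that must hold for every simulated state:
        assert 0 <= result <= price, (price, years, vip, result)

if __name__ == "__main__":
    run_harness()
    print("all simulated states satisfied the invariant")
```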
In the end, how to best approach testing any given application comes down to cost vs. value. Unit testing is one of the best options and should always be used where feasible, but it is not universally applicable to all scenarios. Like many things in software, sometimes it will just be a judgment call one has to make and then be prepared to make adjustments based on the outcome.
Unit testing is a trade-off. I see two problems:
It takes time. Not only to write the tests, but also (and this is the most annoying part) to maintain them: if you want to make an important change, you now have two places to modify. In the worst case it could even discourage you from re-architecting your codebase.
It only protects against problems that you think could arise, and you mostly cannot test against side effects. This can lead to a false sense of security, as mentioned before.
I agree unit testing is a valuable tool for increasing the reliability of enterprise software with a relatively stable codebase. But for personal projects or infant projects, I think a generous use of asserts in your code is a much better trade-off.
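As an example of what I mean by a generous use of asserts (normalize_scores is just an invented function): the checks travel with the code and fire during ordinary runs, with no separate test suite to maintain.

```python
# Sketch: inline assertions as a lightweight alternative to a test suite.
def normalize_scores(scores):
    """Scale a non-empty list of non-negative scores so they sum to 1."""
    assert scores, "expected at least one score"
    assert all(s >= 0 for s in scores), "scores must be non-negative"
    total = sum(scores)
    assert total > 0, "at least one score must be positive"
    result = [s / total for s in scores]
    assert abs(sum(result) - 1.0) < 1e-9  # postcondition sanity check
    return result
```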
I wouldn't say it's an argument against it, but for us we have a legacy application with a TON of code, written in COBOL. It is virtually impossible at this point to say we want to implement unit testing and do it with any degree of accuracy or within a reasonable time frame for the business, as pointed out by duffymo.
So I guess to add onto that, maybe one argument would be the inability (in some cases) of trying to implement unit tests after development has been completed (and maintained for years).
Instead of completely getting rid of them, we write unit tests only for core functionality such as payment authorization, user authentication, etc etc. It is very useful as there will always be some touch points that are very sensitive to code changes in a large code base and you would want some way to verify those touch points work without failing in QA.
For writing unit tests in general, learning curve is the biggest reason I know of to not bother. I have been trying to learn good unit testing for about 1.5 years now, and I feel like I'm just getting good at it (writing audit log spies, mocking, testing 1 constraint per test, etc.), although I feel it has sped up development for me for about 1 year of that time. So call it 6 months of struggling through it before it really started paying off. (I was still doing "real" work during that time, of course.)
Most of the pain experienced during that time was due to failure to follow the guidelines of good unit testing.
For a variety of specific cases, ability to unit test may be blocked; others have commented on some of those.
In Test Driven Development, the unit tests are, more importantly, a way to design your code to be testable to begin with. As it turns out, your code tends to be more modular, and writing the tests helps to flesh out APIs and so forth.
Too often though, you find yourself developing the code then writing the tests, commenting out the code you just wrote to ensure that the tests fail, then selectively removing the comment tokens to make the tests pass. Why? Well because it's much harder to write tests than it is to write code in some cases. It's also often much more difficult to write code that can be tested in a completely automated way. Think about user interfaces, code that generates images or pdfs, database transactions.
So unit tests do help a lot, but expect to write about three times as much code for the tests as you write for the actual code. Plus all of this will need to be maintained. Significant changes to the application will invalidate huge chunks of tests - a good measure of the impact on the system, but still... Are you prepared for that? Are you on a small team where you are doing the job of 5 developers? If so, then automated TDD development just won't fly - you won't have time to get stuff done fast enough. So then you end up relying on your own manual testing, QA testing things as well, and just living with bugs slipping through and then fixing things up ASAP. It's unfortunate, high-pressure and exasperating, but it's the reality in small companies that don't hire enough developers for the work that needs to be done.
No, really I don't. In my experience people who do are presenting a straw man argument or just don't know how to unit test things that aren't obviously unit-testable.
@Khorkak - If you change a feature in your production code, only a handful of your unit tests should be affected. If that's not the case, it means you are not decoupling your production code and testing in isolation, but instead exercising large chunks of your production code in integration. That's just poor unit testing skill, and yes, it's a BIG problem - not only because your unit test code base will become hard to maintain, but also because your production code base will suck and have the same problem.
Unit tests make no sense for Disposable Code: if the code is a Q&D proof of concept, something created during a spike to investigate various approaches, or anything else you are SURE will almost always be thrown away, then doing unit tests won't bring much return on the investment. In fact they could hurt you, as the time spent there is time not spent trying a different approach, etc. (opportunity cost).
The key is being sure that's the case, or having enough of an understanding with teammates etc. that if someone says 'that's great, use that one', you then invest the time to bring the code up to the standard for NON-disposable code.
For those who asked for better proof, I'd refer you to this page: http://biblio.gdinwiddie.com/biblio/StudiesOfTestDrivenDevelopment Note that many of these are studies by academic types, but done against groups doing real production software work, so personally it seems like they have a pretty good amount of validity.