Do large enterprises utilize mocking/stubbing? [closed]
Has anyone worked at a large company, or on a very large project, that successfully used unit testing?
Our current database has ~300 tables, with ~100 aggregate roots. Overall there are ~4,000 columns, and we'll have ~2 million lines of code when complete. I was wondering: do companies with databases of this size (or much larger) actually go through the effort to mock/stub their domain objects for testing? It's been two years since I worked in a large company, but at the time all large applications were tested via integration tests. Unit testing was generally frowned upon if it required much setup.
I'm beginning to feel like unit testing is a waste of time for anything but static methods, as many of our test methods take just as long or longer to write than the actual code, in particular the setup/arrange steps. To make things worse, one of our developers keeps quoting how unit testing and Agile methods were such an abject failure on Kent Beck's Chrysler project, and that it's just not a methodology that scales well.
Any references or experiences would be great. Management likes the idea of unit testing, but if they saw the amount of extra code we're writing (and our frustration), they'd be happy to back down.
I've seen TDD work very well on large projects, especially in helping us get a legacy code base under control. I've also seen Agile work at a large scale, though I think just doing the Agile practices alone isn't sufficient. Richard Durnall wrote a great post about how things break in a company as Agile gains ground. I suspect Lean may be a better fit at higher levels in an organisation. Certainly if the company culture isn't a good match for Agile by the time it's being used across multiple projects, it won't work (but neither will anything else; you end up with a company that can't respond effectively to change either way).
Anyway, back to TDD... Testing stand-alone units of code can sometimes be tricky, and if there's a big data-driven domain object involved I frequently don't mock it. Instead I use a builder pattern to make it easy to set that domain object up in the right way.
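To make that concrete, here's a minimal sketch of the kind of test-data builder I mean. The Order and OrderStatus types and their fields are hypothetical stand-ins, not from any project mentioned above:

```java
import java.math.BigDecimal;

// Test-data builder for a hypothetical Order domain object.
// Defaults describe a "typical" valid order, so tests override
// only the fields they actually care about.
public class OrderBuilder {
    private String customerId = "CUST-001";
    private BigDecimal total = new BigDecimal("100.00");
    private OrderStatus status = OrderStatus.OPEN;

    public OrderBuilder withCustomerId(String customerId) {
        this.customerId = customerId;
        return this;
    }

    public OrderBuilder withTotal(BigDecimal total) {
        this.total = total;
        return this;
    }

    public OrderBuilder withStatus(OrderStatus status) {
        this.status = status;
        return this;
    }

    public Order build() {
        return new Order(customerId, total, status);
    }
}
```

A test that only cares about one detail then arranges itself in a single line, e.g. `new OrderBuilder().withStatus(OrderStatus.OVERDUE).build()`, and the noise of the other few thousand columns stays out of the setup.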
If the domain object has complex behaviour, I might mock that so that it behaves predictably.
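Where I do mock behaviour, it tends to look like the following sketch (using Mockito and JUnit 4; TaxCalculator, InvoiceService and Invoice are hypothetical names, and the builder is the one from above):

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import java.math.BigDecimal;
import org.junit.Test;

public class InvoiceServiceTest {
    @Test
    public void addsTaxFromCalculator() {
        // Stub only the behaviour this test depends on, so the
        // complex tax rules behave predictably.
        TaxCalculator taxCalculator = mock(TaxCalculator.class);
        when(taxCalculator.taxFor(any(Order.class)))
                .thenReturn(new BigDecimal("7.50"));

        InvoiceService invoiceService = new InvoiceService(taxCalculator);
        Invoice invoice = invoiceService.invoiceFor(
                new OrderBuilder().withTotal(new BigDecimal("100.00")).build());

        assertEquals(new BigDecimal("107.50"), invoice.total());
    }
}
```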
For me, the purpose of writing unit tests is not really for regression testing. It helps me think about the behaviour of the code, its responsibilities and how it uses other pieces of code to help it do what it does. It provides documentation for other developers, and helps me keep my design clean. I think of them as examples of how you can use a piece of code, why it's valuable and the kind of behaviour you can expect from it.
By thinking of them this way I tend to write tests which make the code easy and safe to change, rather than pinning it down so nobody can break it. I've found that focusing on mocking everything out, especially domain objects, can cause quite brittle tests.
The purpose of TDD is not testing. If you want to test something you can get a tester to look at it manually. The only reason that testers can't do that every time is because we keep changing the code, so the purpose of TDD is to make the code easy to change. If your TDD isn't making things easier for you, find a different way to do it.
I've had some good experiences with mock objects and unit testing on projects where there was a lot of upfront design and a comfortable timeline to work with -- unfortunately, that is a luxury most companies won't risk paying for. GTD and GTDF methodologies really don't help the problem either, as they put developers on a release treadmill.
The big problem with unit tests is that without buy-in from the whole team, one developer looks at the code through rose-colored glasses and (through no fault of their own) implements only the happy-path tests they can think of. Unit tests don't always get maintained as well as they should, because corner cases slip by and not everyone drinks the Kool-Aid. Testing is a very different mindset from coming up with the algorithms, and many developers really just don't know how to think that way.
When iterations and development cycles are tight, I find myself gaining more confidence in code quality by relying on static analysis and complexity tools (FindBugs, PMD, Clang/LLVM, etc.). Even if the findings are in areas you can't directly address, you can flag them as landmines and better judge the risk of implementing new features in that area.
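As one illustration of the "landmine" idea: PMD honours @SuppressWarnings with a rule name, so a suppression can double as a documented risk marker instead of silently hiding the finding. The class and method here are hypothetical:

```java
public class LegacyRateEngine {

    // LANDMINE: cyclomatic complexity is too high to fix safely this
    // release. Treat changes here as high-risk and lean on integration
    // tests for coverage.
    @SuppressWarnings("PMD.CyclomaticComplexity")
    public Rate calculate(Policy policy) {
        // ... long, branchy legacy logic elided in this sketch ...
        return null; // placeholder
    }
}
```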
If you find that mocking/stubbing is painful and takes a long time, then you probably have a design that is not made for unit testing. Then you either refactor or live with it.
I would refactor.
I have a large application and have no trouble writing unit tests; when I do run into trouble, I know it's time to refactor.
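The usual shape of that refactoring, sketched with hypothetical names, is to replace a hard-wired dependency with constructor injection so a test can hand in a stub or mock instead of a real database:

```java
import java.math.BigDecimal;

// Before (hard to test), the repository was created internally,
// dragging a real database connection into every unit test:
//
//     private final CustomerRepository repository = new JdbcCustomerRepository();

// After: the dependency is injected. CustomerRepository and Customer
// are hypothetical names for illustration.
public class CustomerPricing {
    private final CustomerRepository repository;

    public CustomerPricing(CustomerRepository repository) {
        this.repository = repository;
    }

    public BigDecimal priceFor(String customerId) {
        Customer customer = repository.findById(customerId);
        return customer.baseRate();
    }
}
```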
Of course there is nothing wrong with integration tests. I actually have those too, to test the DAL and other parts of the application.
All the automated tests should form a whole; unit tests are just one part of that.
Yes, they do. Quite extensively.
The hard part is getting the discipline in place to write clean code - and (the even harder part) the discipline to chip away at bad code by refactoring as you go.
I've worked in one of the world's biggest banks on a project that has been used from New York, London, Paris and Tokyo. It used mocks very well and through a lot of discipline we had pretty clean code.
I doubt that mocks are the problem; they're just a fairly simple tool. If you have to rely on them super-heavily, say it looks like you need mocks of mocks returning mocks, then something has gone wrong with the test or the code...
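For what that smell looks like in practice, here is a rough Mockito sketch (inside a test method; Account and Statement are hypothetical types):

```java
// Smell: a mock returning a mock. The test now mirrors the object
// graph instead of the behaviour under test.
Account account = mock(Account.class);
Statement statement = mock(Statement.class);
when(account.latestStatement()).thenReturn(statement);
when(statement.balance()).thenReturn(42L);

// Usually better: have the mocked collaborator return a real,
// simply-constructed value object, so only one boundary is faked.
Account account2 = mock(Account.class);
when(account2.latestStatement()).thenReturn(new Statement(42L));
```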