To mock or not to mock? [closed]
As far as I know from eXtreme Programming and unit testing, tests should be written by one developer before another developer implements the tested method (or by the same developer, but in any case the test must be written before the method implementation).
Ok, that seems fine: we just need to test whether a method behaves correctly when we give it some parameters.
But the difference between theory and practice is that in theory there is no difference, while in practice there is...
The first time I tried to write tests, I found it difficult in some cases because of the relations between objects. I discovered the practice of mocking and found it very useful, but some concepts still make me doubt.
First, mocking implicitly says: "You know how the method works, because you must know which other objects it needs...". Well, in theory it's my friend Bob who writes the test, and all he knows is that the method must return true when given the string "john"... It's me who implements this method, using a DAO to access a database instead of a hashtable in memory...
How will my poor friend Bob write his test? He would have to anticipate my work...
Ok, that doesn't seem to be the pure theory, but no matter. Yet if I look at the documentation of many mock frameworks, they let me check how many times a method is called, and in what order! Ouch...
But if my friend Bob must test the method like that to ensure it uses its dependencies correctly, then the method must be written before the test, mustn't it?
Hum... Help my friend Bob...
When should we stop using mock mechanisms (order verification and so on)? When are mock mechanisms useful? Theory, practice and mocks: what is the best balance?
What you seem to be missing from your description is the concept of separating contract from implementation. In C# and Java, we have interfaces. In C++, a class composed only of pure virtual functions can fill this role. These aren't strictly necessary, but they help establish the logical separation.
So instead of the confusion you seem to be experiencing, practice goes more like this: Bob writes the unit tests for one particular class/unit of functionality. In doing so, he defines one or more interfaces (contracts) for other classes/units that will be needed to support this one. Instead of having to write those right now, he fills them in with mock objects that provide the indirect input and output required by his test and the system under test.
Thus the output of a set of unit tests is not just the tests that drive development of a single unit, but also the contracts that must be implemented by other code to support the unit currently under development.
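As a hedged illustration of that workflow (the names UserService and UserRepository, and the use of JUnit 4 and Mockito, are assumptions made for this sketch, not anything mandated by the answer): Bob writes the test first, which forces him to define the UserRepository contract, and a mock stands in for the implementation that will be written later.

```java
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Contract defined while writing the test; no real implementation exists yet.
interface UserRepository {
    boolean exists(String username);
}

// The unit under test depends only on the contract, not on a concrete DAO.
class UserService {
    private final UserRepository repository;

    UserService(UserRepository repository) {
        this.repository = repository;
    }

    boolean isKnownUser(String username) {
        return repository.exists(username);
    }
}

public class UserServiceTest {
    @Test
    public void knownUserIsAccepted() {
        // The mock fills in the contract, so the test can run before any DAO is written.
        UserRepository repository = mock(UserRepository.class);
        when(repository.exists("john")).thenReturn(true);

        UserService service = new UserService(repository);

        assertTrue(service.isKnownUser("john"));
    }
}
```

The concrete DAO (or in-memory hashtable) that eventually implements UserRepository can be written afterwards; the test only pins down the contract and the behavior of the unit that uses it.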
I'm not sure if I understand your question.
Use mocks to verify collaboration between objects. For example, suppose you have a Login() method that takes a username and password, and suppose you want this method to log failed login attempts. In your unit test you would create a mock Logger object and set an expectation on it that it will be called. Then you would dependency-inject it into your login class and call your Login() method with a bad username and password to trigger a log message.
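A minimal sketch of that scenario, assuming JUnit 4 and Mockito are available; the LoginService, UserStore and Logger types are made up for illustration:

```java
import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

interface Logger {
    void logFailedLogin(String username);
}

interface UserStore {
    boolean isValid(String username, String password);
}

class LoginService {
    private final UserStore users;
    private final Logger logger;

    LoginService(UserStore users, Logger logger) {
        this.users = users;
        this.logger = logger;
    }

    boolean login(String username, String password) {
        if (users.isValid(username, password)) {
            return true;
        }
        logger.logFailedLogin(username);
        return false;
    }
}

public class LoginServiceTest {
    @Test
    public void failedLoginIsLogged() {
        UserStore users = mock(UserStore.class);
        Logger logger = mock(Logger.class);
        when(users.isValid(anyString(), anyString())).thenReturn(false);

        LoginService service = new LoginService(users, logger);

        assertFalse(service.login("john", "wrong-password"));
        // The collaboration is the thing under test: the logger must have been told.
        verify(logger).logFailedLogin("john");
    }
}
```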
The other tool in your unit-testing tool bag is the stub. Use stubs when you're not testing collaborations, but only need to fake dependencies in order to get your class under test to run.
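By contrast, a stub just feeds canned answers into the class under test and nothing is ever verified on it. A small sketch, reusing the hypothetical LoginService, UserStore and Logger from the snippet above:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// A hand-rolled stub: it exists only so the class under test can run;
// the test never verifies anything on it.
class InMemoryUserStore implements UserStore {
    @Override
    public boolean isValid(String username, String password) {
        // Canned answer, good enough to drive the class under test.
        return "john".equals(username) && "secret".equals(password);
    }
}

public class LoginServiceStubTest {
    @Test
    public void successfulLoginReturnsTrue() {
        // No mocking framework needed; the stub silently satisfies the dependency.
        LoginService service = new LoginService(new InMemoryUserStore(), username -> { });
        assertTrue(service.login("john", "secret"));
    }
}
```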
Roy Osherove, the author of The Art of Unit Testing, has a good video on mocks: TDD - Understanding Mock Objects
Also, I recommend going to his website http://artofunittesting.com/ and watching the free videos on the right side under the heading "Unit Testing Videos".
When you are writing a unit test you are testing the outcome and/or behavior of the class under test against an expected outcome and/or behavior.
Expectations can change over time as you develop the class - new requirements can come in that change how the class should behave or what the outcome of calling a particular method is. Nothing is set in stone; the unit tests and the class under test evolve together.
Initially you might start out with just a few basic tests at a very granular level, which then evolve into more and more tests, some of which might be very specific to the actual implementation of your class under test (at least as far as the observable behavior of that class is concerned).
To some degree you can write many of your tests against a raw stub of your class under test, one that exposes the expected behavior but mostly has no implementation yet. Then you can refactor/develop the class "for real".
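For instance, such a "raw stub" of the class under test might look like the following sketch (the PasswordPolicy name is purely illustrative): it compiles, so the tests can already be written and executed against it, and they will all fail until the real behavior is filled in.

```java
// Skeleton ("raw stub") of the class under test: it compiles and exposes the
// intended API, but every method still awaits its real implementation.
class PasswordPolicy {
    boolean isAcceptable(String candidate) {
        throw new UnsupportedOperationException("not implemented yet");
    }
}
```

Tests written against this skeleton start out red; implementing the method drives them to green, and from there both the tests and the class keep evolving together.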
In my opinion, though, it is a pipe dream to write all your tests up front and then fully develop the class - in my experience both the tests and the class under test evolve together. Both can also be written by the same developer.
Then again, I am certainly not a TDD purist; I'm just trying to get the most out of unit tests in a pragmatic way.
I'm not sure what exactly the problem is, so I may not answer the question accurately, but I'll give it a try.
Suppose you are writing system A, where A needs to get data (let's say a String, for simplicity) from a provider B, and then A reverses that String and sends it to another system C.
B and C are provided to you, and they are actually interfaces; the implementations in real life might be BImpl and CImpl.
For the purposes of your work, you know that you need to call readData() on system B and sendData(String) on system C. Your friend Bob should know that as well: you shouldn't send the data before you get it, and if you get "abcd" you should send "dcba".
It looks like both you and Bob should know this; he writes the tests and you write the code... where is the problem with that?
Of course real life is more complicated, but you should still be able to model it with simple interactions that you can unit test.
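For example, here is one possible sketch of that A/B/C interaction in Java, assuming JUnit 4 and Mockito; the interface and class names simply mirror the description above and are not taken from any real system:

```java
import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.mockito.InOrder;

// The provider contract.
interface B {
    String readData();
}

// The consumer contract.
interface C {
    void sendData(String data);
}

class A {
    private final B provider;
    private final C consumer;

    A(B provider, C consumer) {
        this.provider = provider;
        this.consumer = consumer;
    }

    void run() {
        String data = provider.readData();
        // Reverse the string before forwarding it.
        consumer.sendData(new StringBuilder(data).reverse().toString());
    }
}

public class ATest {
    @Test
    public void reversesAndForwardsInTheRightOrder() {
        B b = mock(B.class);
        C c = mock(C.class);
        when(b.readData()).thenReturn("abcd");

        new A(b, c).run();

        // Verify the collaboration: read first, then send the reversed string.
        InOrder callOrder = inOrder(b, c);
        callOrder.verify(b).readData();
        callOrder.verify(c).sendData("dcba");
    }
}
```

Bob can write this test from the agreed contracts alone (readData() and sendData(String)); whether your real BImpl reads from a database or a hashtable in memory never appears in the test.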