Is it common or even a good idea to release source code for automated tests with a closed-source app?

Please point me to any dupes or better places to post this question that you may find.

I have never sold software before, but when I put myself in my customers' shoes, I think, "I sure would like to see the source code for the automated tests of any software that I buy. It would serve as awesomely detailed documentation for the software I'm buying! In fact, I'd love to see the tests as a proof of concept before I buy, if possible."

From the seller's perspective, I see no reason to hide the test source code (as long as the tests only access the binaries through "front doors" and contain no sensitive data). We could even release the tests as open source, both to get contributions from anyone willing to help and to prove to potential customers, in detail, what the software can do.

So... Is it customary, or even a good idea, to let users see (some) automated test code, or even to release it as open source, for a closed-source application that you are selling?

EDIT: Thank you all for the insightful comments. I should clarify that my goal is not to say "Look at how high-quality my software is!" but rather "This is how you use the software" (using tests as documentation). This is for an API; no UI is involved. I just want to demonstrate how to use the public interface, not the internal workings. I really hate demo apps that are very long but show only the "happy path" through an API, so I was looking for a way to improve on that.
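To make that concrete, here is the kind of test-as-documentation I have in mind, as a minimal sketch in Python's unittest. The RateLimiter class is a made-up stand-in for whatever the real API exposes; only the test style matters:

```python
import unittest


class RateLimiter:
    """Made-up public API: allows `limit` calls per window."""

    def __init__(self, limit):
        if limit < 1:
            raise ValueError("limit must be >= 1")
        self.limit = limit
        self.calls = 0

    def acquire(self):
        if self.calls >= self.limit:
            raise RuntimeError("rate limit exceeded")
        self.calls += 1


class RateLimiterUsageTest(unittest.TestCase):
    # Happy path: the intended, documented use.
    def test_allows_calls_up_to_the_limit(self):
        limiter = RateLimiter(limit=2)
        limiter.acquire()
        limiter.acquire()  # second call is still within the limit

    # Unhappy paths: exactly what long demo apps tend to leave out.
    def test_rejects_calls_beyond_the_limit(self):
        limiter = RateLimiter(limit=1)
        limiter.acquire()
        with self.assertRaises(RuntimeError):
            limiter.acquire()

    def test_rejects_nonsensical_configuration(self):
        with self.assertRaises(ValueError):
            RateLimiter(limit=0)


if __name__ == "__main__":
    unittest.main()
```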


This idea may be good for the customer, but it's not good for you. It's nearly impossible to create a suite of automated tests that doesn't exploit some accidental, undocumented property of your implementation that you'd like the freedom to change later. Once you let those tests out the door, you are essentially guaranteeing that they will pass against any future version of the software. You don't want to lock yourself into a straitjacket like this.
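To illustrate, here is a minimal Python sketch of how that happens. Assume find_matches is hypothetical, part of your public API, and its documentation only promises to return all matching records, saying nothing about order:

```python
def find_matches(records, term):
    """Hypothetical public API: returns all records containing `term`.

    The documentation promises nothing about the order of results.
    """
    # Today's implementation happens to preserve input order.
    return [r for r in records if term in r]


def test_find_matches():
    records = ["alpha", "beta", "alphabet"]

    # BAD: this shipped test pins down an exact ordering that the
    # documentation never promised. If a future version returns
    # matches sorted by relevance instead, the test breaks even
    # though the public contract is intact.
    assert find_matches(records, "alpha") == ["alpha", "alphabet"]

    # BETTER: assert only the documented contract (the set of matches).
    assert set(find_matches(records, "alpha")) == {"alpha", "alphabet"}


if __name__ == "__main__":
    test_find_matches()
    print("ok")
```

Once customers hold the BAD assertion, reordering the results becomes a breaking change for them, even though you never documented any order.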

In theory, of course, you could have a set of tests that would test exactly what is exposed in the public interfaces, no more and no less. But such a test suite would be very expensive to create and maintain as the software evolves. And the very idea brings to mind one of the minor apocrypha:

In theory, theory and practice are the same. In practice, they aren't.


It will also expose you to people copying your implementation, because your test suite gives them an accurate way to check their clone for compatibility.

It may make sense to release tests for APIs or areas that are designed to be exposed, but if the product is closed source, revealing clues to the inner workings would seem to work at cross purposes.


It is neither customary nor a good idea to let users see the test results from any automated testing.

Think of it like going to a doctor. The doctor may run any number of tests on you by sending blood or whatever to a lab. The lab processes it and sends the results back to the doctor.

If you saw those results before talking to the doctor, you might completely misinterpret them, whereas the doctor is trained to understand which values are inside or outside the normal range and, more importantly, what "normal" really is for you.

The same thing applies here. You might have a set of tests which consistently fail. An end user will only see the failure, without understanding that those particular tests don't affect them at all. For example, say you have a section of your code base which isn't complete or ready for production use, but you already have tests set up to stress that code, knowing that you're going to finish that area next month.

In that situation, would you want a current customer to tell a prospective one that 5% of your tests fail? Or would you rather have your current customer say, "Everything I use works perfectly"?
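To make that scenario concrete, here is a minimal Python sketch; the export functions are hypothetical. The vendor knows the PDF test fails today and has marked it accordingly, but a customer reading the raw results would still just see red:

```python
import unittest


def export_csv(rows):
    """Shipped feature (hypothetical)."""
    return "\n".join(",".join(map(str, row)) for row in rows)


def export_pdf(rows):
    """Scheduled for next month's release (hypothetical)."""
    raise NotImplementedError("PDF export isn't built yet")


class ExportTests(unittest.TestCase):
    def test_csv_export(self):
        self.assertEqual(export_csv([[1, 2], [3, 4]]), "1,2\n3,4")

    # Known-failing: written ahead of the feature it stresses.
    @unittest.expectedFailure
    def test_pdf_export(self):
        export_pdf([[1, 2]])


if __name__ == "__main__":
    unittest.main()
```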

-- Just to add one more thing --
End users have a tendency to view any minor failure as meaning your entire application is broken. The only frame of reference most people have is their car: if the battery is dead, then the entire car is broken.

This will lead to a lot of frustration on your part, especially if you have a test against some edge case that may not even be reproducible in production. All the user is going to see is that something is broken, and they therefore won't trust the entire app to function correctly.


Not customary; I've never seen this done for out-of-the-box software.

BUT, I've seen clients (in the more scientific and/or engineering domains) who will "certify" the software with their own set of tests and data, to be certain the software does not deviate from their own standards (which may differ from what the developer sets).

When the certification is done (and accepted), the new version of the software can be put into production.


Instead of that, do your own automated testing and display the results publicly. A hundred check marks in a row will certainly raise the perceived reliability of your software.
