
Test Case Design and responsibility of Testers, Developers, Customers [closed]

Closed. This question is off-topic. It is not currently accepting answers.


Closed 9 years ago.


So it seems like a lot of people are playing the blame game around where I work, and it brings up an interesting question.

Knowns:

The requirements team writes requirements for the product. Developers create their own unit tests according to the requirements. The testing team creates test conditions, test designs, and test cases according to the requirements.

The product is released if and only if X% of the test cases from the testing team pass.

After delivery, the customer does acceptance tests. The customer response team then gets bug reports from the field and lets the testing team know about these issues.

Question:

If the customer ends up filing a lot of defects, who is to blame? Is it the Testing team for not covering those? Or is it the requirements team for not writing better requirements? And how does one improve upon the system?


The statement "Product released if and only if X% of test cases from Testing team passes" really bothers me. The team may want to consider release criteria that are gated on more than just test pass rates. For example:

- Are the scenarios known, understood, accounted for, and tested?
- Certainly not all bugs will be fixed, but have the ones that were postponed or not fixed been triaged correctly?
- Have you reached your stress-testing and performance goals?
- Have you threat-modelled the product and accounted for mitigations to potential threats?
- Have enough customers (internal or external) deployed builds and provided feedback prior to release (i.e., "dogfooding")?
- Do the developers understand the bugs coming in from the field, and do the testers create regression tests for them?
- Does the requirements team look at these incoming bugs to understand why the scenarios weren't accounted for?
- Are there key integration points between features that weren't accounted for in the specs, in development, or in testing?
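To make that concrete, here is a minimal sketch of what a release gate that weighs several criteria (rather than the pass rate alone) might look like. The criteria names, thresholds, and metrics below are hypothetical illustrations, not anything prescribed by the question.

```python
from dataclasses import dataclass


@dataclass
class ReleaseMetrics:
    test_pass_rate: float        # fraction of test cases passing (0.0-1.0)
    open_sev1_bugs: int          # unresolved severity-1 defects
    untriaged_bugs: int          # postponed/unfixed bugs not yet triaged
    perf_goals_met: bool         # stress/performance targets reached
    threat_model_reviewed: bool  # mitigations for known threats reviewed
    dogfood_deployments: int     # internal/external customers running builds


def ready_to_release(m: ReleaseMetrics) -> bool:
    """Return True only if every release criterion is satisfied."""
    return (
        m.test_pass_rate >= 0.95
        and m.open_sev1_bugs == 0
        and m.untriaged_bugs == 0
        and m.perf_goals_met
        and m.threat_model_reviewed
        and m.dogfood_deployments >= 3
    )


if __name__ == "__main__":
    metrics = ReleaseMetrics(0.97, 0, 0, True, True, 5)
    print("Release?", ready_to_release(metrics))
```

The point is not the particular thresholds but that the gate fails if any single criterion is unmet, so a high pass rate alone can never push a build out the door.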

A few suggestions for the team: first, do a postmortem on the issues found, understand where the process broke down, and strive to push quality upstream as much as possible. Make sure the requirements team, developers, and testers communicate frequently and well throughout the planning, development, and testing cycle, so that everyone is on the same page and knows who is doing what. You would be amazed at how much product quality can be gained when people actually talk to each other during development!


Bugs can enter the system at both the requirements and development steps. The requirements team could make some mistakes or over-simplifying assumptions when creating the requirements, and the developers could misinterpret the requirements or make their own assumptions.

To improve things, the customer should sign off on the requirements before development proceeds, and should be involved, at least to some extent, in monitoring development to ensure things are on the right track.


The first question in my mind would be, "how do the defects stack up against the requirements?"

If the requirement reads, "OK button should be blue" and the defect is "OK button is green", I would blame development and test -- clearly, neither read the requirements. On the other hand, if the complaint is, "OK button is not yellow", clearly, there was an issue with requirements gathering or your change-control process.

There's no easy answer to this question. A system can have a large number of defects with responsibility spread between everyone involved in the process -- after all, a "defect" is just another way of saying "unmet customer expectation". Expectations, in themselves, are not always correct.


"Product released if and only if X% of test cases from Testing team passes" - is one the criteria for release. In this case "Coverage of Tests" in written TCs is very important. It needs good review of TCs whether any functionality or scenario is missed or not. If anything is missed in TCs there might possibility to find bugs as some of requirement is not covered in test cases.

Ad-hoc and exploratory testing are also needed to find the bugs that the written test cases do not uncover. The team also needs to define clear exit criteria for testing.

If the customer or client finds a bug or defect, it needs to be investigated:

i) What type of bug is it?
ii) Is there a test case covering it?
iii) If there is, was that test case executed properly?
iv) If there is not, why was it missed from the test cases? And so on.
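A simple triage record that captures the answers to those questions can make the follow-up decision less of a blame game. The sketch below is hypothetical; the gap categories and the sample defect are illustrations, not a standard.

```python
from dataclasses import dataclass
from enum import Enum


class Gap(Enum):
    REQUIREMENT_MISSING = "scenario absent from requirements"
    TEST_CASE_MISSING = "requirement covered but no test case written"
    TEST_NOT_EXECUTED = "test case exists but was not run properly"
    TEST_DID_NOT_CATCH = "test case ran but did not catch the defect"


@dataclass
class FieldDefectTriage:
    defect_id: str
    bug_type: str            # i) what type of bug is it?
    has_test_case: bool      # ii) is there a test case covering it?
    executed_properly: bool  # iii) was that test case executed properly?
    gap: Gap                 # iv) where in the process did it slip through?

    def responsible_area(self) -> str:
        """Map the coverage gap to the team most likely to own the fix."""
        if self.gap is Gap.REQUIREMENT_MISSING:
            return "requirements"
        return "testing"


if __name__ == "__main__":
    triage = FieldDefectTriage("D-42", "functional", False, False,
                               Gap.TEST_CASE_MISSING)
    print(triage.defect_id, "->", triage.gap.value,
          "| owner:", triage.responsible_area())
```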

After the investigation, a decision can be made about who is responsible. If it is a very simple, obvious bug or defect, then the testers are clearly the ones to blame.
