How to integrate QA into the Sprint [closed]
One of the challenges with Scrum is how to fit QA into the process. Sure, QA works with the developers on each individual user story during the Sprint, but what about giving QA time with the fully completed Sprint to do a full regression and load test before releasing into production?
I've seen 2 approaches:
- launch into production on the last day of the Sprint; or
- launch into production a week after the Sprint
Both approaches have their challenges, so I'm wondering what most shops that release every Sprint do.
In my opinion, the ultimate goal with Scrum is to be able to release a new increment after the end of a Sprint. In other words, the result of the Sprint is a releasable increment (not a released increment).
So option #1 seems a bit too early to me (our Product Backlog Items are done at the end of the Sprint, but before the Demo, and we don't include "releasing to production" in our Definition of Done, because this isn't really under our control, this is the job of another team).
And somehow, I think that option #2 means you're not including everything required to be "done done" in your Definition of Done. I'm definitely not saying this is easy: it will very likely take some time to include all the steps required to reach "releasable" in your Definition of Done, and to make the organizational changes necessary to achieve that goal.
Personally, I still haven't really reached this level of fluidity (releasing every Sprint). While part of the QA is done during each Sprint (IST, UAT), we actually release every 4 Sprints of 2 weeks, the last Sprint being a kind of release Sprint with "special" Product Backlog Items like performing load tests, optimizing if necessary (although there aren't many bad surprises now), and writing documentation (for the production team, for users). Shortening the release cycles would require deeper changes that can't be made for now and are actually not desired in our case. Your context is certainly different, of course.
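As a concrete illustration of the cadence described above, 4 Sprints of 2 weeks means a new release roughly every 8 weeks. A minimal sketch (the start date is hypothetical, not the poster's actual calendar):

```python
from datetime import date, timedelta

SPRINT_LENGTH = timedelta(weeks=2)   # 2-week Sprints, as in the answer above
SPRINTS_PER_RELEASE = 4              # release every 4 Sprints; the 4th is a "release Sprint"

def release_dates(first_sprint_start: date, releases: int) -> list:
    """Return the end date of each release cycle (4 Sprints = 8 weeks apart)."""
    cycle = SPRINT_LENGTH * SPRINTS_PER_RELEASE
    return [first_sprint_start + cycle * (i + 1) for i in range(releases)]

# Hypothetical start date for the first Sprint:
dates = release_dates(date(2011, 1, 3), 3)
```

Each release cycle is exactly 56 days; the load-test and documentation work described above would land in the last Sprint of each cycle.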
See also
- Pitfalls of QA during Sprint?
Related questions:
- Best practices for QA / testing in an Agile (Scrum+XP) team?
- Help me understand how QA works in Scrum
- Best ways to fit bug fixing into a Scrum process?
It depends on the industry, the market, and lots of other factors. There is no single answer. Remember, Scrum is a framework and it doesn't fit everyone. I've seen solution #1 in action the most.
At the end of the Sprint you should have a potentially releasable product version. This works very well in small startups or small companies; it's one of their competitive advantages, and QA people can easily be embedded in the team. It can also be achieved in a large corporation when the software is not critical (solution #2).
I've implemented Scrum in a large enterprise where security was critical. Releasing right after the Sprint was impossible due to regulations and certification constraints: you had to go through a long release process in which the developers had to be involved. We had to work with that.
But in most software shops, releasing after the demo is possible, almost in one click. You get the full power of iterative development when you make this possible, and it's a very big competitive advantage.
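The "one click" is usually just a scripted pipeline where each gate must pass before the next runs. A minimal sketch (all step names and the artifact naming are hypothetical; a real pipeline would call your actual build and deploy tooling):

```python
# Hypothetical one-click release pipeline: each step must succeed before the next runs.
def run_regression_tests(build):
    # Stand-in for the full regression suite run at the end of the Sprint.
    return {**build, "tested": True}

def package(build):
    # Stand-in for producing a versioned, deployable artifact.
    return {**build, "artifact": "app-{}.tar.gz".format(build["version"])}

def deploy(build):
    # Refuse to ship anything the earlier gates haven't cleared.
    if not build.get("tested"):
        raise RuntimeError("refusing to deploy an untested build")
    return {**build, "deployed": True}

def release(version):
    """Run the whole pipeline in order; any failing step aborts the release."""
    build = {"version": version}
    for step in (run_regression_tests, package, deploy):
        build = step(build)
    return build

result = release("2.0")
```

The point of scripting it is that the release becomes repeatable and boring: the same gates run every Sprint, so "potentially releasable" and "released" are separated only by pressing the button.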
From a commercial point of view, it can also be good practice not to release at the end of every iteration.
If I may humbly assume that you are in the software industry, the answer to your question, or a solution to your problem, would be to use the Enterprise Scrum model with a solid project release plan and project timeline.
There should be an Operations Support Scrum Team, which may include a DB administrator, an application server administrator, senior QA people, and senior production support analysts. This team would look after the complete QA regression and load testing, release management, code deployment, and other operations support activities for the Development Scrum Teams. The Development Scrum Teams, on the other hand, would just produce the releasable software and put it on the shelf for the Operations Support Team.
In your particular scenario, the Operations Support Team would have items on their Product Backlog for regression testing and load testing the shelved product produced by the development team. (Note: regression testing should ideally also be part of the development process.)
Organizations that release every Sprint need to have the Operations Team lag a week or two behind the development teams. For example, if the Scrum Development Team is working on the Release 2.0 code, the Operations Support Team should be deploying the Release 1.0 code, which the development team finished "shelving" two weeks ago.
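The lag described above is simple arithmetic: with a fixed offset of one Sprint, the release the operations team deploys always trails the one development is building. A toy sketch (the one-major-release-per-Sprint numbering is a hypothetical simplification):

```python
OPS_LAG_SPRINTS = 1  # operations trails development by one 2-week Sprint

def ops_release_for(dev_release, lag=OPS_LAG_SPRINTS):
    """Which release operations deploys while development works on dev_release,
    assuming one major release per Sprint (a simplification for illustration)."""
    return dev_release - lag

# While development builds Release 2.0, operations deploys Release 1.0:
current = ops_release_for(2.0)
```

The offset stays constant Sprint over Sprint, which is what lets both teams plan against the same project-wide release calendar.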
The bottom line is that a release plan needs to be laid out with the correct timelines. There may be a misconception in Scrum that each Scrum team has its own release plan because teams are self-managed and do their own deployments, which can be true to a certain extent, but each team's plan will need to fit into the project-wide release plan, and the timelines adjusted accordingly.
The responsibility for coordinating the timelines and organizing the backlogs according to project timelines would mainly lie with the PO and, of course, the SM, who is responsible for coaching the PO on how to make this work most effectively. So the simple answer is: two weeks is a good gap between a development team and an operations team doing QA or release activities, but timelines and gaps should be adjusted according to project needs.
If you need more details on this, please ask; a conversation would be much easier, but I hope this answers your question. By the way, releasing to PROD the day after the Sprint is over is, in my opinion, a bad idea for the development team, but you can always try it and inspect and adapt ;) One week is a good enough gap, but it depends on how big your application and data are, and how many resources you have.
Thanks, Sid Telang. Certified Scrum Master
My understanding is that it is important to be "done" at the end of the Sprint and not to carry "technical debt" into another Sprint, where people may have to rework previously released software.