
Understanding Scrum [closed]


I have been working as a .NET developer following the waterfall model. On, say, a 12-month project, my team usually goes through Analysis, Design, Coding and Testing phases. But when it comes to following the Scrum process, I don't really understand how to deal with it.

Consider a 4-week sprint with 10 items in the backlog, and let the sprint start now. If developers spend the first 10 days working on some backlog items, I don't know whether testing (both SIT and UAT) can be completed in just the remaining 10 days. That leaves the sprint with no time for last-minute bug fixes, so only a few bugs can be fixed within the planned sprint.

And during development, how can we keep the testing team busy with something other than preparing test cases and waiting for us to deliver the functionality?

This raises the question of whether we need to deliver the first task/feature within the first 3 days of the sprint, so that testers can be ready with their test cases to test that piece.

I also need to educate my client to help them adapt to the Scrum process.

I need some guidelines, references or a case study to make sure that our team follows a proper Scrum process. Any help would be appreciated.


In an ideal Scrum team, testers and developers are part of the same team, and testing occurs in parallel with development; the phases overlap rather than run sequentially (doing things sequentially inside a Sprint is an anti-pattern known as Scrumerfall). And by the way, contrary to some opinions expressed here, a mature Scrum implementation produces DONE-DONE stories, so testing, including SIT and UAT, should be done during the Sprint.

And no, testers don't have to wait for Product Backlog Items (PBIs) to be fully implemented to start doing their job: they can start writing acceptance test scenarios, automating them (e.g. with FitNesse), setting up test data sets, etc. (this takes some time, especially if the business domain is complicated) as soon as the Sprint starts.

Of course, this requires very close collaboration, and releasing interfaces or UI skeletons early will facilitate the job of the testers, but still, testers don't have to wait for a PBI to be fully implemented. And actually, acceptance tests should be used by developers as a DONEness indicator ("I know I'm done when acceptance tests are passing")1.
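As a concrete illustration of what that test-first acceptance automation can look like with Fit/FitNesse, here is a minimal sketch assuming a classic ColumnFixture; the DiscountFixture name, its columns, and the discount rule are illustrative, not from the original answer:

```java
import fit.ColumnFixture;

// Minimal Fit/FitNesse column fixture. Testers fill in a wiki table of
// 'amount' inputs and expected 'discount()' outputs as soon as the Sprint
// starts; the table stays red until developers implement the rule, which
// makes the passing table a natural DONEness indicator.
public class DiscountFixture extends ColumnFixture {
    public double amount;          // input column of the wiki table

    public double discount() {     // expected-value column of the wiki table
        return new DiscountRule().discountFor(amount);
    }
}

// Production stub: throws until the story is actually implemented,
// so the acceptance table fails first and passes when the work is done.
class DiscountRule {
    double discountFor(double amount) {
        throw new UnsupportedOperationException("story not implemented yet");
    }
}
```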

I'm not saying this is easy, but that's what mature (i.e. Lean) Scrum implementations and mature Scrum teams are doing.

I suggest reading Scrum and XP from the Trenches by Henrik Kniberg; it is a very good practical guide.

1 As Mary Poppendieck writes, the job of testers should be to prevent defects (essential), not to find defects (waste).


You definitely don't want to do all development in the first half of the sprint and all testing in the second half. That's just a smaller waterfall.

Your stories and tasks should be broken up into very small, discrete pieces of functionality. (It may take a while to get used to doing this, especially if the software you're working on is a monolithic beast like a previous job of mine that moved to using scrum.) At the beginning of the sprint the testers are developing their tests and the developers are developing their code, and throughout the sprint the tasks and stories are completed and tested. There should be fairly constant interaction between them.

The end of the sprint may feel a bit hectic while you're getting used to the methodology. Developers will feel burdened while they're working on the rest of the code and at the same time being given bugs to fix by the testers. Testers will grow impatient because they see the end of the sprint looming and there's still code that hasn't been tested. There is a learning curve and it will take some getting used to; the business needs to be aware of this.

It's important that the developers and testers really work together to create their estimates, not just add each other's numbers to form a total. The developers need to be aware that they can't plan on coding new features up until the last minute, because that leaves the testers there over the weekend to do their job in a rush, which will end up falling back on the developers to come in and fix stuff, etc.

Some tasks will need to be re-defined along the way. Some stories will fail at the end of the sprint. It's OK, you'll get it in the next sprint. The planning meeting at the start of each sprint is where those stories/tasks will be defined. Remember to be patient with each other and make sure the business is patient with the change in process. It will pay off in the long run, not in the first sprint.


The sprint doesn't end with perfect code; if there are remaining bugs, they can go in the very next sprint, and some of the other items that would have gone in the next sprint will need to be taken out. You're not stopping a sprint with something perfect, but ideally, with something stable.


You are (ironically) applying too much rigor to the process. The whole point of an agile process like scrum is that the schedule is dynamic. After your first sprint, you work with the users/testing team to evaluate the progress. At that point, they will either ask you to change details and features that were delivered in the first sprint, or they will ask you to do more work. It's up to them.

It's only eventually, once you have determined the velocity of the team (i.e. how many stories it can reasonably accomplish in a sprint), that you can start estimating dates and scope for larger projects.
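For instance, velocity-based forecasting is just simple arithmetic over observed sprints; this is a minimal sketch where the numbers and class name are made up for illustration:

```java
// Illustrative velocity forecast: divide remaining work by observed velocity.
public class VelocityForecast {
    public static void main(String[] args) {
        int remainingStoryPoints = 120; // what is left in the product backlog
        int observedVelocity = 20;      // average points completed per sprint so far
        int sprintsNeeded = (int) Math.ceil(remainingStoryPoints / (double) observedVelocity);
        System.out.println("Roughly " + sprintsNeeded + " sprints remaining"); // prints 6
    }
}
```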


First of all, not every Sprint produces a Big Release (if any Sprint does at all). It is entirely acceptable for the first sprints to produce early prototypes or alpha versions, which are not expected to be bug-free but are still capable of demonstrating something to the client. This something may not even be a feature: it can simply be a skeleton UI, just for the user to see what it will look like and how it will work.

Also, developers themselves can (and usually do) write unit tests, so whatever is delivered in a sprint should be in a fairly stable working state. If a new feature is half-baked, the team simply should not deliver it. Big features are supposed to be divided into chunks small enough to fit within a single sprint.
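To make that concrete, here is a minimal JUnit 5 sketch of the kind of unit test that backs a delivered slice of functionality; DiscountCalculator and its 5% rule are purely illustrative:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Illustrative production class: a small, fully finished slice of a story.
class DiscountCalculator {
    double priceAfterDiscount(double amount) {
        return amount >= 100.0 ? amount * 0.95 : amount; // 5% off at 100 or more
    }
}

// Unit tests written by the developers alongside the code, so whatever
// the sprint delivers is in a stable, verified state.
class DiscountCalculatorTest {

    @Test
    void ordersAtThresholdGetFivePercentOff() {
        assertEquals(95.0, new DiscountCalculator().priceAfterDiscount(100.0), 0.001);
    }

    @Test
    void ordersUnderThresholdPayFullPrice() {
        assertEquals(50.0, new DiscountCalculator().priceAfterDiscount(50.0), 0.001);
    }
}
```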


A Scrum team is usually cross-functional, which means the entire team is responsible for building completed pieces of functionality every Sprint. So if the QA testers did not finish the testing, the Scrum team did not finish the testing. Scrum counts on everyone to do their part: whenever a particular skill is needed, the people with that skill take the lead, but everyone has to contribute.


Try to do continuous integration. The team should get into this habit and integrate continuously. In addition, an automated unit test suite built and executed after every check-in/delivery should provide a certain level of confidence in your code base. This practice ensures the team has the code in a working, sane condition at all times, and it enables integration and system testing early in the sprint.

Defining and creating (automated) acceptance tests will keep people with primary QA/testing skills busy and involved right from the sprint start. Make sure this is done in collaboration with the Product Owner(s) so everyone is on the same page and involved.
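Here is a minimal sketch of such an automated acceptance test, assuming JUnit 5 and an in-memory stand-in for the real system; Checkout, the item names, and the @Tag value are all illustrative. A CI server can run the tagged suite on every check-in, after the fast unit-test stage:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// In-memory stand-in for the system under test, so the example is self-contained.
class Checkout {
    private final List<String> cart = new ArrayList<>();
    private final Map<String, List<String>> history = new HashMap<>();

    void addItem(String sku) { cart.add(sku); }

    void placeOrder(String customer) {
        history.computeIfAbsent(customer, c -> new ArrayList<>()).addAll(cart);
        cart.clear();
    }

    List<String> orderHistory(String customer) {
        return history.getOrDefault(customer, List.of());
    }
}

class CheckoutAcceptanceTest {

    // Tagged so CI can run acceptance tests as their own stage; the
    // acceptance criteria themselves come from the Product Owner.
    @Tag("acceptance")
    @Test
    void completedOrderAppearsInOrderHistory() {
        Checkout checkout = new Checkout();
        checkout.addItem("book-123");
        checkout.placeOrder("customer-42");
        assertTrue(checkout.orderHistory("customer-42").contains("book-123"));
    }
}
```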


We started our agile project with developers only (a lot of training in Enterprise Framework, etc.) in the first sprint. Then we added QA slowly in the second sprint. At the end of sprint 2, QA started testing. Closing in on the end of sprint 3, QA had picked up the pace and was more or less alongside the developers. From sprint 4 onwards, QA has been more or less done with testing when the developers complete the stories. The items usually left to test are big elephants involving replication of data between the new and legacy systems, and that is more an "ensure the data is OK" check than actual testing.

We're having some issues with our definition of Done. E.g. we have none. We're working on a completely new version of a system, and now that we're closing in on the end of sprint 6, we're getting ready for deployment to production. Sprint 6 is actually something I would call a small waterfall: we have reduced the number of items to implement to ensure we have enough time to handle potential new issues that come up. We have a code freeze, and developers will basically start on the next sprint and fix issues in the branch if necessary.

The Product Owner is on top of the delivery, so I expect no issues with what we deploy.

I can see that Pascal writes about mature sprint teams and the definition of Done, and agile always focuses on delivering immediately after the sprint has reached its end. However, I'm not sure there are very many teams in the world actually doing this. We're at least not there yet :)


There isn't a testing team in Scrum. It's a development team, which is cross-functional. Scrum discourages specialists in the team so as to avoid dependencies, so the role of a tester is somewhat different in Scrum than in waterfall. That's another debate, but for now let's stick to the question at hand.

I would suggest slicing the stories vertically into tasks as small as you can during the "how" part of the sprint planning meeting. It's recommended to break tasks into units small enough to be completed in a day or two.

Define a DoD at the start of the project and keep refining it. Work on one task at a time and limit work in progress. Work in order of priority and reduce waste in your system. Don't do detailed upfront planning, and delay decisions until the last responsible moment. Introduce technical competencies like BDD and automation.
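As one example of what introducing BDD might look like, here is a sketch assuming Cucumber-JVM with JUnit assertions; the scenario, step names, and pricing rule are illustrative, not prescribed by Scrum:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Step definitions for an illustrative Gherkin scenario:
//
//   Scenario: Returning customer gets a discount
//     Given a returning customer with a 100.0 order
//     When the order is priced
//     Then the customer pays 95.0
//
// The scenario is written with the Product Owner first; the steps below
// bind it to code, so the feature is specified before it is built.
public class PricingSteps {
    private double orderAmount;
    private double pricedAmount;

    @Given("a returning customer with a {double} order")
    public void aReturningCustomerWithAnOrder(double amount) {
        orderAmount = amount;
    }

    @When("the order is priced")
    public void theOrderIsPriced() {
        // Illustrative pricing rule standing in for real production code.
        pricedAmount = orderAmount >= 100.0 ? orderAmount * 0.95 : orderAmount;
    }

    @Then("the customer pays {double}")
    public void theCustomerPays(double expected) {
        assertEquals(expected, pricedAmount, 0.001);
    }
}
```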

And remember that quality is the responsibility of the whole team, so don't worry about testing being done by a dedicated person.

