When is an agile iteration considered complete? [closed]
I am working on a team that is exploring the possibility of adopting agile development practices.
One question that we are running up against is deciding when an iteration (sprint) should complete.
Let's say we've defined our feature backlog and produced story-point estimates for each item, and we have decided that the first 30-day sprint will include features A, B, D, and F. What should you do if, as you reach the end of the sprint, you've completed A, D, and F, but B is only 80% complete? Should you:
1. Complete the sprint on time but exclude feature B (deferring the remaining work to a future sprint).
2. Extend the sprint by the time necessary to complete feature B, but do not start the next sprint.
3. Extend the sprint by the time necessary to complete feature B and begin working on the next sprint.
4. Fail the entire sprint, and bundle all the work into a future release.
The problem I see with option 1 is that the team isn't delivering what it committed to. In some cases, you may not be able to exclude feature B without making the entire release useless (or at least substantially less valuable). It may make it difficult to guide the direction of the next sprint without feature B.
The problem with option 2 is that some members of the team may be idle for a significant period of time - which eats into overall productivity. You may be able to add more unit tests, or polish features, but it doesn't add proportional value. It's also politically hard to explain to management why most of your team is idle.
Option 3 seems to be against the spirit of agile - you are putting the next sprint at risk by not allowing the results of the prior one to guide the next iteration of development.
Option 4 seems too severe and has most of the same problems as Options 1 and 3. First off, you're completely missing a commitment. Second, bundling more features into a subsequent release makes it harder to test and verify with customers, and it again precludes the ability to guide a future iteration based on feedback from prior ones.
Option 1, of course. Your velocity for the next iteration is going to be lower, since it is based on yesterday's weather, so in the next iteration you have a better chance of completing everything.
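To make the "yesterday's weather" idea concrete, here is a minimal sketch in Python of capping the next sprint's commitment at the points actually completed in the previous one. The story names and point values are invented for illustration, not taken from the question.

```python
# Minimal sketch of "yesterday's weather" planning: commit to no more
# story points than were actually completed in the previous sprint.
# Story names and point values below are invented for illustration.

def plan_next_sprint(backlog, completed_velocity):
    """Take backlog items in priority order until adding the next one
    would exceed the velocity completed in the previous sprint."""
    planned, total = [], 0
    for name, points in backlog:
        if total + points > completed_velocity:
            break
        planned.append(name)
        total += points
    return planned, total

# Sprint 1 committed A (8), B (13), D (5), F (8) = 34 points,
# but B was not finished, so only 21 points count as completed.
completed_velocity = 8 + 5 + 8  # A, D, F

# Prioritised backlog for sprint 2; the rest of B carries over at the top.
backlog = [("B (remaining work)", 5), ("C", 8), ("E", 5), ("G", 8)]

print(plan_next_sprint(backlog, completed_velocity))
# -> (['B (remaining work)', 'C', 'E'], 18)
```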
In scrum you are time-boxing. And you only deliver features that work.
During sprint planning you made an estimate of what you could deliver. The customer has to accept a certain level of uncertainty in that estimate, or be prepared to put more resources on the team than strictly needed.
If you miss the target again in the next iteration, switch to a shorter iteration length, and make sure individual features are smaller.
You would normally do option 1 - finish the sprint. Use the completed work, let the unfinished work get reflected in the project velocity - so future planning takes account of the difficulties you experienced.
Yes, option 1 means we didn't finish what we committed to - but if that's what's happened then it's better to admit it and learn to cope better next time than to hide it. Bad stuff happens to everybody - the critical thing is how we improve from where we are.
You could do option 2 - continue finishing the outstanding work by extending the sprint. Only do this if the work is super-high priority to the customer and they explicitly choose to do it. Extending the length of sprints makes them harder to compare with each other - because they're different lengths.
Absolutely never, ever let one sprint merge into the next - either you're extending the old sprint, or you're starting a completely new one. If you let two sprints merge into each other then you're not really doing sprints anymore and planning breaks down...
Can I answer with "It depends"? Plus, I'd like to throw in a "Define complete".
We've had this situation a couple of times and dealt with it differently depending on the circumstances.
As far as I remember in two cases we let the sprint fail. It was actually more of a demo-rejected kind of fail. The features themselves were considered complete by the team, but there were too many open questions, loose ends and little details that popped up during the demo. It would have taken a couple of days to wrap everything up, so we let the sprint fail and took all the open items into the next sprint. We still had a retrospective and sprint planning for new user stories, but there was no integration of code lines and the sprint was officially marked as failed.
In another case we only had a couple of bugs that turned up at the last minute, plus a couple of things left from the user story. We estimated the total work at three days tops and just extended the sprint. That was enough for us to fix the bugs, make a couple of changes, and do a second sprint demo about three days later.
So, it was either option 4 or option 2 for us when it was called for.
There are a few things to consider here. First of all, (and I'm talking Scrum terminology here, because I'm used to it, so feel free to substitute it with whatever fits best) get the ScrumMaster, Product Owner and the team together and discuss the options openly. I don't think there's one way to go. You can stick to pure methodology, but in real life that's not always the best way to go. Sometimes bending the rules a bit helps a lot and makes life easier for everyone.
If you're working well together you should find an option that works for everyone involved. (If you can't, you may have underlying problems anyway.) Don't just drop something on the team without at least discussing it and explaining the reasons why.
Option 3 sounds like the most messy to me, so I'd rule that out.
A lot of people here have argued that option 2 goes against basic agile (or Scrum) rules, but I'd disagree. Scrum explicitly says that you can extend the sprint if called for, just as you can reduce stories or add resources. You shouldn't do it unless absolutely necessary, but as far as I know it's not strictly against the book. In the case where we did it, it was the best solution for everyone, because we still got the sprint done, only three days later, and everyone was very happy with the results. If we had been talking about a week or more, option 2 wouldn't have been appropriate.
I don't really like option 1. Taking half-done stuff out of a working implementation can be really messy. You lose touch with the code that has been written, and integration, frankly, can be a pain. It might be different depending on the way you work, but from my experience, taking code out of a codeline is not something you want to do.
As for option 4, it is pretty harsh, but then again, when communicated correctly it should be okay. The team usually knows when it messed up and won't be able to deliver, so it's not like you're breaking any news to them.
So, there are a few things to consider.
- How much time will it take to get it "done done"? If it's only one or two days, extending your sprint might be the best option.
- How much effort will it be to remove the code that's already there? If it's messy and takes up time, go with option 2 or 4. If it's easy, maybe option 1 is the way to go.
- What does the team think? What does the product owner think? What do others think? Failing a sprint might have an impact on team morale, but it might not.
For an agile project it is important to have a 'Definition of Done'. This is a small checklist of things that need to be done in order to class something as complete. It is not unusual to have different levels of done:
User story - this could include things such as: all tasks associated with the story are complete, all code is checked in and builds successfully with passing unit tests, and acceptance testing has been completed.
Sprint - this could include things such as: all stories for the sprint are 'done' (see above), a retrospective has been held, the product owner has seen a demonstration of the functionality, etc.
Release sprint - the development from the previous series of sprints has been successfully integrated and regression tested, and the functionality has been released into the live environment.
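One lightweight way to keep these levels of done explicit is as simple checklists. The sketch below is a hypothetical Python example only; the items and names are illustrative, and each team should write its own list.

```python
# Hypothetical Definition of Done checklists at three levels.
# The items shown are illustrative examples; each team defines its own.
DEFINITION_OF_DONE = {
    "user_story": [
        "all tasks associated with the story are complete",
        "code is checked in and the build passes with unit tests",
        "acceptance testing has been completed",
    ],
    "sprint": [
        "all stories in the sprint meet the user-story definition of done",
        "a retrospective has been held",
        "the product owner has seen a demo of the functionality",
    ],
    "release_sprint": [
        "work from the previous sprints is integrated and regression tested",
        "the functionality has been released to the live environment",
    ],
}

def is_done(level, checked_items):
    """Something is 'done' at a given level only if every item is ticked."""
    return all(item in checked_items for item in DEFINITION_OF_DONE[level])

# Example: the demo was skipped, so the sprint is not 'done'.
print(is_done("sprint", {
    "all stories in the sprint meet the user-story definition of done",
    "a retrospective has been held",
}))  # -> False
```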
In terms of the 4 options it is less clear-cut. A lot has been said and written about what should and should not be done around failing the sprint, extending the sprint, and excluding some feature or other. I find that with agile projects a lot of pragmatism is required, especially in the first few sprints.
The important thing is not to get hung up on it. Just learn from it, adapt and move on.
I'd take a variation on option 1. If feature B can be broken down into what is completed and what isn't completed, this should lead to a revised set of tasks to complete it for the next sprint. What is finished is delivered, and while the delivery isn't perfect, the idea should be to try one's best and then work on what is next according to priority.
Extending the sprint is a recipe for disaster to my mind. Does completing the feature mean resolving all bugs on it, too? Ever seen software that had zero bugs?
Failing the sprint introduces too much perfectionism into things. Is something that is 99% done worthless? I wouldn't think so, but there are some people that have really high standards and can be pretty demanding.
EDIT: Sometimes a feature is initially given with vague requirements that get clarified over the course of the sprint. For example, a feature request of "As a user, I'd like to place an order" can be broken down further either as part of planning the sprint or during the sprint. In either case, if some stories involving a feature are done, those can and should be presented at the demo, if one is held. The point is to be able to say, "This is where we are. How much of a priority is there on finishing this?" as what might have been urgent before may not be so at the end of the sprint.
First, the rule: iterations are fixed-length time-boxes, and they are complete at the end of the time-box. This eliminates Options 2 and 3. Regarding Option 4, abnormal termination of an iteration may occur under very particular circumstances (certainty that the goal cannot be achieved, an external event invalidates the goal, ...) but this must remain an exceptional event. And before aborting, there are generally other options:
1. Do something else / innovate
2. Get help / outsource
3. Reduce the scope
And this leaves you with Option 1, the obvious choice.
"The problem I see with option 1 is that the team isn't delivering what it committed to. In some cases, you may not be able to exclude feature B without making the entire release useless (or at least substantially less valuable). It may make it difficult to guide the direction of the next sprint without feature B."
If this is true, then either B was more important than A, D and F and you didn't work on the most important items first (which is wrong and shouldn't happen), or A, D and F are actually very valuable, in which case your assumption does not hold and postponing B is not a big problem. So drop B as soon as you realize it won't be done, and see if you can replace it with a smaller item.