I’m not a big fan of estimating stories in sprint planning. I don’t do it, nor do I reestimate stories in sprint planning. I estimate stories in a separate estimating meeting, usually at least a couple of sprints in advance, if not more.

There are a few reasons why (re)estimating stories during sprint planning is a dangerous practice. In sprint planning, we are thinking at a lower level of detail, with far greater knowledge about the story, the code base, and the system than we had when we estimated the rest of the backlog. You cannot correctly estimate such a story relative to other stories that were estimated with far less detail. The practice leads to velocity inflation and risky release planning.
Different Level of Detail
Once we get to sprint planning, we may have implemented a spike related to the story, and we’re digging into the story’s details. We’ve thought about the design of the code for the story. We’re breaking out the tasks. We know a whole lot more about this story than we did when we estimated the rest of the backlog.
So, if I estimate or reestimate the story in sprint planning, I CAN’T estimate it relative to the rest of the backlog. I don’t know that level of detail about the rest of the stories. I’m now thinking at a different level of detail, using a different kind of thinking, than when we estimated the other stories.
We know our estimates are wrong. We knew they were wrong when we estimated in release planning, and we know they are still wrong when we get to sprint planning. That’s why we have velocity — an empirical measure that takes all that wrongness into account.
We don’t need the right estimate to keep us from overcommitting. During sprint planning, we break the stories down into tasks, estimate those tasks, and compare the task estimates against our capacity. It’s that comparison, not points, that keeps us from overcommitting in this sprint. No need to change the estimate.
“So what?” you ask. Bear with me and I’ll tell you why you care.
Greater Knowledge and Skill
If you are making release commitments, if you have a release plan, or if you follow our other advice, then your release backlog was likely estimated many weeks ago. Those estimates didn’t have the benefit of what we know now. Many of them are suspect. Likely wrong. If we allow estimate changes once we hit sprint planning, there will be plenty of them. And the story in front of us is unlikely to be the only wrong estimate. If we discover poor estimates too often, we should reconsider how well groomed the backlog is and whether we should reestimate the whole remaining product backlog.
“So what?” you ask. “I just like my estimates to be right anyway.” Consider how reestimating in sprint planning might affect our velocity, and therefore our release planning…
New estimates are most likely going to be higher, not lower. If an estimate is too big, development is not concerned. They might not mention it. If an estimate is too small, development is very concerned.
Think about how often you change or want to change an estimate in sprint planning. Hold that number in your head for a moment. Maybe it’s one story every other sprint? Maybe more? Perhaps 1 out of 20? Also consider how big of an increase it usually is. Is it from a 3 to a 5? From a 1 to a 2? From a 5 to an 8? Perhaps it’s usually a 1.6x increase?
Think about the rest of the product backlog for a moment. If we tend to increase 5% of the stories 1.6x AT THE LAST MINUTE, then your velocity relative to your remaining product backlog is TOO BIG. The remaining backlog is estimated relative to the original points, not relative to the reestimated points.
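As a rough sketch of that effect in Python (I’m reading “increase 5% of the stories 1.6x” loosely as 5% of the backlog points getting a 1.6x bump, which is my simplifying assumption, not something the numbers above pin down exactly):

```python
# Rough sketch: if a fraction p of the backlog's points gets bumped by a
# factor f at the last minute, measured velocity overstates the rate at
# which the originally-estimated backlog actually burns down.

def velocity_overstatement(p: float, f: float) -> float:
    """Ratio of measured velocity to backlog-relative velocity."""
    return (1 - p) + p * f

# Illustrative figures from the text: 5% of points bumped 1.6x.
print(velocity_overstatement(0.05, 1.6))  # roughly 1.03: ~3% too fast
```

Even a 3% overstatement compounds over a long release: every sprint, you credit yourself with points that the remaining backlog was never priced in.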
Here’s an Example
Suppose we typically do around 10 stories and 20 points per sprint, and the distribution in size is something like [1, 1, 1, 2, 2, 2, 3, 3, 5]. Suppose it’s a 3-point story that becomes a 5. If we complete all those stories, should we say our velocity is 20 or 22? Say we have 400 points remaining in our release backlog. Can we implement 22 points out of that release backlog every sprint? I would say no: we can only knock out 20 backlog points each sprint.
To look at it another way, you think you have 400 points in your backlog, but since you are going to reestimate them on the fly, you really have 440 points to do. You won’t get done in 400/22 iterations. You’ll get done in 440/22 iterations, which is the same as 400/20.
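The arithmetic of this example can be sketched in a few lines of Python (the story sizes and the 400-point backlog are the illustrative numbers from above, not real data):

```python
# Worked version of the example: velocity looks like 22 because of the
# reestimated story, but the remaining backlog is priced in the original
# points, so only 20 original points get done per sprint.

original_points = [1, 1, 1, 2, 2, 2, 3, 3, 5]    # stories as first estimated
reestimated_points = [1, 1, 1, 2, 2, 2, 5, 3, 5]  # the 3 became a 5 in planning

measured_velocity = sum(reestimated_points)  # 22 -- what the burn-down shows
backlog_velocity = sum(original_points)      # 20 -- what the backlog shrinks by

remaining_backlog = 400
optimistic_sprints = remaining_backlog / measured_velocity  # ~18.2, too rosy
realistic_sprints = remaining_backlog / backlog_velocity    # 20

print(measured_velocity, backlog_velocity, realistic_sprints)
```

The gap between 18.2 and 20 sprints is the release-planning risk: the plan looks nearly two sprints shorter than it really is.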
Our team’s ability to produce, their capacity, is stable or at least not increasing rapidly. That is our cash on hand. The price of stories, however, can increase more rapidly due to inflation. Silly analogy: I want to buy a backlog that is originally priced at $400. I can’t pay for it all at once, so I start putting aside some money each pay period. I can put aside $22 per period toward this backlog, but by the time I’ve saved up enough for the original $400, I find that the store has raised the price to $440. Bummer.
If I’m making a release commitment, I want to be absolutely sure I’m not overcommitting. I want to undercommit and overdeliver. I’m going to evaluate risks, reduce risk early, put risk mitigation stories in my backlog (create options), and reserve buffer for those risks I’m accepting instead of mitigating. I’m going to anticipate that I don’t know all the stories we need to do. We haven’t thought of some of the dark matter we’re going to have to deal with, so I reserve a little buffer for that. That’s a reasonable and prudent approach. I absolutely don’t want velocity inflation to consume those little buffers. I need them for what I set them aside for. If I allow velocity inflation to use up my contingency, then I’m running a riskier project than I think I am.
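To make the buffer-consumption point concrete, here is a hypothetical sketch: the 40-point buffer is an invented number, while the 400-point backlog and the 20-versus-22 velocities echo the earlier example.

```python
# Hypothetical illustration of velocity inflation eating contingency.
backlog = 400           # original release backlog points
buffer = 40             # invented: contingency for risks and "dark matter"
true_velocity = 20      # original backlog points actually finished per sprint
inflated_velocity = 22  # what velocity looks like after reestimates

# Sprints we think the committed scope (backlog + buffer) will take:
committed_sprints = (backlog + buffer) / inflated_velocity  # 440 / 22 = 20
# Sprints it will really take at the true backlog burn rate:
needed_sprints = (backlog + buffer) / true_velocity         # 440 / 20 = 22
# The overrun, expressed in points of real capacity:
overrun_points = (needed_sprints - committed_sprints) * true_velocity

print(overrun_points)
```

With these made-up numbers, the two-sprint overrun costs exactly the 40 points of contingency that were reserved for risk: that is the sense in which velocity inflation silently spends the buffer.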
It is for similar reasons that I don’t estimate newly discovered defects or unplanned spikes. If you aren’t careful, you’ll inflate your velocity and underestimate the size of your remaining backlog.