
Don’t Estimate Software Defects


Software Defects = No Estimates

I don’t estimate software defects. Well, I have two exceptions: if I have a backlog of old defects to burn down, I may estimate those; and if I find a new bug that we plan to fix in a later sprint, I may estimate that (though I really don’t like to defer defect fixes). Otherwise, I don’t estimate defects.

Why Not Estimate Software Defects?

Before explaining why, it’s important to pause and consider my context. To get away with not estimating software defects, you must have high quality standards, a strong suite of unit tests, and a habitual practice of TDD. You must also fix all defects as soon as they are discovered, or at most one sprint later. If that’s not your context, stop reading this and go put TDD into place.

Deciding not to estimate software defects, under these conditions, is just easier. It makes for more conservative release and iteration planning to not estimate defects, or at least not include them in velocity. It’s a simple rule, simple to explain.

So, what’s the problem with doing it any other way? If you estimate new defects and include their points in your velocity as you fix them, then you can’t just divide backlog size by velocity to figure out when you’ll be done. By doing this, you are including fixed defects in your velocity while excluding from your backlog the defects you haven’t found yet. It’s safe to assume you’ll find more defects, so your backlog is growing whether you recognize that or not.

Dividing an underestimated backlog by an inflated velocity number is risky planning. Sure, your burn-up chart may show you a projection of the growth of your backlog and the burn-up intercept, but only if you estimate your defects correctly relative to your stories. There’s enough guessing in estimating the rate of growth of your backlog when considering only new story scope increase. Why make it harder by including defects in the computation?

The Challenges of Estimating Software Defects

For the sake of example, let’s say we have one new defect each sprint, and let’s assume they are each 1 point on average. Also assume an initial velocity of 10 (without estimating the defects) and a backlog of 200 points (without any measure for unknown future defects). If I fix each defect without an estimate, I’ll continue to have a stable velocity of 10 and a stable backlog of 200. My math is easy: 200 / 10 = 20 sprints. This is easy to teach. And it’s conservative.

On the other hand, estimating the defect gives me a stable velocity of 11 but a backlog that increases by 1 point each sprint. This math is, let’s see…

The slope is, uh…. something

And we’ve got the intercept, and uh…

Oh, I don’t know.

We can compute this easily if we know the average defect size and the defect arrival rate. But most teams don’t know their average defect size or defect arrival rate. This is harder to teach. It’s even harder to get people to remember and account for an increasing backlog when they’re doing back-of-the-envelope figuring in their heads.
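For the curious, the arithmetic both approaches imply can be sketched in a few lines. This is a minimal illustration (the function name and parameters are mine, not from the article), assuming a constant velocity and a constant defect arrival rate in points per sprint:

```python
def sprints_to_done(backlog, velocity, defect_points_per_sprint):
    """Sprints to empty the backlog when each sprint also adds new defect points.

    Each sprint burns `velocity` points while `defect_points_per_sprint` new
    points arrive, so the closed form is backlog / (velocity - arrival rate).
    """
    net_burn = velocity - defect_points_per_sprint
    if net_burn <= 0:
        raise ValueError("backlog never shrinks: defects arrive as fast as points burn")
    return backlog / net_burn

# The simple rule: don't estimate defects. Velocity stays 10, backlog stays 200.
print(sprints_to_done(200, 10, 0))   # 20.0

# Estimating defects: velocity inflates to 11, but the backlog grows by
# 1 point per sprint. Same answer -- if you remembered to subtract the
# arrival rate, which most back-of-the-envelope figuring won't.
print(sprints_to_done(200, 11, 1))   # 20.0
```

Both calls give the same 20 sprints, which is the article’s point: estimating the defects adds a term you must know and remember to subtract, without changing the answer.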

Additionally, defects are hard to estimate, and defect estimates often don’t behave like story estimates: they have a different variance, a different distribution, and the mean effort per point is typically significantly different than for stories. Relative estimation between a defect and a story is unreliable.


If an organization isn’t ready to start the next release and plans to have the otherwise idle teams fix defects until the Product Owner team is ready, then why estimate the defects?

I lean toward velocity measuring effort put toward value delivered, not toward defects, rework, or research. I do this because I care greatly about my release commitment and budget. If I inflate my velocity with defects and spikes (i.e., things that were unplanned), then I’ll end up over-committing to the release. My velocity number will be higher than the actual known value to be delivered.

Let me remind you that there’s a lot of context here that’s important. If your team has high quality standards and fixes all defects as soon as they’re found, or at most one sprint later, then my approach of not estimating works, because the team doesn’t allow a backlog of defects to accumulate. However, if your team defers defects and intends to fix them later in the release, then my approach is pure poison. In that case, you should estimate all your defects and understand the current arrival rate and the historical defect load per release.

I do make exceptions to this rule and I’ve not always held the same belief. An older post on this subject, Estimating Defects – Using Principled Negotiations, discusses what you might do for a myriad of different contexts.

Bottom line: Track, quantify, and represent the effect of software defects separately.


Comments (5)

  1. scot

    I would say it’s simpler – it’s just not possible to estimate defects.

    The most catastrophic and fatal bugs might be simple one line fixes that you can spot and correct in a minute; the simplest of trivial issues (the ones likely to end up back in your backlog) could be a complex combination of user, server, compiler, and code environments that take days to even isolate.

  2. john

    I would say “Bugs can’t be estimated”.

    When I say Bugs, I don’t mean things that weren’t implemented or are missing, those aren’t bugs, those are missing features.
    I also don’t mean things that were unclear in the requirements…those are unclear requirements.

    To me, Bugs are things you coded and expected to work, and you tested it, and thought it was working, but it isn’t working, and you may or may not know why that is. They can’t be estimated because you don’t know what the problem is.

    Many times I’ve seen someone look into a bug, in order to estimate how long it will take to fix it, but that effort isn’t tracked and is essentially doing the first half of the fix.

    Here’s an old post I wrote about it:

  3. Andrew Fuqua

    A complication in this discussion deals with capitalization. If you capitalize SW development, finding and fixing defects related to the current sprint/quarter/year/period of development is capitalizable work. Fixing defects related to a regression of function capitalized in a prior period, however, is expense. (/Introducing/ that defect, however, IS capitalizable. — Only half joking.)

    So what to do? If you can partition that work out in time (fix all defects the last sprint of a release or clear the defect backlog the 1st sprint of a release) or in space (have a separate team do all non-capitalization work such as training, maintenance, support, defects, etc.) then you can choose to not estimate those regression defects. If you can’t batch those up by time or team, then you could consider tracking the hours (ugh) or estimate the defects (ugh).

