
Estimates or #noestimates… It’s All a Matter of Context

Mike Cottmeyer, Chief Executive Officer

I think I’ve found myself (somewhat accidentally) at the beginning of a series of posts called ‘debates I find useless… let’s move on’. The latest round of discussion that seems to have spiked (at least for me) this week is the whole ‘to estimate or not to estimate’ conversation. The answer to this question clearly falls into the ‘it depends’ category, so if we are having an argument that involves any kind of absolute, we are probably wasting our time.

Even in a domain like commercial, non-governmental software product development… the one LeadingAgile plays in most of the time… there is seldom any one, single way to do anything. I do believe that in this domain most estimates are functionally useless… but understanding what is estimable and what isn’t… and more importantly what makes things un-estimable and why… is a way more useful conversation.

If we decide not to estimate, we better have a credible response to the question… when will you be done and what will I get for my money… because asking someone to spend a bucket of cash on the promise they might get something when the bucket runs out… is usually pretty much a non-starter. There are of course exceptions, but for most companies, the answer to this question is pretty important so we’ll need an alternative approach for solving this problem.

Almost always when someone calls my company… they aren’t really looking for advice on how to innovate or how to build the right product; that hasn’t historically been our brand… they are looking for help using agile to make and meet commitments, get product into market earlier, improve quality, or reduce costs. That’s typically been our sweet spot when it comes to introducing agile into large complex organizations, and solving those problems requires more than just better estimating.

What’s the Problem with Estimating?

The companies that call us want to know how much they need to spend to get a particular outcome. They want to be able to make and meet commitments. They want to be able to manage customer expectations.

To have that conversation, you have to first start looking at why organizations aren’t predictable. Most of the time it’s not so much that they can’t estimate, it’s that they have way too many things going on at one time, they have way too many dependencies, and way too many non-instantly available resources. They believe in optimizing for individual production capacity, which causes them to matrix people across multiple initiatives, which just further exacerbates all the aforementioned problems.

Even once you get past the alignment issues, and you reduce much of the waste getting in the way of delivery, quite often companies don’t really know exactly what they want to build. They don’t really know exactly how they are going to build it. They might not even know who exactly is going to do the work (when they need to pull together the estimate), and we all know that we can see wild swings in throughput and productivity from one developer to the next on the same team.

Even if companies can remove the waste, eliminate bottlenecks, and such… even if they know exactly what to build, how they are going to build it, and have a highly consistent stable of software engineers… developers able to deliver against estimates in a reliable way… quite often the code bases they are working in aren’t covered in tests, have a ton of technical debt and defects, and aren’t generally architected in a way that lends itself to stable delivery throughput.

Is Estimating the Right Problem to Solve?

Here is my take… having established our context and domain… I think that asking how to do better estimates is the wrong question to ask. I don’t think we really have an estimating problem as much as we have an organizational alignment problem, we have an investment strategy problem, and we definitely have a risk management problem. As companies, we are placing critical dollars on investments that have a very low probability of paying off… and we are relying on flawed estimates to mitigate that risk.

That is the problem that is killing us.

We want to use estimates to reduce uncertainty and end up increasing uncertainty.

We want to use estimates to reduce risk and end up increasing risk.

We want to believe that with enough up front planning we can know exactly what we need to build and how we are going to build it. That with enough historical information… or enough up-front analysis… we can determine how long the work is going to take. We want to believe that all developers are the same and that every developer can do everything in the estimate at the same rate as any other developer.

In practice, in my experience, this doesn’t work.

All it does is shift the perceived risk from the business to the development team and everyone loses.

What is the Right Problem to Solve?

We have an interesting paradox to deal with here. This is irrational… but make no mistake… this is the current reality in most software businesses today.

We live in a world where requirements are uncertain, technology is rapidly evolving, people are unpredictable… a world where technology is poorly architected, changes result in unintended consequences, and defects are rampant… AND we have to be able to make and meet commitments with some level of assurance that we can actually solve the business problem within the time and cost allocated to the project.

We figure this out or our companies fail.

Any solution that doesn’t answer the questions of when will we be done and what will we get for our money is a non-starter in most organizations.

#noestimates is a non-starter in most companies.

To solve for this irrationality, there is a ton of thinking that has to change, but at the highest level, you have to tackle this problem on two fundamental dimensions…

1) Do everything you can to optimize for throughput and stable delivery capacity, and…

2) Focus on budgeting and constraints rather than estimates.

What does this mean? Let me explain.

Optimize for Throughput and Stable Delivery Capacity

Well… given the uncertainty of requirements, technology, and people… you have to optimize the systems of delivery to eliminate as much of that variability as possible.

To me, that means creating complete cross-functional teams, teams aligned toward a single set of products, features, or business capabilities. You have to give those teams as much clarity as possible regarding the problem they are trying to solve. They need to be given the tools necessary, and be held accountable for producing a measurable, working tested increment of product on regular intervals. That regular cadence gives us a sampling frequency for assessing progress.

To me, this is where agile really comes in. Well-formed agile teams, working against a known backlog and able to produce a working tested increment of software at the end of every iteration, will begin to stabilize delivery throughput and become predictable, delivering on regular intervals. In a really stable, well-formed team, it is often possible to estimate the backlog and establish a velocity against it.
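To make that concrete, here is a minimal sketch (in Python, with purely hypothetical names and numbers) of the kind of velocity-based projection a stable team makes possible. It illustrates the idea, not any particular tool or formula.

```python
# Illustrative sketch only: forecasting how many more iterations a stable team
# needs, given the points remaining and the velocity observed so far.
# All names and figures here are hypothetical.

def forecast_iterations(remaining_points, velocity_samples):
    """Return an (optimistic, pessimistic) range of iterations left."""
    if not velocity_samples:
        raise ValueError("need at least one completed iteration to forecast")
    fastest = max(velocity_samples)   # best iteration observed so far
    slowest = min(velocity_samples)   # worst iteration observed so far
    return remaining_points / fastest, remaining_points / slowest

# Example: 120 points of backlog left, four iterations of history
low, high = forecast_iterations(120, [22, 18, 25, 20])
print(f"Roughly {low:.0f} to {high:.0f} more iterations")
```

The point isn’t the arithmetic; it’s that a stable, well-formed team is what produces the samples that make any projection like this meaningful in the first place.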

But even with well-formed agile teams and stable delivery throughput… there is still variability in both the requirements space and the solution space. In our problem domain, that isn’t going to be solved easily, so what do you do?

Establish Budgets and Constraints Over Estimates

This is where I think the notion of estimates gets us in trouble. Often we don’t know exactly what to build, or even how we are going to build it, and believe it or not… in our domain that can be a good thing. If our domain is uncertain, we don’t want to pretend that uncertainty doesn’t exist, or worse, force a level of certainty that isn’t good for our product or our company, or that forces us to make early decisions that should be deferred until we have more information.

Given, though, that the business won’t allow software production to be totally open ended… what do we do?

We generally recommend that folks look out across their product roadmap, consider where they need to be with their product in the next 6, 9, or 12 months, and do a very high-level estimate of what they think it will take to get there. At this level of abstraction, you don’t simply consider what you believe it will cost to do the work, but also what you are willing to invest to get the return you are expecting.

At this level of abstraction, it doesn’t have to be exactly right, just directionally correct.

Once you have a high level estimate, stop calling it an estimate. It is a budget. It is a constraint.
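As a rough illustration of that shift in mindset, here is a small sketch (hypothetical figures, nothing prescriptive) of treating the high-level number as a constraint you continuously check the emerging plan against, rather than a prediction you defend.

```python
# Illustrative sketch only: the high-level estimate becomes a budget, and the
# question becomes whether the emerging plan still fits inside it.
# All figures are hypothetical.

def fits_constraints(projected_iterations, iterations_budgeted,
                     cost_per_iteration, dollars_budgeted):
    """True if the current projection fits within both time and cost budgets."""
    projected_cost = projected_iterations * cost_per_iteration
    return (projected_iterations <= iterations_budgeted
            and projected_cost <= dollars_budgeted)

# Example: the roadmap conversation produced a budget of 12 iterations / $600k
if not fits_constraints(projected_iterations=14, iterations_budgeted=12,
                        cost_per_iteration=50_000, dollars_budgeted=600_000):
    print("Over the constraint: cut scope, extend the budget, or kill the project")
```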

As you begin the process of elaborating the requirements, and maturing your emerging understanding of the product you are building, you are no longer asking yourself ‘what are the requirements’ or ‘how am I going to build this’. You are asking yourself what solution you can develop within the time and cost constraints the business has asked you to work within. This is a VERY powerful thinking tool for constraining product development and meeting objectives.

You are adapting the requirements in real time to solve the business problem, and adapting the solution to something that can be built within the constraints that have been established. You invest much of your energy into requirements decomposition, getting the MVP done as early as possible, getting feedback from the delivery teams, aggressively managing the backlog to the constraints, and adjusting to meet business goals. You make tradeoffs at all levels to meet business outcomes.

This is the essence of agile risk management IMO.

Mitigating Risk and Managing Uncertainty

Just to drive a couple of these points home… in the commercial, non-governmental, software product development space… many organizations are not organized well, technology is not architected well, we are making critical high risk investments all the time, competition is fierce and speed to market is essential, and we are dealing with crushing uncertainty in requirements, technology, and people… and given these constraints, sometimes stuff isn’t knowable.

That said, not estimating isn’t an option. Not constraining development isn’t an option. The good news is that in the presence of a stable delivery organization… in the form of complete cross-functional agile teams… teams working against a known backlog and able to produce a working tested increment of product on regular intervals… and in the context of an investment strategy that is based on high-level estimates which quickly become budgets and constraints… we can begin to deliver on plan.

We can now start evaluating progress against our assumptions and actively manage risk to make sure we are optimizing our chances of being successful.

We can begin to enumerate the backlog, making relative size estimates as we go. Since the teams are stable, we can begin to correlate those estimates with what the items actually cost to build. We can use past performance data about the accuracy of our estimates to anticipate the accuracy of future estimates. The further out we have enumerated the backlog, the better we can plan forward. Because we know our progress, we can see where we stand against our business objectives.
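One simple way to picture that correlation (purely a sketch, with made-up data) is a calibration factor: how big items actually turned out relative to how big we said they were, applied to whatever is left in the backlog.

```python
# Illustrative sketch only: using past estimate-vs-actual data to calibrate
# forward estimates. The history below is made up, and a real analysis would
# also look at the spread of the ratios, not just the average.

def calibration_factor(history):
    """history is a list of (estimated, actual) size pairs; returns the
    average ratio of actual size to estimated size."""
    ratios = [actual / estimated for estimated, actual in history if estimated > 0]
    return sum(ratios) / len(ratios)

history = [(5, 6), (3, 3), (8, 11), (5, 4)]   # estimated vs. actual points
factor = calibration_factor(history)           # about 1.09 for this data
remaining_estimate = 120                       # raw estimate of what is left
print(f"Calibrated forward estimate: {remaining_estimate * factor:.0f} points")
```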

When things change, we have clear line of sight to our top-level business objectives, we can assess how the changes will impact what we are trying to accomplish, and we can see whether the emerging size of our backlog is going to impact our ability to meet those objectives within the time and cost constraints we have established. When we learn that things are getting too big, we can either take something out, agree to spend more and go longer, or kill the project.

Sometimes we have bitten off more than we can chew. Sometimes the investment we want to make isn’t really knowable until we get in and start building it. Sometimes we get started, and then we realize we are screwed, and our only viable option is to kill the project, cut our losses, and be thankful that we spent as little money as possible to figure out we were running a fool’s errand. Nothing (in any of this) guarantees success.

We just need to know when to get out as early as possible.

Separating Knowable Stuff and Unknowable Stuff

I think that much of the stuff that we think is unknowable is often more knowable than we think. Just because we don’t know exactly what to build, doesn’t mean we know nothing about what to build. Just because the technology is a mess, doesn’t mean that it’s such a mess that we can’t work with it at some level of abstraction. Just because developer throughput varies, doesn’t mean that we can’t make some planning level assumptions that are reasonably accurate.

That said, I have worked with teams that are inventing new mathematical algorithms for solving problems that have never been solved before. I’ve had conversations with developers where the estimate could be anywhere between 2 hours and 2 months… and maybe even the problem is unsolvable with the team we have available to solve it. Honestly, the team legitimately doesn’t know, and no amount of analysis is going to help them know. They just have to try.

All I want to do is acknowledge these kinds of problems exist, but they aren’t every problem. The approach I’ve taken for problems like these is to isolate them from the stuff that we are better able to understand and deliver. Now… we have a business decision to make… does the product have value even if the problem never gets solved? Is it still worth investing in? Is it worth spending money given the risk that this aspect of the solution could never be solved? Can I afford to assume this business risk?

A big part of our challenge is that we are placing big bets on unknowable things, we are making hard commitments on things that aren’t sufficiently understood. We have to get better at isolating R&D from the stuff that is more well understood product development. If we are going to invest in R&D, the payoff better be big enough that the risk of failure is worth it. We can’t pretend the uncertainty isn’t there and we can’t estimate that uncertainty away.

In Conclusion

To have a rational conversation about what to estimate or how to estimate, look at the nature of the problem we are trying to solve, the nature of the organization chartered to solve it, and the investment and risk profile of what we are trying to build. We need to assess our tolerance for uncertainty. Don’t pretend we can estimate things we can’t, but recognize that some things, under some conditions, can be estimated.

My experience is that there is a ton we can do with organizational alignment, program and portfolio management, investment rationalization, risk management, budgeting and constraints, and limiting WIP and focusing on flow that can dramatically help companies get better at making and meeting commitments… and yes, even better at estimating.

Likewise, I think there are problems that don’t lend themselves to estimating at all, where the real problem isn’t estimating or not estimating, but the allocation of essential dollars to high risk endeavors… which are masquerading as software product development… but profile more like experimental R&D.

You just have to know what business you’re in, build your organization to accommodate those realities, estimate and control what you can, and stop demanding things that can’t conform to your sense of reality. That is, unless you are ready to invest in more advanced project modeling than most organizations I’m aware of are capable of performing.

NOTE: Just realized this is the 600th post on LeadingAgile. That’s kind of a cool milestone. Thanks for being around and keeping up ;-)


Comments (2)

  1. Daniel Zacarias

    Great analysis, Mike.

    I can attest to the relevance of your points.

    On our team, it’s been especially important to use constraints vs. estimation, as well as maintaining a stable team over time (growing from long-time core team members).

    As a sidenote, we’re using Kanban with an indirect estimation method that’s worked out great for us: all backlog items should be doable in two days, at most. We’ve been able to keep the average quite close to this, with reasonable variance. It’s been easier for the team to just break down tasks until they are under 2 days than to look at something bigger and estimate it with any other technique.

  2. Alex Dragon

    Great blog post. This has inspired a direction to take the program I’m working on. We’ve been agonizing over what the right level of estimating and release planning should be.

    This also reinforces my own approach on estimating for my team. Let’s continue to do it but let’s not make it a mathematical equation :)

    Many thanks

