
Stuck on Velocity…

Mike Cottmeyer, Chief Executive Officer

Okay… so it seems I am a little stuck on velocity this week. Maybe with one more post I can get this out of my system and move on to other topics. Let’s talk a little more about what velocity is, what it is not, and how it should best be used by a project team.

Traditional Estimating

Most projects I worked on prior to discovering agile would start with a long list of requirements. The project manager would reach out to each of the resource managers for an estimate. These estimates would be done in real hours and used to calculate how many people were needed for the project, and by extension, how much the project would cost. In my experience, the business almost never liked the estimate, and would always ask us to do something to lower the cost.

As a project manager, I would go back to the resource managers and start the negotiation process. The most extreme example of this I ever participated in lasted almost four months. We negotiated scope, tried to do a bunch of up front design work, made assumptions about the project, and were finally able to cut the estimate down by about 50%. The funny thing was, I had been largely negotiating with each manager individually, and none of them really had much understanding about how the other managers were thinking about solving the problem. Their estimates were fundamentally out of sync.

We literally spent four months, and a bunch of valuable time from critical resources, haggling over a number that (at the end of the day) turned out to be total fiction. The reality was that we had no idea how big the project would actually be. Furthermore, we had no idea which team members would actually be doing the work, so coming up with a reliable end date was nearly impossible. The only thing we could do was create a project plan, with the estimates we had available, and hope for the best.

Agile Estimating

Agile teams do estimate projects, but they use mechanisms that reflect the uncertainty inherent in the process. You have probably heard about the idea of story points and planning poker. This is one of the most common methods of agile estimating, but not the only method. The general idea behind agile estimating is that you start by identifying a small, relatively well-understood requirement and assign it an arbitrary value of one (1). Each feature in your product backlog is evaluated against that well-understood requirement. We ask ourselves, as a team, do we think the new requirement is 2x, 3x, or 5x more complicated than the one we are comparing it against?

The first requirement gets a point value of 1, the next requirement might get a point value of 3, while a third requirement might get a point value of 5; all based on the team’s understanding of the relative complexity of each feature.
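As a rough sketch of the idea, the backlog ends up looking like a list of features with relative sizes attached. The feature names and point values below are purely illustrative, not from any real project:

```python
# Illustrative backlog: each feature is sized relative to a small,
# well-understood baseline feature worth one (1) point.
backlog = {
    "login form": 1,      # the baseline: small and well understood
    "password reset": 3,  # judged roughly 3x the baseline's complexity
    "report builder": 5,  # judged roughly 5x the baseline's complexity
}

# The backlog's total size in points -- an abstract measure of
# relative complexity, not hours.
backlog_size = sum(backlog.values())
```

The point of the exercise is the team conversation that produces the numbers, not the numbers themselves.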

At this point, we have the size of the backlog in points, but it doesn’t really tell us anything. Since points do not equate to hours, we still don’t know how long the project is going to take. Some teams will do detailed estimating on the first few features to get a general idea of the size of each backlog item, but we really won’t know how many points we are able to build each iteration until we start building them.

This is where velocity comes in.

The product owner prioritizes the features based on business value, while the team gets to weigh in based on technical complexity and risk. Collectively, the team sets the build order of the features and gets busy writing software. Each iteration, the team will deliver some number of features, and keep track of the point values assigned to each item. The sum total of all the point values, associated with all completed features, is the team's velocity. It is fundamentally a measure of the team's throughput iteration over iteration.
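Mechanically, velocity is just a per-iteration sum over the completed work. A minimal sketch, with a hypothetical iteration history:

```python
# Hypothetical history: the point values of every feature the team
# actually completed in each iteration.
completed = {
    1: [3, 5, 1],
    2: [5, 3],
    3: [8, 2, 1],
}

# Velocity is the sum of points across completed features,
# iteration over iteration -- a throughput measure, nothing more.
velocity = {iteration: sum(points) for iteration, points in completed.items()}
```

Note that only *completed* features count; partially finished work contributes nothing to velocity.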

Because we know the overall size of the backlog (in story points) and how many of the features the team can build each iteration (in story points), we are able to calculate when the project will be completed:

Iterations to Complete = Backlog Size / Points per Iteration

The key concept here is that traditional estimating almost never results in a reliable project cost indicator. Only by building working software in short cycles, and measuring our progress, will we have the necessary information to know how long the project is going to take. If you didn’t read my last post on progressive estimating, this might be a good time to go back and take a look. Because we won’t know how much we can do, until we actually start doing it, we might have to insert a few checkpoints where the business gets to recalibrate the project or decide to pull the plug.

The traditional view of velocity is that it is specific to the team that is doing the estimate. One team might give a feature three (3) points while the next team might estimate the same feature at a five (5). As long as the team doing the estimate is the one doing the work, this is not a problem. Remember, points are estimates of relative complexity… of relative size. The actual value does not matter. The only thing I care about is the team’s throughput against the overall size of their backlog. It is the only thing I need to know to determine if my project is progressing according to plan.

Normalizing Velocity

That said… what happens if I have many teams working together to deliver a large, complex project? If one team calls a feature a three (3), and the next team calls a similar feature a ten (10), it might get difficult to communicate performance on the overall initiative. The team that uses the larger point values would obscure the performance of the team with the smaller point values.

To answer this question, I have seen many teams normalize a point to a common unit of measure. For example, my team will often think of a point as a single person day. We are still using lightweight estimating techniques, we just use the normalization value as a starting point for conversation. This approach can level the playing field and enhance communication between teams that are working on various parts of the same initiative.
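One way to think about this normalization is as a per-team scaling factor that converts raw points into a shared unit. The teams and factors below are hypothetical, chosen to match the 3-versus-10 example above:

```python
# Hypothetical calibration factors: team_b's points run a bit over
# 3x larger than team_a's for features of similar size.
calibration = {"team_a": 1.0, "team_b": 10 / 3}

def normalized_points(team: str, raw_points: float) -> float:
    """Scale one team's raw points into a shared unit so that
    cross-team roll-ups on the same initiative are comparable."""
    return raw_points / calibration[team]
```

With this, team_b's 10-point feature rolls up as roughly the same size as team_a's 3-point feature, and program-level reporting stops being dominated by whichever team uses bigger numbers.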

While this normalization can be beneficial on large-scale agile implementations, there is some risk.

As soon as we start equating a point to a standard unit of time, we start trying to do math on velocity. The discussion usually starts by asking how we can improve the velocity of the team. Well… if the team has four people that can deliver 20 points per iteration, we could add a fifth person and increase velocity to 25. Or maybe… we find that the same team of four is only delivering 17 points per iteration. We want to know what those deadbeats were doing with our missing three points of capacity.

My personal opinion is that it is perfectly okay to normalize velocity across teams. As managers, we need to recognize what velocity is measuring… and what it is not measuring. Velocity is measuring throughput against an abstract estimate, it is not measuring productivity… and certainly not time spent on task. It is very common to see situations where the point estimate was 20, the detailed estimate was 25, the actual hours spent was 30, and the features delivered was 15. The team can use these numbers to get more consistent estimates, but again… all I care about as a manager is throughput and how I can help the team improve over time.

Even in an environment where we are normalizing velocity, there are many factors that can contribute to velocity variations between teams. Team size is an obvious consideration, but what about skillset and experience? We should consider team chemistry… we should ask if the team is dependent on other organizations to get their stuff done. Maybe they've encountered impediments that have slowed their progress. This is all stuff that contributes to team velocity.

I coach teams to measure velocity so we can measure what is… to get honest information about how the project is progressing… then look to the other factors to determine how to make it better. The last thing we want to do is to inadvertently incent the team to manipulate the numbers to make the data look good.


Comments (7)

  1. Basim Baassiri

    Interesting post Mike, and I agree with you that as soon as you compare a point with a unit of time you’re doomed.

    For estimating from a QA perspective I usually ask myself simple questions to get a rating. For example:
    Does the story specify something never done before, or that your company has not done before?

    I give each question a factor out of ten and add them all up to give me a value. I can easily use this value for points or use the value as an indicator of how much effort is required to test the story.

    I hope I make sense

  2. Mike Cottmeyer

    Makes perfect sense. Thanks for your comment.

  3. S.Vaidya

    Please email me, I would like to talk to you about your articles.

  4. Miles McBain

    Your argument for the necessity of velocity normalisation is a little shaky. You say it is important for understanding progress in complex projects spread over multiple teams. This implies you are measuring the project’s size in story points? I think this is where you have gone wrong. The smallest unit of measure as far as project completeness is concerned is a feature. These are either complete (in a releasable version of the product), or incomplete. To get an accurate picture of progress we should direct our efforts into normalising the feature sizes by breaking them down and understanding them better, rather than teams’ velocities.

    Visibility problems like the one you describe would be much more pronounced in projects with longer release cycles. This would allow larger (and hence less well understood) features to be scheduled into a release without being broken down. So I would suggest shortening your release cycles as a tool you could use to mitigate some of the issues you are attempting to solve by normalising velocities.

  5. Kirsten

    Nice post, I’m enjoying a lot of your articles thanks.

    I’m also not sure on the normalisation process as described; we’ve tried something similar and it doesn’t work. As soon as you say 1 ‘man’ day you ask the question (implicitly) ‘what can YOU get done in one day’ (or 7 hours). Once this has clicked in the estimators’ minds you’ll struggle to keep relative sizing abstract enough.

    There’s another issue with the post (might be specific to me): we work on product development with quite a few startups, and when money men are involved it’s impossible not to provide guidance on budget and release time PRIOR to development beginning. I’m not able to say ‘hey, this is software development, it comes with risk and is tricky to estimate so how about we do a few sprints first?’. I’d love to use your method but I wouldn’t have any work. Better for us (and I think this might ring true for a lot of others) is to use historical data to provide forward guidance on our velocity potential for a given project of similar technical complexity. Additionally, looking at our historical data we usually start much more slowly (in terms of throughput) because we’re discovering much more during the first few sprints. It’s taken us 6 sprints to hit some level of consistency and I’d be loath to use the first few sprints as data points. Better still, you can use the data on historic projects to see how many sprints it takes to get to a consistent velocity and use that data to predict the slower start to a new project.

    Keep on posting!

    • Dennis Stevens

      Kirsten,

      Both points you make are valid and the post agrees with you. Regarding your first point, the post points out that when you tie points to a standard unit of time we start trying to do math on velocity. This is not desirable. It may not be clearly stated, but the post is not recommending using a unit of time to define points. It recognizes that there is risk when we do this.

      Regarding your second point, if you have valid history, you would use points to estimate size and the historical performance to estimate cost and duration. You should absolutely use history to determine the estimates if the data is available. If you don’t have history, or if the history isn’t predictive for some reason (brand new technology, changes to the team, etc.), then you size the project with points, do your best cost and duration estimates, and use the points to track progress. If you were wrong in your anticipated velocity you can find out early. Then you can decide to re-calibrate the project, pull the plug, or go a few more sprints to get to consistency. If the first iterations aren’t consistent then use your judgement in how you verify performance of the project.

      Thanks for the reply.

      Dennis

  6. Kirsten

    Thanks for the reply, Dennis. I agree with all the points in your response to my comment; the latter point is exactly the process we follow. Keep on writing! Kirsten
