
Velocity in the Enterprise, Part 1

Mike Cottmeyer, Chief Executive Officer

Okay… so I am hoping that things have settled down enough and I can get back into my writing groove. It’s amazing how turning your life upside down impacts your ability to write on any kind of a regular schedule. I think I had underestimated how much I depend on having the right environment around me to be creative. Now that the transition from VersionOne to Pillar is pretty much complete… it’s time to start getting back into some sort of routine. This blog sure isn’t going to write itself… I am guessing the book won’t either.

Today I thought we could take another look at team level velocity and project velocity.

More specifically… I want to explore a bit where velocity works and where it doesn’t. I’ve talked quite a lot over the past year about how having a stable velocity is critical for having a well-run and predictable agile project. We haven’t talked much about what it takes to have a well-run and predictable agile project portfolio. About a year ago I started working with company after company, all having the same fundamental problem… team-level velocity wasn’t rolling up into enterprise-level velocity. If we want to have a stable and predictable agile enterprise… the business has to know what it can expect. At every level of the organization, the performance of the teams needs to predict the performance of the enterprise. In many organizations… this just isn’t happening.

Before we can explain why velocity breaks in many organizations… I want to first talk a little bit about why velocity works… and get started on looking at the challenges with measuring velocity across teams.

Why Velocity Works

Velocity is fundamentally a measure of throughput. It’s a measure of how much work the team can deliver iteration over iteration, measured in terms of completed features. The features are estimated using some abstract unit like story points or ideal engineering hours, and the values of all the estimates are added up to determine the size of the backlog. As the team completes a feature, the point value of that feature is subtracted from the total of the estimates and the feature list ‘burns down’ over time.
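To make that arithmetic concrete, here’s a rough sketch in Python. The feature names and point values are invented for illustration, not from any real backlog:

```python
# A rough sketch of backlog size, burndown, and velocity.
# Feature names and point values are made up for illustration.

backlog = {"login": 3, "search": 5, "checkout": 8, "reporting": 5}  # feature -> story points
backlog_size = sum(backlog.values())  # size of the backlog: 21 points

# Features completed (done and defect-free) in this iteration
completed = ["login", "search"]

velocity = sum(backlog[f] for f in completed)  # 8 points this iteration
remaining = backlog_size - velocity            # backlog 'burns down' to 13 points

print(f"velocity: {velocity} points, remaining backlog: {remaining} points")
```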

For this to work as designed, you have to understand a few things. The features you are burning down have to be small and independent. By the end of the iteration, the features have to be totally done and defect-free. There is no going back to readdress a feature without adding new work items to your queue. It is important that the team get good at making and meeting commitments so that the rate of feature completion becomes steady over time. By measuring the rate at which features are ‘burned down’ from the backlog, you can begin to use past performance as a measure of future performance.
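If that completion rate does settle down, the forecast is simple division: what’s left in the backlog over the average of recent velocities. A minimal sketch, again with invented numbers:

```python
import math

# Using past performance as a predictor of future performance (numbers are illustrative).
past_velocities = [8, 10, 9, 9]  # points completed in the last four iterations
average_velocity = sum(past_velocities) / len(past_velocities)  # 9.0 points per iteration

remaining_points = 45  # points left in the backlog
iterations_left = math.ceil(remaining_points / average_velocity)  # 5 iterations

print(f"at ~{average_velocity:.0f} points per iteration, "
      f"expect about {iterations_left} more iterations")
```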

There is a bunch of stuff tucked up behind this idea of backlog, burndown, and velocity. First, velocity builds in the idea that estimates are inherently inaccurate and deemphasizes spending a bunch of time creating detailed estimates up front. Features are estimated in abstract units that are generally defined by the team. In other words, one team’s story point is not the same as another team’s story point. Second, value is implied by the feature’s relative position in the backlog. It is generally assumed that the team is building the highest-value features first.

These factors make measuring enterprise velocity a challenge. You can’t compare velocity between teams because the units of estimation are all over the place. Different teams use different standards, make different assumptions, and have different team members doing the work. Different teams might have more external dependencies and might require skills or people not present in the team. They might require specialized domain knowledge that isn’t always available. In other words, it’s not just that points are different team to team… every team has a different ability to deliver the work.

So… my first challenge with velocity is that it isn’t really a meaningful measurement across teams. If it’s not meaningful across teams… what does that mean for the enterprise roll-up? We’ll explore that a little in my next post.

Next: Velocity in the Enterprise, Part 2

Comments (4)

  1. Scott

    I've always thought of velocity more as a measure of capacity than throughput. I believe they are related, in that constraining capacity will impact throughput, but the velocity number itself seems like capacity to me, whereas Cycle Time would probably more accurately reflect throughput.

  2. Mike Cottmeyer

    I think all these ideas are related. I use a definition for throughput similar to what is on http://www.dictionary.com:

    "the quantity or amount of raw material processed within a given time, esp. the work done by an electronic computer in a given period of time."

In this context… velocity measures the amount of feature work (measured in terms of relative complexity) that can be delivered in a given iteration.

  3. Bob Williams

I don't follow the logic here. You define velocity as a measure of work produced, then expound on the notion of the work estimates. Certainly work estimates are a piece of the total velocity or throughput of the process. However, even outside of the differences in estimating units, tasks require different amounts of time to complete.

    So I see the challenge as defining a standard rate of velocity because each task or feature takes a variable amount of time to complete. You aren't creating widgets from a machine. You are creating custom pieces of software.
