Clients engage companies like ours because they are interested in improving their performance. “Performance” for a corporate IT department means effectiveness in delivering application software functionality to support business objectives, maintaining and operating the technical infrastructure, and handling any issues that arise in production.
Most of the organizations we talk to have no idea how to measure their performance. Leadership has a vague feeling that things could be better, but they can’t quantify what is “wrong.” They feel pressure from business stakeholders to do more, but they aren’t sure more of what, or how much would be enough. Typically, they don’t even know how much they can do right now, let alone how to improve.
Nor are they quite sure what the root causes of the discomfort may be. Do business stakeholders simply expect too much, or is the problem that the IT department lacks the capacity to meet genuine business needs? Those are very different issues, and solving the wrong one would be a waste of time (and, as we all know, time is money).
Some of the popular approaches to achieving agility at scale appear to gloss over this issue; or at least, companies tend to implement them without considering how to measure progress. This often occurs when SAFe or LeSS is implemented, or an organization crafts its own method around agile principles. The focus tends to be on tracking the level of adoption of recommended practices. Tracking actual delivery performance takes a back seat, assuming it is in the car at all. The back seat may be filled with posters, sticky notes, yarn, and snacks, leaving no space for Mr. and Mrs. Metrics to ride.
The result may be a sort of family road trip: The kids are happy at first, occupied with games and snacks, but eventually, they grow cranky and want to get out of the car. This outcome doesn’t resemble what was sold, and organizational leaders become wary of agile scaling methods.
Other approaches are explicit about the need to stabilize current performance and get measurements around it, as a baseline to monitor the effects of improvement efforts. David Anderson’s Kanban Method, Scott Ambler’s Disciplined Agile, and LeadingAgile’s Expedition model all begin by stabilizing the current delivery process and establishing meaningful metrics.
The main theme of LeadingAgile’s Basecamp 1 is to get predictable. That involves stabilizing the existing process and measuring performance, among other things. Organizations then have an objective basis against which to gauge progress toward their performance goals.
Absent this sort of baseline measurement, there’s no practical way to know whether improvement efforts are bearing fruit. Thanks to the Hawthorne Effect, people in the organization may feel enthusiastic about the changes. They will report that things are improving, but this may be due to the fact that something interesting is happening for a change, and that management seems to be actively interested in what’s going on, at last. There may or may not be objective improvement in delivery performance.
It’s been my experience that there are two main reasons to measure delivery performance: to steer planned work, and to see the effects of process improvement efforts. For steering work in progress, it’s fine to use metrics that depend on the methods and processes currently in use. But improving a process means changing it, and in that case process-sensitive metrics won’t help us.
This is axiomatic: The outcomes you achieve are the result of the actions you take; therefore, to achieve different outcomes you must take different actions. To track the effects of improvement efforts we need to select metrics that provide consistent information regardless of the methods and processes the organization uses, because those methods and processes will change.
For example, a traditional organization may use a linear process model to deliver software. Their transformational journey may involve, among other things, switching to an iterative process model. If we measure performance using metrics that depend on a linear process model, we will not obtain dependable information about the effects of switching to a different model. Similarly, if we begin by measuring performance using metrics that depend on an iterative model while the organization is still using a linear model, our baseline measurements will be meaningless.
This is exactly what happens when organizations try to adopt a process such as SAFe or LeSS and measure performance using Velocity at the outset, while the organization still thinks and acts in accordance with assumptions aligned with a linear model. They end up with no usable baseline performance measurements, and no way to gauge improvement.
Sooner or later, the kids are screaming to get out of the car. Some of them just jump out the window (that’s a metaphor for changing jobs; clever, eh?). And the parents may well conclude that family road trips are overrated (that’s a metaphor for leadership saying, “We tried that and it didn’t work”; sometimes I’m so clever it hurts).
I’ve found three basic metrics to be useful in tracking the effects of process change:
- Cycle Time – how long it takes to get something done
- Throughput – how many things you get done in a unit of time
- Process Cycle Efficiency – what proportion of time is spent adding value
All these come from the Lean school of thought. They’ve been adapted from lean manufacturing to suit the realities of software delivery and IT operations. All of them directly measure delivery performance, and none is dependent on any particular methodology or practices. This makes them useful for the purpose of monitoring an organization’s process improvement efforts.
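As a rough illustration (this is a sketch of my own, not anyone’s official tooling, and the work-item records and field names are invented), all three metrics can be derived from nothing more than start/finish timestamps and a count of actively-worked days per completed item — which is exactly why they don’t depend on any particular methodology:

```python
from datetime import date

# Hypothetical completed work items: (start date, finish date,
# days of active value-adding work). All values are invented.
items = [
    (date(2024, 1, 2), date(2024, 1, 10), 3),
    (date(2024, 1, 5), date(2024, 1, 19), 4),
    (date(2024, 1, 8), date(2024, 1, 16), 2),
]

# Cycle Time: how long it takes to get something done
# (calendar days from start to finish), averaged across items.
cycle_times = [(done - start).days for start, done, _ in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: how many things get done per unit of time. Here the
# observation window spans the earliest start to the latest finish.
window_days = (max(done for _, done, _ in items)
               - min(start for start, _, _ in items)).days
throughput_per_week = len(items) / (window_days / 7)

# Process Cycle Efficiency: proportion of elapsed time spent
# adding value (active work days / total elapsed cycle-time days).
pce = sum(active for _, _, active in items) / sum(cycle_times)

print(f"Avg cycle time: {avg_cycle_time:.1f} days")   # 10.0 days
print(f"Throughput: {throughput_per_week:.1f}/week")  # 1.2/week
print(f"PCE: {pce:.0%}")                              # 30%
```

Note that nothing in the calculation cares whether the items flowed through a linear phase-gate process or an iterative one, which is what makes these numbers usable as a baseline across a process change.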
They aren’t particularly hard to understand or difficult to use, but the details are out of scope for a blog post (even the long ones I tend to write). If you aren’t familiar with these metrics, I suggest you do an Internet search for more information or look for a book that covers them. Better still: Give us a call.