The Problem with Agile Specifications
We’ve all been there. Sitting in an uncomfortable feature review where each point of feedback is being taken as an attack on the Delivery Team, who promptly retorts with, “The specifications weren’t clear.” After the meeting, the Program Team and Delivery Team each retreat to their respective corners and discuss what went wrong.
The Program Team feels like they’re being hung out to dry by a Delivery Team who says they don’t want to be order takers, yet immediately shirk any responsibility by just “doing what they are told.”
Their answer: “The Delivery Team needs to start owning the solution.”
On the other hand, the Delivery Team feels like they’re being hung out to dry by a Program Team who’s expecting them to read their minds and hit a constantly moving target.
Their answer: “The Program Team needs to better define the specifications up front.”
What would effectively solve this impasse?
The Goal that Agile Teams Can’t See
I often use a golf metaphor when I coach teams who encounter this problem. Success in golf comes from the ability to reliably “hit the green” and set up an easy putt for par. No one expects to get a hole-in-one every time, or even most of the time. Good golfers evaluate the course and intentionally play a series of shots that give them the best opportunity to make a birdie, with an easy par as backup. They’re seeking the approach shot that will put them in the ideal position on the green to make the birdie attempt.
We know we’ll have to deal with hazards from time to time.
Consistently hitting the green in the right location to set up a birdie opportunity is the key to high performance on the golf course. Likewise, teams that plan their feedback “shots” have the best chance of “hitting the green” and making birdie or par predictably; in other words, delivering value consistently.
The most valuable feedback you can get is working, tested code in production—that is, in front of the customer or user the software is intended to serve. Short of that, aim for working, tested code in an environment as close to production as you can get, one that can provide similar feedback. Remember, everything we build is based on a hypothesis … “If we do X, we will solve problem Y and generate business impact Z.”
We will only know whether the hypothesis was correct once that code is in production and being used by the intended audience.
The obvious question, then, is “How do we get the most effective feedback as soon as possible?”
In Agile, working, tested code is the primary measure of progress; I equate that to “hitting the green.” Getting the Feature into PROD as promised is like playing the round at or under par. You need to be consistently “hitting the green” and giving the team great chances to make “birdie.” There will always be hazards and bunker shots along the way … that’s just the reality in golf and in life. Better to have to scramble occasionally than regularly, to avoid “missing the cut.”
Which brings us back to our meeting. The real reason the meeting went south was unclear expectations, specifically in the shared understanding of acceptance criteria and the nature of the “feedback shots” we’re making.
Note the difference in each team’s focus in their responses: one is focusing on the “what” and the other on the “how.” Neither seems to be looking at the overall outcome, the “why.” The Delivery Team hears the Product Owner (PO) Team’s critiques as a pejorative judgment of “how” (e.g., “We don’t like the way you did this”). The PO Team is seeing the “working code” for the first time, so they may very well be saying “we don’t like the way you did this.” However, it might be: “Now that we see what was built, we realize what we asked for needs to be done a bit differently.” The Program Team feels their value in the meeting is to give feedback so that the feature adds value to the business.
Have Clear Specifications and Expectations
To get the benefits we’re looking for from a collaborative, working relationship between the Program and Delivery Teams we must make the specifications and expectations clear.
Specifically, what feedback are we trying to generate, and, just as importantly, “What does ‘done’ look like at this point in time?”
When we talk about specifications, we should only go as far as to set up the Delivery Team to “hit the green” (e.g. provide the group with working/tested software that will provide the necessary feedback at this point).
When we review in-progress features, the expectation is that the Delivery Team will present a solution that is “on the green” and a short putt away from being in the hole. The goal of the review is to collect the feedback that represents the work of that “short putt.”
If the Delivery Team is constantly “missing the green,” work as needed to build a better shared understanding of the desired outcomes and feedback, expressed in the form of acceptance criteria.
Once the Delivery Team is consistently hitting the green, you may consider backing off the specifications a bit to give the team a little more latitude to creatively solve the problem.
Predictability is defined as the Program Team always getting the Delivery Team into position to consistently “hit the green.” The Delivery Team should have a preponderance of “short putts” … not be scrambling (long putts) or hitting out of hazards to “save par.”