In the absence of done… there is risk.
Agile methods put a high premium on what it means to be done. But if done is so valuable to our projects… just what exactly does done mean? Is there one universally accepted definition of done… or are there varying definitions of done depending on your circumstances? Personally… I have used and coached several definitions of done and am pretty much cool with all of them.
My favorite definition of done was something that I used back in my CheckFree days. We defined done as a feature that was designed, documented, developed, tested, accepted, ran on the UAT server, could be run from the Product Owner’s laptop, and that the Product Owner was proud to show to their customer. We did not specify 100% test coverage or that the software was released to production… were we done?
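One way to make a definition like that concrete is to treat it as a binary checklist: the feature is done only when every agreed-upon criterion holds, and the unchecked criteria are the risk you are absorbing. Here is a minimal sketch of that idea in Python — the criterion names loosely follow the CheckFree list above, but the code itself is purely illustrative, not any team's actual tooling:

```python
# A Definition of Done modeled as a binary checklist: a feature is "done"
# only if every agreed-upon criterion has been checked off.
# Criterion names are illustrative, loosely following the CheckFree list.

DEFINITION_OF_DONE = [
    "designed",
    "documented",
    "developed",
    "tested",
    "accepted",
    "runs_on_uat_server",
    "runs_from_po_laptop",
    "po_proud_to_demo",
]

def is_done(feature_status: dict) -> bool:
    """Done is binary: every criterion must be satisfied."""
    return all(feature_status.get(c, False) for c in DEFINITION_OF_DONE)

def remaining_risk(feature_status: dict) -> list:
    """The unchecked criteria are the risk absorbed by calling it done anyway."""
    return [c for c in DEFINITION_OF_DONE if not feature_status.get(c, False)]

# Example: everything is finished except the Product Owner demo criterion.
status = {c: True for c in DEFINITION_OF_DONE}
status["po_proud_to_demo"] = False

print(is_done(status))         # False
print(remaining_risk(status))  # ['po_proud_to_demo']
```

The point of the sketch is that "done" collapses to a single yes/no answer, while the list of unmet criteria names exactly what risk is left on the table.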
What about the situation where a team is transitioning to agile and trying to iterate through a big up-front design document, ultimately getting ready for a big back-end test effort after all the code has been written? That team defined done as working software with 100% test coverage, deployed to the alpha environment. Were they done?
Here is a situation I was talking through today. What if you are developing a feature that has dependencies on several component teams across a large, complex enterprise? If one of the component teams delivers an API that is fully designed, documented, developed, tested, meets the specification, and is ready to be integrated with the other components into the larger feature… is the component team done? What about the feature team?
Done can mean lots of things… and the definition of done needs to be defined by the team doing the work and the organization receiving the work. To me… it is a matter of risk. How much risk are we absorbing by leaving the code in its current state?
While not perfect, I felt pretty good about the definition of done on my CheckFree team. The transitioning team I mentioned is actually absorbing quite a bit of risk with that big back-end testing effort… but given their circumstances… I would accept that definition of done and work to mitigate the risk. In the last scenario… the component team is done but the feature team is not until the code is integrated. Does the component team absorb some risk? Absolutely.
The definition of done should be driven by the potential impact to the project. We are assessing the risk that we think ourselves done but really aren’t. We are assessing the risk that we have to go back and fix something. We are asking how much risk we’re absorbing if we allow partially completed work to move forward in the development process. That could be as little undone work as allowing less than total test coverage or as much as allowing a component team to move forward before their work has been integrated into the larger, customer-facing feature.
In the absence of done… we have risk. I am okay defining done differently as long as we address the risk and have a solid strategy for mitigating that risk.
We have been really lax on the definition of 'done' in our company. The result is bugs. Lots of bugs.
I think it stems from stakeholders not being interested in demonstrations. This makes for a lot of incomplete 'done' stories at the end of the sprint.
I had a former boss who always talked about making the qualification of doneness as binary as possible. I think he was dead on, but it's not always easy to do. I think measuring it from a "how much risk is left?" perspective is very intriguing.