Warning: This may be a little bit preachy. If so, it’s only because I’m a little bit frustrated about the subject.
Tying Up Loose Ends at Home
We’ve had a bad habit at our house over the years. When we used a tool, we’d neglect to put it back in its proper place. The next time someone needed the tool, we’d spend more time looking for it than it took to use it for the task at hand. The pathetic thing is this was true even if it was the same person who used it last. It was often faster to buy another one than to search for the one we knew we already had…somewhere.
We ended up with four medium-sized Phillips-head screwdrivers, three slotted ones, and multiple redundant pliers, wrenches, and socket adapter sets, not to mention impressive collections of pens, rubber bands, batteries, and what-not, all scattered around here and there.
One day we decided it had to stop. We carved out proper places for things and organized what we had. We donated duplicates to charity. We did our best to practice self-discipline to put tools and supplies back in their proper places after each use. Quality and bandwidth to get things done went up. Stress and wasted time and money went down.
Tying Up Loose Ends in a Commercial Garage
What if mechanics working in a commercial garage left things lying around randomly? They would spend more time looking for tools and parts than the time it took to complete the work. Parts would be misplaced, lost, stolen, or become rusted or damaged. The cost of supplies would go up.
Job times would go up, but quality would go down. Working with rusted, worn, and broken tools, mechanics would find it hard to fit and secure parts properly. Aside from that, the mentality of leaving the shop in disarray would surely be reflected in the care applied to the work, as well.
Customers would become fed up with waiting beyond the promised completion times for the work. They would soon seek more reliable shops. Word would get around.
The connection with software development/maintenance is probably obvious.
Tying Up Loose Ends in Software
A case in point: Incremental refactoring.
Many people seem to be confused about it. I don’t know if it’s the word “refactoring” or the word “incremental” that leads to the confusion.
Incremental refactoring is a completely different thing from large-scale refactoring. People think all refactoring is the same, and they assume all refactoring is “large-scale” in the sense that it requires “permission” from someone and will invariably cause work to take significantly more time. In reality, it’s not refactoring as you go that causes work to take more time.
Most technical coaches, including me, have a lot of difficulty clarifying the difference; not because it’s inherently complicated, but because it’s so obvious that it’s hard to come up with a verbal explanation. But I think our experience with keeping track of our tools and supplies offers a useful analogy.
Incremental refactoring is like keeping your work area clean as you go. It’s like putting the screwdriver back in its place after using it. It isn’t like building a whole new garage every time you do a job. It’s more in the nature of finishing…really finishing…what you started. Tying up the loose ends.
Incremental refactoring doesn’t add time and doesn’t require anyone’s explicit permission. If anything, you ought to ask for special permission not to keep the code clean as you go. Keeping code clean at all times really ought to be considered baseline job performance for a software engineer. The fact that people had to coin a special buzzword to describe it, and that software engineers today argue against doing it, is a shame.
This is absolutely not the same thing as undertaking a large-scale, architectural refactoring that really does require distinct planning and allocation of time, money, and people.
Let’s say you decided, as we did, to organize things better so that you wouldn’t keep wasting time looking for the same item over and over again. The effort to get that set up the first time is equivalent to a “large-scale” refactoring of software. We needed to “get permission from stakeholders” by agreeing as a family to organize the storage of tools and to remember to replace tools after each use.
Going forward, when we start a task that requires tools, putting them back in their place afterward is not like a “large-scale” refactoring. Not at all. We don’t need to convene a family meeting to gain agreement that the screwdriver ought to be put back in its place. We just do it.
The Scout Rule
There’s the so-called Scout Rule of software engineering: Leave the code at least as clean as you found it. The same thing applies to storing our tools at home. If, in the process of looking for a pair of pliers, we see that the slotted screwdriver and the Phillips-head screwdriver have switched places, we go ahead and switch them back to their proper places. We don’t need a formal project plan to do that.
That’s how incremental refactoring works, too. We go into a piece of code to add a new case to a list of conditionals. We see that the list is implemented as a big if/else block. We decide to change it to a switch or case statement in the course of adding the new case. You might argue that’s not much better than an if/else block. You’d be right: It’s not much better. But it might be a little better. And the next time someone touches that code, they can make it a little better still, because you will have left them at a better starting point.
If you’ve done something like that before, then you probably discovered that just by reading through all the conditions you found some duplication, some inefficient ordering, possibly a condition that could never be reached, and maybe even a logical “hole” that would let control fall all the way through the if/else block without the case being handled at all. You’re just lucky you didn’t discover that via a production support ticket in the middle of the night, when you’d rather be sleeping; especially if you found the code in a messy state then.
Maybe we add some functionality to an application, and we notice that a function or method we wrote is very similar to an existing one elsewhere in the code base. It only takes a few seconds to factor that duplication out, even without the conveniences offered by sophisticated IDEs. We can safely refactor because we’ve also applied the self-discipline to ensure microtests or unit tests cover that functionality, even if we refactor by hand. After all, what’s the justification not to do those things?
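As a sketch of that second situation, here is a hypothetical pair of near-duplicate helpers and the shared function we factor out, with a microtest confirming the behavior is unchanged. The names and the formatting rule are invented for illustration:

```python
# Hypothetical example: two near-duplicate functions noticed while
# adding a feature, and the shared helper factored out of them.

def format_invoice_total(amount: float) -> str:
    return f"${amount:,.2f}"  # this logic lived here...

def format_refund_total(amount: float) -> str:
    return f"${amount:,.2f}"  # ...and again here, almost verbatim.

# The refactoring: one shared helper, called from both places.
def format_currency(amount: float) -> str:
    return f"${amount:,.2f}"

def format_invoice_total_v2(amount: float) -> str:
    return format_currency(amount)

def format_refund_total_v2(amount: float) -> str:
    return format_currency(amount)

# The microtest that makes the refactoring safe: old and new paths
# must agree, so we can make the change confidently, even by hand.
assert format_invoice_total_v2(1234.5) == format_invoice_total(1234.5)
assert format_refund_total_v2(0.0) == format_refund_total(0.0)
```

The point isn’t the few lines saved; it’s that the duplication is gone before it can drift into two subtly different behaviors, and the test is what lets us do this safely without an IDE’s help.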
“Long Term” isn’t as Long as People Think
I often hear people say that the benefits of refactoring are “long term,” and that while it makes sense in theory, out here in the Real World® we’re always under pressure to deliver on a tight schedule. But when it comes to keeping the code clean as you go by doing small refactorings incrementally, “long term” means the very next time you touch that part of the code base. The very next time.
On the Dark Side, the converse is true, too. If we don’t keep the code clean as we go along, the damage accrues quickly. We don’t have a ten-year “cushion” to accumulate cruft free of charge before the damage kicks in. The next change we make will take longer than necessary and will carry a greater risk of introducing a regression, and the situation will worsen if left unattended.
If the reason we cut corners is to satisfy stakeholder demand for fast delivery, how do we propose to meet that demand after we’ve ensured no change can be made quickly anymore, because we’ve allowed the code to fall into an unmaintainable state?
What is Fast? What is Slow?
Your stakeholders will always want things to happen faster, no matter how fast you deliver. The most effective way to minimize delivery time is to do things right; do them well; tie up all the loose ends, every single time. Cutting corners to try to “go faster” only makes us go slower; not in the mythical “long term,” but immediately.
When stakeholders say they want things faster, they don’t mean that they want things sloppy. It doesn’t occur to them that they have to tell us, over and over again, with every request they make, that they expect high quality. High quality is assumed to be part of our job.
As software engineers, we choose whether to treat incremental refactoring as a baseline part of our professional work. We don’t ask permission to do our jobs properly, any more than a surgeon asks an accountant whether it’s okay to take the time to wash her hands before an operation. The time it takes to complete an operation is the time it takes to complete the operation properly. Patients expect their appendix to come out cleanly, and to resume their normal lives without infections or an embedded surgical sponge. It does them little good to get out of the operating room ten minutes faster if they’re going to suffer complications for it.
The standard response to this observation is to say that doing the work properly would take too long. What really “takes too long” is recovery from doing the work in a rush and failing to apply due diligence. Recovery from infection has a huge impact on the patient’s life, family, and work. A second operation to remove a sponge that was left in the body because people were working in a rush is an even larger impact.
In software work, creating five regressions due to rushing through a software change does not “save time.” Instead, it causes the change to take longer: It doesn’t count as “done” until it’s done right. If teams have to spend hours or days fixing things to get it right, that’s time they can’t spend on other value-add work; it’s opportunity cost.
There’s a ripple effect of cost in terms of both time and money, sometimes including lost customers. The additional stress on technical staff has a cost, too: low morale and high turnover, leading to a reinforcing loop of increasing pressure and stress, and decreasing morale and engagement.
The Technical Debt Metaphor
Whenever the subject of incremental refactoring comes up, someone will invariably mention the technical debt metaphor. They’ll say, given the time we have to deliver, we have to cut corners. They’ll say, that’s an example of incurring technical debt intentionally for business reasons.
That represents a misunderstanding of the metaphor, and maybe of metaphors in general.
- A metaphor is not a complete and comprehensive definition of the Thing it refers to. It’s only a kind of shorthand way to help someone get a general sense of some aspect of a Thing.
- People have a tendency to replace the Thing with the metaphor, and to start treating the metaphor as if it were the actual Thing. It isn’t. Ever.
- People also have a tendency to extend the use of a metaphor beyond its original context. This dilutes the meaning and usefulness of the metaphor. Ward Cunningham came up with the technical debt metaphor in the context of working with a client in the financial services industry. He was looking for a metaphor that would resonate with them, to explain how incremental delivery could provide a feedback loop to improve a software solution; not as a way to cut corners for the illusion of speed.
- The original concept of the technical debt metaphor included the assumption that the code was kept clean at all times. The idea was that as people’s understanding of the problem and solution improved through iterative development, the code would be in a condition that allowed for change; not that the code was crufty and in need of remediation for technical reasons.
Cutting corners to try to deliver software faster isn’t the same thing as the technical debt metaphor. It isn’t a business-driven decision. It’s just sloppy work.
The way to achieve fast delivery is to remove obstacles from our path. One such obstacle is crufty code. It’s an obstacle that can be removed incrementally through disciplined work. It isn’t dependent on process or methods, like “waterfall” or “agile.” We choose how to do our work every time we touch a keyboard.
My opinion: Doing the work right, including tying up the loose ends, is beneficial for customers, funders, management, future maintainers, and ourselves.