
Stop Writing Code You Can’t Yet Test

Dennis Stevens, Managing Director, Consulting

Most of the organizations we engage with have more work to do than they can possibly get done. So developers write code as fast as they can, trying to maximize their capacity. Typically, we see developers running way ahead of testing. Often, testing is still working on the prior release while development is running off on the next release – and testing just can't ever catch up. This inventory of untested code shows up as long lists of defects, lots of code branches, untestable features, and products that can't be integrated or regression tested until just before the next release. This vicious cycle of long bug lists, painful regression testing, and production defects colliding with the next release continues to grow.

The goal is not to write code faster. The goal is to produce valuable, working, tested, remediated code faster. The most expensive thing developers can do is write code that doesn't produce something needed by anyone (product, learning, etc.). The second most expensive thing developers can do is write code that can't be tested right away. Recently, LeadingAgile's Derek Huether wrote about Getting Teams to Deliver Predictably (https://www.leadingagile.com/2013/05/getting-teams-to-deliver-predictably/). He shows how having more work in the queue than can be performed slows down throughput. Pretty quickly, everything gets congested and things back up, building up latency and uncertainty in flow.

So, we understand that it is not a good economic investment for developers to write code that can't be tested. The work isn't getting done faster – and it's increasing the cost of all the work in process. Even people who understand this push back, however. We hear that developers have to be busy – that not writing code is not a good economic option.

There are at least six things that developers can work on that are better economic investments than writing untested code.

  1. Get ready for upcoming work – In complex products, not being able to make and keep commitments creates delays that are very costly. Rather than writing code that can't be tested in the sprint, developers should be figuring out what they need to understand for the next (or future) sprint(s). They are going to have to figure it out at some point anyway. Figuring it out ahead of the sprint improves the ability to make commitments, so it is a better choice than figuring it out in the sprint.
  2. Help finish work in progress – Developers can (and should) pair with testers to find and fix bugs, or pair with other developers to finish other work that the team committed to in the sprint.
  3. Expand capability and future capacity – There are often scarce skills or knowledge sets that only a few people on a team have. Rather than writing code that can't be tested, developers can pair with someone to develop team capability and capacity. Even if the new person isn't as productive as the old person at this scarce skill or knowledge set, it is still a better economic investment than writing code that can't be tested.
  4. Increase testing capacity – Leveraging their unique skills, developers can dramatically increase testing capacity by improving test automation, establishing test data management tools, working on build automation, or cleaning up environments. When testing is the constraint, this is a great investment for the team. It can create permanent capacity increases by raising throughput at the constraint (see the sketch after this list).
  5. Refactoring difficult code – If there are sections of code that are tricky to work on or are frequently the source of unintended errors, developers can work to refactor that code. It might make sense to combine this effort with increasing the ability to test the code being refactored. Refactoring is particularly compelling when focused on a section of code related to work the team is getting ready to do.
  6. Practice – Increase the capacity and capability of the team by practicing new skills using code katas. There is a craft to being a productive developer. The body of knowledge is large, including patterns, new features of programming languages, new testing and development techniques, and the APIs or interfaces that developers work with. Finding the time to practice new skills on code that will never go into production is valuable when it results in improved capability of the developers.
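
To make item 4 concrete, here is a minimal sketch of the kind of automated check a developer might contribute when testing is the constraint. It is purely illustrative – the calculate_invoice_total function, its rules, and its numbers are hypothetical, not taken from the post or any real product – but it shows how each behavior the team depends on can get a fast, repeatable assertion instead of waiting for manual regression testing.

```python
# Hypothetical example: an automated regression check a developer could
# contribute to raise throughput at the testing constraint. Standard
# library only (unittest), so it can run on any build machine.
import unittest


def calculate_invoice_total(line_items, tax_rate=0.07):
    """Illustrative production function: sums line items and applies tax."""
    subtotal = sum(quantity * unit_price for quantity, unit_price in line_items)
    return round(subtotal * (1 + tax_rate), 2)


class CalculateInvoiceTotalTest(unittest.TestCase):
    def test_empty_invoice_totals_zero(self):
        self.assertEqual(calculate_invoice_total([]), 0.0)

    def test_tax_is_applied_to_subtotal(self):
        # 2 * 10.00 = 20.00 subtotal, 7% tax -> 21.40
        self.assertEqual(calculate_invoice_total([(2, 10.00)]), 21.40)

    def test_custom_tax_rate(self):
        self.assertEqual(calculate_invoice_total([(1, 100.00)], tax_rate=0.0), 100.00)


if __name__ == "__main__":
    unittest.main()
```

Wired into build automation (item 4 again), a suite like this runs on every check-in, so a regression is caught the day it is introduced rather than during the end-of-release crunch.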

We need to focus on improving the economic return of developers. Writing code that isn't needed is not a good economic investment. Writing code that can't be tested is not a good economic investment. Not writing any code is actually a better economic investment than writing unneeded code or code that can't be tested. Remember, the goal is not for developers to write code faster. The goal is for the organization to produce valuable, working, tested, remediated product faster. These two goals often collide. Make the best economic decisions – and writing code that can't be tested is typically not the best economic decision.


Comments

  1. Mark Ahearn

    Fascinating to say the least.

    I come from the era of SASD (structured analysis and system design) when code was written in a way that made it bug free, testable in a specified environment, and easily integrated with all other functions at hand.

    Now we write SW in _automated tests_ BEFORE we write the code that actually does the work. Now the actual functional requirements are embodied in a piece of SW that itself has not been tested and WILL BE used WHEN the real SW is done. Amazing. I am amazed at how the entire country has jumped on this bandwagon without understanding that the best way to actually test written SW is using a “white box” approach. Think all of that time you spend writing AUTOMATED tests is going to be better than actually stepping through a well written 300-500 line function? Guess again.

    I can see that the Agile methods are iterative, going through the same steps: req analysis, design, test, etc., but now somehow SW test is delegated to a separate body of people who run automated scripts in the “test harness”. Now we have “time-boxes” that call for code to be delivered to test other code that has not even been written, let alone defined in a requirements document. I will say this once and once only: SOFTWARE CAN ONLY BE TESTED AGAINST VALID REQUIREMENTS – you can certainly test HARDWARE to be a pass/fail thing using diagnostics, but SW itself is best verified line by loving line, function by loving function, unit by loving unit, and system by loving system at a time. You validate your HARDWARE using software, but software itself performs according to the REQUIREMENTS or it doesn’t.

    I hope that there are people out there who understand this one, for this may be the worst place to post this note on what I am seeing in American Corporations today: Agile Test-Driven Methodology. Right.

    Mark Ahearn, Director
    Allocated Systems
    http://www.allocatedsystems.com

    • Dennis Stevens

      Mark,

      From your comments I understand that we have different perspectives on what Agile is. I believe you are just seeing bad practices under the guise of Agile. We are also most likely solving different problems.

      You state that in Agile there aren’t requirements, that developers don’t do white box testing of their code, that automated tests are a bad investment, and that automated tests aren’t “tested”. When Agile is done well these aren’t valid assertions.

      Requirements. Given the size and risk involved in a project, there may be a large amount of Structured Analysis and Systems Design done before a project starts coding. At most of our clients, a project couldn’t get funded, much less successfully built, without an appropriate amount of feasibility, options, architecture, scoping, modeling, etc.

      Once coding starts, a well-formed Agile team is going to have Validated Requirements in hand before they write code. These will be in the form of Features, Stories, and Acceptance Criteria – provable, data-driven assertions. These acceptance tests may or may not be automated. The point in time that these requirements are elaborated is a risk-based decision. But it is never good practice for developers to start coding without a shared understanding of a clear outcome they can validate their work against.
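
      For illustration only – the rule, names, and numbers below are hypothetical, not from any client – an acceptance criterion such as “orders of $100 or more ship free” can be captured as exactly this kind of provable, data-driven assertion before any production code is written:

      ```python
      # Hypothetical sketch: an acceptance criterion expressed as provable,
      # data-driven assertions. The shipping rule and figures are invented
      # purely for illustration. Standard library only (unittest).
      import unittest


      def shipping_cost(order_total):
          """Illustrative rule: orders of $100 or more ship free, otherwise $7.95."""
          return 0.00 if order_total >= 100.00 else 7.95


      class FreeShippingAcceptance(unittest.TestCase):
          # Each row is (order total, expected shipping) - the data-driven
          # assertions that stand in for prose requirements.
          CASES = [
              (99.99, 7.95),
              (100.00, 0.00),
              (250.00, 0.00),
          ]

          def test_shipping_rule(self):
              for order_total, expected in self.CASES:
                  with self.subTest(order_total=order_total):
                      self.assertEqual(shipping_cost(order_total), expected)


      if __name__ == "__main__":
          unittest.main()
      ```

      Whether a table like this is automated or checked by hand is, as noted above, a risk-based decision; the value is that everyone can see exactly what “done” means before the code exists.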

      White Box Testing. Once the teams start producing code, the craft of reviewing your code line by line and function by function is still good practice. In some flavors of Agile this is actually done in pairs – where the code is constantly being reviewed by two individuals. In other places there are code reviews, etc. Not walking the code line by line as it is written is just bad craft.

      The Value of Test Automation. The point of ongoing test automation is to assert at a black box level that the baseline expected behavior at the function and feature level is correct. In complex systems with many interdependent components, it would become impossible to walk every line of code that is potentially impacted, for every condition, for every line that is changed. A well-formed test harness provides the safety of validating that the behavior that has been carefully crafted into the code is still working. When test assertions are poorly designed, or when the test automation sits at the very outside boundaries of the system (i.e. automated GUI tests as the only form of test automation), you are likely to find the costs are far greater than the benefits.

      Testing of Test Automation. Automated tests should be reviewed just as carefully as the code itself. The tests themselves should be designed just as carefully as the code itself. These tests should run many times a day – so the tests are tested multiple times a day. Every failure should be fixed immediately. Sometimes the failure is in the test – and sometimes in the code. The code should be fixed to produce more reliable behavior and the test to make more reliable assertions about the underlying behavior of the code. Only by writing code and tests together do we get this appropriate level of test coverage and validation. My entire point was that testing needs to happen in unison with development – not later. In Agile, the teams work together to produce working tested code.

      My final point is that Agile is not a call to abandon good practices. We tend to be dealing with large, complex systems with some unverifiable assumptions that need to move very fast. The goal is to find the appropriate amount of Structured Analysis and Systems Design (SSADM) to mitigate the risk associated with the project. Making requirements assumptions up front that can’t be validated and then building a lot of detail based on those assumptions is expensive. Delaying the benefit that a project will return for months while we get the documentation right is expensive. Agile does not suggest abandoning process, or planning, or architecture, or documentation, or good technical practices. Teams that are following bad practices aren’t doing Agile – they are just following bad practices.

  2. Lee

    Dennis,

    Wow! I want to work on your Agile team. Structured Analysis and System Design done before a project starts, validated requirements, shared understanding, review of automated tests. I’ve worked for a number of companies using Agile and have never had any of them approach this level.

    I have, on the other hand, read and heard time after time that when anyone has a complaint about Agile, the answer is “You’re not doing it right” or “We just haven’t found the right combination of people.”

    I found it interesting that the items you mentioned remind me so much of waterfall and iterative methodology.

    • Dennis Stevens

      Lee,

      Thanks for your reply. The point of the post is that you need frequent delivery of working, tested code. That certainly isn’t waterfall. The problem with waterfall is that we wait until the end to verify anything, so you can’t make trade-offs. The other point is that developers and QA should be actively collaborating and learning as they go. What makes it Agile is the collaboration around the work that is about to be done and the frequent delivery of working, tested product. This frequent (iterative) delivery allows for learning, trade-offs, and improvements as the project progresses.

      So, iterative yes, waterfall no. The types of problems we are addressing deal with agile in large, complex organizations. This requires sufficient understanding to coordinate work across teams – so there is some upfront planning. But it’s not waterfall. Every detail isn’t defined up front – details are progressively elaborated. And you definitely don’t wait until the end to test. The version of Agile where everyone just does what they want, solutions emerge from no vision, and dependencies are managed on the fly doesn’t work when you get beyond a handful of teams.

      Dennis

  3. Alan Dayley

    I have referenced and referred to this article over and over again as a great discussion of why keeping busy in a specialty is not usually the most valuable thing to do. Thank you for writing it!

    (Off Topic Aside: I am so glad this article is re-published at DZone since the gray text on white is nearly unreadable.)

    • Mike Cottmeyer

      Hey Alan, thanks for the kind words. Just wanted to tell you that you are not the first to comment on the text on the site. It’s funny – the grey did not bother me when I approved it; I actually like the way it looks. That said, I put in a request to our digital marketing company today to darken it. Not going to make it black, but a darker shade of grey. Feel free to let us know what you think once the change goes into production.

