Hybrid Test-Driven Development

In coaching technical practices, I often meet software developers who say and believe they are using test-driven development (TDD), but what they are doing does not look like TDD to me. The most common pattern is that the developer first writes one or more “empty” or “skeleton” source files, and then fills in the logic little by little, writing unit test cases either before or shortly after writing the production code.

This strikes me as a sort of “hybrid” of TDD and test-after (or test-never) development. I have long been curious to know how developers learn this approach. Any examples or tutorials one might find online that demonstrate basic TDD state clearly that the only reason to write a line of production code is to make a failing test case pass. If you know you need new logic to fulfill a requirement, then your first step is to create test cases that “force” you (or guide you) to develop that logic.
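
To make that rhythm concrete, here is a minimal red-green sketch in C# using xUnit, one of the common .NET test frameworks. The PriceCalculator class and its discount rule are invented purely for illustration; the point is only the ordering of the steps.

```csharp
// A minimal red-green sketch using xUnit. The class and rule are hypothetical.
using Xunit;

public class PriceCalculatorTests
{
    [Fact]
    public void AppliesTenPercentDiscountForOrdersOverOneHundred()
    {
        // Step 1 (red): this test is written before PriceCalculator exists,
        // so it fails -- and that failure is what justifies the next step.
        var calculator = new PriceCalculator();

        var total = calculator.TotalWithDiscount(120.00m);

        Assert.Equal(108.00m, total);
    }
}

// Step 2 (green): only now do we write the simplest production code that passes.
public class PriceCalculator
{
    public decimal TotalWithDiscount(decimal subtotal) =>
        subtotal > 100m ? subtotal * 0.90m : subtotal;
}
```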

And yet, many developers write quite a few lines of production code before they have a failing test case in place, and many (most?) of them honestly believe they are test-driving the solution.

Recently, I had to learn the Microsoft .NET framework and Visual Studio to prepare for a coaching engagement. I have used many programming languages in the past, but never used .NET for “real work” before. In reading material produced by Microsoft on the subject of TDD, and working through tutorials for MSTest, NUnit, and xUnit, I noticed that all the instructional materials have the developer create some amount of production code before writing even a single failing test case.

The experience started me thinking about situations that might lead developers to believe it is appropriate to write production code before writing a unit test. Leading with Microsoft’s approach, here is a list of situations I’ve seen that might cause developers to use a hybrid approach:

  • When tutorials and documentation explicitly tell learners to generate or type in some amount of production code before starting to write unit tests (e.g., Microsoft).
  • When the development tools automatically generate some portion of the solution. It’s easy for developers to set aside TDD when creating the remaining pieces of logic, as they are already working with generated code (e.g., Oracle ADF, Ruby on Rails, MATLAB/Simulink).
  • When developers use Domain Driven Design, they create a visual model of the solution before they begin coding.
  • When developers use the approach of writing skeleton source modules containing comments, and then create the solution by replacing the comments with real code (a popular approach prior to the advent of TDD).
  • When developers have inherited a monolithic code base in which it’s difficult to tease the functionality apart into isolated, testable pieces. In these situations, there is usually real or perceived delivery pressure that discourages developers from taking the time to begin incrementally refactoring the offending code.
  • When developers have limited access (or no access) to tools that enable fine-grained testing (e.g., a traditional mainframe environment that has not been kept up to date with tooling for unit testing, service virtualization, and continuous delivery).
  • When developers consider low-level testing such as microtests and unit tests to add little or no value, and deliberately defer writing test cases until the functional or integration level (see the work of James Coplien for insight into this way of thinking).

Learners are taught a hybrid approach

Given that millions of developers have learned “the Microsoft way” since the .NET framework was first introduced in 2000, it’s no surprise that the majority of existing .NET solutions in the wild exhibit characteristics typical of code that is not test-driven, including monolithic design and tight coupling; nor is it surprising to find the majority of .NET developers using a relatively weak variant of TDD that does not call for high self-discipline in test case design or code isolation.

Part of the problem is the mindset that TDD is a testing technique as opposed to a software design technique. Developers who conceive of TDD in this way tend to feel it is appropriate to write all or most of the production code before writing test cases. As a result, some or most production code is difficult to isolate for unit testing, and developers avoid going to the trouble.

As .NET developers advance in their careers, their relative seniority leads them to assume they are doing things pretty well. After all, what better evidence than career success? When challenged on their code structure or unit testing approach, they tend to justify the way they have always worked on the basis that they have not experienced any particular problems.

I’ve been taken aback at times by comments from senior .NET developers. For instance, one recently told me he didn’t consider the 500-line C# method we were examining to be long. He thought it was just fine. Another was pleased with his design for “flexibility,” in which he passed boolean flags into a long method to control which alternative paths it would take.
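
To make the smell concrete, here is a condensed, hypothetical sketch of that “flexibility via boolean parameters” pattern, alongside a more intention-revealing shape. The names and branches are invented; real methods like this are typically hundreds of lines long.

```csharp
// Hypothetical illustration of the "boolean flag" design described above.
public class InvoiceService
{
    // Flag arguments force every caller to know the method's internal branches,
    // and each flag doubles the number of paths a test has to cover.
    public void Process(Invoice invoice, bool applyDiscount, bool sendEmail, bool isRetry)
    {
        if (applyDiscount) { /* ... discount branch ... */ }
        if (sendEmail)     { /* ... notification branch ... */ }
        if (isRetry)       { /* ... retry branch ... */ }
        // ... many more lines ...
    }
}

// A more intention-revealing alternative: small, separately testable methods.
public class InvoiceServiceRefactored
{
    public void ProcessStandard(Invoice invoice)     { /* ... */ }
    public void ProcessWithDiscount(Invoice invoice) { /* ... */ }
    public void Resend(Invoice invoice)              { /* ... */ }
}

public class Invoice { /* details omitted for brevity */ }
```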

If developers learn to build code in a test-after way, they will tend to create monolithic designs. If they think monolithic designs are okay, because that’s the way they were taught, then they won’t recognize basic code smells. If they don’t recognize basic code smells, then how can we expect them to understand there might be problems with their designs? Lacking awareness of problems, why would anyone bother to change their habits? It’s doubly challenging when working with senior developers, because they often are not open to alternative perspectives.

Frameworks and modeling tools that generate boilerplate code

When the development tool can automatically generate an executable simulation of the solution (like Simulink), a fully-functional if basic CRUD app (like Oracle ADF), or a working skeleton of a solution (like Ruby on Rails), the temptation is to extend the generated code directly rather than to isolate custom components and test-drive them separately.

It’s often a good idea to keep your custom components isolated (within reason) from generated code. That way, if you need to re-generate the code you won’t destroy your customizations, and if you need to replace a custom component you won’t need to re-generate the boilerplate code.
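
One common way to arrange that, sketched here with invented names, is to keep the custom rule behind a small interface in its own hand-written class, so the generated or scaffolded layer only delegates to it. This is a sketch of the principle, not any particular tool’s conventions.

```csharp
// Hand-written, test-driven component; re-generating scaffolding never touches it.
public interface ICreditCheck
{
    bool IsApproved(decimal orderTotal, decimal customerLimit);
}

public class SimpleCreditCheck : ICreditCheck
{
    public bool IsApproved(decimal orderTotal, decimal customerLimit) =>
        orderTotal <= customerLimit;
}

// Generated or scaffolded code (sketched by hand here) delegates to the custom
// piece, so it can be re-generated without destroying the customization.
public class OrderController
{
    private readonly ICreditCheck _creditCheck;
    public OrderController(ICreditCheck creditCheck) => _creditCheck = creditCheck;

    public string PlaceOrder(decimal total, decimal limit) =>
        _creditCheck.IsApproved(total, limit) ? "accepted" : "rejected";
}
```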

When using a code generator like Oracle ADF, it’s neither practical nor particularly valuable to take a “pure” TDD approach to the entire solution. The difficulty of cramming TDD into a tool stack that is built on very different assumptions can be more trouble than it’s worth.

Oracle ADF generates a CRUD app based on an Oracle RDBMS schema. It allows for customization by creating POJOs, or “plain old Java objects,” that contain solution-specific functionality the tool cannot generate based on the database schema. By following the documented conventions, you can ensure the custom logic will be invoked at the appropriate points in the request-response cycle at runtime.

These POJOs are a natural fit for pure TDD, but developers usually don’t drive them from test cases because it seems “easier” just to hack up some code and drop it into the tool. Test-driving the POJOs while accepting the generated boilerplate is a deliberate “hybrid” TDD approach: we take advantage of the code generator while ensuring any custom components are designed well and tested thoroughly in isolation.

It’s important to note that the development communities that work with some of the examples listed above subscribe to TDD in a serious way (notably Ruby on Rails). I’m only suggesting that the existence of generated boilerplate code can lead developers astray.

Domain Driven Design

DDD is an excellent approach to many categories of solutions. The method is silent on the subject of TDD. Many developers who use DDD tend to create shells or skeletons of classes based on the domain model, and then fill in the missing logic little by little.

Some developers make the excuse that DDD doesn’t “allow” you to test-drive the code, but this is not really true. It’s perfectly feasible to begin to drive out the logic through microtests, while referring to the domain model as a guide. Even if you chose to write skeleton classes to represent domain entities in the model, you could intentionally use a hybrid TDD approach to drive out the logic in those classes.
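
As a sketch of what that can look like, assume a hypothetical domain model that names an Order entity with an “add line item” behavior. The microtest is written first, and the entity grows only as far as the failing tests require; the names and arithmetic are invented for illustration.

```csharp
using Xunit;

public class OrderTests
{
    [Fact]
    public void TotalReflectsAllLineItems()
    {
        // Written first, guided by the domain model's description of Order.
        var order = new Order();

        order.AddLineItem(unitPrice: 10.00m, quantity: 3);
        order.AddLineItem(unitPrice: 2.50m, quantity: 2);

        Assert.Equal(35.00m, order.Total);
    }
}

// The entity is filled in only as far as failing tests demand.
public class Order
{
    public decimal Total { get; private set; }

    public void AddLineItem(decimal unitPrice, int quantity) =>
        Total += unitPrice * quantity;
}
```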

Filling in the blanks

For many years before TDD became popularized, a well-respected and widely-used approach was to lay out a general solution design on paper, write skeleton source files containing comments that expressed the functionality to be built, and then go back and replace the comments with real code. Working incrementally, developers carefully tested each change manually as they went along. When all the comments had been replaced by real code, the solution was complete.
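
A condensed, hypothetical example of such a comment skeleton:

```csharp
// The developer lays out the intended steps as comments, then replaces each
// comment with real code, manually testing along the way.
public class PayrollRun
{
    public void Execute()
    {
        // read employee records for the pay period

        // calculate gross pay, taxes, and deductions for each employee

        // write pay statements to the output file

        // produce the summary report for accounting
    }
}
```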

This is actually not a bad way to come up with reasonably clean, working software, assuming the developers are cognizant of generally-accepted software design principles. Where it falls short of TDD is that once you’ve delivered the initial version of the solution, you have no executable regression suite and no accurate documentation of what the system does (unless you build those things in separate efforts, which most people don’t undertake). Subsequent enhancements or extensions to the solution tend to muddy the design and accumulate technical debt until the solution becomes unsupportable.

An inherited code base

Most of the articles, examples, and tutorials about TDD focus on greenfield development. Most of the actual work performed by the millions of developers in corporate IT organizations focuses on support and enhancement of existing applications. Most of those applications were built without TDD, and they exhibit the design qualities one would expect. Developers often have trouble connecting the dots between the TDD technique they see demonstrated and the realities of modifying the code base they work with on the job.

Developers feel real and perceived pressure to deliver quickly. Many of them harbor the misconception that “going fast” means “cutting corners” in quality. The truth is quite the opposite.

The key to getting microtests around existing code is to refactor incrementally in the normal course of making changes to the code. In most cases, there is no need for a massive refactoring effort separate from everyday work.
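
As a sketch of a single incremental step, with invented names: while touching a long method for some other reason, extract one small calculation into its own method and put a microtest around just that piece. This is only one possible move, shown under those assumptions.

```csharp
using System;
using Xunit;

public static class LateFeeCalculator
{
    // Logic that used to be buried inside a long, hard-to-test method,
    // extracted so it can be exercised in isolation.
    public static decimal LateFee(decimal balance, int daysLate) =>
        daysLate <= 0 ? 0m : Math.Min(balance * 0.015m * daysLate, 50m);
}

public class LateFeeCalculatorTests
{
    [Fact]
    public void NoFeeWhenPaymentIsOnTime() =>
        Assert.Equal(0m, LateFeeCalculator.LateFee(200m, 0));

    [Fact]
    public void FeeIsCappedAtFiftyDollars() =>
        Assert.Equal(50m, LateFeeCalculator.LateFee(10_000m, 30));
}
```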

Unfortunately, there is a widespread misconception about how incremental refactoring fits into software development work. Refactoring is part of the “how” of building software, so the decision to refactor belongs to software developers alone. Business stakeholders, ScrumMasters, Product Owners, and other roles have no say in it.

Many advisors in Agile methods exacerbate the problem by treating refactoring as something “the business” has to understand and authorize. The popular Agile scaling framework, SAFe, goes so far as to call for Product Owners to give permission for refactoring.

This is tantamount to requiring everyone who holds a driver’s license to be able to rebuild an internal combustion engine. Incremental refactoring is simply a development technique, and not a separate piece of work. Ron Jeffries, a signer of the Agile Manifesto and a long-time proponent of TDD, tweeted recently that he doesn’t need permission to refactor any more than he needs permission to write an if statement or a for loop.

Limitations of available tools

TDD first began to gain traction on a development project using Smalltalk and a unit testing framework called SUnit, whose design became the model for similar frameworks supporting numerous other programming languages.

So, TDD as a routine development practice is associated most strongly with object-oriented languages. Tools abound for Java, Ruby, Python, C#, and other object-oriented languages. Tools to support TDD for other types of languages are less common.

Proponents of functional languages often argue that the strong type systems offered by these languages obviate the need for microtesting. While the question remains unsettled, they do make a good point. Functional languages are designed for mathematical and scientific solutions. Their syntax is meant to resemble mathematical notation, and the compilers convert mathematical statements into executable code. When custom types are defined properly, the type definitions and the runtime engines guarantee functions cannot execute with invalid input, and cannot generate invalid output.

In addition (no pun intended), functional languages are designed to operate on lists or collections of values with single source statements. That means the smallest unit of code for which a microtest is meaningful can be larger than the smallest unit in non-functional languages. Between strong type systems and list operations, the number of microtests necessary to gain confidence in the code can be lower than with non-functional languages.

Procedural languages are an entirely different matter. These languages have very weak type systems, or no type system at all beyond compiler hints. A great deal of existing production code is written in COBOL, a procedural language that dominated business application programming for many years, and in other procedural languages such as PL/I.

The tooling available to support TDD with these “legacy” languages is not as easy to use as tools designed for object-oriented languages. Companies like IBM, Compuware, and Micro Focus offer tools to support executable test scripts and unit testing, but there are a couple of inhibiting factors.

First, the barrier to getting started with TDD and refactoring on the mainframe platform is high. The tools are complicated, bulky, and require significant configuration to be usable. It is easy for developers to give up, thinking the potential value isn’t worth the effort.

Second, the cost of tools to support legacy languages on mainframes is high. Many organizations I’ve worked with are interested in the idea of TDD, but unwilling to invest in the tooling to support it. On the bright side, it really isn’t too difficult to roll your own testing frameworks using mainframe languages.

Finally, as TDD and refactoring have not been common practices in the mainframe world down through the ages, there is no established culture of test-driving code among mainframe practitioners. Many of them see it as little more than an academically-interesting but impractical theory.

Disregard for the value of unit tests

The final reason why developers might not embrace TDD is perhaps the most basic of all: They don’t perceive any value in it. There’s really nothing much to be done about this. It’s a professional judgment call, and it’s quite normal for different individuals to reach different conclusions.

Conclusion

The question of whether to test-drive code is an easy one for me to answer. It’s been my experience that when code is test-driven, we experience fewer production issues, modifications are easier, it’s easier for new team members to learn the code base, deployments are less risky, and work is less stressful. Those characteristics appeal to me.

And yet, very little existing production code was built in a test-driven fashion, and very little new code is being developed in that way. My guess is the proportion of test-driven code in production worldwide is statistically insignificant.

In exploring the reasons why, it’s easy to think of a wide range of possible causes: management, training, habit, misunderstanding, individual initiative, tool availability, philosophy, and maybe just plain old inertia. Each calls for a different response.

Comments (3)

  1. F Bourbonnais

    About DDD, an outside-in approach is perfectly compatible and helps the design emerge. See the book Growing Object-Oriented Software, Guided by Tests.

    Personally, I don’t even see how we can think that TDD is incompatible with DDD.

  2. Eric

    I tend to follow a hybrid model when I develop independently, but I do TDD religiously when pairing with junior developers and insist they do it because it’s a good discipline to learn. On my own, I find TDD purity distracting and a killjoy. Why get derailed by some tiny error edge case when I can come back and handle it later (and yes, I do come back)?

    I’ve never felt the magic of TDD as far as how much better formed it’s supposed to make my code and how it will force me to ask the big questions and thereby unfold the mysteries of the universe for me. A lot of people do, which is great, but not for me. The big win I see is that it gives my tests granularity. But if granularity is the big win, what does it matter if someone achieves that granularity by mixing test-first and test-last?

    • Dave Nicolette

      If that’s the only value you get from TDD, then you’re right: It doesn’t matter how you approach it.

