
Defect Driven Test Automation


A team needs guidelines when starting to use automated tests. Not only is the development of tests expensive in terms of development time, but ongoing maintenance of tests also consumes development time. One option for building a testing strategy is to use defect reports to define software use patterns. The goal, as with all software development, is to put the tests where they will bring the biggest bang for the buck.

Test Smart: When teams build tests based on reported defects, the strategy is often to build a unit test for every defect. The concept is that when the team picks up a defect, they write a test for the expected behavior and watch it fail. Next they write a fix that causes the test to pass. This approach will definitely increase test coverage and will yield an automated test suite that can be reused. However, how do you know that you have made the best investment in writing automated tests?
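As a minimal sketch of that workflow, the failing-test-then-fix step might look like this in Jest syntax; the defect ID and the pricing function are hypothetical, invented for illustration.

```typescript
// Fix for the hypothetical defect DEF-1042: the buggy code used
// `total > 100`, so an order of exactly $100 missed the advertised
// 10% volume discount.
function applyVolumeDiscount(total: number): number {
  return total >= 100 ? total * 0.9 : total;
}

// Step 1: write this test against the buggy code and watch it fail.
// Step 2: apply the fix above and watch the test pass.
describe("DEF-1042: volume discount at the $100 boundary", () => {
  it("applies the 10% discount at exactly $100", () => {
    expect(applyVolumeDiscount(100)).toBe(90);
  });
});
```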

Focus on Use: Similar defect reports from multiple customers often indicate heavy usage of a feature that would benefit from automated tests; a rounding error in a monthly report, for example. Even if considerable refactoring is involved in building an automated test for this defect, the heavy usage means there is payback for the effort. By contrast, a single instance of a defect, or defects found only by the internal quality team, may indicate a feature that is not in heavy use, so the same refactoring effort is not going to have as big an impact on the overall code base.
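For the rounding example, a regression test might look like the sketch below (Jest syntax; the report helper is hypothetical). The assumed fix totals the report in integer cents so binary floating-point error cannot accumulate.

```typescript
// Hypothetical report helper: totals charges in integer cents, then
// formats once at the end, so no float error accumulates per charge.
function monthlyTotal(chargesInCents: number[]): string {
  const cents = chargesInCents.reduce((sum, c) => sum + c, 0);
  return (cents / 100).toFixed(2);
}

describe("monthly report rounding", () => {
  it("totals 30 daily charges of $0.10 as exactly $3.00", () => {
    // Summing the float 0.1 thirty times accumulates binary rounding
    // error; counting in integer cents keeps the monthly total exact.
    expect(monthlyTotal(Array(30).fill(10))).toBe("3.00");
  });
});
```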

Perceived Quality: Considering that only about 40-60% of the code base is ever going to be used, and that only 20-40% is going to be used frequently, placing your testing efforts in the most used portion of the code is going to bring the biggest return and significantly improve customers’ perception of the product quality. Analyzing customer-reported defects to generate guidelines for adding tests is one tool for building higher-quality code.


Comments (5)

  1. Rob Hooft

    In many cases, unit tests are made to prevent defects. In my experience they quickly SAVE development time, because they make it much easier to trace why code is not behaving in the expected way.

    Of course, any defects that get through and end up in released software should be added as tests, but these are “regression” tests and could probably be implemented more effectively as functional tests rather than as unit tests.

    Am I missing something?

    • Jann Thomas

      Thanks for taking the time to write. I think I was not clear enough in my opening, but I was considering the case of existing software with a new automation effort. In that case, deciding where to start requires guidelines for the team. Any time software is in production there are going to be defects reported. Most often these are not necessarily slips but just changes in behavior.

  2. Leah

    Interesting! So, we’re developing a new app (HTML5) and creating automated tests as the new functionality is added – so far this is working pretty well. However, new things (i.e. defects) present themselves which are not always caught by our existing tests. So I have to ask myself: are our tests the best they can be, and should I include this nuance in our tests so they might catch this same bug in the future? Or should I write a new, independent test to specifically catch this exact bug? Or should I not automate any tests at all based on defects? I don’t want to start going down the rabbit hole of creating tests to catch every defect – it would be impossible anyway – and even so, where do you draw the line? I like the Perceived Quality rule above – that helps.

    What do you think about this case, if I may?

    We have a calendar app on which a user can log their workouts. One day can have many workouts. On any day header, you can Ctrl+C to copy the day, and Ctrl+V to paste it onto another. This worked fine. Then, weeks later, a defect appeared, and now Ctrl+V triggers two paste events (i.e. workouts are being doubled). Our existing test that checks whether the shortcuts work would not have caught this bug. Question – should I create a new automated test to make sure only one instance occurs on a paste event? Or is this a waste of resources? What are your thoughts on these types of things?

    Thank you for your input (in advance!),
    Leah

    • Jann Thomas

      Leah,
      Thanks for taking the time to write. First I want to address the problem with your calendar app. The problem with duplicates is one of counting. It could be that the paste is getting fired by multiple methods for different reasons, which is not catchable by the existing test. This condition points to an opportunity to refactor, but you need a failing test to verify that the refactor is working. So the short answer is: you need to write a test. It may be that writing this test eliminates the need for your first test, and you can drop the first one.
      Writing a test for every defect in well-covered code is not a waste. The point of treating test code as valuable and expensive is to push the idea of building a strategy, one that can be easily communicated, about how to best use the investment in tests. Your test suite, like your code base, needs to be groomed and maintained so that it too is easy to change. As a development team we should be looking for opportunities like your example above to delete old tests and replace them with tests that we have learned work better.
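      To make that concrete, here is a rough sketch of the counting test I have in mind (Jest syntax; all of the names are hypothetical, and your real test would dispatch the shortcut through your UI layer):

      ```typescript
      type Workout = { name: string };
      type Handler = () => void;

      // Hypothetical shortcut dispatcher. The guard in `on` is the fix:
      // the doubling defect came from the same handler being registered
      // twice, so duplicate registrations are now ignored.
      class ShortcutBus {
        private handlers = new Map<string, Handler[]>();

        on(key: string, handler: Handler): void {
          const list = this.handlers.get(key) ?? [];
          if (!list.includes(handler)) list.push(handler);
          this.handlers.set(key, list);
        }

        dispatch(key: string): void {
          (this.handlers.get(key) ?? []).forEach((h) => h());
        }
      }

      describe("calendar paste shortcut", () => {
        it("pastes copied workouts exactly once per Ctrl+V", () => {
          const bus = new ShortcutBus();
          const copied: Workout[] = [{ name: "5k run" }];
          const target: Workout[] = [];
          const paste = () => target.push(...copied);

          bus.on("Ctrl+V", paste);
          bus.on("Ctrl+V", paste); // the regression: a second registration
          bus.dispatch("Ctrl+V");

          expect(target).toHaveLength(1); // fails if the paste fires twice
        });
      });
      ```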

      Thanks,
      Jann

