Defect Analysis After Hexawise

How to identify the cause of a defect after you move to combinatorial testing.

Written by Ivan Filippov

Challenge Summary

A common question from Hexawise clients in early adoption stages is:

“Before, we had a clear testing focus for each scenario, so we knew exactly what would fail the test. Now, with all the inputs mixed together, how can we identify the cause of a defect?”

The purpose of this article is to demonstrate a defect analysis approach when using Hexawise tests. We'll use a simple customer preferences application with 5 “Checked”/“Not Checked” indicators.
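To make the example concrete in code, here is a minimal Python model of those inputs; the indicator names and the dictionary shape are our own illustration, not Hexawise output:

    from itertools import product

    # A minimal model of the demo application's five binary inputs.
    # The indicator names are illustrative stand-ins for the real labels.
    PARAMETERS = {f"Indicator {i}": ["Checked", "Not Checked"] for i in range(1, 6)}

    # Testing every combination exhaustively would take 2**5 = 32 scenarios.
    print(len(list(product(*PARAMETERS.values()))))  # 32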

Manually selected tests would likely focus on isolated scenarios:
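In code form, such a one-at-a-time suite might look like the following sketch (illustrative values, using the same dictionary-per-test format as above):

    # One-at-a-time suite: each test checks exactly one indicator
    # and leaves the rest unchecked.
    manual_tests = [
        {f"Indicator {i}": "Checked" if i == n else "Not Checked" for i in range(1, 6)}
        for n in range(1, 6)
    ]
    # manual_tests[2] is test #3: only Indicator 3 is "Checked".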

So, when test #3 fails, it tells us there is likely something wrong with Indicator 3. 

If instead we use Hexawise to generate a set of 2-way tests for this application, we might get test scenarios like these:
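Hexawise's actual output may differ; for illustration, here is one valid set of six 2-way tests for these five indicators, in which every pair of values across any two indicators appears together at least once. We'll reuse this pairwise_tests list in the sketches below:

    C, N = "Checked", "Not Checked"
    INDICATORS = [f"Indicator {i}" for i in range(1, 6)]

    # One possible pairwise (2-way) suite: six tests are enough to cover
    # every pair of values across any two of the five binary indicators.
    pairwise_tests = [
        dict(zip(INDICATORS, row))
        for row in [
            [N, C, C, C, N],  # test 1
            [N, N, N, N, N],  # test 2
            [C, C, N, N, C],  # test 3
            [C, N, C, C, C],  # test 4
            [N, C, N, C, C],  # test 5
            [C, C, C, N, N],  # test 6
        ]
    ]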

Now, if scenario #4 fails, it's not just Indicator 3 that is checked, but also Indicator 1. So, how do we pinpoint the reason for the failure?

An Analysis Approach

The key to our analysis is to look at the whole test suite and identify which elements are consistently present across the tests that fail.

First, let's define the "strength" of the defect. We'll call an issue a “one-way defect” if it is caused by just a single parameter value, irrespective of everything else happening in the test case. A "two-way defect" means that a specific combination of 2 parameter values happening together triggers the error (e.g., Checked for Indicator 1 AND Checked for Indicator 4), and so on for "three-way defects" and beyond. 
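To make those definitions concrete, here is a minimal sketch of how such defects could be expressed as predicates over a single test scenario; the specific triggering values are our own illustration:

    # Illustrative defect "triggers" expressed as predicates over one test.
    def one_way_defect(test):
        # Fires whenever a single value is present, regardless of the rest.
        return test["Indicator 3"] == "Checked"

    def two_way_defect(test):
        # Fires only when two specific values occur together.
        return test["Indicator 1"] == "Checked" and test["Indicator 4"] == "Checked"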

Keeping that in mind, let's go back to the original manually selected scenario #3, in which only Indicator 3 is checked.

If this is truly a 1-way defect of “Indicator 3 = Checked”, then the rest of the values in the row won't matter. The same defect would be caught by each of the following highlighted Hexawise scenarios:
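In the illustrative pairwise_tests suite above, those would be the scenarios where Indicator 3 is “Checked”:

    # Scenarios in the sketched pairwise suite that a 1-way
    # "Indicator 3 = Checked" defect would cause to fail.
    failing = [i for i, t in enumerate(pairwise_tests, start=1)
               if t["Indicator 3"] == "Checked"]
    print(failing)  # [1, 4, 6]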

The debugging process in this case would consist of the following steps (assuming there is no error message specifying exactly what went wrong); a scripted version of step 2 is sketched after the list:

  1. Execute all the scenarios and notice that scenarios #1, #4, and #6 were the only ones that failed.

  2. Analyze which element is consistent across all of these failing scenarios:

          Indicator 1 has both values – not a culprit
          Indicator 2 has both values – not a culprit
          Indicator 3 has only the “Checked” value – culprit
          Indicator 4 has both values – not a culprit
          Indicator 5 has both values – not a culprit
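Here is a minimal sketch of step 2 in Python, assuming the illustrative pairwise_tests suite above and a recorded pass/fail outcome for each scenario:

    def consistent_elements(tests, outcomes):
        """Return the parameter values shared by every failing test."""
        failing = [t for t, o in zip(tests, outcomes) if o == "fail"]
        if not failing:
            return {}
        suspects = {}
        for name in failing[0]:
            values = {t[name] for t in failing}
            if len(values) == 1:  # only one value ever appears in failures
                suspects[name] = values.pop()
        return suspects

    outcomes = ["fail", "pass", "pass", "fail", "pass", "fail"]
    print(consistent_elements(pairwise_tests, outcomes))
    # {'Indicator 3': 'Checked'}  -> the likely 1-way culprit

If no single value is shared by every failure, the same idea extends to pairs of values occurring together, which is how a 2-way defect would surface.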

As this case shows, the holistic view of the Hexawise scenarios usually provides enough data to pinpoint an n-way issue.

At the same time, the original manually selected scenarios never covered two indicators checked simultaneously, which means there is a significant risk of 2-way (or higher) defects slipping into production. The statistical evidence suggests that, on average, 84% of production defects are caused by 1 or 2 system elements acting together.

So it's important to use a combinatorial testing methodology to improve the testing coverage and then adjust the defect analysis process accordingly.

Afterword

There may be situations where targeted testing is absolutely necessary. Hexawise allows you to do this by forcing specific parameter values to be used. For example, we can force only one indicator at a time to be checked in each of the first 5 scenarios.

Then the rest of the Hexawise generated scenarios will provide all of the remaining pairwise interaction coverage.
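One way to sanity-check such a combined suite (the five forced scenarios plus the generated remainder) is to verify that every pair of values still appears together somewhere; a minimal sketch, assuming the same dictionary-per-test format as above:

    from itertools import combinations, product

    def uncovered_pairs(tests, parameters):
        """Return the 2-way value combinations not exercised by any test."""
        missing = []
        for (p1, vals1), (p2, vals2) in combinations(parameters.items(), 2):
            for v1, v2 in product(vals1, vals2):
                if not any(t[p1] == v1 and t[p2] == v2 for t in tests):
                    missing.append((p1, v1, p2, v2))
        return missing

    PARAMETERS = {f"Indicator {i}": ["Checked", "Not Checked"] for i in range(1, 6)}
    print(uncovered_pairs(pairwise_tests, PARAMETERS))  # [] -> full pairwise coverage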

Alternatively, you can use a higher coverage strength, especially if the system has no more than 6 binary parameters. In our demo example, selecting “5-way” from the drop-down automatically provides all possible combinations of the 5 inputs (2^5 = 32 scenarios). You can read more about the interaction coverage strengths here.

If your experience with defect analysis has been different, we would appreciate you reaching out to support@hexawise.com and telling your story.
