How Are Hexawise Tests Objectively Superior?

"Before vs After" Hexawise Comparison Guide

Written by Conor Wolford

One of the most common concerns we hear from clients is: 

“Our current tests are good enough. Why invest the time & resources in adopting the Hexawise methodology if it doesn’t move the needle?”

That may be true, but we don’t have to guess. This document describes a process for evaluating your existing tests, comparing them directly with tests generated by Hexawise, and drawing data-driven conclusions about what is best for software testing efficiency in your organization.

The example below uses the “Banking” model (available to any user, if you want to recreate this case on your own). This process can also be followed with any existing set of tests, as long as you also define a set of inputs for the system under test. The process description assumes intermediate knowledge of Hexawise features.

Prerequisites

The prerequisite actions are: 

 - Copy the “Banking” model

Type of Income[Low, Medium, High]
Credit Rating[Low, Medium, High]
Customer Status[Regular, Employee, VIP]
Term of Loan[15 yrs, 20 yrs, 30 yrs]
Loan Amount[10,000 - 99,999, 100,000 - 199,999, 200,000 - 500,000]
Type of Property[Condo, Apartment, House]
Loan to Value Ratio[70%, 80%, 90%]
Location of Property[Big City, Small City, Rural Area]

 - (Optional) Export it without tests

 - Reorganize the existing tests in the Hexawise format

The last step is the trickiest and the most time consuming one. There are several considerations:

 - The general format uses Parameters as columns and test cases as rows, with each cell holding the Value for that parameter

 - Most requirements and existing tests specify only the parameters necessary to trigger the outcome. While this can be beneficial for precise impact identification, it lacks a systematic approach to selecting values for the other parameters and leaves redundancy to the tester’s discretion. Traditional approaches leave plenty of room for the challenges below:

   - Direct duplicates (inconsistent formatting; spelling errors);

   - “Hidden”/contextual duplicates (meaningful typos; the same instructions written by different people with varied styles);

   - Tests specifying some values and leaving others as default (when several scenario combinations could be tested in a single execution run).

Note: there is a difference between “select these 3 values and everything else should be default for this rule” and “select these 3 values and everything else can be anything because it doesn’t matter for this rule”. The second interpretation is much more common in our experience.

To generate the most precise comparison, the actual values from execution logs need to be placed in all the blanks in the requirements. If that is impossible, assume the default value is used for each unspecified parameter.

For this example, we use 8 artificial existing tests that did not specify all the values in their documentation.
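To make the reformatting concrete, here is a minimal Python sketch of the Banking model above and the “fill with defaults” fallback. The partial tests, and the convention that the first listed value of each parameter is its default, are hypothetical illustrations rather than part of the Banking sample:

```python
# Minimal sketch: the Banking model as a dict of parameters to values.
MODEL = {
    "Type of Income":       ["Low", "Medium", "High"],
    "Credit Rating":        ["Low", "Medium", "High"],
    "Customer Status":      ["Regular", "Employee", "VIP"],
    "Term of Loan":         ["15 yrs", "20 yrs", "30 yrs"],
    "Loan Amount":          ["10,000 - 99,999", "100,000 - 199,999", "200,000 - 500,000"],
    "Type of Property":     ["Condo", "Apartment", "House"],
    "Loan to Value Ratio":  ["70%", "80%", "90%"],
    "Location of Property": ["Big City", "Small City", "Rural Area"],
}

# Hypothetical partial tests: only the values that trigger the business
# rule are specified; every other parameter is left out ("don't care").
partial_tests = [
    {"Type of Income": "Low", "Credit Rating": "Low", "Loan to Value Ratio": "90%"},
    {"Customer Status": "VIP", "Term of Loan": "30 yrs"},
]

def fill_defaults(test, model):
    """Complete a partial test by assuming the first listed value of each
    unspecified parameter is the default (the fallback described above)."""
    return {param: test.get(param, values[0]) for param, values in model.items()}

for t in partial_tests:
    print(fill_defaults(t, MODEL))
```

Each completed test now specifies all 8 parameters, which is the format the Forced Interactions tab expects.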

Process

Next, we generate the comparison. First, update the “Banking – Copy” model by placing the reformatted dataset above onto the Forced Interactions tab. There are two options:

 - Manually input each of the existing tests inside the tool

 - Copy all existing tests into the "Forced Interactions" tab of the exported Excel file, then update the Hexawise model via Edit -> Update & Overwrite

Note: this is why the export step above is optional; it is only needed for the second option. When working with large existing suites, making the updates in Excel and importing the file into Hexawise provides significant time savings.

Verify the "Forced Interactions" screen looks like the following, with each scenario specifying all parameters/values necessary for the execution:

Next, click “Scenarios” in the left navigation pane.

Note: the process is the same for any N-way strength; this example covers 2-way for simplicity and the easy availability of the coverage matrix.

The Scenarios screen now shows your existing test suite as generated by Hexawise. However, Hexawise believes you need 19 test cases (not 8) to thoroughly explore the potential system weaknesses. Why?
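Part of the answer is simple counting. The model has 8 parameters with 3 values each, so there are C(8,2) = 28 parameter pairs, each with 3 × 3 = 9 value combinations: 28 × 9 = 252 distinct 2-way interactions in total. A single test covers exactly 28 of them, so 8 tests cover at most 8 × 28 = 224 pairs even with zero overlap, and overlap is unavoidable in practice. Reaching 100% therefore takes at least ⌈252/28⌉ = 9 tests, and typically more.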

The rest of the answer, and the answer to the central question of this guide, is on the Analysis screen.

Comparison & Conclusions

Remember the dangers of manual selection without a systematic approach? The “good enough” existing suite covers only 48% of the 2-way interactions in the system, leaving a significant number of potentially defect-causing gaps in coverage.

Granted, more experienced testing organizations with a focus on variations, and with some knowledge of combinatorial methodologies, will do better than this. Yet manual selection rarely achieves such coverage levels consistently.

Thus, this portion of the comparison tells us that the existing thoroughness is not sufficient, and 4 more Hexawise-generated tests would be needed to reach 81% 2-way interaction coverage, a benchmark shown to be safe by research studies. You can clearly see which pairs are still missing and make concrete execution decisions based on business risks & constraints (e.g. execute all 19 tests to reach 100%).
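If you want to sanity-check coverage percentages like these outside the tool, here is a minimal Python sketch that computes 2-way coverage for a suite of fully-specified tests. The tiny 3-parameter model in the example is hypothetical; the metric itself is the same kind of ratio the Analysis screen reports:

```python
from itertools import combinations

def pairwise_coverage(tests, model):
    """Fraction of all possible 2-way value pairs covered by `tests`.
    Each test is a dict mapping every parameter in `model` to one value."""
    params = list(model)
    # Total number of distinct 2-way interactions in the model.
    total = sum(len(model[a]) * len(model[b]) for a, b in combinations(params, 2))
    # Distinct pairs actually exercised by the suite.
    covered = {(a, t[a], b, t[b]) for t in tests for a, b in combinations(params, 2)}
    return len(covered) / total

# Hypothetical tiny model and suite, just to show the computation:
model = {"P1": ["a", "b"], "P2": ["x", "y"], "P3": ["1", "2"]}
tests = [
    {"P1": "a", "P2": "x", "P3": "1"},
    {"P1": "b", "P2": "y", "P3": "2"},
]
print(f"2-way coverage: {pairwise_coverage(tests, model):.0%}")  # prints 50%
```

Running the same computation over the 8 reformatted Banking tests reproduces the 48% figure from the Analysis screen.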

However, closing the gaps is not the key conclusion. The comparison so far evaluates building Hexawise tests on top of the existing ones merely to fill the coverage gaps. That approach ignores the potential benefits of completely remodeling the system under test inside Hexawise. Let’s prove the benefits of this alternative approach by looking at what we started with: the original Banking model.

What if you let Hexawise select all the non-specified values for the 8 business rules you had? Open the original Banking model and go to the Analysis tab, and compare it against the copy.

Note: we recommend opening the models in 2 different browser tabs so that you can easily go back and forth.

Hexawise systematically determines the optimal way to select values for each test scenario and generates 26% more interaction coverage with the same number of tests. Consequently, you hit diminishing returns on coverage much sooner, and your total suite size is smaller (18 tests in this case).
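Hexawise’s generation algorithm is proprietary, so as an illustration of the general idea only, here is a simplified greedy sketch in the spirit of classic pairwise tools such as AETG. It seeds each test with a still-uncovered pair, then fills the remaining parameters to cover as many new pairs as possible; this is not Hexawise’s actual method:

```python
from itertools import combinations, product

def greedy_pairwise(model):
    """Greedy pairwise generation: seed each test with one uncovered pair,
    then fill the remaining parameters to cover as many uncovered pairs as
    possible. A simplified illustration, not Hexawise's actual algorithm."""
    params = list(model)
    index = {p: i for i, p in enumerate(params)}

    def key(p, vp, q, vq):
        # Store each pair in a canonical parameter order so lookups match.
        return (p, vp, q, vq) if index[p] < index[q] else (q, vq, p, vp)

    uncovered = {key(a, va, b, vb)
                 for a, b in combinations(params, 2)
                 for va, vb in product(model[a], model[b])}
    tests = []
    while uncovered:
        # Seed the new test with an arbitrary still-uncovered pair,
        # which guarantees every generated test makes progress.
        a, va, b, vb = next(iter(uncovered))
        test = {a: va, b: vb}
        # Fill every other parameter with the value that covers the most
        # uncovered pairs against the values already chosen.
        for p in params:
            if p not in test:
                test[p] = max(model[p], key=lambda v: sum(
                    key(p, v, q, vq) in uncovered for q, vq in test.items()))
        uncovered -= {key(a, test[a], b, test[b])
                      for a, b in combinations(params, 2)}
        tests.append(test)
    return tests
```

Running a generator like this on the Banking model produces a 100%-coverage suite whose exact size depends on seed order and tie-breaking; a production tool like Hexawise optimizes the selection far more aggressively.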

This is the process for proving the objectively superior nature of the Hexawise-selected tests. Did your results come out drastically different from the ones above? Please feel free to reach out to us and share your experience, or ask for advice on putting together this comparison yourself.

Afterword

What is different when creating such a comparison from scratch:

 - Create a new model instead of copying the sample

 - (Optional) Download the Hexawise Forced Interactions Import Template from the "Forced Interactions" screen (click on the "Import" cloud in the left-most column on that screen, underneath any existing forced interactions)

 - Copy the existing tests to the “Forced Interactions” tab in the Template (in the format below) or enter them on the "Forced Interactions" screen in Hexawise

Important note: Hexawise will automatically populate the Parameters tab if you use this Excel template. The first 3 columns can also be populated with the appropriate details.

 - Import the template using the same dialog on the "Forced Interactions" tab inside Hexawise (not the Model Edit dialog)
