Why do most software tests suck?

Describes common challenges of manual scenario design that drive your testing costs up.

Written by Justin Hunter

This lesson points out several important yet seldom-discussed reasons why most collections of software tests leave a great deal to be desired.

Software tests repeat themselves far more than they need to.

  • Imagine someone standing at one end of a minefield who absolutely has to walk through it. Following in the footsteps of someone who has already made it safely across would be a good strategy.

  • As James Bach has pointed out, in software testing the opposite is almost always true.  Following in the footsteps of test scripts that have already been executed is usually an absolutely terrible way to find defects and an equally terrible way to learn more about whatever it is you're testing.

  • Despite this, software tests repeat themselves much more than they need to.  This often has fairly disastrous effects on both the efficiency and effectiveness of software test execution.

Gut-feel and guesswork are used to decide which potential tests get executed and which never do. Many effective prioritization approaches are ignored.

  • You can't test everything (the sketch after this list illustrates how quickly the possible combinations multiply), so what should be tested? When deciding which potential tests will be executed and which will not, test designers almost always miss the opportunity to use a scientific, well-reasoned approach to guide their decisions.  For most testing projects, "gut-feel" plays a large role in what actually gets tested.

  • Highly effective methods of prioritizing software tests (i.e., deciding which test scripts are executed and which potential test scripts are not) are completely ignored.

  • This is because relatively few software testers are aware of these well-proven prioritization methods.
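To make the "you can't test everything" point concrete, here is a minimal Python sketch. The parameter names and values (Browser, OS, UserType, Payment) are hypothetical and chosen only for illustration, and the greedy pairwise selection shown is a rough approximation of the kind of combinatorial prioritization method alluded to above, not any particular tool's algorithm. It counts how many combinations exhaustive testing would require, then builds a much smaller set of tests that still covers every pair of parameter values at least once.

```python
from itertools import combinations, product

# Hypothetical parameters and values, purely illustrative (not from this lesson).
parameters = {
    "Browser": ["Chrome", "Firefox", "Safari", "Edge"],
    "OS": ["Windows", "macOS", "Linux"],
    "UserType": ["Guest", "Member", "Admin"],
    "Payment": ["Card", "PayPal", "Invoice"],
}
names = list(parameters)

# Exhaustive testing means executing every combination of every value.
exhaustive = 1
for values in parameters.values():
    exhaustive *= len(values)
print(f"Exhaustive combinations: {exhaustive}")  # 4 x 3 x 3 x 3 = 108

def pairs_in(test):
    """All parameter-value pairs exercised by a single test."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

# Every pair of values that a 2-way (pairwise) test set must cover at least once.
uncovered = set()
for combo in product(*parameters.values()):
    uncovered |= pairs_in(dict(zip(names, combo)))

# Greedy selection: repeatedly add the candidate test that covers the most
# still-uncovered pairs. A rough sketch, not an optimized algorithm.
selected = []
while uncovered:
    best = max(
        (dict(zip(names, combo)) for combo in product(*parameters.values())),
        key=lambda test: len(pairs_in(test) & uncovered),
    )
    selected.append(best)
    uncovered -= pairs_in(best)

print(f"Pairwise-covering tests: {len(selected)}")  # typically a dozen or so, vs. 108
```

Running the sketch shows that roughly a dozen carefully chosen tests can cover every 2-way interaction in a 108-combination space, and the gap between exhaustive and prioritized testing only widens as real systems add more parameters and values.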
