How to High Jump to High Quality Testing

To achieve high quality, much like a high jumper achieves great height, you need the right velocity and stride. At Webomates we have found a similar relationship between test case based testing and exploratory testing: the combination achieves results far better than either strategy on its own. This is the first in a three-part blog series. I would be deeply appreciative if you could take the five-question survey at the end of this post; we are trying to compare our findings with what knowledgeable people in the industry, such as yourself, predict.

Background

This article builds on my previous article, https://www.webomates.com/blog/exploratory-testing/exploratory/, which compared and contrasted test case based testing and exploratory testing. Since Webomates' inception three years ago we have used both types of testing extensively for regression and feature testing. A couple of years ago we were heavily influenced by Elisabeth Hendrickson's book Explore It!, which explains how exploratory testing differs from ad hoc or random testing and provides a very structured, charter-based approach to carrying it out. We have been fortunate that Elisabeth granted us some time to guide and advise the direction we are taking with our exploratory testing!

As exploratory testing became one of the major pillars of our CQ portal service, we started researching and tracking its effects. We began by looking at defect yield per regression: how many defects, at what priority, were found using each technique. We also experimented with using each technique in isolation, with significantly increasing the test case count, and with increasing the hours spent on exploratory testing.
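To make the metric concrete, here is a minimal sketch (in Python) of how defect yield could be tallied per regression by technique and priority. The record structure and field names are illustrative assumptions on my part, not part of the Webomates CQ portal.

from collections import Counter

# Each defect record notes which technique surfaced it and its priority.
# These records are hypothetical examples used only to show the tally.
defects = [
    {"technique": "test_case",   "priority": "P1"},
    {"technique": "test_case",   "priority": "P3"},
    {"technique": "exploratory", "priority": "P1"},
    {"technique": "exploratory", "priority": "P2"},
]

# Defect yield for one regression: defect counts grouped by (technique, priority).
yield_by_technique = Counter((d["technique"], d["priority"]) for d in defects)

for (technique, priority), count in sorted(yield_by_technique.items()):
    print(f"{technique:12s} {priority}: {count}")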

Although the research is not complete, and we are still building models across multiple customer accounts and factoring in the interplay of test channels (automation, manual, crowdsource, and artificial intelligence), this article shares some of the early findings.

Start, Run, Jump

Interestingly, we found a definite interplay between three overlapping phases that, used together, produce a spectacular increase in defect yield. I want to emphasize that this research is still in its early days, but we are so excited by the preliminary findings that we wanted to start sharing them with the community and, hopefully, begin building practices that can benefit quality initiatives across the industry.

Start

This is our initial engagement phase with a customer, in which we pick up the domain. One of the things we found was that the level of domain understanding needed for both test case based execution and exploratory testing is surprisingly low. If you are interested, read this article on the subject: https://www.webomates.com/blog/software-testing/domain-knowledge/

Figure 1: Level of Experience Curve

HOWEVER, it also turns out that the level of domain expertise we build out for each activity is a critical element in the spectacular increase in defect yield we have been able to achieve. For anyone in the QA arena the reaction is probably, "Duh! Of course you need to know the domain!" What is really interesting is that the level of domain knowledge needed is quite low, albeit absolutely necessary: below a certain threshold, neither the test cases nor the charters that are created are adequate. We have been constantly fine-tuning the parameters, using defect yield as the metric, to achieve an optimal setup.

In essence, as shown in Figure 2 below, there is initially a minimum level of domain understanding that must be reached; then a spectacular increase in defect yield with respect to effort is achieved, followed by a drop-off in yield.

Figure 2: Domain Understanding Time versus Defect Yield

I am ignoring the near-constant need to keep updating domain understanding as features change in the system under test, since that effort is significantly lower than the effort in the Start phase.

I am also lumping a number of activities into one group in the Start phase, including:

  1. Domain understanding
  2. Domain clarification
  3. Test case writing & review
  4. Charter creation (an example charter is sketched after this list)
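For readers who have not seen them, a charter in the Explore It! style typically takes the form "Explore <target> with <resources> to discover <information>". As a minimal sketch, the snippet below captures a charter in that template as plain data so it can be tracked alongside the test cases written in this phase; the example content is hypothetical and not an actual Webomates artifact.

# A charter following the "Explore <target> with <resources> to discover
# <information>" template, captured as plain data. Example content is hypothetical.
charter = {
    "target": "the checkout flow",
    "resources": "expired and foreign-currency credit cards",
    "information": "how payment failures are reported to the user",
}

def charter_text(c):
    """Render the charter using the Explore It! style template."""
    return f"Explore {c['target']} with {c['resources']} to discover {c['information']}."

print(charter_text(charter))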

Run

This phase involves getting test case execution operational. The defined set of test cases is executed using one or more test channels (automation, crowdsourcing, manual, and AI). Each channel has different characteristics that need to be catered to, such as:

  1. Automation – Test case scripts need to be created (a simple example follows this list). Multiple automation frameworks may be used depending on platform requirements (mobile versus web versus native).
  2. Manual – Test cases need clarification and prioritization.
  3. Crowdsource – Extremely detailed test cases need to be created and reviewed.
  4. AI Automation – Manual intervention is needed, at a minimum at the validation points and to correct the actions of many of the scripts.
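To illustrate the automation channel, here is a minimal test case script sketch using generic Selenium with pytest conventions. The URL, element locators, and expected heading are hypothetical placeholders, and this is not Webomates' own automation stack.

# A minimal automated test case sketch (generic Selenium, not a Webomates-specific
# framework). The URL, locators, and expected heading are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_shows_dashboard():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # Validation point: the page heading should confirm a successful login.
        assert "Dashboard" in driver.find_element(By.TAG_NAME, "h1").text
    finally:
        driver.quit()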

Jump

Once you have good test case coverage, exploratory testing gives you a jump in quality above and beyond test cases alone. We ran research to see whether test cases alone or exploratory testing alone could deliver the same defect yield or quality. The results showed that an optimized combination of test cases and exploratory testing gives the best defect yield, and thus a jump in the quality of the software.

Research 1: Test Cases vs. Exploratory vs. Combined

The combination of test case based and exploratory testing gives the best result and the biggest jump in software quality.

Fixed attributes (held constant across all three teams):

  • Same Product
  • Same version of software
  • Same environment

The first step was to understand the product and identify 500 test cases for testing it.

In the second step we created three test execution teams, each with similar product knowledge and experience.

  1. Team Exploratory – They used an exploratory approach only: they created exploratory charters and executed them to find defects.
  2. Team Test Case – They executed only the 500 test cases to find defects.
  3. Team Combined – They executed the 500 test cases and also created and executed exploratory charters to find defects. (A sketch of how the teams' yields can be compared follows this list.)
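As a sketch of how the three teams' results could be compared, the snippet below tallies each team's defect yield by priority. The counts are left at zero as placeholders; the actual findings appear in the next part of this series.

# Comparing defect yield across the three teams. The counts are placeholders,
# not actual research results.
team_defects = {
    "Team Exploratory": {"P1": 0, "P2": 0, "P3": 0},
    "Team Test Case":   {"P1": 0, "P2": 0, "P3": 0},
    "Team Combined":    {"P1": 0, "P2": 0, "P3": 0},
}

def total_yield(priority_counts):
    """Total defect yield for one team across all priorities."""
    return sum(priority_counts.values())

for team, counts in team_defects.items():
    print(f"{team:18s} total={total_yield(counts):3d}  by priority={counts}")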

The next part of this blog will be released next week. It will show our findings from the projects and research we have executed, alongside your feedback as a comparison. I thought it might be interesting to compare what we are finding with what other people in the industry predict, so would you do us the favor of filling out this five-question survey?


SURVEY 

Please help us by answering five questions about your experience with exploratory and test case based testing.




