Artificial Intelligence (AI) is at the peak of inflated expectations in the Hype Cycle. Amara’s Law states: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
I think most people in AI would agree that we are currently overestimating the short-term effect of AI. Looking through the lens of AI in the Quality Assurance space, I thought it might be helpful to look at which areas of Quality Assurance seem ripe for automation (in the business sense of the word) via AI.
In the sections below I review various areas of test-case-based execution and then provide my personal estimate of the impact that using AI would have in the near term. I quantify this prediction with two scores, each out of 10: Assistance (how much AI can help a human tester with the task) and Automation (how much AI can take over the task entirely).
There are several companies focused on creating software automation systems based on Artificial Intelligence. These systems typically reduce effort today, with the intent of eventually automating the following capabilities completely:
Test case creation is really challenging. An AI-based system needs to analyze the UI it is asked to target and identify a series of test cases that reflect how users actually behave when using the system. Since most user interfaces, particularly ones with dynamic data, allow a very large number of test cases to be created, the next challenge is to identify which test cases are a priority and select the ones to actually execute. This also requires some level of domain expertise.
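To make the prioritization step concrete, here is a minimal sketch of how generated candidate test cases might be ranked. The attributes, weights, and names below are purely illustrative assumptions, not a description of how any particular AI system scores test cases.

```python
# A minimal sketch of test case prioritization, assuming each candidate test
# case generated from UI analysis carries a few hypothetical attributes
# (business_criticality, usage_frequency, recently_changed). The weights and
# scoring heuristic are illustrative only.
from dataclasses import dataclass

@dataclass
class CandidateTestCase:
    name: str
    business_criticality: float  # 0.0 - 1.0, how important the flow is
    usage_frequency: float       # 0.0 - 1.0, how often users hit this flow
    recently_changed: bool       # did the underlying UI change recently?

def priority_score(tc: CandidateTestCase) -> float:
    """Weighted score; higher means the test case should run sooner."""
    score = 0.5 * tc.business_criticality + 0.4 * tc.usage_frequency
    if tc.recently_changed:
        score += 0.3  # changed areas are more likely to hide regressions
    return score

def select_test_cases(candidates, budget):
    """Pick the top `budget` candidates by priority score."""
    return sorted(candidates, key=priority_score, reverse=True)[:budget]

if __name__ == "__main__":
    candidates = [
        CandidateTestCase("login happy path", 1.0, 0.9, False),
        CandidateTestCase("edit profile avatar", 0.3, 0.2, True),
        CandidateTestCase("checkout with coupon", 0.9, 0.4, True),
    ]
    for tc in select_test_cases(candidates, budget=2):
        print(tc.name, round(priority_score(tc), 2))
```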
Assistance: 3/10 Automation: 1/10
Many companies have focused on this area and rapid advancements are being made. Often the goal here is to convert test cases into test scripts and regenerate them as needed, or alternatively to derive test cases from execution data or user behavior and then generate the execution scripts.
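As a rough illustration of the script-generation side, the sketch below turns structured test case steps into a Selenium-style script using simple templates. Real AI-based systems are far more sophisticated; the step schema and templates here are assumptions made only for illustration.

```python
# A toy sketch of test-script generation: rendering structured test case
# steps into Selenium-style Python code via simple templates.

STEP_TEMPLATES = {
    "navigate":    'driver.get("{target}")',
    "click":       'driver.find_element(By.CSS_SELECTOR, "{target}").click()',
    "type":        'driver.find_element(By.CSS_SELECTOR, "{target}").send_keys("{value}")',
    "assert_text": 'assert "{value}" in driver.page_source',
}

def generate_script(test_case_name, steps):
    """Render a list of step dicts into a script string."""
    lines = [f"# Auto-generated script for: {test_case_name}",
             "from selenium import webdriver",
             "from selenium.webdriver.common.by import By",
             "",
             "driver = webdriver.Chrome()"]
    for step in steps:
        lines.append(STEP_TEMPLATES[step["action"]].format(**step))
    lines.append("driver.quit()")
    return "\n".join(lines)

if __name__ == "__main__":
    login_steps = [
        {"action": "navigate", "target": "https://example.com/login"},
        {"action": "type", "target": "#user", "value": "demo"},
        {"action": "type", "target": "#pass", "value": "secret"},
        {"action": "click", "target": "#submit"},
        {"action": "assert_text", "target": "", "value": "Welcome"},
    ]
    print(generate_script("Login happy path", login_steps))
```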
Assistance: 7/10 Automation: 4/10
To me, this category really falls into the realm of rocket science. Self-healing scripts adapt to the changes that occur in the UI and regenerate the automation scripts. Such a system needs to be able to detect two kinds of changes: changes to the UI elements themselves and changes to the underlying functionality.
Combined, these two sets of requirements pose a major technical challenge. In order to detect changes in functionality, complete (100%) script generation is a necessary precursor, and if the system is generating test scripts from test cases, then complete test case generation is needed as well. The third requirement – that self-healing must occur within minutes – only compounds the technical challenge.
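To give a flavor of just the healing part (not the full change-detection problem described above), here is a minimal sketch in which a broken locator is repaired by fuzzy-matching an element’s other known attributes. The page model, attribute names, and similarity threshold are simplified assumptions for illustration.

```python
# A minimal sketch of the self-healing idea: if the primary locator for an
# element no longer matches, fall back to fuzzy-matching the element's other
# known attributes and record the "healed" locator.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def find_element(page, locator):
    """Try the stored id first; if the UI changed, heal using visible text."""
    # 1. Primary strategy: exact id match
    for element in page:
        if element.get("id") == locator["id"]:
            return element, locator
    # 2. Healing strategy: pick the element whose visible text is closest
    best, best_score = None, 0.0
    for element in page:
        score = similarity(element.get("text", ""), locator.get("text", ""))
        if score > best_score:
            best, best_score = element, score
    if best and best_score > 0.6:
        healed = {"id": best.get("id"), "text": best.get("text")}
        return best, healed  # caller persists the healed locator
    raise LookupError(f"Could not heal locator {locator}")

if __name__ == "__main__":
    stored_locator = {"id": "btn-submit", "text": "Submit order"}
    changed_page = [
        {"id": "btn-place-order", "text": "Submit your order"},
        {"id": "btn-cancel", "text": "Cancel"},
    ]
    element, healed = find_element(changed_page, stored_locator)
    print("matched:", element["id"], "| healed locator:", healed)
```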
Assistance: 5/10 Automation: 1/10
These are systems that determine the best method for executing a set of test cases. Execution can be handled by different pools of resources, and combinations of these pools determine the optimum execution based on quality, execution time, setup effort, etc.
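The sketch below illustrates the basic idea of scoring execution pools against quality, execution time, and setup effort. The pool names, metric values, and weights are illustrative assumptions only.

```python
# A toy sketch of choosing an execution method: each pool is scored on
# quality, speed, and setup effort, and the best-scoring pool is recommended.

POOLS = {
    # metric scales: quality (higher better), hours & setup_hours (lower better)
    "automation":   {"quality": 0.80, "hours": 1.0,  "setup_hours": 20.0},
    "manual":       {"quality": 0.95, "hours": 16.0, "setup_hours": 0.0},
    "crowdsourced": {"quality": 0.85, "hours": 6.0,  "setup_hours": 2.0},
}

def pool_score(metrics, w_quality=1.0, w_time=0.05, w_setup=0.02):
    """Higher is better: reward quality, penalize execution and setup time."""
    return (w_quality * metrics["quality"]
            - w_time * metrics["hours"]
            - w_setup * metrics["setup_hours"])

def best_pool(pools):
    return max(pools, key=lambda name: pool_score(pools[name]))

if __name__ == "__main__":
    for name, metrics in POOLS.items():
        print(f"{name:13s} score={pool_score(metrics):.2f}")
    print("recommended:", best_pool(POOLS))
```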
Assistance: 4/10 Automation: 2/10
After test case execution is complete, only half the job is done. A lot more work goes into the analysis of test case execution and into defect prediction.
Analysis of test case execution revolves around determining how many of the “fails” in a Pass/Fail report are true fails and how many are false fails. What does that mean? If a test case execution is reported as failed, a True Fail means that, on analysis, the test case really did fail. A False Failure means that the cause of the failure turns out to be an automation error, such as a time-out. In that case you have no idea whether the test case actually failed or passed; all you know for sure is that the automation system failed the test, and that the failure is an automation failure. Unless you update your script or manually verify the test case, you don’t know whether the test case has in fact failed. That is a False Failure. Automation, in particular, is notorious for the number of False Failures it tends to generate.
Similarly, True Passes and False Passes need to be looked at more carefully – especially False Passes, as they are incredibly important to identify should they ever occur. Fortunately, they tend to be rather infrequent.
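As a simple illustration of this triage, the sketch below sorts raw results into true fails versus false failures caused by the automation itself. The error names and rules are assumptions chosen for illustration, not an exhaustive taxonomy – and note that a False Pass cannot be detected from the result record alone.

```python
# A minimal sketch of execution analysis: separating true fails from false
# failures caused by the automation (e.g. time-outs or broken locators).

AUTOMATION_ERRORS = ("TimeoutError", "NoSuchElementError", "StaleElementError")

def classify(result):
    """Return one of: true_pass, true_fail, false_fail_needs_review."""
    if result["status"] == "pass":
        return "true_pass"          # (a false pass cannot be detected here)
    if result.get("error_type") in AUTOMATION_ERRORS:
        # The automation itself failed, so the real verdict is still unknown.
        return "false_fail_needs_review"
    return "true_fail"

if __name__ == "__main__":
    raw_results = [
        {"name": "login", "status": "pass"},
        {"name": "checkout", "status": "fail", "error_type": "AssertionError"},
        {"name": "search", "status": "fail", "error_type": "TimeoutError"},
    ]
    for r in raw_results:
        print(f'{r["name"]:10s} -> {classify(r)}')
```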
Assistance: 4/10 Automation: 2/10
Defect prediction, on the other hand, centers on the probability that a True Fail or a False Pass translates into a new defect.
The key here is in satisfying the requirements of both words: “defect” and “prediction”.
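For a flavor of what such a prediction might look like, here is a toy sketch that combines a few hypothetical signals into a probability. The signals and weights are invented for illustration; a real system would learn them from historical defect data.

```python
# A toy sketch of defect prediction: estimating the probability that a
# True Fail (or suspected False Pass) corresponds to a new defect.
import math

def defect_probability(signals):
    """Logistic combination of a few hypothetical signals (0/1 or 0.0-1.0)."""
    weights = {
        "failure_is_new": 1.5,       # first time this test has failed
        "reproduced_on_retry": 2.0,  # failure persists across reruns
        "area_recently_changed": 1.0,
        "known_flaky_history": -2.0, # flaky tests lower the odds of a real bug
    }
    z = -1.0 + sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    fail = {"failure_is_new": 1, "reproduced_on_retry": 1,
            "area_recently_changed": 1, "known_flaky_history": 0}
    print(f"probability of a new defect: {defect_probability(fail):.2f}")
```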
Assistance: 3/10 Automation: 2/10
Below is a table that summarizes the “scores” that I have assigned to various facets of QA with an AI focus.
| | Assist | Automate |
| --- | --- | --- |
| Execution of Test Cases | | |
| Test case creation | 3 | 1 |
| Test script creation | 7 | 4 |
| Self-healing scripts | 5 | 1 |
| Method of execution | 4 | 2 |
| Execution Output | | |
| Execution analysis | 4 | 2 |
| Defect prediction | 3 | 2 |
Webomates is focusing on all of the above areas of AI in QA and is aggressively partnering with other leaders in this space. By using our TaaS (Testing as a Service) offering, you automatically start using the best AI tools in the market!
If you are interested in learning more about Webomates’ CQ service please click here and schedule a demo or reach out to us at info@webomates.com.