“We believe that the most important solution to overcome increasing QA and Testing Challenges will be the emerging introduction of machine-based intelligence.” – 2016-17 World Quality Report
That was last year. This year’s edition, “The World Quality Report 2017-18: The State of QA and Testing”, offers some interesting, even surprising, insights. For one, despite growing murmurs about the widespread adoption of test automation, the report points out that adoption has held steady at around 16% across the 1,600+ organizations surveyed. The report suggests that while automation for specific needs like regression testing is viewed as growing in importance, challenges remain at the enterprise level. The sense is that as those challenges are addressed, automation could climb to 50% or more. So where are enterprises expecting those solutions to come from? The same report offers a clue: 40-50% of the surveyed organizations were looking to implement solutions like cognitive automation, machine learning, and predictive analytics. Gartner has also reported that by 2020, 50% of IT-driven organizations will look to apply advanced analytics to improve application quality. For convenience, let’s group all these techniques under the “Artificial Intelligence” bracket and see the role they could play in changing our world of software testing.
First, though, let’s see just why AI fits naturally into software testing. Boiled down, AI is concerned with how machines can understand how separate elements work in a specific context and then take reasoned actions appropriate to the situation. The machine keeps “learning” as those actions are repeated, and at a certain stage it becomes capable of acting even in situations it hasn’t been exposed to before. It understands what goes into a system and what comes out – inputs and expected outputs. Isn’t that exactly what software testing is concerned with too? Hence the natural fit.
This suggests one obvious role for AI: take over the tasks that call for repeated, if varied, inputs, then observe and report the outputs. In the context of web and mobile apps, this could mean various user actions – valid and invalid entries, swipes, and clicks. The AI learns the actions of users and mimics them faithfully. It can also project what happens with more complex combinations of inputs and, based on the patterns it detects in the outputs, anticipate what could go wrong in situations or behaviors not previously considered. Since these tasks are performed fast, the testing can be far more comprehensive and cover many more scenarios. The increased speed is a tremendous benefit in an age of shorter testing cycles, increased iterations, and accelerated go-to-market needs. Essentially, what AI promises is more testing, in a shorter time, with less effort!
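To make the “repeated, if varied inputs” idea concrete, here is a minimal sketch of that generate-act-observe loop. Everything in it is illustrative rather than from the report: `validate_username` stands in for the system under test, and the random generator stands in for the learned model producing varied user entries.

```python
import random
import string

def validate_username(name):
    # Hypothetical system under test: usernames must be
    # 3-16 alphanumeric characters.
    return name.isalnum() and 3 <= len(name) <= 16

def random_inputs(n, seed=0):
    # Produce a mix of valid and invalid entries, mimicking the
    # varied user inputs an AI-driven tester might generate.
    rng = random.Random(seed)
    pool = string.ascii_letters + string.digits + " !@#"
    return ["".join(rng.choice(pool) for _ in range(rng.randint(0, 20)))
            for _ in range(n)]

def run_tests(inputs):
    # The "observe and report" loop: feed each input to the system
    # and record the input/output pairs for later analysis.
    return {s: validate_username(s) for s in inputs}

results = run_tests(random_inputs(200))
```

A real tool would replace the random generator with a model trained on recorded user sessions, but the structure – generate many inputs cheaply, run them all, record the outcomes – is the same.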
Given its ability to deal with large amounts of data and make sense of it, a role for AI in test case and test data management seems evident too. AI could automatically generate test cases for various scenarios based on an analysis of the available data on how use cases pan out across iterations and variations. AI could also generate the input test data needed to run those test cases. The standards to be followed could be built into the rules, and the AI would ensure compliance. As an extension, with an understanding of the user stories and the acceptance criteria, AI could create the required test code too. Essentially, this means test cases may no longer have to be written by hand – the AI would generate them.
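The simplest version of automatic test-case generation is combinatorial: enumerate every combination of the parameter values a scenario can take. The sketch below assumes hypothetical parameter domains (browser, user type, payment method) that in practice would be mined from usage data, as the paragraph above describes.

```python
from itertools import product

# Hypothetical parameter domains, stand-ins for values an AI
# would extract from historical usage data.
domains = {
    "browser": ["chrome", "firefox"],
    "user_type": ["guest", "member", "admin"],
    "payment": ["card", "paypal"],
}

def generate_test_cases(domains):
    # Enumerate every combination of parameter values; each dict
    # is one generated test case.
    keys = list(domains)
    return [dict(zip(keys, values)) for values in product(*domains.values())]

cases = generate_test_cases(domains)  # 2 * 3 * 2 = 12 test cases
```

Exhaustive enumeration explodes quickly as domains grow, which is exactly where a learned model earns its keep: prioritizing the combinations that real usage data suggests matter most.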
Most app teams are concerned with how their apps will perform under stress. One of the major obstacles here is the difficulty of arriving at an understanding of expected performance: testing in production environments is complex, SDKs are often inadequate, and relying on manual judgement does not scale. The net result is that performance issues become visible only late, in regression testing – leading to lost time and increased effort. This is where AI can play a role, by comprehensively testing the performance of every possible action, many times over. Since this is done at great pace, problems are identified virtually instantly. Reporting also tends to be more intuitive and visual, providing easier-to-understand insights.
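At its core, spotting performance problems in repeated runs is an anomaly-detection exercise. As a minimal sketch (the timings and threshold rule are illustrative, not any vendor’s method), here is a simple statistical check that flags response times far above the norm:

```python
import statistics

def flag_slow_actions(timings_ms, k=2.0):
    # Flag samples more than k standard deviations above the mean -
    # a simple stand-in for the pattern detection described above.
    mean = statistics.mean(timings_ms)
    std = statistics.pstdev(timings_ms)
    threshold = mean + k * std
    return [t for t in timings_ms if t > threshold]

# Hypothetical response times (ms) for one user action, repeated.
samples = [102, 98, 105, 99, 101, 97, 480, 103, 100, 96]
slow = flag_slow_actions(samples)  # the 480 ms outlier is flagged
```

A production system would use more robust statistics and per-action baselines, but the principle is the one in the paragraph above: run each action many times, and let the machine surface the runs that break the pattern.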
These are all applications in the here and now, but the future promises a lot too. There is already talk of how AI can do more to improve quality. Current quality practices and systems have been generating data across many versions and iterations of the products being tested. It seems reasonable to assume that AI could work its magic on this data to identify problems, possible areas for improvement, and even opportunities in the form of feature additions to the product.
Let’s give the last word to those whose report we started this reflection with. Hans van Waayenburg, Leader of the Testing Global Service Line at Capgemini, who helmed the World Quality Report, said, “To retain a competitive edge, QA and Test organizations must move towards test ecosystem automation, predictive analytics and intelligence-led quality assurance and testing, so that they are able to ensure business outcomes.” That seems quite unambiguous – nothing Artificial about it.