
We deeply believe in the ability of artificial intelligence (AI) to solve important problems and improve human lives in meaningful ways. The COVID-19 pandemic has put these abilities on display, with AI-driven solutions improving healthcare through telehealth; making “cold” supply chains more robust for critical foods and medicines; and allowing businesses to reopen facilities in a safe manner.

We’re not alone in our high hopes; according to Gartner, AI is entering a golden age and will attain greater heights over the near and long term.

It was inevitable that the trend toward AI adoption would intersect with software as a service (SaaS), which has transformed software production and consumption models. Built on the principles of cloud computing, SaaS lets companies focus on their core business, leaving software development challenges to vendors. We’re not surprised that another Gartner report found that most organizations’ preference for acquiring AI capabilities is shifting toward having those capabilities infused in off-the-shelf enterprise applications. Hence, providers such as Salesforce, Microsoft, Amazon, Google, SAP and others are embedding AI technologies within their offerings as well as introducing AI platform capabilities and embracing the AI-as-a-service model.

This evolution has made AI adoption easier for enterprises — but “easier” is a relative term, and AI as a service is not without its challenges. Quality assurance (QA) in general, and testing in particular, play a vital role in AI platform adoption. AI platform testing is complex for the following reasons:

  • Testing AI demands intelligent processes, virtualized cloud resources, specialized skills and AI-enabled tools.

  • As AI platform vendors typically strive for rapid innovation and automatic updates of their products, the pace of enterprise testing must be equally fast.

  • Studies find that AI platform products usually lack transparency and interpretability; because their behavior isn't easily explainable, test results are difficult to trust.

Establishing a continuous testing ecosystem for AI platform adoption

In our efforts to help customers implement SaaS-based AI platforms, our teams have observed and solved a number of challenges. Enterprise IT assets such as data, applications and infrastructure are constantly changing. At the same time, AI products are continuously upgraded by vendors. Given this dynamism, experience has shown us that it's crucial to establish a continuous testing ecosystem that not only automatically validates changes across the ever-changing enterprise IT landscape, but also verifies each new version of the AI product within the context of the business.

To establish a continuous testing ecosystem, we have executed (and recommend) these high-level steps in our client engagements:

  • Shift automation test scripts to an enterprise version-control tool to establish a single source of truth. Rather than storing automation scripts in a test management tool or a shared folder structure, check them into a version-control repository.

  • Integrate the automation suite with a code/data build deployment tool to enable centralized execution and reporting. Test teams align code and data builds with their respective automation test suites, and tool-based auto-deployment is orchestrated on every build so that no human intervention is required.

  • Classify the automation suite into multiple layers of tests to speed up feedback at each checkpoint. For each data and code build, run tests such as a health check and a smoke test to verify that key system features and individual services are operational and that no blocking defects exist.

  • Optimize testing at the code and data levels to improve cost and time to market. Conventional requirements-based testing tends to result in considerable over-testing without guaranteeing complete coverage, while missed tests significantly degrade coverage and quality, typically driving additional QA costs and extended testing timelines that, in turn, threaten project success. There are solutions that perform impact analysis to map code and data changes to the affected test cases, keeping testing lean while maximizing test coverage.

  • Test the training model. Traditional testing methods validate only the engineering, not the algorithmic approach of the AI solution. By testing the training model, we can certify whether the solution has learned from the given instruction, whether reinforcement, supervised or unsupervised. It's critical to recreate the same scenarios multiple times to check for correctness and consistency, and equally critical to establish a process, as part of testing, that trains the AI solution to learn from bugs, errors, exceptions and mistakes. Fault/error tolerances should be established based on customer-defined exception handling.

  • Apply a transfer learning model. AI models often struggle to carry what they have learned from one set of circumstances to another, which drives ever more testing and training on real-world production data. A continuous testing setup carries that learning forward from testing through production rollout, reducing transfer-learning concerns.

  • Embrace intelligent regression. If overall regression execution time is high, the continuous testing setup becomes less effective due to prolonged feedback cycles. To avoid this, carve out a subset of the regression suite at run time based on the most critically impacted areas. Our teams have often applied machine learning to achieve this smart regression: algorithms that use a probabilistic model to select regression tests make efficient use of cloud resources and speed testing.

  • Utilize full regression. This can run overnight or over the weekend, depending on alignment with recurring build frequencies, and represents the final feedback from the continuous testing ecosystem. The goal is to minimize feedback time by running parallel execution threads or machines.
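The layered-testing step above can be pictured as a simple gate: run the fastest checks first and stop at the first failing layer so feedback arrives quickly. The following Python sketch illustrates the idea; the layer names and check functions are hypothetical, not part of any specific product.

```python
# Illustrative sketch of layered test gating: health check -> smoke -> regression.
# Layer names and checks are hypothetical examples.

def run_layers(layers):
    """Run each (name, tests) layer in order; stop at the first failing layer."""
    results = []
    for name, tests in layers:
        failed = [t.__name__ for t in tests if not t()]
        results.append((name, failed))
        if failed:  # fast feedback: skip slower layers once a layer fails
            break
    return results

# Hypothetical checks for a data/code build.
def service_is_up():    return True
def schema_is_valid():  return True
def login_flow_works(): return False  # simulated blocking defect

layers = [
    ("health-check", [service_is_up, schema_is_valid]),
    ("smoke",        [login_flow_works]),
    ("regression",   []),  # never reached when smoke fails
]

print(run_layers(layers))
```

Because the smoke layer fails here, the (slow) regression layer is never started, which is exactly the checkpoint behavior the step describes.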
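The impact-analysis step can be approximated with a mapping from code and data artifacts to the tests that cover them: when an artifact changes, only its covering tests are selected. All file and test names in this sketch are hypothetical.

```python
# Minimal change-impact analysis sketch: map changed code/data artifacts to
# affected test cases. The coverage map and names are hypothetical.

COVERAGE_MAP = {
    "pricing.py":        {"test_quote", "test_discount"},
    "customer_data.csv": {"test_segmentation"},
    "scoring_model.pkl": {"test_scoring", "test_quote"},
}

def impacted_tests(changed_files):
    """Union of all tests covering any changed artifact."""
    selected = set()
    for f in changed_files:
        selected |= COVERAGE_MAP.get(f, set())
    return sorted(selected)

print(impacted_tests(["pricing.py", "scoring_model.pkl"]))
```

In practice the coverage map would be produced by tooling (static analysis or coverage traces) rather than maintained by hand, but the selection logic is the same.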
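Recreating the same scenario multiple times, as the training-model step prescribes, can be framed as a repeatability check: run the model on identical input several times and require the outputs to agree within a customer-defined tolerance. The `predict` function below is a deterministic stand-in for the AI solution under test.

```python
# Repeatability check sketch: run a model on the same scenario several times
# and verify outputs agree within a customer-defined tolerance.
# `predict` is a hypothetical stand-in for the AI solution under test.

def predict(scenario):
    # Deterministic stub; a real model may be stochastic, which is
    # precisely what this check is meant to surface.
    return 0.87 if scenario == "claim-approval" else 0.12

def consistent(scenario, runs=5, tolerance=1e-6):
    outputs = [predict(scenario) for _ in range(runs)]
    baseline = outputs[0]
    return all(abs(o - baseline) <= tolerance for o in outputs)

assert consistent("claim-approval")
```

The tolerance parameter corresponds to the customer-defined fault/error tolerance mentioned above; a stochastic model would need a looser threshold or a fixed random seed.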
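One way to read the "probabilistic model for selecting regression tests" in the intelligent-regression step is as a greedy ranking: score each test by estimated failure probability per minute of runtime and fill a time budget in score order. The failure-rate estimates and test names below are illustrative only.

```python
# Greedy regression-selection sketch: pick tests with the highest estimated
# failure probability per minute of runtime, within a time budget.
# Probabilities, runtimes and test names are illustrative.

tests = [
    # (name, estimated_failure_probability, runtime_minutes)
    ("test_checkout",  0.30, 5),
    ("test_search",    0.05, 1),
    ("test_reporting", 0.20, 10),
    ("test_login",     0.40, 2),
]

def select(tests, budget_minutes):
    """Rank by failure probability per minute, then fill the budget greedily."""
    ranked = sorted(tests, key=lambda t: t[1] / t[2], reverse=True)
    chosen, used = [], 0
    for name, _, runtime in ranked:
        if used + runtime <= budget_minutes:
            chosen.append(name)
            used += runtime
    return chosen

print(select(tests, budget_minutes=8))
```

A production system would learn the failure probabilities from execution history and code-change features rather than hard-coding them.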
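The parallel execution called for in the full-regression step can be sketched with standard-library thread pools: independent suites run concurrently, so wall-clock feedback time approaches the longest single suite rather than the sum. The suite names and the trivial `run_suite` function are hypothetical.

```python
# Parallel full-regression sketch: run independent suites concurrently to
# shrink wall-clock feedback time. Suite names and runner are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def run_suite(name):
    # In practice this would shell out to a test runner on a worker machine;
    # here it simply returns a pass result for the named suite.
    return name, "passed"

suites = ["ui", "api", "data-quality", "batch"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_suite, suites))

print(results)
```

For process-heavy suites, `ProcessPoolExecutor` or separate machines would replace the thread pool, but the fan-out/collect structure is the same.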

The following figure presents our continuous testing ecosystem approach for AI platform adoption.


We believe that continuous testing is a fundamental requirement for AI platform adoption, and one in which we’ve built considerable experience. For a deeper look, see our white paper, “Continuous Testing Is Key for Enterprises to Adopt AI Platforms.”

As testing processes and tools are increasingly enabled by AI and machine learning techniques, we foresee AI platforms evolving into self-testing, self-healing systems.


To learn more, visit the Digital Systems & Technology section of our website or contact us.