COGNIZANT CONSULTING
Helping organizations engage people and uncover insight from data to shape the products, services and experiences they offer

Perspectives

5 Ways to Overcome AI Model, Data and User Gotchas

2019-12-10


To take AI from concept to meaningful business application, organizations must rethink how they plan and prepare for the journey.

In the artificial intelligence (AI)-driven era ahead, human approaches and engagement will be as important to ultimate success as the models and data on which AI systems are built. As internal teams develop, test, refine and move new AI systems to production, they'll soon learn that conventional best practices are ill-equipped to address some of AI's most unexpected nuances.

When planning a major AI initiative, teams must account for the most common, looming issues that can delay or derail go-lives and hinder user engagement. Consider an example that highlights these potential barriers and ways to address them.

You’ve been appointed as the “owner” of a new AI initiative to replace a decades-old, conventional retail application with a new prediction-driven solution. This pilot will serve as a flagship benchmark for further AI investments. As the project nears production, the team obtains current and live data for user acceptance testing (UAT) — then urgently raises concerns around the model's accuracy, data quality and usage, as detailed in Figure 1.

Figure 1

Given these issues, how will you get the project back on track? More importantly, what could have been done to avoid these problems at the outset?

Unlike most traditional software projects, AI initiatives can’t be approached in a formulaic way. Rather, approaches must be more dynamic, with up-front planning and collaboration built in. From our work with AI teams, we recommend the following five best practices to avoid potential pitfalls and achieve AI's objectives.

Verify and enable data readiness.

To arrive at impactful business decisions using AI, quality data is essential. Ensuring there is a sufficient amount of data for accurate modeling, avoiding data-entry shortcuts, and formulating an ongoing data-extraction process will help companies avoid key bottlenecks. It’s important to understand going in that not all data will be perfect or complete, and that good data preparation is an essential first step.

Factoring in special data patterns like holiday sales and sales during weather emergencies will enrich the model. Looking beyond the initial dataset that was shared and performing a thorough data quality exercise on multiple years of production data generates more accurate data that in turn yields more reliable predictions and business recommendations.
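A basic data-preparation pass of the kind described above can be sketched as follows. This is a minimal illustration, not a production pipeline; the record fields (`day`, `units_sold`) and the holiday list are hypothetical.

```python
from datetime import date

# Illustrative special-date calendar; a real exercise would cover multiple years
# of production data and weather-emergency dates as well.
HOLIDAYS = {date(2019, 11, 29), date(2019, 12, 25)}

def prepare(records):
    """Basic data-quality pass: drop incomplete rows, flag special dates."""
    clean = []
    for rec in records:
        if rec.get("day") is None or rec.get("units_sold") is None:
            continue  # incomplete entry, often the result of a data-entry shortcut
        # Enrich the record so the model can learn holiday sales patterns.
        clean.append(dict(rec, is_holiday=rec["day"] in HOLIDAYS))
    return clean

rows = [
    {"day": date(2019, 12, 25), "units_sold": 410},
    {"day": date(2019, 12, 26), "units_sold": None},  # missing value is dropped
    {"day": date(2019, 12, 27), "units_sold": 120},
]
print(prepare(rows))
```

In practice, rows with missing values might be imputed rather than dropped; the point is that the decision is made explicitly, before modeling begins.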

Ensure consistent cross-team collaboration and seamless end-user adoption.

AI engagements typically span business units and involve data scientists, domain experts, business specialists and the active participation of field personnel. This includes specialists from operations, sales, marketing, finance and customer care. Sustained cross-team collaboration from beginning to end will help keep the project agile and on track.

AI model development is unique and needs consistent inputs and validation. As more patterns are revealed, models can be fine-tuned. For example, by factoring in field users’ time throughout the lifecycle of the AI development, data scientists can verify and validate new patterns with those users and decide whether they are outliers or valid patterns.
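One simple way to surface candidate outliers for field-user review is a z-score rule, sketched below. The threshold and sample data are illustrative; flagged values are candidates for human validation, not automatic rejects.

```python
import statistics

def flag_outliers(values, z_threshold=3.0):
    """Return values lying more than z_threshold standard deviations
    from the mean, for review by field users."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical; nothing to flag
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

daily_sales = [100, 98, 103, 101, 99, 500]  # 500 may be a holiday spike or bad data
print(flag_outliers(daily_sales, z_threshold=2.0))  # -> [500]
```

A field user who recognizes the flagged spike as, say, a Black Friday pattern would keep it in the training data; unexplained anomalies would be excluded or corrected.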

Ever-shifting company culture must also be considered. Some of the most significant AI adoption challenges lie in getting frontline users to trust AI, adopt it, and embed it into their decision-making rather than letting personal judgment override AI-based decisions. End users who have historically made most decisions themselves are not going to immediately trust an application. They will need time and training to reach a point at which they rely on the data, and they’ll need a way to raise issues they encounter for quick resolution.

Plan ahead to scale from “sample.”

The server configuration needed for AI is vastly different from that required for traditional projects. Simple, conventional actions such as increasing memory or CPU capacity may not improve model-training time. While it makes sense to start with a proof-of-concept to determine the feasibility of AI for a specific objective, the plan for scaling the project must begin early in the strategic planning process.

Scaling for AI involves model tuning; infrastructure decisions, such as CPU versus GPU, memory and storage; hosting decisions (cloud, on-premises or both); and security. The model may also need to be tuned for larger data volumes — and those volumes might require high-end graphics processing units. It is best to address these specifics, rather than assuming that a typical high-end application server will be good enough for AI.

Build the talent pool and induct new roles as necessary.

It’s common for companies to have skill gaps in their organizations, so businesses facing this issue shouldn’t despair — they’re hardly alone. Data scientists, analysts and data engineers can be found through partners and direct hiring; they will form the core technical team.

While these roles are critical and necessary, it is invaluable to create niche and tailored roles. For example, a data curator can act as a custodian to define strategies for data collection and to establish data-capture standards. An AI specialist can identify and certify AI applicability for use cases, evaluate AI readiness, validate data suitability, and design AI road maps. And an end-user enabler will facilitate seamless end-user adoption.

These roles can accelerate AI adoption. For example, the data curator can take ownership of unrefined data, collating it and addressing quality problems up front before they become project bottlenecks.

Continuously monitor the transformation journey.

Unlike traditional applications, for which business logic holds the key, in decision-driven AI applications, the data primarily influences the outcomes. Therefore, continuous monitoring of model performance is critical. As data changes over time, the model needs to be retrained to ensure consistent and optimal results. The key is to factor in ongoing model training and to create a feedback mechanism for reporting model performance degradation. In the real-world example noted at the outset of this paper, the model actually became outdated before production even began.
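The feedback mechanism described above can be as simple as tracking rolling prediction error and signaling when it crosses a threshold. The sketch below assumes a numeric-prediction model; the window size and error threshold are illustrative and would be tuned per application.

```python
from collections import deque

class ModelMonitor:
    """Track rolling prediction error and signal when retraining is needed."""

    def __init__(self, window=100, max_error=0.15):
        self.errors = deque(maxlen=window)  # keep only the most recent errors
        self.max_error = max_error

    def record(self, predicted, actual):
        # Relative error per prediction; guard against division by zero.
        self.errors.append(abs(predicted - actual) / max(abs(actual), 1e-9))

    def needs_retraining(self):
        if not self.errors:
            return False
        return sum(self.errors) / len(self.errors) > self.max_error

monitor = ModelMonitor(window=3, max_error=0.10)
for predicted, actual in [(95, 100), (90, 100), (70, 100)]:
    monitor.record(predicted, actual)
print(monitor.needs_retraining())  # -> True: average error 0.15 exceeds 0.10
```

Wiring such a monitor into production reporting gives the team early warning of data drift — the failure mode that, in the scenario above, made the model outdated before it even launched.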

While traditional project plans may work initially, moving AI to production requires a different approach. Given the complexities and wide-ranging adoption of AI initiatives, we recommend recording takeaways from every implementation so that lessons can be carried forward.

AI is becoming more mature, and it’s infiltrating more and more business functions. Having the right tools, technology and implementation partners is a great starting point.

Learn more by visiting our AI page, or contact us.
