Given these issues, how will you get the project back on track? More importantly, what could have been done to avoid these problems at the outset?
Unlike most traditional software projects, AI initiatives can’t be approached formulaically. They demand a more dynamic approach, one that combines up-front planning with ongoing collaboration. From our work with AI teams, we recommend the following five best practices to avoid potential pitfalls and achieve AI objectives.
1 Verify and enable data readiness.
To arrive at impactful business decisions using AI, quality data is essential. Ensuring there is sufficient data to model accurately, avoiding data-entry shortcuts, and establishing an ongoing data-extraction process will help companies avoid key bottlenecks. It’s important to understand going in that not all data will be perfect or complete, and that good data preparation is an essential first step.
Factoring in special data patterns like holiday sales and sales during weather emergencies will enrich the model. Looking beyond the initial dataset that was shared and performing a thorough data quality exercise on multiple years of production data generates more accurate data that in turn yields more reliable predictions and business recommendations.
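As a minimal sketch of what an up-front data quality exercise might look like, the snippet below profiles a batch of records for duplicates and missing values before any modeling begins. The record fields and structure are illustrative assumptions, not taken from the engagement described in this paper.

```python
from datetime import date

# Hypothetical daily-sales records; field names are illustrative only.
records = [
    {"day": date(2023, 11, 24), "units": 540, "region": "NE"},   # holiday-sale spike
    {"day": date(2023, 11, 25), "units": 210, "region": "NE"},
    {"day": date(2023, 11, 26), "units": None, "region": "NE"},  # missing entry
    {"day": date(2023, 11, 26), "units": None, "region": "NE"},  # duplicate row
]

def data_quality_report(rows):
    """Summarize gaps that would otherwise surface mid-project."""
    seen, duplicates, missing = set(), 0, 0
    for row in rows:
        key = (row["day"], row["region"])
        if key in seen:
            duplicates += 1
        seen.add(key)
        if row["units"] is None:
            missing += 1
    return {"rows": len(rows), "duplicates": duplicates, "missing_units": missing}

print(data_quality_report(records))
# {'rows': 4, 'duplicates': 1, 'missing_units': 2}
```

Running a report like this over multiple years of production data, rather than only the initial shared dataset, is one concrete way to surface the gaps early.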
2 Ensure consistent cross-team collaboration and seamless end-user adoption.
AI engagements typically span business units, bringing together data scientists, domain experts and business specialists from operations, sales, marketing, finance and customer care, along with the active participation of field personnel. Sustained cross-team collaboration from beginning to end will help keep the project agile and on track.
AI model development is unique and needs consistent inputs and validation. As more patterns are revealed, models can be fine-tuned. For example, by factoring in field users’ time throughout the lifecycle of the AI development, data scientists can verify and validate new patterns with those users and decide whether they are outliers or valid patterns.
Ever-shifting company culture must also be considered. Some of the most significant AI adoption challenges lie with frontline users trusting, adopting and embedding AI into their decision-making strategies and not letting personal judgments override AI-based decisions. End users who have historically made most decisions themselves are not going to immediately trust an application. They will need time and training to reach a point at which they rely on the data, and they’ll need a way to raise issues they encounter for quick resolution.
3 Plan ahead to scale from “sample.”
The server configuration needed for AI is vastly different from that required for traditional projects. Simple, conventional actions such as increasing memory or CPU capacity may not improve model-training time. While it makes sense to start with a proof-of-concept to determine the feasibility of AI for a specific objective, the plan for scaling the project must begin early in the strategic planning process.
Scaling for AI involves model tuning; infrastructure decisions, such as CPU/GPU, memory and storage; hosting decisions (cloud, on-premises or both); and security. The model may also need to be tuned for larger data volumes, and those volumes might require high-end graphics processing units. It is best to address these specifics rather than assuming that a typical high-end application server will be good enough for AI.
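One reason "just add memory" falls short is that a pipeline built to load the proof-of-concept sample in one pass won't survive production volumes. A minimal sketch, with illustrative field names, of processing a large extract in fixed-size chunks so memory stays flat as volume grows:

```python
import csv
import io

def stream_total(csv_text, chunk_size=1000):
    """Aggregate a sales extract chunk by chunk instead of loading it whole."""
    reader = csv.DictReader(io.StringIO(csv_text))
    total, chunk = 0.0, []
    for row in reader:
        chunk.append(row)
        if len(chunk) >= chunk_size:          # process and release each chunk
            total += sum(float(r["amount"]) for r in chunk)
            chunk.clear()
    total += sum(float(r["amount"]) for r in chunk)  # final partial chunk
    return total

# A stand-in for a multi-year extract: 2,500 rows of 10.0 each.
sample = "amount\n" + "\n".join(["10.0"] * 2500)
print(stream_total(sample, chunk_size=1000))  # 25000.0
```

The same design question, batch versus streaming, recurs at every layer of the stack, which is why scaling decisions belong in the strategic plan rather than as an afterthought.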
4 Build the talent pool and induct new roles as necessary.
Skill gaps are common across organizations, so businesses facing them shouldn’t despair; they’re hardly alone. Data scientists, analysts and data engineers can be found through partners and direct hiring; they will form the core technical team.
While these roles are critical and necessary, it is invaluable to create niche and tailored roles. For example, a data curator can act as a custodian to define strategies for data collection and to establish data-capture standards. An AI specialist can identify and certify AI applicability for use cases, evaluate AI readiness, validate data suitability, and design AI road maps. And an end-user enabler will facilitate seamless end-user adoption.
These roles can accelerate AI adoption. For example, the data curator can take ownership of unrefined data, collating it and addressing quality issues up front before they become project bottlenecks.
5 Continuously monitor the transformation journey.
Unlike traditional applications, where business logic drives the behavior, decision-driven AI applications are shaped primarily by the data. Therefore, continuous monitoring of model performance is critical. As data changes over time, the model needs to be retrained to ensure consistent and optimal results. The key is to factor in ongoing model training and to create a feedback mechanism for reporting model performance degradation. In the real-world example noted at the outset of this paper, the model actually became outdated before production even began.
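The feedback mechanism described above can be sketched very simply: compare the model's error on recent data against the error measured at deployment, and flag it for retraining when the gap exceeds a threshold. The metric (mean absolute error) and the tolerance below are assumptions for illustration, not a prescribed standard.

```python
def mean_abs_error(preds, actuals):
    """Average absolute gap between predictions and observed outcomes."""
    return sum(abs(p - a) for p, a in zip(preds, actuals)) / len(preds)

def needs_retraining(baseline_mae, recent_preds, recent_actuals, tolerance=0.25):
    """Flag the model when recent error exceeds baseline by more than `tolerance`."""
    recent_mae = mean_abs_error(recent_preds, recent_actuals)
    return recent_mae > baseline_mae * (1 + tolerance)

baseline = 2.0  # MAE measured when the model was first validated
# Recent predictions have drifted well away from actuals: flag for retraining.
print(needs_retraining(baseline, [10, 12, 9], [13, 16, 12]))  # True
```

Wiring a check like this into routine reporting gives end users a concrete channel for raising degradation issues before the model quietly goes stale.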
While traditional project plans may work initially, moving AI to production requires a different approach. Given the complexities and wide-ranging adoption of AI initiatives, we recommend recording takeaways from every implementation so that lessons can be carried forward.
AI is becoming more mature, and it’s infiltrating more and more business functions. Having the right tools, technology and implementation partners is a great starting point.
Learn more by visiting our AI page, or contact us.