Perspectives

AI Inflection Point: Balancing Responsible Development & Deployment with Accountable Results

2019-01-04


To usher AI into the business mainstream, companies need to complement their technology advances with a focus on governance that drives ethical usage and trust.

Most people already interact with artificial intelligence (AI) every day as they read product recommendations on Amazon, receive automated fraud alerts from credit card companies or ask their Google Assistant to play a certain song. But AI is also at work in less visible ways — scoring bank customers for creditworthiness, analyzing warranty claims to uncover upstream production problems, grading college essays and even helping courts determine how to sentence criminals.

Nevertheless, businesses are in the early stages of AI adoption, and companies are still learning how to put the technology to work. The challenge today is less about understanding technical questions and technology capabilities, and more about crafting a strategy, determining the governance structures and practices needed for “responsible AI,” and accelerating the move from experiments to full-scale AI adoption.

To better understand the state of AI in business, we surveyed 975 executives across industries in the U.S. and Europe. Here are the key findings.

AI enthusiasm outstrips deployment

Executives are enthusiastic about AI. They see it as having a growing impact on their businesses, with 63% believing it is already extremely or very important to their company’s success, and 84% saying that will be true in three years, including 48% who expect it to be extremely important. Business leaders also believe it can be applied throughout their organizations.

Positive attitudes toward AI are particularly pronounced among respondents who said their companies are growing much faster than the average company in their industries (see Figure 1). Executives at these faster-growing companies are more likely to view AI as important to their company’s success and more likely to expect major benefits. If these leaders can unleash the potential of AI, their current lead could widen further.

  • There is a disconnect between executives’ enthusiasm for AI and their actual deployment of AI applications. In reality, companies’ real-world experience with AI is fairly limited. Only about two-thirds of respondents were knowledgeable about an AI project at their company, and out of that group, only 24% were knowledgeable about AI projects that were fully implemented. Factors such as executive talent (45%), budget concerns (43%) and securing senior management commitment (40%) were cited as reasons for this gap.

The AI strategy challenge

There is considerable uncertainty around how to proceed, and a lack of strategic focus when it comes to AI. While talent and skills top the list of challenges, executives appear to give all challenges equal weight. Among the 13 issues raised with respondents, the difference between the most-cited and least-cited issue was just 7%, and roughly 40% of executives considered each of the 13 issues to be extremely or very challenging.

Figure 1

This lack of strategy is further underscored by the fact that roughly the same percentages of executives cited specific AI technologies as being used in their projects: virtual agents (55%), computer vision (51%), advice engines/machine learning (49%), smart robotics/autonomous vehicles (48%) and analysis of natural language (48%).

  • To create rigorous AI strategies, companies need to look beyond technological capabilities. There is no single recipe for embedding AI in the fabric of the company — each business challenge will require different tools, techniques and approaches. Leveraging AI will not be a neat, sequential process; it will demand extensive experimentation and the ability to apply learnings to the next stage of deployment. Companies need to factor that reality into their plans.

  • Companies often think of AI as a tool for reducing costs, but if done right, AI can help improve product and service quality, reduce cycle time, create new and better employee experiences, and enhance safety, among other things. Employing AI effectively requires a clear focus on applying intelligent technologies to solve tough operational challenges and deliver a lift to the business. While every company’s situation is different, there are some general guidelines that companies can follow in the search for AI-enabled business value:

    • Look for opportunities to leverage data. As more data is generated in the modern digital era, separating signal from noise becomes more difficult. AI, with its ability to apply human-like assessments and decisions to that data, rapidly and efficiently, provides a possible solution to that dilemma.
    • Cast a wide net. AI has the potential to touch many parts of the company, and it’s important that underlying algorithms “understand” the larger context in which they operate. Organizations should establish cross-functional teams to identify AI opportunities, and support them with a structured approach.
    • Solicit input and insight from external parties. Partnering can be an important factor in AI strategy. In our survey, executives most often cited access to AI skills as a key challenge, and working with vendors can allow companies to quickly gain access to the required skills.
    • Encourage experimentation — and discipline. Companies should encourage a tolerance for risk taking and innovation with AI, but balance that with rigorous testing and measurement of return on investment (ROI) and tangible business value (a simple worked example follows this list).
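
One way to keep that discipline concrete is to agree up front on how a pilot’s return will be calculated. The Python sketch below shows only the arithmetic; all of the cost and benefit figures are assumed placeholders for illustration, not numbers from the survey.

```python
# Illustrative first-year ROI check for an AI pilot.
# All figures are assumed placeholders; substitute your own estimates.
pilot_cost = 250_000        # build + run cost for the pilot period
annual_benefit = 400_000    # e.g., estimated labor hours saved x loaded rate
adoption_factor = 0.6       # fraction of the benefit realistically captured

realized_benefit = annual_benefit * adoption_factor
roi = (realized_benefit - pilot_cost) / pilot_cost

print(f"Realized benefit: ${realized_benefit:,.0f}")   # $240,000
print(f"First-year ROI: {roi:.0%}")                    # -4%, roughly break-even
```

A pilot that only breaks even on these assumptions can still be worth scaling if the adoption factor rises with experience; the point is to make the assumptions explicit and revisit them as results come in.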

AI governance: Building transparency, trust and personalization

For companies to successfully unleash the potential of AI, people will need to see it as a reliable, dependable colleague. It will interact with customers, employees and partners, running operations and making important decisions. If an AI application is not well designed and managed, it may end up “misbehaving.” The ramifications could be significant, ranging from damaged customer relationships, to missteps in factories that affect quality, to discriminatory decisions that elicit regulatory scrutiny (for more on this, read our paper “AI: Ready for Business”).

  • Transparency beyond the black box. Transparency boils down to allowing people to understand how an AI application makes decisions. Currently, AI systems tend to be “black boxes” whose operations are not well understood by humans, and that makes some people uncomfortable.

    Transparency helps build trust, but trust is based on other factors, too. Can people feel confident that the system is working as it was designed to work, and that it understands their needs? Consistency, too, is important — people learn to trust over time, through experience.

  • Trust: Start with the data. High-quality data is key to earning trust. Data needs to be not only accurate and free of “noise,” but also free of bias. AI is not programmed in the traditional sense; rather, it learns from examples — and biased data is essentially a bad example. That was clearly illustrated by Microsoft’s “Tay” chatbot, an AI application that was intended to learn how to behave well by interacting with Twitter users but ended up producing racist comments after a few hours of listening to online trolls. A simple audit of training data, sketched after this list, is one way to catch that kind of skew before a model learns from it.

  • Personalization: Making it relevant and useful. Finally, personalization will be important in driving the acceptance and success of AI, because it is key to performing tasks and making decisions that are relevant and useful for the people they impact. Employees, for example, typically tailor their interactions with one another based on the individuals and situation involved. AI applications will need to do the same, whether it is suggesting an action to a banker or recommending a product to a customer.
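
The Tay example above comes down to a system learning bad lessons from its inputs, and one practical safeguard is to audit training data for skew before any model is built. The Python sketch below is only an illustration of that audit: the dataset is synthetic, the lending scenario and column names (“group,” “approved”) are hypothetical, and the 80% rule of thumb is a common screening heuristic rather than anything prescribed by the survey.

```python
# Minimal sketch: auditing a labeled dataset for group-level bias before an
# AI model learns from it. Data, columns and thresholds are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
n = 10_000

# Synthetic stand-in for historical lending decisions (hypothetical).
data = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, n),
    "group": rng.choice(["A", "B"], size=n, p=[0.7, 0.3]),
})

# Inject a historical skew: group B is approved less often at the same income.
base_rate = ((data["income"] - 30_000) / 60_000).clip(0, 1)
skew = np.where(data["group"] == "B", -0.15, 0.0)
data["approved"] = rng.random(n) < (base_rate + skew).clip(0, 1)

# 1. Compare raw outcome rates per group; a large gap is a warning sign that
#    the labels encode past bias a model would simply reproduce.
rates = data.groupby("group")["approved"].mean()
ratio = rates["B"] / rates["A"]
print(rates)
print(f"Disparate impact ratio (B vs. A): {ratio:.2f}")

# 2. A common screening heuristic (the "80% rule") flags ratios below 0.8
#    for human review before the data is used to train a production model.
if ratio < 0.8:
    print("Potential bias: review labels and features before training.")
```

Checks like this do not make a system fair on their own, but they surface the kind of skew that Tay absorbed unchecked, and they give governance teams something concrete to review.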


Do the right thing: Responsible, ethical AI

In the end, the goal of good governance is “responsible AI.” If AI is not responsible, it won’t be embraced by employees or customers, not to mention shareholders who are put off by potential legal and reputational risk.

There is another, more fundamental thread at the heart of good AI governance and responsible AI: ethics. Increasingly, the development of AI ethics is recognized as critical to the success of AI, and it needs to be interwoven into everything companies do with the technology.

Companies will need to find ways to train AI to behave ethically. This is not as arcane a process as one might think. The concept is familiar to parents who provide feedback and guidance to raise their children to be good members of society, and there are well-understood tools and frameworks from the world of human sciences that can be used to instill ethics into the design and operation of AI.

The way forward

Unlike many established information technologies, AI is often easy to get started with. New AI systems can be developed fairly quickly — an experimental version can typically be up and running in a month, and pilots can be rolled out in a month or two after that. The trick is to move rapidly from that starting point to grow and sustain full-scale, business-ready AI applications.

Companies should create an approach that can provide an opportunity to gain experience with the technology without risking high-profile problems with customer-facing systems. They should also look for quick-win opportunities that allow them to chalk up successes and financial benefits to build AI momentum.

As they nurture AI and move it from experimentation to the business core, companies should also keep an eye on the big picture. That will mean formulating plans for governance, and always keeping the need for responsible AI top of mind. Through it all, they should look at their efforts through a human-centric lens — because that will be key to designing and reaping business value from AI in the long run.

To learn more, read “Making AI Responsible – And Effective,” visit the AI & Analytics section of our website, or contact us.
