

September 4, 2023

Customer engagement and gen AI: a balancing act

Here’s a multipronged strategy to balance the risk of using generative AI in customer-facing engagements with the technology’s benefits.


Generative AI is taking center stage when it comes to customer engagement. This is especially true in sales and marketing, with 73% of US marketers saying they’re already using generative AI. There’s good reason for that: in our September 2023 survey of senior business and technology decision makers at large businesses in the US and UK, executives unanimously agreed that generative AI will enable faster personalized campaign and content production.

Further, when it comes to customer service, almost three-quarters of senior leaders in our survey said they’re already using generative AI for a variety of tasks, from pulling data out of customer conversations to address customer needs, to triaging inbound requests.

But for all of generative AI’s capabilities, it poses real risks. Gen AI programs are well known to confabulate facts, a phenomenon commonly called hallucination. In some cases, this might not be a big deal, but when it comes to customer interactions, false, biased or inappropriate responses can tarnish a brand’s image, leading to decreased customer trust and potential financial repercussions.

In the many conversations I’ve had with clients interested in generative AI solutions, I know this is what keeps them up at night: “Will this thing scare off more people than it helps?” In our survey, 88% of execs were concerned about the unpredictable outcomes of gen AI on their organization, and four in five were concerned about the factual reliability of generative AI outputs.

One tool is not enough

Responsible organizations have begun to build safeguards that increase the trustworthiness of their gen AI-based systems. This includes checking for unwanted bias and ethically problematic responses emanating from customer-facing tools. So far, these efforts have been highly dependent on keeping humans in the loop, which can result in unsustainable costs.

And while human oversight often ensures that gen AI outputs align with a brand's values and customer expectations, having humans oversee every interaction simply isn't feasible at scale. What’s needed instead is a multipronged approach.

A multipronged approach would layer in an array of tools—generative AI agents, surrogate predictive machine-learning models, explainable decision models, human oversight and intervention, and even yet-to-be-invented tools. These would all work together to ensure the generative AI system provides non-biased, brand-compliant and ethically responsible responses.
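
To make that layering concrete, here’s a minimal sketch of what such a safeguard stack could look like in Python. The names and interfaces are illustrative assumptions, not a reference implementation; the key idea is that any layer can veto an automated response and hand the interaction to a human.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class LayeredResponsePipeline:
        # Hypothetical skeleton of a multipronged safeguard stack.
        # Each safeguard inspects the customer message and the draft
        # response and returns True when it raises a concern; any
        # concern sends the interaction to a human agent.
        generate: Callable[[str], str]                # generative AI agent
        safeguards: List[Callable[[str, str], bool]]  # surrogate models, bias checks, ...
        escalate: Callable[[str, str], str]           # human oversight and intervention

        def respond(self, customer_message: str) -> str:
            draft = self.generate(customer_message)
            if any(check(customer_message, draft) for check in self.safeguards):
                return self.escalate(customer_message, draft)
            return draft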

Uncertainty modeling

As part of this approach, we recommend that businesses lean heavily into the little-explored field of “uncertainty modeling,” which helps them balance the benefits of generative AI against the risks it introduces. In practice, this means identifying cases where the input or output of a predictive model is unfamiliar and, based on that, assessing the model’s point-uncertainty.

If the decisioning system was trained or tuned on samples that are no longer accurate or representative of current operations, the uncertainty model should raise a flag. A flag should also be raised if the model suggests decisions that it has rarely, if ever, suggested before.
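
As a rough illustration, here’s one way such flags could be computed, assuming the system can represent inputs as embedding vectors and keeps a running count of the decisions it suggests. The thresholds and names are hypothetical.

    import numpy as np
    from collections import Counter

    class DecisionFlagger:
        # Raises flags when inputs or suggested decisions look unfamiliar.
        # train_embeddings: vectors for the samples the decisioning system
        # was trained or tuned on (assumed to be available).
        def __init__(self, train_embeddings, distance_threshold=1.0,
                     rare_count_threshold=5):
            self.train_embeddings = np.asarray(train_embeddings)
            self.distance_threshold = distance_threshold
            self.rare_count_threshold = rare_count_threshold
            self.decision_counts = Counter()

        def unfamiliar_input(self, embedding) -> bool:
            # Flag inputs far from anything the system was trained on.
            dists = np.linalg.norm(self.train_embeddings - embedding, axis=1)
            return bool(dists.min() > self.distance_threshold)

        def rare_decision(self, decision: str) -> bool:
            # Flag decisions the model has rarely, if ever, suggested before.
            rare = self.decision_counts[decision] < self.rare_count_threshold
            self.decision_counts[decision] += 1
            return rare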

Statistical and probabilistic techniques, such as Gaussian processes, can be used to determine the point-uncertainty of the system’s prediction of the consequences of a suggested decision.

When the system isn't sure about its response, this should trigger human intervention. This ensures that potentially problematic responses are caught and corrected before reaching the customer.
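
Here’s a minimal sketch of that idea using scikit-learn’s Gaussian process regressor: it predicts the consequence of a suggested decision, reports a point-uncertainty (the predictive standard deviation), and flags for human review when that uncertainty is too wide. The training data and threshold below are placeholders.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    # Placeholder history: feature vectors describing past decisions and
    # an observed outcome score for each (e.g., customer-satisfaction impact).
    X_train = rng.random((200, 4))
    y_train = rng.random(200)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X_train, y_train)

    def assess_decision(x_new, std_threshold=0.2):
        # Predict the consequence of a suggested decision; the predictive
        # standard deviation is the point-uncertainty. Wide uncertainty
        # means "not sure" and triggers human intervention.
        mean, std = gp.predict(np.asarray(x_new).reshape(1, -1), return_std=True)
        return mean[0], std[0], bool(std[0] > std_threshold)

    # A decision far outside the training data comes back highly uncertain,
    # so it would be routed to a human before reaching the customer.
    print(assess_decision([5.0, 5.0, 5.0, 5.0]))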

Flexibility is key

The devil is in the details, of course, and the million-dollar question is: What exactly should trigger human intervention? Each company will need to answer this question on its own terms, factoring in myriad variables:

  • Confidence in the program’s ability to correctly address the specific question or situation.

  • The level of risk to the brand if the generative AI system makes a mistake.

  • Regulatory concerns, including data privacy considerations.

  • The competitive landscape—how strong are competitors’ generative AI solutions?

In other words, the key is governance. Each organization must tune its human intervention approach to its own risk/reward profile, remaining flexible and adaptable. For example, an airline might decide that while fare quotes are highly accurate and thus require only automated double-checks, last-minute flight change requests require the intervention of a human agent. The healthcare and financial industries, due to regulation and the sensitivity of outputs, will likely demand more human intervention than other sectors.
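
As an illustration of that tuning, here’s a hypothetical escalation policy that encodes some of the variables above as explicit, adjustable knobs. Every name and threshold is made up for the example; each organization would set its own.

    from dataclasses import dataclass
    from enum import Enum

    class Route(Enum):
        AUTO = "respond automatically"
        AUTO_CHECKED = "respond, with automated double-checks"
        HUMAN = "route to a human agent"

    @dataclass
    class EscalationPolicy:
        # Per-organization governance knobs; illustrative, not prescriptive.
        min_confidence: float        # below this, always escalate
        auto_confidence: float       # at or above this, respond unassisted
        brand_risk_cap: float        # above this brand-risk score, escalate
        regulated_topics: frozenset  # topics that always need human sign-off

        def route(self, confidence: float, brand_risk: float, topic: str) -> Route:
            if topic in self.regulated_topics or brand_risk > self.brand_risk_cap:
                return Route.HUMAN
            if confidence < self.min_confidence:
                return Route.HUMAN
            return Route.AUTO if confidence >= self.auto_confidence else Route.AUTO_CHECKED

    # The airline example: fare quotes clear with automated double-checks,
    # while last-minute flight changes always go to a human agent.
    airline = EscalationPolicy(
        min_confidence=0.75,
        auto_confidence=0.95,
        brand_risk_cap=0.7,
        regulated_topics=frozenset({"flight_change", "refund"}),
    )
    print(airline.route(confidence=0.9, brand_risk=0.2, topic="fare_quote"))     # Route.AUTO_CHECKED
    print(airline.route(confidence=0.9, brand_risk=0.2, topic="flight_change"))  # Route.HUMAN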

It's crucial for businesses to be transparent about how their generative AI systems operate. Customers and employees alike should be able to understand how the AI came up with its responses, ensuring trust and clarity.

Getting a handle on it

Generative AI is rapidly changing the way we automate our decision-making, and it can be hard to keep up. But as it stands now, businesses are either leaving money on the table by overrelying on human intervention or risking reputational damage by overrelying on the technology.

By embracing the checks and balances I’ve discussed, leaders can find the trustworthy sweet spot and position themselves for serious competitive advantage.



Babak Hodjat
VP, Evolutionary AI

Babak Hodjat is VP of Evolutionary AI at Cognizant and co-founder and former CEO of Sentient. He is responsible for the technology behind the world’s largest distributed AI system and was the founder of the world’s first AI-driven hedge fund.

Babak.Hodjat@cognizant.com

