

February 16, 2024

Responsible AI: five steps businesses should take now

As the technology continues to grow in importance, enterprises must build internal scaffolding that ensures trust and addresses risks.


Cognizant was proud to participate recently in the World Economic Forum’s annual meeting in Davos, where the topic of artificial intelligence was high on the agenda—and that’s an understatement. We were also happy to contribute to the newly published AI Governance Alliance briefing papers on safe systems, responsible applications and resilient governance.

In Davos, we held in-depth conversations with hundreds of global leaders on the topic of responsible AI. We heard a broad range of perspectives on what focus and action are needed, but unanimous agreement that AI risks must be better managed as an urgent priority.

It’s clear that trust will be at the core of successful AI adoption. Trust will enable us to scale and realize the potential of generative AI, the most revolutionary new technology in a generation. Consumers will naturally be skeptical when they first encounter disruptive new solutions that feel like magic; trust will need to be earned from the start. And if that trust is lost, it will be difficult to regain.

Creating trusted AI

With trust so critical, we should start by understanding what it is and how it’s obtained. In 1995, professors from Notre Dame and Purdue (Mayer, Davis and Schoorman) published a model of organizational trust that has become widely adopted. Highly applicable to AI-powered services, it proposes that trust derives from the perception of ability, benevolence, and integrity. What we heard at Davos aligns with this model and helps make sense of the challenges in front of us.

First, trust in AI systems rests on their ability to solve real-world problems and be useful. Ability isn’t something we can take for granted—I’ve seen amazing demonstrations of generative AI only to be slightly underwhelmed when trying out the tools in the real world.

AI solutions that over-promise and under-deliver will cause major trust issues in the long run. We’ve seen this problem before in the form of chatbots and voice assistants that promised conversational convenience—but delivered limited understanding and static decision trees. Users were underwhelmed, and these technologies’ promise went unfulfilled.

To make AI systems useful, we must focus them on the right problems, support them with relevant and high-quality data, and seamlessly integrate them into user experiences and workflows. Above all, continuous monitoring and testing are needed to ensure that AI systems deliver relevant, high-quality results.
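To make that monitoring concrete, here is a minimal sketch of an automated quality check. It is illustrative only: `generate` is a hypothetical stand-in for whatever model endpoint an enterprise actually uses, and the example checks are assumptions, not a prescribed test suite.

```python
# A minimal sketch of a continuous quality check for an AI-powered
# service. `generate` is a hypothetical stand-in for the real model
# endpoint; the checks and threshold are purely illustrative.

from dataclasses import dataclass


@dataclass
class QualityCheck:
    prompt: str
    required_terms: list[str]   # the output must mention all of these
    forbidden_terms: list[str]  # the output must contain none of these


def generate(prompt: str) -> str:
    """Placeholder for the real, governed model endpoint."""
    return f"Stubbed answer for: {prompt}"


def run_checks(checks: list[QualityCheck]) -> float:
    """Run every check against the live endpoint; return the pass rate."""
    passed = 0
    for check in checks:
        output = generate(check.prompt).lower()
        ok = (all(t.lower() in output for t in check.required_terms)
              and not any(t.lower() in output for t in check.forbidden_terms))
        passed += ok
    return passed / len(checks)


if __name__ == "__main__":
    suite = [
        QualityCheck(
            prompt="Summarize our refund policy.",
            required_terms=["refund"],
            forbidden_terms=["guaranteed"],  # no unapproved promises
        ),
    ]
    print(f"Pass rate: {run_checks(suite):.0%}")
    # In production, a suite like this would run on a schedule and
    # alert the team whenever the pass rate drops below a threshold.
```

In practice, a suite like this would run continuously against samples of production traffic, with the pass rate feeding dashboards and alerts rather than a one-off script.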

The second area that drives trust is the idea of benevolence. AI models need to positively impact society, businesses and individuals, or they will be rejected. Here we face two core challenges:

  1. Implementing for positive impact. We must ensure that enterprises implement AI responsibly and that negative impacts are fully addressed. This will include blocking unacceptable use cases, respecting intellectual property rights, ensuring that diverse groups are treated equitably, avoiding environmental harm, and enabling displaced workers to access alternative employment.

  2. Preventing malicious use. It is not sufficient for responsible enterprises to implement benevolent AI; we must also safeguard against malicious use. Governmental and regulatory entities must take steps both to endorse legitimate providers and to eliminate bad actors. Core issues here include validating providers, verifying genuine content, preventing new AI-powered attacks, and moderating digital distribution channels.

Finally, integrity creates trust when users see that the services they consume are secure, private, resilient, and well governed.

Technologists and enterprises have spent decades building the web-scale infrastructures and cloud-native architectures that power mission-critical digital services. The practices that allow the world to rely on these services need to be extended and adapted to AI capabilities in a way that is transparent and convincing to user communities.

The only way to achieve this requisite integrity is to adopt platforms that build in transparency, performance, security, privacy, and quality. Building point use cases in parallel, based on localized objectives and siloed data, is a dangerous path that will lead to increased cost and risk, worse outcomes, and ultimately a collapse of system integrity.

The three factors of ability, benevolence, and integrity show what is needed to build and maintain trust in AI.


The challenge of implementing responsible AI

While it’s all well and good to have clarity about objectives, it’s also undeniable that we face a daunting challenge. Addressing responsible AI will require collaboration between the public and private sectors across a range of issues. It will also require the adoption of new practices within the enterprise to design, engineer, assure, and operate AI-powered systems responsibly.

We don’t have the luxury of waiting for someone else to solve these challenges. Whatever your role and industry, you can be sure that competitors are pushing ahead with AI implementations, employees are covertly using untrusted solutions, and bad actors are devising new ways to attack and exploit weaknesses.

At Cognizant, we are helping to build responsible, enterprise-scale AI in hundreds of organizations, as well as within the core of our own business. Based on this experience, we believe enterprises need to act now in five areas:

  1. Align leadership to a consistent vision and accountabilities. AI is a CEO issue that requires collaboration across all functions of the organization. Leadership teams should spend time discussing the issues surrounding responsible AI and should agree on areas of opportunity, approaches to governance, responses to threats, and accountabilities for actions.

  2. Manage standards and risks. Establish a governance, risk, and compliance framework to standardize good practices, and systematically monitor AI-related activity. It is critical to consider the full scope of an AI-powered system within this framework, including training data, AI models, application use cases, people impacts, and security.

  3. Create a focal point of expertise. Responsible AI cannot be managed without centralized transparency and oversight of activity. Creating a center of excellence for AI enables scarce expertise to be leveraged most effectively and provides a coherent view to leadership, regulators, partners, development teams, and employees.

  4. Build capability and awareness. Sustaining responsible AI practices requires that everyone in the enterprise understands the technology’s capabilities, limitations, and risks. All employees should be educated on the concept of responsible AI, the vision, and the organization’s governance processes. Select groups will then require further assistance through training and coaching to take a more hands-on role in developing and leveraging AI solutions.

  5. Codify good practice into platforms. AI is a pervasive, horizontal technology that will impact almost every job role. If we want teams to build trustworthy solutions quickly, they will need the data and tools for the job. Platforms for AI can make sharable assets accessible for re-use, ensure that effective risk management is in place, and provide transparency to all stakeholders; a minimal sketch of this pattern follows this list.
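As a sketch of what codifying policy into a platform layer can look like, consider a single entry point that blocks unapproved use cases and writes an audit trail the center of excellence can inspect. Every name here (the function, the blocked-use-case list, the backend) is a hypothetical assumption, not a real Cognizant or vendor API.

```python
# An illustrative sketch: a shared platform layer that codifies policy
# by routing every model call through one gate that blocks unapproved
# use cases and writes an audit trail. All names are hypothetical.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_platform.audit")

# Centrally maintained policy: use cases leadership has ruled out.
BLOCKED_USE_CASES = {"biometric_surveillance", "unreviewed_credit_denial"}


def call_model(use_case: str, prompt: str) -> str:
    """Single entry point for AI calls across the enterprise."""
    if use_case in BLOCKED_USE_CASES:
        raise PermissionError(f"Use case '{use_case}' is not approved.")
    # The audit record gives governance teams consistent transparency.
    audit_log.info("use_case=%s time=%s prompt_chars=%d",
                   use_case,
                   datetime.now(timezone.utc).isoformat(),
                   len(prompt))
    return _model_backend(prompt)


def _model_backend(prompt: str) -> str:
    """Placeholder for the governed model endpoint."""
    return f"Response to: {prompt}"


if __name__ == "__main__":
    print(call_model("customer_support_summary", "Summarize this ticket."))
```

A shared gate like this is what turns policy documents into enforced behavior: teams re-use the platform call rather than wiring up their own model access, and governance gets consistent audit logs instead of per-team improvisation.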

With these five elements in place, organizations are set up to operationalize their position on responsible AI, enabling the enterprise to execute and govern activities effectively. We view this as an urgent priority for every organization that is adopting AI or is exposed to AI-powered threats.

To learn more, visit the Generative AI section of our website.



Prasad Sankaran

EVP, Software and Platform Engineering


Prasad Sankaran is the EVP, Software and Platform Engineering at Cognizant. In this role, he leads strategy, offerings, solutions, partnerships, capabilities and delivery for digital engineering, digital experience, application development and management, and quality engineering and assurance.




Pramod Bijani

SVP and Global Delivery Head of DE/DX


Pramod is an SVP at Cognizant and a technology leader with 25+ years of IT experience leading large, diverse teams on client-servicing initiatives focused on digital product development, application innovation and transformation.




Mike Turner
VP, Software and Platform Engineering, Cognizant

Mike Turner is a Software and Platform Engineering practice lead, responsible for helping clients to grow their businesses through the use of digital technology to create new and compelling experiences.

Mike.Turner@cognizant.com


