Cognizant Blog

As organisations move into the era of agentic AI to increase efficiency and productivity, AI governance should be front and centre to safeguard trust.

Although several definitions exist, Kieran Gilmurray describes agentic AI systems as those that "act autonomously, make decisions, and adapt dynamically to complex environments."

In combination, AI agents can autonomously solve problems, organise data, create plans, leverage memory, operate tools, and perform tasks across a range of integrated applications. The benefits include lower resource and infrastructure costs as well as increased efficiency, an outcome attractive enough to lure organisations of every size.

Typical use cases include managing customer service, handling holiday bookings and posting social media content, to name a few.
 

What are the core challenges and risks?

Agentic AI amplifies existing AI risks around transparency, explainability and bias. This is because agents are LLM-powered and, in contrast to more traditional forms of software automation and workflows, are expected to be non-deterministic and autonomous.

Agentic AI Challenges

The most novel challenges posed by agentic AI include:

(i) Human out of the loop – In the quest for efficiency gains and market competitiveness, it is dangerous to become overly trusting of AI agents by removing human oversight.

For example, an AI agent responding to fans' posts on a social media platform would require some level of human review and approval before any action is taken. Vetted human governance procedures would help ensure sufficient quality. The absence of a human in the loop could result in unsolicited content being sent to fans, which may then be reposted several thousand times.
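The review-and-approval gate described above can be sketched as follows. This is an illustrative design, not a real platform API: the `SocialMediaAgent` class, its queue, and the draft wording are all assumptions, and the key point is simply that publishing is hard-gated on explicit human approval.

```python
from dataclasses import dataclass

@dataclass
class DraftPost:
    text: str
    approved: bool = False

class SocialMediaAgent:
    """Hypothetical agent that drafts replies but never posts without review."""
    def __init__(self):
        self.review_queue = []   # drafts awaiting a human
        self.published = []      # only approved content ends up here

    def draft_reply(self, fan_post: str) -> DraftPost:
        draft = DraftPost(text=f"Thanks for your message about '{fan_post}'!")
        self.review_queue.append(draft)  # held for human review, not posted
        return draft

    def human_approve(self, draft: DraftPost) -> None:
        draft.approved = True

    def publish(self, draft: DraftPost) -> bool:
        # Hard gate: nothing reaches fans without explicit human approval.
        if not draft.approved:
            return False
        self.published.append(draft.text)
        return True
```

The design choice is that the gate lives in `publish` itself, so no code path can skip the approval check.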

(ii) Cascading hallucination – Cascading hallucination (as described by the Open Web Application Security Project, OWASP) is when an "AI agent generates inaccurate information, which is then reinforced through its memory, tool use, or multi-agent interactions, amplifying misinformation across multiple decision-making steps". For example, if an AI agent is autonomously handling and booking holidays for a customer, how many improper interactions could occur before this is picked up and addressed?
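One mitigation implied by this risk is to gate writes into the agent's memory behind a verification step, so an unverified claim is quarantined rather than reinforced. A minimal sketch, where the trusted-fact set and `verify` check are stand-ins for a real grounding mechanism:

```python
# Assumed stand-in for an external source of verified facts.
TRUSTED_FACTS = {"Hotel Alpha has availability on 2025-07-01"}

class AgentMemory:
    """Sketch: claims enter memory only after passing a grounding check."""
    def __init__(self):
        self.facts = []      # verified, reusable in later decisions
        self.rejected = []   # quarantined, never reinforced

    def verify(self, claim: str) -> bool:
        # Placeholder: a real system would check against tool output,
        # a knowledge base, or a second verifier model.
        return claim in TRUSTED_FACTS

    def remember(self, claim: str) -> bool:
        if self.verify(claim):
            self.facts.append(claim)
            return True
        self.rejected.append(claim)
        return False
```

Keeping rejected claims out of the memory that feeds subsequent steps is what breaks the "cascade" in cascading hallucination.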

(iii) Data privacy and security – All existing data risks are exacerbated in the world of agentic AI. Agents can mine copyrighted material without an appropriate licence, or mine sensitive personal data without permission or consent. These agents could also disclose data to unauthorised persons, elevating the risk of data breaches and information leakage; consider, for example, agents with access to medical records and appointment booking. Therefore, encoding privacy by design and data governance guardrails will be a challenging but necessary part of agentic AI governance.

(iv) Agent poisoning – This takes the form of malicious actors incorporating harmful or false data into an agent's memory base to undermine or alter its performance. Variants include attacking communication channels between agents to disrupt the functioning of multi-agent systems, or stealing data over long periods, which can have detrimental impacts over time. These types of agent-in-the-middle attacks could erode trust and undermine efforts to automate and scale projects.

 

Although guidance for traditional machine learning and gen AI is provided by the EU AI Act, the NIST AI Risk Management Framework and ISO/IEC 42001, these frameworks do not address many of the risks and challenges of agentic AI.
 

Key Agentic AI Governance Considerations

Some of the essential elements to consider when incorporating agentic AI into governance frameworks and models include:

1. Multi-agent architecture – Map functional and non-functional elements onto the technical infrastructure for your use cases, and focus on interoperability for agents and tools. Typically, multiple agents perform tasks through collaboration with other agents in the ecosystem to achieve a desired goal. Hence, a robust multi-agent architecture framework is needed to surface the risks, threats and potential challenges.

2. Access & accountability – Just because an agent has a higher level of autonomy doesn't mean we should let it do what it wants. In fact, we should place even tighter controls on agents than on humans, allowing them only the smallest unit of work needed. Configuring access and permissions appropriately, along with tooling to manage observability, is a crucial aspect of agentic AI governance: we can determine exactly which actions an agent can and cannot perform.

Allowing access only to specific sources of trusted and verified data will increase the trustworthiness of the AI tools or applications. Furthermore, we can provide a clear channel of accountability, meaning both the agent and the human are responsible for doing the right thing at the right time. For example, an autonomous agent booking a family holiday should have finance restrictions in place for transactions over a specific limit. And when designing the agentic architecture, decomposing the work into smaller agents with limited permissions is better than one agent with broad powers.
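The least-privilege and spending-limit ideas above can be sketched as a deny-by-default authorisation check. The agent names, action lists and the £500 cap are all illustrative assumptions, not a real policy engine:

```python
# Deny by default: each agent gets an explicit allowlist of actions.
ALLOWED_ACTIONS = {
    "flight_booking_agent": {"search_flights", "book_flight"},
    "payment_agent": {"charge_card"},
}

# Illustrative per-transaction cap above which a human must sign off.
SPEND_LIMIT = 500.00

def authorise(agent: str, action: str, amount: float = 0.0) -> str:
    """Return 'allowed', 'denied', or 'escalate_to_human'."""
    if action not in ALLOWED_ACTIONS.get(agent, set()):
        return "denied"               # not on the allowlist: deny by default
    if amount > SPEND_LIMIT:
        return "escalate_to_human"    # over the cap: human approval required
    return "allowed"
```

Note that decomposing work across `flight_booking_agent` and `payment_agent` means neither agent alone holds enough permissions to complete a risky transaction unchecked.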

3. Policy and standards – When developing approaches, models and frameworks for the agentic ecosystem, the organisation will need to formulate policy principles that provide clear direction. Principal elements to consider include impact on people, financial value impact, training capabilities and the management of ethical dilemmas. Policies covering access, data retention, oversight, audit, training standards and transparency will help the organisation assess which applications agents should and should not be integrated with, as well as which datasets should and should not be incorporated into their knowledge base.

4. AgentOps – As we move into AgentOps (managing the AI agent through their lifecycle), the goal is to automate the monitoring, oversight, and management of multi-agentic AI systems. This allows us to observe their actions and behaviour, as well as the impact of these actions. Furthermore, managing agents' observability requires proper tooling in place to determine whether any agents are operating outside their guardrails and permissions.
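The observability requirement above can be illustrated with a minimal audit-log sketch. The guardrail policy (a per-agent action-rate cap) and the agent names are assumptions chosen purely to show the shape of the check:

```python
from datetime import datetime, timezone

# Assumed guardrail policy: a simple per-agent action-rate cap.
GUARDRAILS = {"holiday_agent": {"max_actions": 3}}

audit_log = []  # in a real system this would be durable, append-only storage

def record_action(agent: str, action: str) -> None:
    """Every agent action is logged with a timestamp for later review."""
    audit_log.append({
        "agent": agent,
        "action": action,
        "ts": datetime.now(timezone.utc).isoformat(),
    })

def agents_outside_guardrails() -> list:
    """Flag agents whose logged action count exceeds their cap."""
    counts = {}
    for entry in audit_log:
        counts[entry["agent"]] = counts.get(entry["agent"], 0) + 1
    return [agent for agent, n in counts.items()
            if n > GUARDRAILS.get(agent, {}).get("max_actions", float("inf"))]
```

The point is that guardrail violations are detected from the log itself, so monitoring does not depend on the agent honestly self-reporting.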

5. Risk management – Risk assessments need to be updated to evaluate agentic use cases, and risk scoring criteria need further refinement to understand the likely impacts. High-risk activities, such as energy infrastructure, recruitment or loan credit scoring, need their risk and impact outcomes evaluated very closely by human judgment.
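Risk scoring of this kind is often sketched as a weighted checklist. The factors, weights and threshold below are illustrative assumptions, not drawn from any standard; the shape of the calculation is what matters:

```python
# Assumed weights: which properties of a use case raise its risk, and by how much.
RISK_WEIGHTS = {
    "affects_people": 3,     # e.g. recruitment, credit scoring
    "financial_impact": 2,   # moves money or commits spend
    "high_autonomy": 2,      # acts without per-step approval
    "sensitive_data": 3,     # touches personal or regulated data
}

def risk_score(factors: dict) -> int:
    """Additive score over the factors that apply to a use case."""
    return sum(weight for name, weight in RISK_WEIGHTS.items()
               if factors.get(name))

def requires_human_judgment(factors: dict, threshold: int = 5) -> bool:
    """Above the (assumed) threshold, outcomes must be reviewed by a human."""
    return risk_score(factors) >= threshold
```

A loan credit-scoring agent, for instance, would trip `affects_people`, `financial_impact` and `sensitive_data` and land well above the threshold.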

6. Human in the loop – Human oversight requires a new approach: it no longer means having a human review every single action, task and decision. In this context, human oversight could mean reviewing items that meet a defined threshold or risk level, at an aggregated, dashboard level. These parameters will need to be agreed, defined and continuously reviewed. An example could be a research report generated from document scanning across the internet. Furthermore, humans should always retain final control, and a kill switch should be designed into any agentic AI system.
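The aggregate-level oversight and kill switch described above can be combined in one sketch. The error-rate threshold and the minimum sample size are assumed parameters of the kind the document says must be "agreed upon and defined":

```python
class AgentRuntime:
    """Sketch: oversight at the aggregate level, with a hard kill switch."""

    def __init__(self, error_rate_threshold: float = 0.1, min_sample: int = 10):
        self.threshold = error_rate_threshold  # assumed, policy-defined
        self.min_sample = min_sample           # don't judge on tiny samples
        self.total = 0
        self.errors = 0
        self.killed = False

    def record(self, ok: bool) -> None:
        """Record one action outcome; escalate on the aggregate, not per action."""
        self.total += 1
        if not ok:
            self.errors += 1
        if self.total >= self.min_sample and self.errors / self.total > self.threshold:
            self.kill()  # error rate breached the agreed threshold

    def kill(self) -> None:
        # Humans retain final control: once tripped, the agent stops acting.
        self.killed = True

    def act(self, task: str) -> str:
        if self.killed:
            return "halted"
        return f"executed:{task}"
```

A human (or dashboard alert) could equally call `kill()` directly; the essential property is that the switch is outside the agent's own decision loop.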
 

Concluding remarks

As we continue to explore the new world of agentic AI for efficiency and cost advantages, organisations, governments and regulators will need to establish safe and trustworthy governance structures to manage the evolving risks and challenges. We will need to develop architectural designs, create policies and standards, and implement upgraded risk assessments with continuous monitoring to determine what agents can access, and how and where.

Investment in AI governance to balance risks and opportunities in the new world of agentic AI is more important than ever.


Jatin Patel

Data & AI/ML, Consulting Principal, Cognizant UK&I
