July 14, 2025
To manage AI agents, start by demystifying them
In the coming era of competitive AI, agents are more like us than they may first appear. And a wise division of labor will ensure that organizations get the most from both humans and machines.
This content was originally featured in a World Economic Forum article in July 2025.
For many people, the concept of AI agents conjures a sense of intrigue, mystery, even fear. The idea that a group of invisible, non-human coworkers can work independently with one another to get things done faster and more efficiently than humans do can feel uncanny.
But AI agents are a lot more familiar than they might appear. They need to be onboarded like any new hire. They’re assigned a specific role, as well as instructions, guidelines and constraints. They need to be trained and monitored. They’re granted access to the data and other resources they need to do their job.
Even when you put a bunch of these agents together, things should look pretty familiar. Multi-agent systems closely resemble the organizations that business leaders have run for decades. They form a network of specialized entities, each with its own goals, inputs and outputs. They operate with processes made up of discrete, definable steps. They use interfaces to coordinate and collaborate with each other.
We can also speak to agents using our own language. And, out of the box, agents can use that same language to communicate and coordinate with each other.
What’s different—in addition to AI’s unique ability to churn through vast amounts of data from seemingly unrelated sources—is that multi-agent systems mimic how businesses would operate in an ideal world: no silos, no politics or hierarchies impeding collaboration, no out-of-office coworkers, no entrenched thinking or emotions getting in the way.
Ultimately, by focusing more on what’s familiar about multi-agent systems, businesses will be in a better position to adopt and manage them.
In fact, believe it or not, it’s not much of a stretch for humans in the workforce to think of themselves as a type of agent too.
We’re all agents in a multi-agent world
This is not to say that humans are—or ever will be—relegated to the position of an AI bot. Far from it. But there are insights to be drawn by seeing ourselves as part of a constellation of agents. It makes it easier to envision how work should be divvied up between humans and AI, with each agent—whether silicon or carbon—taking on a different part of a multi-step workflow.
In customer support, for example, a multi-agent system might include a triage agent (to classify incoming issues), a resolution agent (trained on a troubleshooting knowledge base) and a handoff agent (to flag complex or sensitive issues for human representatives). In marketing, a human might write the creative brief for a campaign (even while enlisting help from AI for ideas), while one AI agent segments audiences and another runs 10,000 A/B tests in parallel.
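To make the customer-support example concrete, the triage, resolution and handoff roles described above can be sketched in a few lines of code. This is a minimal illustration only; the agent functions, keywords and routing rules are assumptions invented for this sketch, not any particular vendor’s framework:

```python
# Minimal sketch of a triage -> resolution -> handoff workflow.
# The categories, keywords and canned answers are illustrative assumptions.

def triage_agent(issue: str) -> str:
    """Classify an incoming issue into a coarse category."""
    text = issue.lower()
    if "legal" in text or "lawyer" in text:
        return "sensitive"          # ambiguous/high-stakes: route to a human
    if "billing" in text or "refund" in text:
        return "billing"
    return "technical"

def resolution_agent(category: str):
    """Attempt an automated answer from a (stubbed) knowledge base."""
    knowledge_base = {
        "billing": "Here is a copy of your latest invoice.",
        "technical": "Try restarting the device and see the troubleshooting guide.",
    }
    return knowledge_base.get(category)  # None means no automated answer exists

def handoff_agent(issue: str) -> str:
    """Flag complex or sensitive issues for a human representative."""
    return f"Escalated to a human representative: {issue!r}"

def handle(issue: str) -> str:
    """Route one issue through the multi-agent pipeline."""
    category = triage_agent(issue)
    if category == "sensitive":
        return handoff_agent(issue)
    answer = resolution_agent(category)
    return answer if answer is not None else handoff_agent(issue)
```

The point of the sketch is the division of labor: high-volume classification and lookup stay automated, while the ambiguous or legally sensitive path always ends in a human handoff.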
What’s important is that each agent, human or AI, has a defined set of responsibilities that match its intrinsic capabilities. Tasks that are high-volume or require split-second decision-making, like a billing lookup following a customer dispute, are best allocated to an AI agent. For more ambiguous, relational or financially sensitive tasks, like a customer complaint threatening legal action, you’d make sure there was a handoff to a human.
Like any workforce, these systems are governable. If a particular agent is acting erratically or performing poorly, it can be pulled out of the process. While they’re connected, multi-agent systems are not monolithic: they’re made up of modular parts. Humans will often oversee agent performance, but AI agents themselves can be trained to operate as supervisors that oversee the work of other AI agents, even hitting a “kill switch” when necessary.
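That supervisory pattern can itself be sketched in code. The classes, thresholds and registry below are assumptions made for illustration (not a specific product’s API): a supervisor tracks each agent’s error rate and pulls a misbehaving agent out of the network without disturbing the rest.

```python
# Sketch of a supervisor that monitors peer agents and applies a "kill switch".
# The registry, error-rate threshold and method names are illustrative assumptions.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.calls = 0
        self.errors = 0

    def record(self, ok: bool) -> None:
        """Log the outcome of one task this agent performed."""
        self.calls += 1
        if not ok:
            self.errors += 1

class Supervisor:
    def __init__(self, max_error_rate: float = 0.5):
        self.max_error_rate = max_error_rate
        self.active = {}  # name -> Agent; the modular "network"

    def register(self, agent: Agent) -> None:
        self.active[agent.name] = agent

    def review(self) -> list:
        """Kill switch: deactivate any agent whose error rate is too high."""
        removed = []
        for name, agent in list(self.active.items()):
            if agent.calls and agent.errors / agent.calls > self.max_error_rate:
                del self.active[name]  # the other agents keep running untouched
                removed.append(name)
        return removed
```

Because the system is modular rather than monolithic, removing one agent from the registry leaves the others operating exactly as before.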
And, as in any workforce, the workflows and responsibilities can morph and shift. Both human agents and AI agents can be made to respond to context and learn more efficient ways to do things over time. They dynamically adapt to changes in their environment and learn from new data to make informed decisions.
Working the same, but also different
Focusing on what’s familiar about AI agents allows us to work better alongside them, and them with us. It enables businesses to allocate work and tasks more effectively and, ideally, frees people to apply their best, most intrinsically human thinking to the job at hand.
Some key questions for senior leaders to consider include:
1. As workers grow more accustomed to working with agents, should they be able to create or source their own?
Agents are not as difficult to provision as you might think. Most cloud providers and many software platform vendors offer out-of-the-box agents. Many of these have adapters that allow them to plug into preexisting data sets, including commonly used systems, like Slack or OneDrive. If you have a microservices design or API you’ve exposed, you can sit agents on top of those. Imagine finance, payroll, legal and sales, each with their own agent plugged in.
And that’s what you want: people throughout the business experimenting with this technology. Agent creation should not be entirely centralized, but this requires training and empowerment—people need to know the guidelines and gates they need to clear for responsible AI.
2. How will new agents be vetted and validated?
Clearly, the pipeline needs to be managed before these agents are plugged into the multi-agent system or made discoverable to other employees. Defining this process starts with the chief information officer, but it’s a multidisciplinary effort that also involves the legal and human resources functions.
Overall, the modularity of multi-agent systems allows ample opportunity for vetting and validation in a safe experimental environment. Individual agents can be inserted, multiplied or extricated without disturbing the system as a whole. This makes it possible to build and extend these systems incrementally, testing and fine-tuning various agents in isolation and within a sandboxed larger system before plugging them into the live agent network.
3. How much say will human workers have about how tasks are allocated?
Agentic work is a two-way street. Employees might initiate a prompt with their AI agent counterpart, which would then discover the other agents it needs to work with to come back with actions or suggestions. Or an agent might surface something that requires the employee’s approval before it can move forward.
In either case, human workers need to have a say in how to direct this agent so it’s productive, makes them more efficient and frees them from spending too much time on mundane work. And, as the agentic system grows more sophisticated, employees need to be part of deciding what they should continue doing versus what the agent should be allowed to do.
Seeing the familiar in the new
AI agents are something of a phenomenon. They work in a way we’ve never seen before and are developing more rapidly than even those of us with years of experience in AI could have imagined.
But maybe the thing that’s most uncanny about how agents work is how much they’re like you and me—how we’d operate in an ideal world without obstacles and hindrances. Seeing ourselves as part of their world will empower us to become more of something we’d previously only imagined we could be.
Babak Hodjat is CTO of AI at Cognizant and former co-founder & CEO of Sentient. He is responsible for the technology behind the world’s largest distributed AI system and was the founder of the world's first AI-driven hedge fund.