March 16, 2026
Behind the Build with Olivier Francon
An inside look at the work and expertise of our AVP of Neuro AI, Olivier Francon
Behind the Build is a series featuring developers and engineers from Cognizant AI Lab. The series offers an inside look at how AI systems are designed, built, and taken to production, spotlighting the people behind the code, the technical decisions they make, the lessons learned along the way, and how their work is shaping AI technologies that are becoming part of everyday enterprise systems.
What first pulled you into AI development or systems engineering?
- Video games were what first drew me to computers. I wanted to study computer science, and during engineering school I specialized in a branch of CS. At first, I thought I would go into multimedia, but once I was exposed to AI as a specialization, I became deeply interested in it.
- After graduating, I joined an aerospace company and was asked to work on the Rafale fighter jet program. It would have been fascinating, but the project was eventually delayed due to budget constraints, so I transitioned into finance, spending about 15 years working on projects at organizations like J.P. Morgan.
- Around 2014, as neural networks started gaining serious traction again, I felt pulled back to AI. The field was evolving rapidly, and it was clear something transformative was happening. I made the decision to return to AI full time. In retrospect, I’m glad my earlier finance work was only a temporary chapter, as coming back to AI to build enterprise systems that empower people feels far more aligned with what I want to contribute.
When designing an AI system like a Neuro-AI Multi-Agent Accelerator, how do you decide what’s genuinely worth building versus what’s just a trendy agent pattern?
I take a very engineering-driven approach. The first questions I ask are: What problem are we trying to solve? Why is it a problem? And why is it important to solve it now?
From there, I think about what a good solution would actually look like. Do we have the right data? Do we have the necessary tools? If the foundation is there, we design the simplest possible version first. If that version doesn’t deliver value, we drop it quickly. If it proves useful, we iterate and scale.
There’s a lot of trend-driven noise in the agent space right now. But building something just because it fits a fashionable pattern rarely works. Real systems succeed when they’re anchored to meaningful problems and validated through practical experimentation.
Is there an agentic AI system or feature you’re especially proud of deploying? What broke along the way, and what did that teach you about building robust agents?
One system I’m particularly proud of started from personal frustration. Navigating internal Cognizant systems to do simple tasks — like checking available vacation time or booking time off — was unnecessarily complex. So we prototyped an agent that could check a user’s remaining vacation balance and book time off automatically.
The initial prototype was built about a year ago. It took longer than expected to bring it fully into production — more than a year — but today it’s actively used by a large number of employees. One of the most satisfying indicators of success has been seeing a significant reduction in support tickets related to those workflows.
Along the way, several things broke. We had to secure proper API access across multiple internal systems. We needed strict permission controls to ensure users could only see their own vacation data — early versions exposed broader visibility, which required immediate correction to protect privacy. And scaling to support roughly 350,000 employees introduced performance and reliability challenges that weren’t obvious in early prototypes.
The biggest lesson was that production-grade agents are not just about intelligence — they’re about integration, privacy, permissions, and scalability.
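The permission fix described above — making sure users can only see their own vacation data — comes down to a deterministic access check sitting between the agent and the backend. A minimal sketch, with a hypothetical in-memory store standing in for the internal HR API (all names here are illustrative, not the actual Cognizant system):

```python
from dataclasses import dataclass

@dataclass
class VacationRecord:
    employee_id: str
    days_remaining: int

# Hypothetical in-memory store standing in for the internal HR API.
RECORDS = {
    "e-100": VacationRecord("e-100", 12),
    "e-200": VacationRecord("e-200", 5),
}

def get_vacation_balance(requester_id: str, target_id: str) -> VacationRecord:
    """Permission guardrail: a user may only read their own record."""
    if requester_id != target_id:
        raise PermissionError("agents may only access the requesting user's data")
    return RECORDS[target_id]

print(get_vacation_balance("e-100", "e-100").days_remaining)  # 12
```

The key design choice is that the check is enforced in code at the API boundary, not left to the agent's judgment — the model never has the option of fetching someone else's record.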
What are you currently building or improving in the agentic space, and what core technical breakthrough or design philosophy is guiding that work right now?
Right now, we’re focused on making agents more autonomous and proactive rather than purely reactive. Instead of functioning as chatbots that wait for prompts, we want agents that monitor relevant systems in the background, take action when needed, and inform users proactively.
A major challenge is orchestration: enabling multiple agents to communicate with each other, delegate tasks, and collaborate effectively. We’re also working on decomposing large tasks into smaller, manageable subtasks, allowing agents to coordinate more intelligently. This ties into broader research efforts, including work reflected in the MAKER paper, which emphasizes structured task breakdown and coordination.
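The decompose-and-delegate pattern described above can be sketched in a few lines. This is a toy illustration, not the actual system: the agent names, skills, and hard-coded decomposition are all hypothetical, and a real orchestrator would use a planner model rather than a fixed subtask list.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A worker agent that handles one kind of subtask."""
    name: str
    skill: str

    def run(self, subtask: str) -> str:
        # In a real system this would call an LLM or a tool;
        # here it just reports what was done.
        return f"{self.name} completed '{subtask}'"

@dataclass
class Orchestrator:
    """Routes each subtask to the agent whose skill matches."""
    agents: list[Agent] = field(default_factory=list)

    def decompose(self, task: str) -> list[tuple[str, str]]:
        # Naive fixed decomposition into (skill, subtask) pairs;
        # a real planner would generate this dynamically.
        return [("search", f"gather data for {task}"),
                ("summarize", f"summarize findings for {task}")]

    def delegate(self, task: str) -> list[str]:
        results = []
        for skill, subtask in self.decompose(task):
            agent = next(a for a in self.agents if a.skill == skill)
            results.append(agent.run(subtask))
        return results

orch = Orchestrator(agents=[Agent("Researcher", "search"),
                            Agent("Writer", "summarize")])
print(orch.delegate("quarterly report"))
```

Even in this reduced form, the separation is the point: the orchestrator owns the plan, and each agent owns only its own subtask.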
Looking back, what skills mattered more than you expected in becoming effective at building agentic or multi-agent systems, and what advice would you give developers who want to work seriously on agents?
Early in my career, I assumed success in AI systems would mostly come down to strong engineering skills and writing good code. While that’s still important, I’ve learned that managerial skills matter just as much.
When building multi-agent systems, you essentially become a manager of agents. You need to clearly define expectations, review outputs, and coordinate responsibilities. You shouldn’t micromanage how an agent does its job. Instead, you should ask what the agent needs to succeed and provide it with the right tools and constraints.
It’s also critical not to over-rely on LLMs. Code is deterministic; LLMs are probabilistic. You can depend on well-written code to behave consistently. LLMs require oversight, validation, and guardrails. Robust systems combine deterministic components with probabilistic intelligence rather than replacing one with the other.
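One way to picture combining deterministic components with probabilistic intelligence is a hard validation layer wrapped around a model call, with retries on failure. In this sketch, `call_llm` is a stub standing in for a real model, and the booking schema and business rules are invented for illustration:

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a probabilistic model call (stubbed for illustration)."""
    return '{"days_requested": 3, "start_date": "2026-04-01"}'

def validate_booking(raw: str) -> dict:
    """Deterministic guardrail: reject anything that fails hard checks."""
    data = json.loads(raw)                      # must be valid JSON
    assert set(data) == {"days_requested", "start_date"}
    assert isinstance(data["days_requested"], int)
    assert 0 < data["days_requested"] <= 30     # business rule, fixed in code
    return data

def request_time_off(prompt: str, retries: int = 2) -> dict:
    """Probabilistic generation, deterministic acceptance."""
    for _ in range(retries + 1):
        try:
            return validate_booking(call_llm(prompt))
        except (AssertionError, ValueError):
            continue  # model output failed validation; try again
    raise RuntimeError("model never produced a valid booking")

print(request_time_off("Book me 3 days off starting April 1"))
```

The model is free to phrase its output however it likes, but nothing reaches the downstream system unless the deterministic checks pass — the code, not the LLM, has the final say.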
Looking ahead, what domains or workflows will agents transform next, and what emerging agentic trend or capability makes that transformation possible?
Healthcare and life sciences are areas where I expect to see profound transformation. There’s a growing global shortage of healthcare professionals — projections suggest a shortage of around 11 million health workers by 2030. Agents can help empower doctors rather than replace them, supporting personalized health guidance, proactive care, drug interaction checks, and second-opinion analysis.
The goal isn’t to remove humans from the loop — it’s to amplify the capacity of the experts we already have and extend access to quality care to more people.
Another transformative trend is that LLMs and agents can now write increasingly high-quality code. This doesn’t eliminate the need for engineers, but it reduces the interpretation gap between domain experts and implementation. Experts can increasingly build or customize the tools they need themselves. As tools become more specialized and accessible, more people inside enterprises can create solutions tailored to their workflows.
If you could deploy a personal AI agent for yourself today with no constraints, what silly thing would you want it to do?
I’d want embodied agents — agents that can interact with the physical world. If I had no constraints, I’d deploy one to do my laundry and take out the trash.
It sounds simple, but robotics still struggles with tasks that are trivial for humans. The world is designed for people, not machines, and physical interaction remains extremely complex. A humanoid robot — or even something as simple as a wheeled trash-carrying assistant — that could reliably handle household chores would be transformative. Laundry may seem mundane, but in robotics, it’s still very much a work in progress.
AI engineering expert in decision optimization and multi-agent systems, leading AI for Good initiatives