
Look to AI capability—not just new AI capacity—to unlock trillions in stalled tech work

<p><br> <span class="small">May 06, 2026</span></p>


<p><b>AI agents have changed what's economically possible in IT ecosystem modernization work. The key is to aim them at the right place: the discovery process.</b></p>
<p>In the past 18 months, we’ve watched agentic AI tools, frameworks and frontier LLMs dramatically compress the time it takes to conduct investigations and analyses into complex enterprise systems. In some cases, AI agents can complete investigations in days instead of the months it would take teams of senior engineers.</p> <p>This time compression dramatically changes how businesses need to think about their <a href="https://www.cognizant.com/us/en/insights/insights-blog/ai-economics-mainframe-modernization" target="_blank">technology modernization</a>, migration and M&amp;A integration programs.</p> <p>But this rethinking goes beyond speed. While it’s true that AI adds a considerable amount of capacity to the investigative process, there’s something even more important that AI adds to the equation: entirely new capability.&nbsp;</p> <p>This is because, with thoughtfully engineered AI agents, businesses can lower the cost of investigating complex systems so far that questions previously impractical to ask become routine. And when any stakeholder can ask these new questions, it results in value far greater than just speeding the work they already do.</p> <p>This is why, to turn the tide on their technology modernization, migration and integration programs, businesses need to direct AI-powered capability at the discovery phase, where engineers work to understand what the system does before they can decide what to do about it.</p> <p>This phase is a well-documented cause of failure for many of these initiatives. Typically, the architecture documentation is absent, outdated or inaccurate. The system's logic lives in the heads of developers who have since retired. As a result, organizations invest millions just to produce a systems map accurate enough to <a href="https://www.cognizant.com/us/en/insights/insights-blog/legacy-modernization-mandate-ai-timeline" target="_blank">begin modernization planning</a>. 
It’s no wonder that the discovery phase is where most modernization programs stall.</p> <p>With AI agents, however, businesses can finally overcome this seemingly intractable barrier—a barrier that has stalled trillions of dollars of documented, funded and deferred work. Further, they can begin to ask deep, investigative questions about their complex systems that they’d otherwise never be able to justify because it was simply too costly to do so.</p> <h4>AI in discovery: Capability, not just capacity</h4> <p>First, let’s be clear about the difference between AI agents and AI coding assistants. The latter are something a business can simply install. They are autocomplete tools, in-editor suggestion engines and conversational chat interfaces. They add capacity by helping developers write code faster.</p> <p>That isn’t the kind of AI we’re talking about here. AI agents need to be built. They are orchestrated systems that use tools, execute multi-step tasks and require substantial upfront engineering.&nbsp;</p> <p>For example, in a recent engagement for a Fortune 100 client, we created a system research agent to traverse and analyze a codebase exceeding six million lines of code across multiple programming languages. The codebase sat on top of a data architecture with more than 600 distinct entities and available system documentation.&nbsp;</p> <p>The task was to conduct an AS-IS discovery by answering structured problem statements about the system: how it behaves for a given set of data or configuration, how data flows, where the integration points sit, where risk concentrates.&nbsp;</p> <p>The AI agent did not replace the discovery process, but it compressed it in ways that would have been unthinkable 18 months ago. 
Supervised, controlled and validated against known system behaviors, it produced actionable intelligence on business logic, system boundaries, data dependencies and risk concentrations.&nbsp;</p> <p>To put that scale in context, a team of engineers would take multiple years to conduct that kind of discovery investigation.&nbsp;</p> <p>This isn’t to say the agent didn’t have real constraints: context compression losses, inefficient tool choice, inaccuracies in task decomposition, data loss during the retrieval process when traversing large codebases and unavailability of factual data. Despite that, the agent performed well enough to change the economic calculus of what is possible.</p> <h4>Root cause analysis done in minutes</h4> <p>The most instructive finding was in root cause analysis (RCA). This particular agent, configured for this client, could produce a complete RCA across the entire codebase and data ecosystem in under five minutes. Accuracy was validated through structured field testing, with subject matter experts reviewing each RCA output against known system behaviors.&nbsp;</p> <p>Assessed on that basis, accuracy sat at approximately 85%. In comparison, an experienced developer with deep working knowledge would take a couple of days to produce the same analysis at perhaps 90% accuracy. (Note that this is a finding from a single engagement, not a generalizable benchmark.)</p> <p>But as we’ve said, this is not a simple gain in capacity. Unlike the developer, the agent's five-minute analysis is available on-demand, to anyone who can articulate the question. This makes it a capability gain because it makes it possible to ask questions that previously went unasked. 
For example:</p> <ul> <li>Which user interactions are affected by a given entity (and vice versa), and which entities does a specific user interaction touch?</li> <li>Under what conditions, and through what logic, is a specific business transaction performed?</li> <li>What is the end-to-end flow of a given business journey across systems?</li> </ul> <p>And if these questions don’t get asked, the problems they might have surfaced stay hidden until they become crises.</p> <p>What this also represents is knowledge democratization. A product manager, business analyst or newly onboarded developer can interrogate the system directly, without waiting for the one senior engineer who happens to hold the institutional context. Teams that previously depended on a single expert gain operational independence.</p> <p>This capability has never been more in demand. <a href="https://www.researchgate.net/publication/318811113_Measuring_Program_Comprehension_A_Large-Scale_Field_Study_with_Professionals" target="_blank">A peer-reviewed IEEE field study</a> of professional developers found that approximately 58% of development time is spent on understanding what existing code does before it can be changed. That is the dominant cost of working with legacy systems, and it is the cost that AI agents compress most directly.</p> <p>Further, the developer skill to do this work is very difficult to find. <a href="https://www.gao.gov/products/gao-25-107795" target="_blank">A 2025 GAO review</a> of the most critical federal legacy systems found that agencies face significant mission risk from shortages of personnel with the expertise to maintain systems written in languages like COBOL and assembly.</p> <p>As a result, the AI agent's 85% accuracy at five minutes is not competing against the two-day expert. 
In many cases, it is competing against &quot;we have nobody left who can answer this question at all.&quot;&nbsp;</p> <p>This is the correct frame for evaluating AI agents: not only whether they make developers faster at tasks they already do, but whether they make it economically viable to investigate problems that were previously too expensive to begin.</p> <h4>Why knocking down the discovery barrier is critical</h4> <p>The evidence that discovery is a primary cause of program failure is documented across government audits, judicial inquiries and professional research. But the discovery phase is not skipped because organizations are unaware of its importance; it is underinvested because the cost of doing it properly has historically exceeded what program budgets could absorb.</p> <p>The adjacent discipline of requirements management—which defines what the new system should do—shows the same underinvestment pattern. <a href="https://www.pmi.org/learning/thought-leadership/pulse/core-competency-project-program-success" target="_blank">According to the Project Management Institute</a>, 47% of projects that fail to meet their goals do so primarily because of inaccurate requirements management. Further, for every $1 billion spent, $51 million is wasted on this single cause.</p> <p>It is an established principle in software engineering that correcting a requirements error at the discovery stage costs orders of magnitude less than correcting it after deployment. But that isn’t what typically happens.</p> <p>While there are many examples of this that could be cited worldwide, the UK government’s National Audit Office offers just one case in point. Three successive NAO reports from 2021 to 2025 found a systemic pattern. 
Program teams <a href="https://www.publicsectorexecutive.com/articles/nao-challenges-implementing-digital-change" target="_blank">do not spend enough time</a> understanding the existing system; <a href="https://www.nao.org.uk/press-releases/digital-transformation-in-government-addressing-the-barriers/" target="_blank">digital change involves complexity, uncertainty and risk</a> often unique to each specific program due to legacy systems, existing operations and the difficulties of integration; and <a href="https://www.nao.org.uk/wp-content/uploads/2025/01/governments-approach-to-technology-suppliers-addressing-the-challenges.pdf" target="_blank">contracts are often awarded</a> for digital development work without sufficiently understanding the complexities posed by the existing environment. The consequences were substantial, with nearly £11 billion in IT programs canceled, dismantled or written off without delivery.</p> <p>In short, the discovery phase is underinvested, the gap between assumed and actual system state goes undetected, and the cost of correction compounds catastrophically downstream. This is the specific barrier that AI agents address.</p> <h4>Directing AI capability at the discovery barrier</h4> <p>The scale of the deferred work can be measured in trillions. The annual cost of tech debt in the US alone <a href="https://www.aei.org/technology-and-innovation/inside-techs-2-trillion-technical-debt/" target="_blank">is estimated at $2.4 trillion</a>. COBOL processes <a href="https://www.metaintro.com/blog/cobol-developer-shortage-legacy-systems-career-opportunity-2026" target="_blank">roughly $3 trillion</a> in daily global transactions. And <a href="https://www.thestack.technology/cobol-in-daily-use/" target="_blank">the amount of COBOL in active use</a>&nbsp;is estimated at over 800 billion lines, maintained by an aging and shrinking workforce. 
In these cases, the barrier is not the build itself; it is the discovery that must precede the build.</p> <p>If the capability exists and the problems are documented, why haven’t businesses already acted? The primary reason is the engineering cost of discovery itself, which AI agents now directly address. But there are also non-technological barriers: procurement cycles that cannot move fast enough to absorb new capability; regulatory liability that makes organizations hesitant to begin legacy replacement before the completion risk is understood; and the institutional inertia of programs deferred so long that no stakeholder wants to own the first step.</p> <p>AI agents can change the first of these barriers directly and the third indirectly. When the cost of the discovery phase drops categorically, the risk calculus shifts. The business case becomes defensible, the program that was perpetually deferred becomes fundable, and the stakeholder who would not own the first step can now point to a concrete, affordable analysis phase as the starting point.</p> <p>Businesses need to embrace the opportunity of discovering and mapping what was previously out of reach, rather than only getting discovery done with fewer people.</p> <h4>Responding to the new discovery calculus</h4> <p>For decades, the discovery phase has been the hidden killer of technology modernization—too costly to do properly, too consequential to skip. AI agents have changed that calculus, and the businesses that act on this moment first will capture an outsized share of the trillions in work that has been waiting for exactly this breakthrough.</p>
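<p>To make the earlier distinction between coding assistants and built agents concrete, the sketch below shows the basic shape of an orchestrated, tool-using discovery loop: decompose a problem statement, call codebase tools, collect evidence, keep an auditable trace. It is a minimal illustration only; every name, file and entity here is hypothetical, and a production agent would use an LLM to plan tool calls against a real multi-million-line codebase rather than the toy dictionary used here.</p>

```python
# Minimal, illustrative sketch of a tool-using discovery agent loop.
# All names are hypothetical; this is not the engagement's actual system.
from dataclasses import dataclass, field

# Stand-in for a real codebase the agent would traverse.
CODEBASE = {
    "billing/invoice.py": "def post_invoice(order): ...  # writes to INVOICE entity",
    "billing/tax.py": "def apply_tax(invoice): ...  # reads TAX_RULE entity",
    "orders/intake.py": "def create_order(payload): ...  # writes to ORDER entity",
}

def search_code(keyword: str) -> list[str]:
    """Tool 1: return paths of files mentioning the keyword (a grep stand-in)."""
    return [path for path, text in CODEBASE.items() if keyword.lower() in text.lower()]

def read_file(path: str) -> str:
    """Tool 2: return a file's contents (a retrieval stand-in)."""
    return CODEBASE.get(path, "")

TOOLS = {"search_code": search_code, "read_file": read_file}

@dataclass
class DiscoveryAgent:
    """Decomposes a problem statement into tool calls and collects evidence."""
    trace: list[str] = field(default_factory=list)  # auditable record for supervision

    def investigate(self, entity: str) -> dict:
        # Step 1: locate code touching the entity.
        hits = TOOLS["search_code"](entity)
        self.trace.append(f"search_code({entity!r}) -> {hits}")
        # Step 2: read each hit and classify the dependency as read or write.
        findings = {}
        for path in hits:
            text = TOOLS["read_file"](path)
            self.trace.append(f"read_file({path!r})")
            findings[path] = "write" if "writes" in text else "read"
        return findings

agent = DiscoveryAgent()
report = agent.investigate("INVOICE")
print(report)  # → {'billing/invoice.py': 'write', 'billing/tax.py': 'read'}
```

<p>The point of the sketch is the pattern, not the code: the agent chains multiple tools, records every step for human validation, and answers the kind of question ("which code touches this entity, and how?") that the article argues becomes routine once discovery is this cheap.</p>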
Hari Ramanathan Parameswaran

SVP and Global Delivery Head

<p>Hari is Senior Vice President and Global Delivery Head for Cognizant's Application Development and Management and Digital Engineering business. He has held successive global leadership roles across Industry Solutions, Health Sciences and ADM, shaping delivery strategy and driving AI-enabled modernization at enterprise scale.</p>
Manas Mohanty

Portfolio Delivery Lead

<p>Manas is Portfolio Delivery Lead for Cognizant's Digital Engineering practice across RCGTH. He owns digital portfolios across global clients, shaping program direction, leading large-scale transformation, mentoring engineering teams and applying emerging technology to enterprise outcomes.</p>
Anish Thanaseelan Kaspaar

Enterprise Solution Architect

<p>Anish is Enterprise Solutions Architect in Cognizant's Digital Engineering practice, with around two decades of experience. He combines architecture depth with hands-on AI implementation, specializing in technology modernization and transition, enterprise integration, mergers, acquisitions and divestitures, and applying agentic AI to legacy systems.</p>