<p><span class="small">March 26, 2026</span></p>
Safe autonomy, provable trust: Securing AI from build to run
<p><b>Security for AI must be designed into the full lifecycle, not bolted on as an afterthought.</b></p>
<p>AI is ushering in a new era of opportunity for enterprises. For years, we built security assuming systems were deterministic, bounded and operated by predictable logic. That assumption no longer holds. Today, AI systems act, decide, orchestrate, learn and improve. Increasingly, they are becoming intelligent and autonomous.</p> <p>This shift from assistive AI to agentic and autonomous AI changes the security problem fundamentally. The question is no longer “Can AI be secured?” It is “How do we design trust into systems that are expected to act on our behalf?”</p> <p>AI introduces a new attack surface: models, prompts, context pipelines and agent actions. Threats like prompt injection, context poisoning, model theft and agent impersonation don’t just leak data; they can change decisions and actions at scale. That’s why security for AI must be designed into the full lifecycle, not bolted on as an afterthought.</p> <h4>Autonomy at scale requires trust by design</h4> <p>AI agents are moving from handling narrowly scoped tasks to coordinating entire workflows. That creates systemic risk. When autonomy fails, it doesn’t fail at the edge; it fails across processes, decisions and outcomes.</p> <p>The idea of “safe autonomy” needs to be defined explicitly. Trust does not come from freezing innovation or forcing everything through human approvals. It comes from architected control, which entails the following:</p> <ul> <li>Tiered autonomy based on risk, so low-impact tasks can run fast while high-impact actions trigger oversight</li> <li>Enforced escalation paths that create guardrails so exceptions are handled consistently, not ad hoc</li> <li>Kill switches that are operationally viable, so stopping unsafe behavior is practical in production, not just in theory</li> </ul> <p>Enterprises must develop a clear security-for-AI strategy and adoption roadmap. This is an emerging discipline.
While tools and platforms will mature over time, waiting for a perfect solution will only entrench legacy approaches. Setting up policy, process and accountability takes time, so it must be prioritized; the time to act is now.</p> <p>We have lived that cycle before. We know where it leads.</p> <h4>Context is the differentiator—and the vulnerability</h4> <p>Context integrity is the new control point, because context is the “brain” that drives agent behavior. Context is what gives AI agents their power: enterprise data, historical decisions, workflows, customer signals. At the same time, that context becomes the most attractive attack surface.</p> <p>In retrieval-augmented generation (RAG) and memory-enabled systems, context poisoning is not an edge case; it is an expected threat.</p> <p>Before AI, integrity checks validated inputs and outputs. In an agentic world, integrity must be enforced continuously inside the system.</p> <p>This is why guardrails are not optional. The industry is already moving toward guardian agents that observe, validate and challenge agent behavior. What matters is not the specific implementation, but the principle:</p> <ul> <li>Policy-driven guardrails</li> <li>Real-time detection of anomalies such as manipulation and drift</li> <li>A defined process for remediating outliers</li> </ul> <p>Static checks and actions will not survive adaptive systems.</p> <h4>Guardrails for agents, not just applications</h4> <p>Traditional cybersecurity controls were designed for humans and applications. Agents don’t fit neatly into either category.</p> <p>We are entering a hybrid enterprise in which humans and agents coexist, acting and making decisions together. Security architectures must reflect that coexistence.
Agents need:</p> <ul> <li>Strong, non-repudiable identity</li> <li>Least-privileged access</li> <li>Policy-aware retrieval of context</li> <li>Signed actions and tamper-evident audit trails</li> </ul> <p>Without these, autonomous systems will bypass compliance quietly and efficiently, not out of malice, but because of design gaps.</p> <p>This is what “trust engineered twice” means in practice:</p> <ul> <li>Build-time assurance to validate models, data and pipelines before release</li> <li>Run-time detection and response to monitor behavior, control actions and preserve evidence after deployment</li> </ul> <p>Enterprises need security architectures that are both rigorous and nuanced. Over-control will break autonomy. Under-control will break trust.</p> <h4>A new, expanded attack surface</h4> <p>AI accelerates defense. It also accelerates offense. As noted, the attack surface now includes:</p> <ul> <li>Models</li> <li>Prompts</li> <li>Context pipelines</li> <li>Orchestration layers</li> <li>Decision loops</li> </ul> <p>Cybersecurity was already complex due to tool sprawl. AI compounds that complexity. The cybersecurity software ecosystem is moving fast, but approaches vary widely. Enterprises need an architecture that reduces fragmentation and creates consistent controls across AI and non-AI systems.</p> <p>This is where service providers must step up. I believe the answer is not more fragmented tooling, but a coherent architecture featuring:</p> <ul> <li>Consolidated security signal pipelines</li> <li>Continuous red teaming of AI workflows</li> <li>Blast-radius modeling for autonomous failure scenarios</li> <li>A security data fabric that spans AI and non-AI systems</li> </ul> <p>Simplicity becomes a form of risk reduction.</p> <h4>Governance must become adaptive</h4> <p>Boards and regulators are increasingly asking for evidence of trust by design.
Static policies will not scale in adaptive systems.</p> <p>AI-aware governance must be dynamic, with the following characteristics:</p> <ul> <li>Clear ownership and accountability for AI trust and governance</li> <li>Autonomy risk charters tied to business outcomes</li> <li>Alignment with global AI and privacy regulations</li> <li>Continuous evaluation of model behavior and decision quality</li> </ul> <p>Governance must shift from periodic reviews to continuous assurance. Manual-only governance won’t scale when agents operate at machine speed, across systems and jurisdictions. Enterprises will increasingly use AI to help govern AI: monitoring behavior, and generating and enforcing policy dynamically.</p> <h4>People remain the control plane</h4> <p>Autonomy reshapes operating models, but it does not remove human responsibility. Trust still rests with people.</p> <p>Human-in-the-loop controls must be intentional, not symbolic. Oversight thresholds need to be defined clearly. New roles will emerge, including autonomy safety officers who understand both risk and system behavior.</p> <p>Skilling is foundational. Technical skills, behavioral judgment and decision accountability must all evolve together. Security-for-AI training cannot be optional or isolated; it must be deeply integrated into an enterprise’s AI charter.</p> <h4>You cannot improve what you do not measure</h4> <p>Security for AI cannot survive on narratives alone. Enterprises need metrics like the following to quantify trust:</p> <ul> <li>Mean time to detect and recover</li> <li>Rollback frequency of autonomous actions</li> <li>Integrity and provenance indicators</li> <li>Business impact metrics reported at the board level</li> </ul> <p>Metrics shift the conversation from security as cost to resilience as value.</p> <h4>Responsible AI and security will converge</h4> <p>Responsible AI and security for AI are often treated as parallel tracks. In reality, they intersect deeply.
Bias, explainability, transparency and accountability are inseparable from trust and security. Technical assurance without ethical assurance will fail.</p> <p>This demands unified governance models that address both facets. Ownership debates matter less than outcome alignment. Trust is multidimensional.</p> <h4>From secure SDLC to secure ADLC</h4> <p>While a secure software development lifecycle (SDLC) remains necessary, it is no longer sufficient. Agentic systems evolve post-deployment. Security must span the entire autonomous development lifecycle (ADLC), which requires:</p> <ul> <li>Threat modeling for agent behavior</li> <li>Security built into orchestration</li> <li>Continuous drift monitoring</li> <li>Runtime detection and response</li> </ul> <p>Security for AI must be embedded across the enterprise with a common operating model led by cybersecurity. This is not a niche capability. It is a foundational one.</p> <h4>Why leaders must act now</h4> <p>We are building security-for-AI and responsible AI offerings with our strategic OEM partners now, grounded in real customer deployments, real constraints and real accountability.</p> <p>We’re doing so not because everything is solved, but because autonomy is already here.</p> <p>The winners won’t be the ones who slow autonomy down; they’ll be the ones who secure it well. The question for leaders is simple: will trust be an afterthought, or will it be engineered into how AI is built, deployed and governed from the start?</p>
<p>Vishal leads Cognizant’s global cybersecurity strategy, strengthens threat protection capabilities and advances digital trust across client enterprises. Under his leadership, Cognizant is scaling its cybersecurity offerings to meet the evolving needs of global organizations, with a focus on resilience, regulatory alignment and secure digital transformation.</p>