May 08, 2025
Seizing healthcare’s agentic AI opportunity
AI in healthcare is on a clear trajectory: from augmentation tools for human workers to semi-autonomous AI agents, overseen by humans.
When businesses first adopt generative and agentic AI, they typically begin with low-risk use cases. But in highly regulated industries like healthcare, even seemingly simple administrative tasks come with inherent risks.
This is because many core processes—such as records maintenance and payment coordination—require human oversight and access to personally identifiable information (PII) and protected health information (PHI). This makes AI-enabled automation far more complex in healthcare than in other industries.
Despite these challenges, healthcare organizations can harness AI advancements—including generative and agentic AI—to drive the next level of automation, provided they implement a framework that prioritizes responsible use, compliance and continuous innovation.
The trajectory of AI in healthcare
The healthcare industry is no stranger to automation. To lower costs, serve more patients and deliver better outcomes, health organizations have long relied on software solutions to automate repetitive tasks, such as alerting processing teams to an error within a claim.
It has been far harder, however, to automate more complex or nuanced processes, such as validating and resolving that error. These tasks typically require a level of cognitive processing and inter-departmental coordination that traditional automation frameworks can't support. They may also lack clear, consistent rules or involve time-sensitive decisions that demand human judgment.
That is changing with new AI capabilities, such as agentic AI: autonomous systems capable of proactive and adaptive action. Agentic AI systems can mimic human cognitive processes, enabling automation of complex, high-value healthcare tasks that require flexibility, contextual judgment or collaboration across different groups.
Unlocking the next era of automation in healthcare
For example, Cognizant TriZetto is piloting an AI-enabled desk-level procedure assistant that automates complex, repeatable tasks that traditional software can't manage.
In phase one, the assistant acts as an augmentation tool. It supports human agents by automating actions, such as retrieving procedural information or checking provider notes within the system. It can also serve as a guide, giving employees recommendations or step-by-step instructions for performing actions, such as updating a provider record or approving the change.
Phase two capabilities would allow the human agent and the AI agent to reverse roles. The AI agent would take the lead, proactively executing tasks, such as routing provider record change requests to the appropriate department, validating the request and updating the record. The human agent would maintain oversight of the entire process.
Figure 1 outlines how an organization can use the AI assistant to update a provider record. Depending on whether they deploy phase one or phase two capabilities, organizations can selectively automate parts of the workflow. As a result, what was once a nine-step, human-centric workflow can be streamlined to five or six steps in which the human agent has minimal involvement.
Figure 1
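To make the two phases concrete, here is a minimal sketch of how the same provider-record update might be split between a human and an AI agent. All names and data shapes (ChangeRequest, suggest_steps, execute_update) are hypothetical illustrations, not the actual TriZetto assistant.

```python
# A minimal sketch (hypothetical names and data shapes) of the same
# provider-record update running in "phase one" (AI suggests, human acts)
# versus "phase two" (AI acts, human reviews).
from dataclasses import dataclass, field


@dataclass
class ChangeRequest:
    provider_id: str
    field_name: str
    new_value: str
    status: str = "pending"
    audit_log: list = field(default_factory=list)


def suggest_steps(request: ChangeRequest) -> list[str]:
    """Phase one: return guidance for the human agent instead of acting."""
    return [
        f"Open provider record {request.provider_id}",
        f"Verify supporting notes for '{request.field_name}'",
        f"Update '{request.field_name}' to '{request.new_value}'",
        "Route the change for supervisor approval",
    ]


def execute_update(request: ChangeRequest, human_approves) -> ChangeRequest:
    """Phase two: the agent validates and applies the change, while a human
    reviewer stays in the loop via the human_approves callback."""
    request.audit_log.append("agent: validated request against provider notes")
    if human_approves(request):
        request.status = "applied"
        request.audit_log.append("human: approved; agent applied the update")
    else:
        request.status = "rejected"
        request.audit_log.append("human: rejected; no change made")
    return request


if __name__ == "__main__":
    req = ChangeRequest("PRV-1001", "billing_address", "200 Main St")
    print(suggest_steps(req))                            # phase one behavior
    execute_update(req, human_approves=lambda r: True)   # phase two behavior
    print(req.status, req.audit_log)
```

The design point is the callback: in phase two the agent leads, but every applied change still passes through an explicit human approval and leaves an audit trail.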
Our team is also piloting a care management AI agent that automates the time-consuming prep work required before patient check-ins. Instead of a human care manager manually reviewing case notes, prescription histories and self-reported progress—which typically takes 15 to 20 minutes—an AI agent retrieves this data from multiple sources and uses generative AI to summarize it in a clear, consumable format.
This not only reduces hands-on data-gathering and analysis time but also lets care managers interact with the information: they can ask follow-up questions directly within the platform, request more detail or use prompts to check specific parts of the patient profile.
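The pattern behind this agent can be sketched simply: gather patient context from several systems, build one prompt, and let the care manager ask follow-up questions against the same assembled context. The sources and function names below are hypothetical, and the llm() call is a stub standing in for whatever generative AI service an organization actually uses.

```python
# A minimal sketch (hypothetical sources and function names) of the
# pre-check-in prep pattern described above.
def llm(prompt: str) -> str:
    """Placeholder for a real generative AI call."""
    return f"[summary of {len(prompt)} chars of patient context]"


def gather_patient_context(patient_id: str) -> dict:
    # In practice these would be calls to the EHR, pharmacy system, and
    # patient-reported outcomes platform; hard-coded here for illustration.
    return {
        "case_notes": f"Case notes for {patient_id} ...",
        "prescriptions": f"Active prescriptions for {patient_id} ...",
        "self_reported": f"Self-reported progress for {patient_id} ...",
    }


def prepare_checkin_summary(patient_id: str) -> tuple[str, dict]:
    context = gather_patient_context(patient_id)
    prompt = "Summarize for a pre-visit check-in:\n" + "\n".join(
        f"{source}: {text}" for source, text in context.items()
    )
    return llm(prompt), context


def ask_followup(question: str, context: dict) -> str:
    prompt = f"Context: {context}\nCare manager question: {question}"
    return llm(prompt)


if __name__ == "__main__":
    summary, ctx = prepare_checkin_summary("PAT-42")
    print(summary)
    print(ask_followup("Any medication changes since the last visit?", ctx))
```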
Building a strong foundation for AI evolution in healthcare
As AI continues to evolve, healthcare organizations must develop frameworks that allow them to harness technology advancements while adhering to critical regulations. To that end, here are three key steps for building an AI foundation that prioritizes responsibility, compliance and innovation.
1. Embrace responsible AI practices from the outset
Before considering specific industry regulations, healthcare organizations must first embrace responsible AI practices that build trust in the technology and its application. To do so, they need to continually ask whether and how the use of generative or agentic AI raises privacy or security issues that require legal review and process adaptation.
For example, another agentic use case our team is working on involves speech-to-text transcription of an intake assessment, along with a gen AI agent that summarizes the call. This application offers significant time savings, but it also raises legal and compliance questions that organizations may not initially anticipate. While the patient may have consented to the call recording, feeding those files into AI tools introduces a third-party intermediary and could require additional permissions.
This scenario highlights the need to evaluate AI integration not just from a functional standpoint, but also through the lenses of consent management, legal risk, auditing and intellectual property.
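One practical way to encode that distinction is a consent gate that treats "consent to record" and "consent to third-party AI processing" as separate scopes. The consent model and function below are hypothetical, a sketch of the principle rather than any specific product's implementation.

```python
# A minimal sketch (hypothetical consent model) of gating an intake recording
# before it reaches an external transcription or summarization service.
from dataclasses import dataclass


@dataclass
class Consent:
    recording: bool = False
    third_party_ai_processing: bool = False


def transcribe_and_summarize(audio_path: str, consent: Consent) -> str:
    if not consent.recording:
        raise PermissionError("No consent to record this assessment.")
    if not consent.third_party_ai_processing:
        # Stop before any PHI leaves the organization's boundary, so the
        # intake team can request the additional permission.
        raise PermissionError(
            "Consent covers recording only; third-party AI processing "
            "requires additional patient permission."
        )
    # Placeholder for the actual speech-to-text and gen AI summarization calls.
    return f"[summary of {audio_path}]"


if __name__ == "__main__":
    partial = Consent(recording=True, third_party_ai_processing=False)
    try:
        transcribe_and_summarize("intake_0417.wav", partial)
    except PermissionError as err:
        print(err)
```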
2. Leverage data retrieval and role-based access standards for AI agents and tools
Integrating agentic AI into healthcare workflows isn’t just a matter of building the technical capabilities for the AI assistant—it’s also about safely and securely managing the model’s access to PII and PHI.
For example, if an AI agent is attempting to access a patient’s medical records to create a summary for a care manager, it must adhere to the same hierarchical role-based access controls (RBAC) as the human initiating the request. While this doesn’t necessarily require creating AI agent-specific RBAC from scratch, businesses do need to incorporate existing controls from software or solution providers, as well as the provider’s API ecosystem, into the AI tool. This ensures that all data access is validated and appropriate.
(For more on how healthcare organizations can overcome challenges related to role-based access, data retrieval and regulation, see Three keys to enterprise-wide gen AI adoption in healthcare.)
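The core principle can be illustrated in a few lines: the agent reuses the access decision of the human who initiated the request rather than holding blanket access of its own. The roles, permission table and record types below are hypothetical; in practice the check would defer to the existing solution provider's authorization APIs rather than a local table.

```python
# A minimal sketch (hypothetical roles and records) of an AI agent enforcing
# the initiating user's RBAC entitlements before retrieving any data.
ROLE_PERMISSIONS = {
    "care_manager": {"read_medical_record", "read_case_notes"},
    "claims_processor": {"read_claim", "update_claim_status"},
}


def agent_fetch_record(record_type: str, initiating_user_role: str) -> str:
    """Check the initiating user's entitlements before any data retrieval."""
    required = f"read_{record_type}"
    if required not in ROLE_PERMISSIONS.get(initiating_user_role, set()):
        raise PermissionError(
            f"Role '{initiating_user_role}' is not entitled to {required}; "
            "the agent must not retrieve this data."
        )
    return f"[{record_type} returned under {initiating_user_role} entitlement]"


if __name__ == "__main__":
    print(agent_fetch_record("medical_record", "care_manager"))       # allowed
    try:
        agent_fetch_record("medical_record", "claims_processor")      # denied
    except PermissionError as err:
        print(err)
```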
3. Establish regulatory frameworks that support continuous evolution
As government agencies and regulatory bodies work to define where AI can be deployed, how it can be used, what decisions it can make, and where liability begins and ends, it is up to companies to ensure AI is integrated responsibly and transparently.
Regulatory requirements may vary by healthcare team or market. For every AI use case that is deployed, healthcare organizations need to continually assess the technology's compliance impact, not just in general but for every group involved.
Finally, companies must also be capable of change if and when regulation demands it. This includes incorporating state and local requirements alongside federal or regional guidelines to ensure full compliance across all operational areas.
Enabling the next level of productivity and efficiency
The use of AI within healthcare is on a clear trajectory: from augmentation tools for human workers to semi-autonomous agents that require human oversight. As the accuracy of AI tools improves, AI agents may become fully independent and self-learning, raising questions about the future role of traditional software altogether.
This rapid rate of change underscores the need for healthcare organizations to approach their AI journey with urgency and precision. Early adopters can start building essential security and legal frameworks while also potentially influencing policy and product decisions—setting their organizations up to lead in an AI-driven future.
A leader with 20+ years of experience in software architecture, design and development, Scott Johnson is the Chief Technology Officer for Cognizant TriZetto® Healthcare Products. In this role, he’s responsible for the technology vision for the TriZetto payer and provider product portfolio. He also leads the architecture team in research-based experimentation of new and critical technologies across the portfolio.