
October 27, 2022

How to ethically use AI in healthcare

When a wrong decision could be fatal, it’s critical to vet how AI is used.

In the news

In the US, the White House recently published its Blueprint for an AI Bill of Rights that aims to prevent, as the document’s foreword notes, “the use of technology, data and automated systems in ways that threaten the rights of the American public.”

As this article notes, artificial intelligence touches many aspects of modern life, from financial and social services to public safety and government benefits—but it is healthcare that the federal document is primarily concerned with.

The nonbinding blueprint is built on five principles (safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback). It also follows on the heels of the European Union’s proposed AI Act, which would assign all applications of AI to one of three risk categories.

“As AI is rolled out in healthcare to help make some of the highest-stake decisions there are,” says this MIT Tech Review article, “it’s more crucial than ever to critically examine how these systems are built.”

The Cognizant take

“Despite growing technological maturity, AI’s ethical dimensions remain a work in progress,” says Avi Kulkarni, Senior Vice President of Life Sciences at Cognizant, in a recent AiThority article. He outlines three ways to address and improve AI in life sciences.

First, he notes, practitioners must recognize that AI tools lack transparency, and so humans must rigorously test their conclusions. Sometimes, people simply cannot understand the reasoning behind algorithmic results; this has been dubbed the black-box problem of neural networks.

Next, humans must “question the techniques used to arrive at AI-based decisions,” Kulkarni writes, “because they can be prone to inaccuracies or biased outcomes due to embedded or inserted bias.” Human skepticism, he adds, is a “sharp tool in searching out such bias.”

Finally, AI should be deployed only where it is fit for purpose, Kulkarni urges. Today, AI is reasonably mature “where structured or uniform data standards make machine learning possible, such as natural language generation for clinical study reports, patient narratives for submission data and medical coding.” But it remains in the proof-of-concept stage in clinical data review and clinical data management. “AI is still not ready to be the final arbiter of decisions with a direct impact on an individual’s care,” he writes.

“The important tools life sciences practitioners must bring to this work are skepticism and humility—two very human qualities,” Kulkarni points out.

Tech to Watch Blog
Cognizant’s weekly blog