Artificial intelligence and machine learning (AI/ML) can help life sciences providers understand the human genome, assess the effectiveness of treatments and reduce the spread of disease.

However, regulators are only just beginning to write the ground rules for assuring the effectiveness and safety of these solutions. By understanding and helping to shape these regulations, life sciences companies can be among the leaders in using AI/ML effectively and safely.

To prepare for evolving regulations, we recommend life sciences companies focus on these three areas.

1    Governance models

We expect the deployment of AI/ML-based life sciences solutions will require significant changes to existing governance requirements, including:

  • Proof of reliable feedback and learning mechanisms to ensure AI/ML solutions meet the needs of patients and caregivers and produce ethical and unbiased recommendations.

  • Identification of KPIs and targets for AI/ML solutions, along with explanations of outliers in the data used to train the algorithms and plans to address those issues (for one simple illustration, see the sketch after this list).

  • Evaluation frameworks and governance models to ensure the data used in AI/ML models is clean and reliable.

  • Pre-market assurance protocols governing change control and documentation, the determination of acceptable risks, and the actions taken on those determinations.

  • Access to regulatory veterans, built in-house or through partners, who understand the quality, format and other characteristics of the data required to prove compliance.
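The outlier point above can be made concrete with a small data-quality check. The sketch below is a minimal illustration, assuming pandas, hypothetical column names and a modified z-score (median/MAD) rule that works on small samples; it flags training records a sponsor would need to explain or remediate.

  import pandas as pd

  def flag_outliers(df: pd.DataFrame, columns: list[str], threshold: float = 3.5) -> pd.DataFrame:
      """Return rows whose modified z-score (median/MAD based, robust for
      small samples) exceeds `threshold` in any of the listed columns."""
      mask = pd.Series(False, index=df.index)
      for col in columns:
          deviation = (df[col] - df[col].median()).abs()
          mad = deviation.median()
          modified_z = 0.6745 * deviation / mad
          mask |= modified_z > threshold
      return df[mask]

  # Hypothetical training data for an AI/ML model.
  training_data = pd.DataFrame({
      "age": [34, 51, 47, 29, 120],              # 120 is an implausible age
      "systolic_bp": [118, 135, 127, 210, 122],  # 210 is a suspicious reading
  })

  # Prints the flagged rows, ready for reviewer explanation and remediation plans.
  print(flag_outliers(training_data, ["age", "systolic_bp"]))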

2    Data management

Because data is so central to the development and refinement of AI/ML solutions, expect strict controls on how life sciences companies choose, use and protect such data.

Companies that provide software as a medical device (SaMD), which is software used to diagnose or treat conditions that is not part of a physical device, will need to show they can leverage usage data to understand how their products are being used, identify opportunities for improvement, and respond proactively to safety or usability concerns.

To protect data privacy while still enabling personalization, regulators will require that:

  • Data used to train AI models is accurate and has not been compromised.

  • Datasets created in real time are anonymized in real time (a minimal sketch follows this list).

  • Hidden biases or other defects in the algorithms do not lead to erroneous diagnoses or recommendations for the intended patient.

  • New data and algorithms don’t contain malware or other threats that could jeopardize patient data, the security of the AI/ML system or other platforms with which it shares data.
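As a minimal sketch of the real-time anonymization point above, the function below strips direct identifiers from each incoming record and replaces the patient ID with a salted one-way hash before the record is stored or used for training. The field names and the salt handling are illustrative assumptions, not a compliant de-identification scheme.

  import hashlib
  import os

  # Illustrative salt; in practice this would come from a managed secret store.
  SALT = os.environ.get("PSEUDONYM_SALT", "example-salt")

  DIRECT_IDENTIFIERS = {"name", "email", "phone"}  # hypothetical field names

  def pseudonymize(record: dict) -> dict:
      """Strip direct identifiers from an incoming record and replace the
      patient ID with a salted, one-way hash so records can still be linked."""
      cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
      raw_id = str(record["patient_id"]).encode()
      cleaned["patient_id"] = hashlib.sha256(SALT.encode() + raw_id).hexdigest()
      return cleaned

  event = {"patient_id": 1042, "name": "Jane Doe", "email": "jane@example.com",
           "heart_rate": 72, "timestamp": "2024-01-01T12:00:00Z"}
  print(pseudonymize(event))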

When choosing data testing and analysis models and tools, look for robust document management and reporting, audit trails, content protection, integration with other information management and workflow systems, and the ability to present safety information in the form of diagrams.
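To illustrate what tamper-evident audit trails and content protection can look like in practice, here is a hedged sketch of an append-only log in which each entry embeds a hash of the previous entry, so retroactive edits become detectable. It is an illustration of the idea, not a validated implementation.

  import hashlib
  import json
  from datetime import datetime, timezone

  class AuditLog:
      """Append-only audit trail; each entry chains to the previous entry's hash."""

      def __init__(self):
          self.entries = []

      def append(self, actor: str, action: str, detail: dict) -> dict:
          prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
          body = {
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "actor": actor,
              "action": action,
              "detail": detail,
              "prev_hash": prev_hash,
          }
          body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
          self.entries.append(body)
          return body

  log = AuditLog()
  log.append("data.scientist@example.com", "dataset_approved", {"dataset": "train_v3"})
  log.append("qa.lead@example.com", "model_released", {"model": "risk_model_1.2"})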

3    Reporting, tracking and validation

Because AI/ML life sciences solutions and their uses will change more often than conventional offerings, regulators will demand robust processes for change and risk management, safety and audit, and quality assurance.

Devices relying on AI/ML must be validated using analytical and clinical data and follow Good Machine Learning Practices (GMLP). These practices include assuring that the data is relevant to the clinical problem and current clinical practice, that it is acquired in a consistent, clinically relevant and generalizable manner, and that appropriate separation is maintained between training, tuning and test datasets.
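The dataset-separation requirement is the most directly codifiable of these practices. A minimal sketch, assuming scikit-learn and a patient-level identifier, splits records so that no patient appears in more than one of the training, tuning and test sets, which avoids leakage between partitions:

  import numpy as np
  from sklearn.model_selection import GroupShuffleSplit

  def split_by_patient(X, y, patient_ids, seed=0):
      """Split into train / tune / test (roughly 60/20/20) so that records from
      the same patient never appear in more than one partition."""
      # First carve off a held-out test set by patient group.
      gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
      train_tune_idx, test_idx = next(gss.split(X, y, groups=patient_ids))

      # Then split the remainder into training and tuning sets, again by patient.
      gss2 = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=seed)
      inner_groups = patient_ids[train_tune_idx]
      train_idx, tune_idx = next(gss2.split(X[train_tune_idx], y[train_tune_idx],
                                            groups=inner_groups))
      return train_tune_idx[train_idx], train_tune_idx[tune_idx], test_idx

  # Hypothetical data: 100 records from 30 patients.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(100, 5))
  y = rng.integers(0, 2, size=100)
  patients = rng.integers(0, 30, size=100)

  train, tune, test = split_by_patient(X, y, patients)
  assert not set(patients[train]) & set(patients[test])  # no patient overlap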

Developers of SaMD solutions will need reporting frameworks and tools that reflect the SaMD pre-specifications (SPS) and algorithm change protocol (ACP) documents, which define the changes planned for a device and how the manufacturer will implement them.
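As one hypothetical illustration, loosely modeled on the proposed SPS and ACP concepts rather than any official schema, the planned changes and the procedures for implementing them can be captured in a structured form that reporting tools can read:

  from dataclasses import dataclass, field

  @dataclass
  class PlannedChange:
      """One anticipated modification listed in the SaMD pre-specifications (SPS)."""
      change_type: str      # e.g. "performance", "inputs", "intended_use"
      description: str
      risk_category: str    # hypothetical labels: "low", "moderate", "high"

  @dataclass
  class ChangeProtocol:
      """How the manufacturer will implement and verify changes (ACP-style)."""
      data_management: str           # how new training data is sourced and checked
      retraining_procedure: str
      performance_acceptance: str    # pass/fail criteria before release
      update_procedure: str          # rollout, labeling and user communication
      planned_changes: list[PlannedChange] = field(default_factory=list)

  acp = ChangeProtocol(
      data_management="Curated registry data, reviewed quarterly",
      retraining_procedure="Retrain on the locked pipeline with fixed hyperparameters",
      performance_acceptance="AUC >= 0.85 on held-out test set; no subgroup regression",
      update_procedure="Staged rollout with clinician notification",
      planned_changes=[PlannedChange("performance", "Retrain on new sites' data", "low")],
  )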

Life sciences companies will also need to tailor the frequency of their reports and the information they contain to the changing risk categorizations of the solution, the number and types of modifications made to it, and the maturity and reliability of the algorithms used.

Explainable AI/ML models can help describe how a model arrived at its predictions while also quantifying its uncertainty, flagging the questions the system cannot answer reliably. Finally, life sciences companies need frameworks to track and measure changes in the intended use of AI/ML solutions.
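As a hedged sketch of what explainability with quantified uncertainty can mean in practice, the example below trains a simple scikit-learn classifier, reports permutation-based feature importances as the explanation, and treats low predicted-class probability as a signal to abstain and refer the case to a clinician. The feature names and the 0.7 confidence threshold are illustrative assumptions.

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.inspection import permutation_importance

  rng = np.random.default_rng(1)
  X = rng.normal(size=(300, 4))  # hypothetical clinical features
  y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

  model = LogisticRegression().fit(X, y)

  # Explanation: which features the model's predictions actually rely on.
  imp = permutation_importance(model, X, y, n_repeats=10, random_state=1)
  for name, score in zip(["age", "biomarker_a", "biomarker_b", "dose"], imp.importances_mean):
      print(f"{name:12s} importance={score:.3f}")

  # Uncertainty: abstain when the model's confidence is low.
  for p in model.predict_proba(X[:5]):
      confidence = p.max()
      decision = int(p.argmax()) if confidence >= 0.7 else "refer to clinician"
      print(f"confidence={confidence:.2f} -> {decision}")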

Now, while the regulatory landscape is still in flux, is the time for life sciences companies to understand and help shape these new rules. That will help them best understand how AI/ML can improve their business; identify the skills they need to leverage AI/ML; find the right partners; and deliver the greatest value to patients, healthcare providers and the entire life sciences ecosystem.

To learn more, read "AI Regulation Is Coming to Life Sciences: Three Steps to Take Now," visit the Life Sciences section of our website, or contact us.