
The evolution of artificial intelligence (AI) technologies is unlocking new possibilities for automation, decision-making and creativity across all industries, including life sciences.

Yet alongside the transformative potential of AI, organisations must grapple with a host of challenges and ethical considerations. AI-driven automation threatens job displacement and could exacerbate existing inequalities. In the face of such challenges, responsible organisations will need to embrace practices that prioritise ethical integrity throughout the AI lifecycle.

The EU AI Act is a legal framework for the development, supply and use of AI products and services. The legislation—designed to ensure safety and to respect fundamental rights and values—has the objectives of fostering investment and innovation in AI while minimising the risks to consumers.

The diagram below outlines the main concepts of the EU AI Act, which affects a wide range of stakeholders. By navigating these areas, life sciences companies can drive innovation while remaining mindful of regulations designed to prevent AI misuse, and so avoid fines. They must ensure they do not inadvertently invest in AI that could be prohibited under Article 5, while also avoiding scenarios where the benefits of tech-driven change are outweighed by compliance requirements.

[Figure: Main concepts of the EU AI Act]
AI regulatory sandbox—Collaborative opportunities for pharma firms, big and small

Although the EU AI Act has been criticised for putting the brakes on innovation, it also contains elements that very much encourage it. Sandboxes, for example, are controlled environments where AI systems can be tested and validated in compliance with the Act, enabling collaboration across regulators, businesses and other stakeholders. The Act’s approach here is for compliant entities to ‘train’ innovative AI before it is active in the market. However, it will be important to ensure that sandboxes are readily accessible to smaller companies and startups—not just to large companies with greater resources.

Article 53 of the Act requires each member state to ensure that its competent authorities establish at least one AI regulatory sandbox at national level, operational within 24 months of the Act entering into force. Sandboxes may also be established jointly with one or more other member states, and the Commission may provide technical support, advice and tools for their establishment and operation. In practice, this means each national regulator should have a sandbox in place by 2026.

In the UK, the MHRA has launched its own AI sandbox, “AI Airlock”, suggesting that individual sector regulators may in future offer this approach as a more targeted way forward.

Working together—The collaboration imperative

There are good reasons for larger pharma companies, which have access to regulatory sandboxes, to undertake such innovation journeys in partnership with smaller pharma companies.

Smaller companies often possess innovative technologies and research capabilities, such as novel computational models, that complement the resources and expertise of larger firms. In addition, many smaller outfits are also rightly known for their agility, flexibility, and entrepreneurial spirit, enabling them to explore unconventional approaches and novel therapeutic targets. By partnering with small pharma firms, big pharma companies can tap into disruptive technologies in areas such as precision medicine, immunotherapy and gene editing. With AI added to the mix, the potential for innovation has never been greater.

There is also the question of speed to market. Collaborative partnerships between big pharma and small pharma organisations can translate scientific discoveries into tangible therapeutic solutions, often more quickly than either party could alone. By combining forces, both partners can streamline drug development processes and expedite the commercialisation of promising drug candidates. One example of successful collaboration is the relationship that Pfizer’s Centers for Therapeutic Innovation (CTI) has with academic institutions, which is helping to accelerate the process of turning scientific discoveries into novel therapeutics. AI allows that collaboration to be taken to another level.

The downstream use of data

Chief among the challenges that pharmaceutical companies face as they harness the power of AI is how to preserve data integrity and align downstream processes with the original purpose of data collection. The EU AI Act underscores the importance of transparency, accountability and fairness in AI deployment. This requires stringent compliance measures to counter algorithmic bias and other unintended consequences.
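To make the purpose-limitation point concrete, here is a minimal sketch of how such a check might be enforced in code. Everything in it (the record structure, the purpose tags and the function name) is a hypothetical illustration, not anything prescribed by the Act:

```python
from dataclasses import dataclass

# Hypothetical dataset record: purposes are declared once, at collection time.
@dataclass(frozen=True)
class DatasetRecord:
    dataset_id: str
    collection_purposes: frozenset

def check_downstream_use(record: DatasetRecord, proposed_purpose: str) -> bool:
    """Flag downstream processing that falls outside the purposes
    declared when the data was originally collected."""
    return proposed_purpose in record.collection_purposes

trial_data = DatasetRecord(
    "DS-042", frozenset({"clinical_trial_recruitment", "safety_monitoring"})
)

# Permitted: matches a declared collection purpose.
assert check_downstream_use(trial_data, "safety_monitoring")

# Blocked: training a commercial model was never a declared purpose.
assert not check_downstream_use(trial_data, "marketing_model_training")
```

Even a simple gate like this makes the original purpose of collection a machine-readable property of the data, rather than something buried in a consent form.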

The Act also provides opportunities for data-driven innovation in pharmaceuticals. For organisations that operate with a clear understanding of this regulatory scrutiny, appropriately deployed AI technologies can turbocharge drug discovery, with algorithms offering unparalleled insights into complex biological systems, expediting the identification of promising candidates and optimising preclinical tests. Similarly, AI-driven predictive modelling makes more efficient clinical trials and patient care pathways possible.

New risks, new frameworks

The pursuit of AI-driven innovation in pharmaceuticals is not without risks. Algorithmic biases, data drift and unintended uses of data pose significant threats to the integrity and fairness of AI systems. To mitigate these risks, pharmaceutical companies must adopt comprehensive governance frameworks so that data can be verified, tracked and audited. Moreover, transparent documentation of AI model development and validation processes is paramount to ensuring accountability and regulatory compliance under the AI Act.

The operationalisation of this compliance requirement will likely sit with the data protection officer, although this depends on the size of the company. Life sciences companies will need structures in place that allow every part of the delivery chain to feed the right information into a comprehensive, real-time record of processing.
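As a rough illustration of what such a record of processing could look like, the sketch below uses a hash-chained, append-only log that each part of the delivery chain writes to. The structure, field names and actors are assumptions made for illustration; they are not mandated by the Act:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical append-only record of processing: each entry is chained to the
# previous one by hash, so any tampering with history becomes detectable.
class ProcessingRecord:
    def __init__(self):
        self.entries = []

    def log(self, actor: str, dataset_id: str, activity: str, purpose: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,          # which part of the delivery chain acted
            "dataset_id": dataset_id,
            "activity": activity,    # e.g. "ingestion", "model_training"
            "purpose": purpose,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

record = ProcessingRecord()
record.log("data_engineering", "DS-042", "ingestion", "safety_monitoring")
record.log("ml_team", "DS-042", "model_training", "safety_monitoring")
print(json.dumps(record.entries, indent=2))
```

The design choice worth noting is the chaining: because every entry incorporates the hash of its predecessor, an auditor can verify that the record has not been rewritten after the fact.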

EU AI Act and the liability regimes

Two notable and complementary pieces of legislation work alongside the AI Act: the AI Liability Directive and the revised Product Liability Directive (PLD). The PLD applies to claims made by private individuals against a manufacturer for damage caused by defective products; under the revised directive, a defect can include a lack of software updates or a failure to address cybersecurity vulnerabilities, and the rules also cover refurbished products. The revised PLD specifically extends the definition of damage to include loss or corruption of data and medically recognised harm to psychological health. This has huge relevance for the pharmaceutical industry and is intended to ensure that victims are fairly compensated for damages without the need to prove fault.

The AI Liability Directive, for its part, is designed around a ‘presumption of causality’. This simplifies the burden of proof for victims seeking to establish that damage was caused by an AI system, so that a claimant does not have to go through long and expensive processes to prove that the system did something wrong. Instead, the onus is on the party responsible for the AI system to present evidence that it was not at fault.

How to act on the Act

Life sciences organisations need to consider how their business will be affected by both the EU AI Act and the new liability regimes. They should carry out thorough risk assessments to determine whether their AI systems are classified as “high risk” under the legislation, and consider how they will comply with potential disclosure requests. Such activities should form part of their development roadmap for complying with the Act as well as the AI Liability Directive, and will also reduce risk when procuring AI systems.
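As a starting point for such an assessment, a simplified triage sketch is shown below. The four tiers mirror the Act’s risk categories, but the use-case labels and mappings are illustrative assumptions; classifying a real system requires legal analysis of Article 5 and the Act’s annexes, not a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5 practices, e.g. social scoring
    HIGH = "high"              # e.g. safety components, medical uses
    LIMITED = "limited"        # transparency obligations only
    MINIMAL = "minimal"        # no specific obligations

# Illustrative mapping only: a real classification requires legal review
# of the Act and its annexes, not a dictionary.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "clinical_decision_support": RiskTier.HIGH,
    "patient_triage": RiskTier.HIGH,
    "patient_faq_chatbot": RiskTier.LIMITED,
    "internal_document_search": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    # Default unknown systems to HIGH so they are escalated for review,
    # never waved through.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for system in ("clinical_decision_support", "patient_faq_chatbot", "unmapped_new_tool"):
    print(f"{system}: {triage(system).value}")
```

Defaulting unmapped systems to the high-risk tier is a deliberately conservative choice: it forces a human review before any new AI system is treated as low risk.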

Of course, AI is already hard at work in the pharma industry. In the realms of drug discovery, companies such as Novartis employ AI algorithms to analyse vast datasets and identify potential therapeutic targets with unprecedented precision. Similarly, Pfizer uses AI-powered predictive analytics to optimise clinical trial designs and expedite regulatory approvals.

By embracing the principles of transparency, accountability and consumer protection, pharmaceutical companies can chart a course towards a future where AI-driven technologies empower healthcare transformation, while upholding the highest standards of integrity and compliance.



Cognizant UK & Ireland

Our experts are contributing exciting insights about what's going on within technology and innovation.


