
As 2023 draws to a close, we can look back on a year that—from a business perspective—has been dominated by talk of artificial intelligence (AI) and its potential to transform organisations and the way we work. Nowhere is that potential greater than in the global Life Sciences sector.

AI has the power to revolutionise the Life Sciences industry, driving change at every stage of the value chain: from early discovery through to the commercial phases. It is projected that AI will identify candidate drug molecules a thousand times faster than existing methods. Gains on this scale are an unambiguous sign that your business must change the way it works or risk being left behind.

To ensure this level of change happens within sustainable business practices, organisations must consider their responsibilities when innovating with AI technologies. This means adhering to regulation throughout the innovation cycle, while also accounting for the wider ethical principles of harm minimisation, safety and equity. Life Sciences is already a tightly regulated sector, with high thresholds of compliance when it comes to upholding ethical research standards and patient safety. How will, or should, regulators and the sector proceed to ensure that AI practices do not erode these protections?

The nature of the risk

One key risk for Life Sciences is bias in data sets. If AI tools are trained on biased data, their outputs will magnify those biases. Historically, women and ethnic minorities have been excluded from clinical research, and this bias is carried over into the data sets used to train AI tools. If not properly handled, AI-enabled R&D built on such data sets risks producing outcomes that are unfit for these underrepresented demographics.

Another risk is the absence of moral agency when AI generates decisions. These tools can make mistakes (known as hallucinations in generative systems), which raises questions of accountability for AI-enabled outputs and of liability for harms. The claims being made for the potential of AI are often on a grand scale, and without the right checks and balances we must assume that its mistakes will be of a similar magnitude.

A complex regulatory landscape

Lawmakers and policymakers understand that regulation needs to be established or strengthened to prevent and mitigate AI-enabled risk, and there are already a number of players on the global regulatory pitch.

The UK government has published a white paper setting out its intent to take a principles-based, context-specific approach to AI regulation, indicating that Life Sciences will be regulated as its own domain by existing regulatory bodies. The government has also labelled certain AI tools as posing ‘frontier’ risks that require international cooperation. The recent AI Safety Summit called out several of these risks to society, including some pertinent to Life Sciences, such as the development and use of biological weapons. The resulting Bletchley Declaration marks an important first step towards international cohesion, though concrete steps to sustain this global dialogue are yet to be announced.

The EU AI Act takes a risk-based approach, regulating AI tools according to the level of risk they pose. The AI Liability Directive and the Product Liability Directive complement this, legislating for fault-based and no-fault compensation claims relating to AI technologies. This will likely affect how UK businesses engage with the EU market and with Life Sciences R&D, because the speed of AI-enabled innovation may conflict with the rigidity of the EU approach.

Further afield, the US has issued an Executive Order on AI, and China is taking a controlled approach, particularly with regard to generative AI (gen AI) tools. It’s clear, however, that we are only at the beginning of this story, with the UK and others still finding their feet when it comes to regulating these fast-moving technologies.

Cognizant and Northeastern University London  

How can organisations navigate this complex regulatory landscape and ensure the responsible development and delivery of AI technologies, in the Life Sciences industry and beyond? Cognizant believes a considered and collaborative approach is required. As an organisation passionate about technology, we understand the transformative potential that AI offers, while recognising that the technology industry alone won’t have all the solutions to the risks it poses.

To this end, we have partnered with Northeastern University London to develop a series of workshops that bring our associates and clients together with wider industry, academia and civil society to discuss how to approach AI responsibly. Through its education programmes, Northeastern University London has a track record of examining the impacts of AI in the UK and beyond, and it recognises the value of bringing together AI ‘thinkers’ (such as academia) and ‘doers’ (such as Cognizant).

This event series kicked off on Tuesday, 21 November, with a discussion on AI Policy & Regulation. The event brought together five expert panellists with diverse experience, saw lively engagement from the audience and produced key takeaways pertinent to the Life Sciences industry.

1. Building on existing legislation

Existing legislation such as the GDPR, and bodies such as NICE, the MHRA and the FDA, already regulate and guide data, technology and Life Sciences around the globe. While businesses wait for AI-specific legislation, these foundations provide mandates and guidance for responsible innovation. One argument went as far as suggesting that this existing framework might already be sufficient, and that, without clarity on what outcome we want to see or what we are looking to protect ourselves from, new regulation risks muddying the waters and potentially stifling innovation.

2. Scaling regulation with AI audits

Something that was not in question was the pace at which these new technologies are expected to develop, and how the scale of their impact will accelerate. Applying and scaling AI regulation across the R&D innovation lifecycle may also require new professional roles to reinforce accountability, such as ‘AI auditors’, who would ensure that pathways to innovation are transparent and that suitable actions are taken to counter non-compliance.

3. A people-first approach

Like other technologies, AI is an enabler that helps people achieve their objectives. For regulation to be effective, people need to understand what it means for their day-to-day interactions with AI technologies. And as with any significant change programme, businesses need to take their people with them.

Cognizant looks forward to continuing this partnership with Northeastern University London in the new year. The workshops will continue on a quarterly basis, with ‘Sustainability’ and ‘Creativity’ lined up as the next two discussion themes. Look out for more information; the next workshop is scheduled for 20 February 2024 at the Cognizant London office.

 


Cognizant UK & Ireland

Our experts are contributing exciting insights about what's going on within technology and innovation.



