
AI has started to take centre stage at all levels of government. But how can the public sector move from hype to reality and start realising its value?

It’s impossible to scroll through LinkedIn or the news these days without every other post being about AI. AI has even been named the 2023 ‘Word of the Year’ by the Collins Dictionary, reflecting its pervasive influence in our daily conversations. 

What’s striking is that this wave of enthusiasm is sweeping through the public sector with as much force as other sectors, if not more. In fact, this year has seen government excitement around artificial intelligence (AI) that’s unlike anything I’ve witnessed before.

More hype than Cloud and Big Data

Ten years ago, the public sector experienced a transformational wave with the emergence of Cloud and Big Data technologies. Like AI, these technologies held the promise of flexibility, agility, scalability and cost savings from modest investments. Yet they didn’t attract quite the same levels of hype that we’re seeing now with AI. 

This is partly because the push to adopt AI is coming from the very top, as evidenced by Prime Minister Rishi Sunak’s recent AI Safety Summit at Bletchley Park.

But it’s also evident lower down. At the government’s DataConnect conference in September, for example, every AI session I joined had hundreds of Civil Service attendees keen to learn how to put AI into action. This certainly isn’t the norm for virtual conferences, where attendees have to set aside time in their working day to join yet another Zoom call.

Notably, the Turing Institute-led session drew substantial interest, with over 300 attendees tuning in to hear its analysis of which government services could benefit from the use of AI. The data-focused Institute shared research showing that services with higher transaction volumes and a greater share of routine tasks have the highest potential for AI.

Departments are exploring AI through lighthouse projects and hackathons  

From my previous experience leading the Home Office Digital, Data and Technology Strategy function, I’ve seen first-hand how AI is becoming central to forward-looking digital, data and technology agendas.

Some, like the Department for Work and Pensions (DWP), are testing AI through lighthouse projects: ambitious, high-impact initiatives that can serve as exemplars, showcasing the technology’s potential to the rest of the department. DWP has said its lighthouse programme is being used to “test and learn” in a safe and governed environment to improve customer outcomes and departmental efficiency.

Others are using hackathons: defining a problem and inviting technologists to hack away at an AI-based solution. The Department for Education (DfE) has looked at how AI could reduce teachers’ workloads by writing lesson plans or marking exam papers, while the Home Office has explored how AI could help reduce the asylum backlog.

The centre is focused on AI capability-building  

At the same time, a strong push is underway at the heart of government around AI-driven transformation. The 10 Downing Street Data Science team, Evidence House, aims to “radically upskill civil servants in data science, development and AI,” while the Incubator for AI (i.AI) has been created to upgrade AI capabilities and skills across the Civil Service. 

Meanwhile, the Cabinet Office’s Central Digital and Data Office (CDDO) has been quizzing departments on which of the top 75 cross-government services show potential for AI transformation and is producing guidance for GOV.UK on how to use AI safely.

Caution around ethics and risk means real applications are still lacking

But while there's lots of AI noise and capability-building going on, actual use cases are still in short supply. Even at the AI-focused DataConnect event, discussions focused on broad themes rather than specific examples.

One notable exception is DWP, which has discussed an early AI project that focuses on prioritising vulnerable people’s enquiries about access to DWP systems and benefits. But that project is couched in caution: DWP has stated that it will never use AI to determine or deny a payment to a claimant. A human agent will always make the final decision, to ensure safeguarding of individuals. 
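To make that human-in-the-loop principle concrete, here is a minimal sketch of how an AI triage step might order an enquiry queue so that vulnerable cases reach a human agent sooner, without the model ever deciding an outcome. DWP has not published its implementation, so the keyword weights, function names and thresholds below are purely illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical signals a triage model might weight; a real system would use
# far richer features and a properly validated model.
VULNERABILITY_SIGNALS = {
    "eviction": 3,
    "no money for food": 3,
    "carer": 2,
    "disability": 2,
    "missed payment": 1,
}

@dataclass
class Enquiry:
    reference: str
    text: str

def priority_score(enquiry: Enquiry) -> int:
    """Score an enquiry so vulnerable cases are seen by a human agent sooner."""
    text = enquiry.text.lower()
    return sum(weight for phrase, weight in VULNERABILITY_SIGNALS.items() if phrase in text)

def triage(enquiries: list[Enquiry]) -> list[Enquiry]:
    # The AI only orders the queue; every case still goes to a human,
    # and no payment is ever determined or denied by the model.
    return sorted(enquiries, key=priority_score, reverse=True)

if __name__ == "__main__":
    queue = triage([
        Enquiry("A1", "I have a question about my latest statement."),
        Enquiry("A2", "I am facing eviction and have no money for food."),
    ])
    for enquiry in queue:
        print(enquiry.reference, priority_score(enquiry))
```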

Even at the top, Sunak’s enthusiasm for AI is tempered by concerns over ethics and risk. His Bletchley Park Summit kicked off with a declaration about the global need for coordinated action to mitigate AI risks, and he recently announced the UK AI Safety Institute, signalling the government’s commitment to responsible AI use.

Elsewhere, I’ve heard that a large chunk of the much-anticipated AI guidance from CDDO will be on ethics, and at departmental level, the current focus is on drafting AI ethics strategies, rather than embarking on lighthouse projects. 

Early indications point to the use cases of the future

This sensible focus on ethics and risk means it may be a while before we see real use cases being showcased at high-profile events. Yet there are signs that the public sector is already discreetly using AI in certain areas, and these may point to what lies ahead.

Some government recruitment teams have embraced generative AI to speed up the process of writing job advertisements, for example. AI has also made inroads into communications, helping to craft letters and other types of citizen-facing content, and some developers are using AI to speed up the writing of code.
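As an illustration of how a recruitment team might get a first draft from a general-purpose large language model, here is a minimal sketch in Python. It assumes the OpenAI Python SDK and an illustrative model name; the tools departments actually use have not been disclosed, and the draft_job_advert helper is hypothetical. A human would always review and edit the output before publication.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and an API key is configured

client = OpenAI()

def draft_job_advert(role_title: str, grade: str, responsibilities: list[str]) -> str:
    """Ask a general-purpose LLM for a first draft that a human then edits and approves."""
    prompt = (
        f"Draft a UK Civil Service job advertisement for a {role_title} at {grade} grade. "
        "Use plain English and an inclusive tone. Key responsibilities: "
        + "; ".join(responsibilities)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice only
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_job_advert(
        "Data Engineer",
        "Senior Executive Officer",
        ["build and maintain data pipelines", "uphold data quality standards"],
    ))
```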

There are also indications that AI technologies are assisting customer service representatives to manage calls more efficiently—for example by using sentiment analysis tools to help representatives respond appropriately. 
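A minimal sketch of that sentiment-analysis idea is below, using an off-the-shelf classifier from the Hugging Face transformers library as a stand-in. The tools actually deployed in government contact centres are not named publicly, and the flag_for_agent helper and its threshold are illustrative assumptions.

```python
from transformers import pipeline  # assumes the Hugging Face transformers library is installed

# Off-the-shelf sentiment classifier; a production contact centre would more
# likely use a service tuned to its own call transcripts.
classifier = pipeline("sentiment-analysis")

def flag_for_agent(utterance: str, threshold: float = 0.9) -> str:
    """Return a simple hint that an agent's desktop could display alongside the call."""
    result = classifier(utterance)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] >= threshold:
        return "Caller sounds frustrated: acknowledge the issue and offer clear next steps."
    return "No prompt needed."

if __name__ == "__main__":
    print(flag_for_agent("I've been waiting six weeks and nobody has called me back."))
```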

These emerging applications chime with analyst firm IDC’s analysis of AI use in the public sector across Europe, which lists the top five use cases (in priority order) as:  

  1. Knowledge management applications, like synthesising data to respond to freedom of information requests 
  2. Conversational applications, like citizen or employee chatbots
  3. Marketing applications, like creating content and schedules for a social media public information campaign
  4. Design applications, like writing RFP documents for public tenders or generating urban planning design ideas 
  5. Software development, like writing and testing code

AI will need stronger collaboration between UCD and technology communities

The cautiously emerging use of AI in government may help to heal a long-running rift between user-centred design (UCD) folk and technology-focused professionals.

In my experience, there’s always been hesitancy on both sides to form deep collaborations and get involved in each other’s disciplines. AI has the potential to bridge that divide by bringing professionals together in a way that innovative technology has not been able to do before. 

That’s because, while AI is a technology first and foremost, its wider implications should interest UCD people focused on accessibility, ethics and experience. After all, the idea behind using AI in many services and applications is to mimic human interaction or abilities. UCD experts will therefore be needed to design AI services that feel human.

In fact, it wouldn’t be an exaggeration to say that the success of AI in government rests on the willingness and ability of UCD professionals and their technology counterparts to collaborate deeply.

A cautious but determined approach is the right way forward

In conclusion, the Civil Service is rightly approaching AI with a deliberate and cautious mindset. However, the time must come when the public sector transitions from discussing AI as a concept to actively implementing and benefiting from its potential. 

As a robust AI infrastructure is established under the guidance of the CDDO and supported by AI ethics strategies, departments will gain more confidence to explore AI applications. Insights gained from lighthouse projects and hackathons will provide AI advocates with valuable iterative opportunities. Ultimately, success will hinge on deep collaboration between the UCD and technology communities.

The pivotal question is how fast the pace of AI adoption should be. Based on my analysis above, the prudent course appears to be one of cautious integration, similar to the approach to cloud adoption a decade ago. 

As with any technology innovation, the public sector must first conduct risk assessments, and this can sometimes impede progress. However, it is through these deliberate conversations around risk that the public sector can navigate the challenges and start to move forward confidently in its AI journey.


Rosalie Marshall

Senior Manager, Public Sector Consulting, UK&I, Cognizant
