UK public servants share both excitement and trepidation about using AI in government

31/10/2023 | Updated on 31/10/2023
Pictured: the ‘Generating insight: how artificial intelligence can be used to identify efficiencies in government’ roundtable at Public Service Data Live

UK civil servants came together at Public Service Data Live to discuss the potential of AI to deliver better services, and the need for guidance to support its deployment in government

Need a quick digest of lengthy meeting notes, or perhaps a project appraisal in just a few seconds?

All this and more is now possible thanks to the rapid rollout and easy accessibility of generative AI tools such as Ernie Bot, Bard and, most famously, ChatGPT. These large language models, to give them their proper name, are much hyped, but in their ability to produce something very like human interaction and human-generated content the hype is justified. The speed with which they have moved from launch to the attention of corporate and government leaders has dazzled technology watchers.

The question of how government can harness the power of generative AI was the subject of a roundtable discussion, held in partnership with Deloitte, at Public Service Data Live in London on 14 September.

The session was held under the Chatham House Rule to allow participants to speak freely, meaning we may not identify those speaking. The attending civil servants were in curious but cautious mode: many were keen to discover if, where and how generative AI was being used in government, how to scale up and align approaches across departments, and how to use it safely and responsibly.

How to put AI into practice

While the transformative potential of AI was readily acknowledged, the discussion drew out early concerns about putting powerful ideas into practice.

Associated with this was the practical question of hosting and infrastructure. Cloud hosting can present a security risk, but the immense computational demands of generative AI models make them difficult to run on site. An increase in the number of UK cloud locations able to host such workloads could make a “massive difference” to this problem.

Some roundtable attendees expressed frustration that their departments had banned them from using the software because of security risks, while others lamented slow progress towards local implementation.

This was countered with the perspective that very large language models were unlikely to be the direction either the corporate world or governments would take.

“You don’t need a 175 billion parameter model that knows everything from the lyrics of the Spice Girls to the latest financial transactions,” the roundtable heard. “It’s interesting, but it doesn’t give you huge business value and you’re training it on loads of stuff that it doesn’t need.”

What was more likely was that government departments would use their own smaller models, trained on relevant data for specific use cases.

“The minute you start doing that, the output you get is so much better than what you get from ChatGPT and other models.”

This made defining clear and specific use cases important for AI in government, another attendee observed.

Across the breadth of the civil service, expectations of generative AI were fairly modest. One delegate shared that in a staff survey most people saw the benefit in answering questions about, and helping people interpret, HR policies and entitlements.
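
To make the “smaller model trained on relevant data” idea concrete, here is a minimal sketch of how such an HR-policy assistant might look, assuming the open-source Hugging Face transformers library; the model name, policy extract and question are illustrative placeholders rather than any real government system.

```python
# A minimal sketch of the "small model plus relevant data" idea discussed
# above. Assumes the Hugging Face `transformers` library; the model, policy
# extract and question are illustrative placeholders, not a real system.
from transformers import pipeline

# A compact open model (~82M parameters), not a 175-billion-parameter one.
generator = pipeline("text-generation", model="distilgpt2")

# In practice a department would fine-tune on its own documents; here the
# relevant policy text is simply supplied alongside the staff question.
policy_extract = (
    "Staff may carry over up to five days of unused annual leave into the "
    "next leave year, subject to line-manager approval."
)
question = "How many days of annual leave can I carry over?"

prompt = f"Policy: {policy_extract}\nQuestion: {question}\nAnswer:"
result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])
```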

It was agreed that generative AI could help create more openness and transparency, not just for civil servants working within government, but for the citizen. It offered “really good, low-hanging fruit across departments”. Helping citizens navigate their way through, for instance, complex international travel advice was highlighted as a way generative AI could add a lot of value and improve efficiency.

AI use in government remains unclear

There was concern, however, that departments would act in silos, duplicating work rather than sharing knowledge.

While the Central Digital and Data Office (CDDO) is leading the government’s internal AI strategy, there was some concern that much remained unclear.

“CDDO are trying to coordinate but obviously, it’s mushrooming up everywhere so I think one of the key challenges for them is to actually find who is doing stuff and try to bring us together.”

Still, many in departments were hesitant and risk-averse. One participant talked of approaches from “unscrupulous private sector providers” selling ideas that would harness the technology’s potential.

“But actually, I can’t move on this stuff now because I don’t think we’re in a policy position to be able to do those things unless somebody tells me otherwise.” This meant disappointing colleagues who were excited about the technology’s potential.

Legalities, however, were sometimes used as inhibitors to action, another attendee observed.

“But until you move from strategy to trying some of these things out, learning the lessons, having the arguments with legal and various stakeholders as to why it can’t be done, you’re just not going to get anywhere.”

Another delegate shared that the government’s Evidence House data science initiative was working with many different departments on use cases and would present findings at the global AI summit in November.

However, there was some challenge back that testing ideas and proofs of concept needed to be accompanied by practical guidance to transform them into delivery.

The discussion then moved on to consider the potential for generative AI to improve interactions with citizens, with attendees highlighting both the opportunities and challenges around the issue of public trust.

It was important to use “human-in-the-loop” systems, in which humans assure and validate machine outputs, to improve accuracy, adjust tone and retain trust, one attendee said.

“We can work with the data that we have, giving it a certain tone, but there always needs to be that sense checking,” one attendee said.

“It’s essential for public trust to know that this is a trusted government voice.”
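
As a rough illustration of that sense-checking step, the sketch below shows one possible approve-edit-reject flow; draft_reply() is a hypothetical stand-in for a language-model call, and nothing here reflects an actual government workflow.

```python
# A minimal human-in-the-loop sketch: nothing is released without a human
# reviewer approving or correcting the machine draft. draft_reply() is a
# hypothetical stand-in for a language-model call, not a real API.
def draft_reply(query: str) -> str:
    # Placeholder for a model call; returns a machine-drafted answer.
    return f"[machine draft] Our current guidance on '{query}' is ..."

def human_in_the_loop(query: str) -> str:
    draft = draft_reply(query)
    print("Machine draft:\n", draft)
    verdict = input("Approve, edit or reject? [a/e/r] ").strip().lower()
    if verdict == "a":
        return draft  # human has validated the output as-is
    if verdict == "e":
        return input("Enter the corrected text: ")  # human adjusts accuracy/tone
    raise RuntimeError("Draft rejected: no output without human sign-off.")

if __name__ == "__main__":
    print("Released:", human_in_the_loop("carrying over annual leave"))
```

The design choice worth noting is that the sketch fails closed: a rejected draft raises an error rather than slipping through, which is what keeps the published output a human-approved, trusted government voice.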

But how to make human-in-the-loop systems work was itself a challenge when AI is likely to automate so much of the work that junior and early-career civil servants currently produce. Large language models were now at the stage of producing work equivalent to that done by a junior with one or two years’ experience, the session heard.

“How do we as humans learn to become those expert humans in the loop if we have the technology doing all this stuff for us?” one attendee asked.

“It’s worth thinking about this now, because it’s a problem that’s going to become very acute and very apparent in two or three years.”

Getting the right skills for AI

It was likely, however, that capability frameworks would adapt, and the skills required of a junior person might more closely align with those currently required of a senior person, another attendee interjected.

“Then you release the senior people to do other things. You give them the time to do the thinking, to consider broader strategic [issues]… wouldn’t we all love a couple of extra hours in our week because we didn’t have to write a note after a meeting?”

Attitudes to AI differed across government and tended to follow a classic adoption curve, the roundtable agreed. There were resisters and early adopters.

“I haven’t seen any patterns about where the early adopters are,” one delegate shared. “They’re just everywhere. It’s not the case that our technology department are all early adopters, which is a presumption that a lot of people make. They’re everywhere, dotted about.”

Data and technology experts were often more nervous of adopting AI than their non-techie counterparts, typically because they had a greater appreciation of the risks involved, a couple of attendees pointed out.

“The people who are most worried are the people who understand the difficulty of implementing something like this.”

One attendee shared that her department had developed and communicated a policy for the use of ChatGPT. “It’s the clearest policy I’ve seen [but] it’s not resulted in widespread usage as people are so nervous.”

So while there is clearly much enthusiasm in government about the potential of large language models across a wide range of use cases, the lack of a common understanding of where and how the technology is being used, and how it might be scaled up, is a clear concern for civil servants.


The ‘Generating insight: how artificial intelligence can be used to identify efficiencies in government’ roundtable was held at the Public Service Data Live conference in London on 14 September in partnership with knowledge partner Deloitte.

It was attended by:
Edward Palmer, Ministry of Defence
Stephen Edmundson, Department for Business and Trade
Amanda Svensson, Cabinet Office
Joe Torjussen, Cabinet Office
Ashely McAlister, UK Health Security Agency
Elinor Godfrey, Foreign, Commonwealth and Development Office
Katie C, Government Communications Headquarters (GCHQ)
Tim Ketton-Locke, Ministry of Defence
Kevin Giles, Crown Commercial Service
Event moderator: Vivienne Russell, Global Government Forum

About Partner Content

This content is brought to you by a Global Government Forum Knowledge Partner.
