How to get the public sector ready for artificial intelligence

Published on 01/11/2023 | Updated on 01/11/2023
[Image: the ‘How to get the public sector ready for artificial intelligence’ roundtable at Public Service Data Live]

Governments worldwide are straining to catch up with advances in artificial intelligence (AI), but attempts to do so keep throwing up fresh questions around security and the most instructive use cases. Senior public servants discussed their concerns about the future of this revolutionary technology at Public Service Data Live.

Keeping on top of the fast pace of technological change in the modern world is one of the biggest issues that governments face.

When dealing with major digital and data developments, public sector organisations need to be careful not to move too quickly or too slowly. Either error risks wasting the time, energy and financial resources entrusted to them by the taxpayer.

This dilemma is particularly acute for governments as they attempt to determine the best way to use artificial intelligence. Move too fast and there’s a risk that public trust could fail to keep up; move too slowly and opportunities to improve public services could be missed.

The roundtable discussion, held at Public Service Data Live and supported by knowledge partner Hitachi Solutions, looked at how governments can get this balance right. The session was held under the Chatham House Rule – meaning that we may not identify those speaking – but participants highlighted many of the best use cases of AI in government.

One participant highlighted the fact that, in the education sector, students applying to universities are already using large language models such as ChatGPT and Google Bard to complete assignments. From here, the conversation centred on how ill-prepared the public sector was to deal with the speed of adoption. 

“They [students] will openly use OpenAI to complete work [because] that technology is available to them,” the participant said, even if, at an organisational level, staff are still discussing what they should do.

“The point is that we’re catching up with what do we do about the technology that’s out there.”

The roundtable heard from officials that many government departments are deciding whether to encourage or discourage the use of AI.

However, the need to get the balance right on AI deployment means that the resulting protocols can feel vague, and at times even contradictory.

Officials are being told not to use [AI] for any work-related matters, said one attendee, and yet “everyone’s using it”.

Another participant added that in the UK, departments such as the Home Office have tended to roll out new tools with in-built limitations designed to make them more secure. The problem with this, they said, is that such limitations often make the tools less functional and even counterproductive.

“As a change person, I want to encourage people to follow and then go past me, and be trailblazers in this area,” they said.

One of the attendees from Hitachi Solutions then explained how some of the current difficulties with AI security and protocol were being addressed by leading corporations. They pointed to the fact that OpenAI had recently released a playbook specifically for education.

Microsoft, meanwhile, has taken AI models that are typically retrained using the information fed into them and applied a layer of “enterprise-grade security”. What this meant, they said, was that such models could no longer be retrained using potentially sensitive information from within organisations.

Bolt-ons vs transformative change

Another participant expressed doubts that AI would necessarily form an integral part of making public servants’ jobs easier going forward.

“The challenge is that we’re all jumping to [an AI] solution before we’ve even considered whether there are other solutions to the problems that we have in government,” they said.

They gave the example of chatbots, an AI tool increasingly familiar to anyone who has ever tried to refund a product purchased online. Because chatbots are so common, government departments are keen to incorporate them as a fix or bolt-on feature to a service. However, the participant said, these departments often overlook the risks of incorporating chatbots relative to the benefits.

Another participant gave an example of where AI assistance is both highly risky and highly necessary. The UK government regularly receives large volumes of feedback via its main website, amounting to around 40,000 user comments each month. As well as users’ feedback, however, the information gathered often also contains personal details, such as full names and phone numbers. AI tools could comb this information for useful insights that could improve government policy, but separating this high-risk personal data from the more instructive data is hazardous, especially when the information comes in at such scale, and at such regular intervals.
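To make the separation problem concrete, here is a minimal, purely illustrative sketch of the kind of pre-processing step such a pipeline might need: masking obvious personal details in free-text comments before any AI analysis. The patterns and sample comment are hypothetical assumptions, not the UK government’s actual tooling; real personal-data redaction at this scale would need far more robust methods (such as named-entity recognition) than simple regular expressions.

```python
import re

# Hypothetical patterns for obvious UK-style phone numbers and email addresses.
# These are illustrative only and will miss many real-world formats.
PHONE = re.compile(r"\b0\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def redact(comment: str) -> str:
    """Mask phone numbers and email addresses so the remaining text
    can be analysed for policy insights with less personal data attached."""
    comment = PHONE.sub("[PHONE]", comment)
    comment = EMAIL.sub("[EMAIL]", comment)
    return comment

# Hypothetical user comment of the kind described above.
feedback = "Great service, call me on 020 7946 0991 or jane@example.com"
print(redact(feedback))  # Great service, call me on [PHONE] or [EMAIL]
```

Even a sketch like this shows why the task is hazardous: any pattern the redaction step misses flows straight through to the analysis stage, and at 40,000 comments a month, even a small miss rate leaves a large amount of personal data exposed.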

The future for public service careers in the AI age

The conversation then turned to jobs and job security. How would AI change the career paths and skills for civil servants?

One speaker asked how the career pipeline would be maintained for officials if junior roles were increasingly likely to be replaced by AI. How can anyone learn to fill a senior role without first filling a junior one?

This concern was picked up by a participant who said that they had noticed certain government departments only permitted staff in certain roles to use AI tools. If an entry-level official does not enjoy access to the same tools as their senior colleagues, they said, then there is a risk they will get stuck in that role indefinitely.

“Everyone [should] have an equal access to the opportunities [of AI],” they said.

Another participant said that job security should not depend solely on the ability to program or use AI, and that traditional skills such as languages and interpersonal instincts would remain crucial to a functioning civil service.

“The conversation that I’ve been trying to have with my organisation is that we should be using open-source information to the best of its ability, but it doesn’t replace the traditional skills that actually [are] really valuable.”

Steps to implement changes

In the closing section of the discussion, participants considered some possible ways to make progress on AI deployment. A four-point model, already being deployed in some organisations, was highlighted. To use AI, employees need to: secure their boss’s permission to use the tool; record what was said and/or inputted; check that what the AI has produced is true; and declare that they have used the AI tools.
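The four-point model above amounts to a simple compliance checklist, which an organisation could in principle encode directly in its workflow tooling. The sketch below is a hypothetical illustration of that idea; the record structure and field names are assumptions for this example, not any department’s actual policy system.

```python
from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    """One record per AI-assisted piece of work, mirroring the
    four-point model discussed at the roundtable."""
    permission_granted: bool   # 1. boss's permission secured
    interaction_logged: bool   # 2. what was said/inputted recorded
    output_verified: bool      # 3. AI output checked for accuracy
    use_declared: bool         # 4. use of AI openly declared

    def compliant(self) -> bool:
        """All four conditions must hold before the output is used."""
        return all([self.permission_granted, self.interaction_logged,
                    self.output_verified, self.use_declared])

# A record that fails only the declaration step is still non-compliant.
record = AIUsageRecord(True, True, True, False)
print(record.compliant())  # False
```

The point of the model is that each step is checkable after the fact, which is what makes the declaration requirement in particular workable across contexts from education to CVs.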

“That feels like a common-sense solution, whether it’s education or CVs,” the session heard. “If people declared that they had used some form of AI system, that would make us all a bit more comfortable that we’d understood that.”

A participant from knowledge partner Hitachi Solutions summed up the discussion, noting that though AI had caught widespread attention over the preceding 18 months, real-world applications remained few relative to the hype.

“If you compare noise to practical application, it’s astronomical. There’s probably nothing at the moment that has more noise and less application anywhere in the world.”

A lot of potential uses are currently bolt-ons to existing services, rather than fundamental solutions that meet a business objective or improve the everyday lives of citizens.

“What everyone’s looking for at the moment is how do we actually… find use cases where there actually isn’t a better, simpler way to do it in a more traditional way.”

There’s no denying that AI will bring epoch-defining new capabilities to the public sector. But for now, the message from this roundtable is that many government organisations are still waiting to discover their best use cases – and to set the rules around AI’s use that could well lay the groundwork for the public services of the future.

AI is a change to the way we collectively interface with technology. To ensure its responsible deployment within government, guard rails, both technical and ethical, must be considered as part of any adoption path. Hitachi Solutions are helping multiple organisations on this journey.

Jack Murphy

‘How to get the public sector ready for artificial intelligence’ roundtable was held at the Public Service Data Live conference on 14 September in partnership with knowledge partner Hitachi Solutions.

It was attended by: 
Funmi Adeusi, Department for Levelling Up, Housing and Communities 
Juan Batley, Department for Education 
Michele Beard, Ministry of Justice 
James Bowsher-Murray, Ofsted 
Elizabeth Wright, Ofsted
Craig Whiteley, Department for Energy Security & Net Zero
David Reeves, Home Office
Daisy Wain, Cabinet Office  
Alex Jones, Office for Standards in Education, Children’s Services and Skills  
Elinor Godfrey, Foreign, Commonwealth and Development Office  
Katie C, Government Communications Headquarters (GCHQ)  
Edward Palmer, Ministry of Defence 
Jack Murphy, Hitachi Solutions Europe 
Ed Pikett, Hitachi Solutions Europe 
Moderator: Kevin Cunnington, Global Government Forum 

About Partner Content

This content is brought to you by a Global Government Forum Knowledge Partner.
