Governments navigate the double-edged sword of AI

By Matt Ross on 19/05/2024 | Updated on 04/06/2024

Artificial intelligence could boost civil service productivity by up to 45%, estimates suggest – but it also presents major new challenges for government, including rapid economic change and an urgent need for regulation. At the Global Government Summit, national civil service leaders debated the opportunities and threats presented by these fast-evolving technologies.

Artificial intelligence (AI) is “the single best thing we’ve got to try and improve the way government works”, said Alex Chisholm, who at the time of the event was the permanent secretary of the UK’s Cabinet Office and has since stepped down. Other technologies have passed through the hype cycle in recent times, without leaving much of an impact on public services. But according to Chisholm, AI – unlike some of the “false hopes of the past” – is likely to prove “genuinely transformational”, unlocking huge benefits for citizens, public servants and taxpayers alike.

Speaking in the session on AI at the 2024 Global Government Summit – which brought top civil service leaders from 16 countries to Singapore for frank, free-flowing discussions on the challenges they face in common – Chisholm pointed out that public sector productivity growth has been slow in many countries. Applying AI technologies could, he said, boost productivity by a dramatic 30-45%. This will be particularly valuable as average ages rise across much of the rich world, he added, increasing demand for services while the working-age population shrinks.

AI also has the potential to improve service quality, said Chisholm – and this will become ever more important as public expectations continue to rise. One recent study found that 79% of UK citizens “expect government services to be at the same level as private sector services”, he commented, but too often, they’re disappointed. As a consequence, “satisfaction with government services has fallen quite dramatically, as people make comparisons with what they’re getting in the rest of their lives”.

Using AI could substantially boost service quality, argued Chisholm. The UK’s chief technology officer, David Knott, has for example “made the very profound point that the ability of AI to process natural language means that rather than teaching people to act in ways which are convenient for machines, we can teach machines to behave in ways which are convenient for people”. So rather than filling in a form, people can simply talk to an AI interface, and in countries with many languages, they’ll be able to use their mother tongue – much improving the reach and accessibility of public services.


The generative generation

These capabilities will be much improved by the arrival of generative AI systems such as ChatGPT, which represent a step change from the previous generation of AI technologies. “GenAI can comprehend text, interpret images and, most importantly, reason and generate content,” said Joon-Seong Lee, senior managing director at event knowledge partner Accenture and lead at the company’s Centre of Advanced AI. This enables it to advise knowledge workers, create new content or write code, opening up the potential for organisations to adopt completely new business models. Unlike earlier AI systems, Lee added, GenAI can make use of the 80% of available data that is “unstructured”, while its ability to converse with users “democratises” access to AI.

This combination of characteristics makes GenAI far more accessible and versatile than its predecessors. Alongside those earlier generations of the technology, it is set to disrupt a huge range of working roles, activities and industries. Accenture research has found that GenAI will impact around 40% of working hours globally, said Lee. Around 30% of hours worked across various fields – including administrative support, management, law and life sciences – have a high potential for automation.

“Whatever we do that is language-related, it can help solve the problem,” said Lee. “Think about banking, wealth advisory, insurance, the legal profession. No industries can escape the impact of AI.”

This presents a policy challenge for civil servants, pointed out Rashad Khaligov, deputy chairman of Azerbaijan’s Innovation and Digital Development Agency. “It’s like how robots replaced high-skilled manual workers in the past,” he said. “We have a challenge on our hands: figuring out how to help the people who lose their jobs to AI, and how to help future generations to develop the right skills.”

This applies to civil servants as well as private sector staff, of course. And civil service leaders have another mission, said Khaligov – that of redeploying staff away from repetitive administrative work, and refocusing them on “more crucial, important and creative tasks, to the benefit of citizens”.

New risks


Meanwhile, civil service executives must start to get their heads around the risks associated with generative AI, which can be even greater than those inherent in earlier AI technologies. The problems of AIs learning discriminatory behaviour from their training data and developing opaque ‘black box’ algorithms are amplified in GenAI systems. Unlike traditional models that use curated datasets, GenAI systems hoover up unverified data from the internet, and can reason their way to unexpected and inexplicable results.

Feed a particular set of information into a GenAI, said Lee, and “you may not get the same answer all the time”. These challenges around explaining decision-making represent the biggest single constraint on GenAI’s deployment in public services, said Chisholm: “To be able to provide comprehensibility and auditability, and to be accountable, we’ve got a long way to go.”

The solutions to all these problems begin with training the civil service workforce to design, commission and manage AI systems. “Everybody needs to be trained in AI, but not in the same way,” commented Chisholm. Senior leaders must “understand its capabilities and be convinced of them, because we have control over resources and priorities within our organisations”.

“Other people need to be absolute wizards at the technology. And others need to understand how to work efficiently with their new desktop tools,” he added.

Picking up on Khaligov’s point about staff redeployments, Chisholm noted that other officials will need retraining as their work shifts from “administrative, to more value-added use of their judgement; more interpersonal work”.

Effective deployment of AI is also dependent on putting in place the essential “building blocks”, said Chisholm. “We can’t wish AI into the world without having good-quality data infrastructure: cloud-based systems, data available in digital form to be shared between different systems across government.”

Lessons from Singapore

Singapore has worked hard to put these foundation stones in place, providing a stable platform for the rapid introduction of AI-powered services. But in deploying AI, said Sim Feng-Ji, deputy secretary in the country’s Smart Nation and Digital Government Office, the city-state has taken a very careful line. The civil service first launched a wide range of programmes to improve the workforce’s understanding of AI, to raise skill levels, and to “encourage people to experiment”. These included training, online newsletters, workshops, community-building activities, and the creation of a digital test environment.

“You need to have a culture of being able to start small, and to fail quickly – and cheaply!”, he commented.

Digital leaders then sought use cases that were “relatively low-risk, but quite easy to adopt”. As examples, Sim Feng-Ji cited search functions guiding citizens to the right public services, and transcription software to aid civil service note-taking. More than 500 such projects are now up and running, he added.

Sim Feng-Ji’s office inspects and approves proposed applications via an AI Development Group, which includes representatives from across government; and over time, departments steadily expand their use of AI into more advanced and complex fields. His team are particularly alert to legal risks, he explained, and to data security: AI proposals are checked carefully to ensure that providers won’t retain data, that sensitive data won’t be compromised, and that there’s no danger of de-anonymisation.


In terms of putting AI to work within mainstream public services, Singapore is probably the world’s most advanced nation (others have greater capabilities in intelligence, security or murkier fields), and as they roll out this new technology, Sim Feng-Ji and his colleagues have paid careful attention to ethics, transparency and security. That’s crucial to maintaining public trust, suggested Gordon de Brouwer, Australia’s public service commissioner. His government’s research into people’s views of civil service AI found that: “Performance matters, integrity matters: that comes down to transparency and honesty in how things are used. And empathy matters: that it’s used for people rather than against them, and that they can see that.”

The regulatory challenge


There is no such governance in the private sector – to the alarm of former UK cabinet secretary Lord Gus O’Donnell. “To what extent can we expect firms to naturally adopt responsible AI? Will the market do it? And if not, could it be achieved by regulation?”, he asked. “I worry about the idea of regulating this sector because it’s moving so fast; I think regulations might become otiose very, very quickly.”

Public sector regulation “typically responds in cycles of years, and the improvements in AI are happening in cycles of weeks or days or hours – so there’s a fundamental mismatch there”, agreed Chisholm. But governments have achieved good results by working with AI firms: OpenAI, for example, has adjusted GPT-4, the model behind ChatGPT, to prevent misuse. “There’s a self-restraint there, influenced by conversations with people in the US administration and others around the world,” he said.

Some sectors are voluntarily adopting responsible practice rules, agreed Lee – but others are not. In his view, there’s a need for regulation. This is particularly urgent to deal with bad actors who deliberately harness these technologies to cause harm – for example, by creating deepfakes or using AI in scams. But Lee also highlighted the dangers that can emerge in mainstream businesses: if insurance companies use AI to predict risk with high accuracy, he noted, they could render some individuals effectively uninsurable.

A global issue


“Given the global nature of these developments, how important is it for governments to be collaborating on shared ethical standards, norms and regulations?” asked one participant. “This is a genuinely international issue,” replied Chisholm. The UK’s autumn 2023 AI Safety Summit involved countries with very different interests and traditions, including the USA, EU members and Far Eastern nations, “and we weren’t quite sure what would come out of it”, he recalled, “but there was agreement on some of the guidelines the same week”.

South Korea has agreed to host a successor event, said Chisholm, followed by France, “so every six months, there’s going to be one of these events bringing together governments, industry and academic experts”. The goal, he added, will be “to create frameworks that maximise the good and minimise the harm”.

There’s one further point to consider, commented Leo Yip, head of the Singapore Civil Service. “For those who harness AI, there are huge opportunities,” he said, but not every citizen will do so. Governments have already seen the risks of digital exclusion, finding that some people cannot or will not engage with digital services. Now they face the same dilemma as AI appears in various aspects of public sector operations. “This could be as divisive as it is creative,” he said. “As policymakers, we have to look into that.”

The Global Government Summit is a private event, providing a safe space at which civil service leaders can debate the challenges they face in common. We publish these reports to share some of their thinking with our readers: note that, to ensure that participants feel able to speak freely at the Summit, we check before publication that they are content to be quoted. 

The 2024 Summit is covered in four reports, one for each daytime session:

– How governments are building resilience to address today’s crises and tomorrow’s catastrophes
– The opportunities and risks of AI
– A contemporary approach to productivity
– A truly diverse civil service leadership

About Matt Ross

Matt is Global Government Forum's Contributing Editor, providing direction and support on topics, products and audience interests across GGF’s editorial, events and research operations. He has been a journalist and editor since 1995, beginning in motoring and travel journalism – and combining the two in a 30-month, 30-country 4x4 expedition funded by magazine photo-journalism. Between 2002 and 2008 he was Features Editor of Haymarket news magazine Regeneration & Renewal, covering urban regeneration, economic growth and community development; and from 2008 to 2014 he was the Editor of UK magazine and website Civil Service World, then Editorial Director for Public Sector – both at political publishing house Dods. He has also worked as Director of Communications at think tank the Institute for Government.
