Being smart with artificial intelligence: deploying AI in the public sector

By Adam Green on 17/05/2021 | Updated on 21/05/2021
Assisting, not replacing: AI can amplify the work of officials – but the webinar panellists agreed that it should not displace them. Image by Pixabay

AI has huge potential value in policymaking and service delivery – but the emerging tech has a unique set of risks. At a GGF webinar, civil service leaders from the USA, Canada and Germany discussed how governments can realise AI’s benefits while steering around its pitfalls. Adam Green reports

Nothing better illustrates the need for governments to harness artificial intelligence (AI) than the challenge faced by Germany’s Ministry of Labour and Social Affairs. In the next 15 years, 30% of its staff will retire – becoming users of the agency’s pension system instead. This twin dynamic of a shrinking workforce and growing demand for services sums up why governments across the world are so keen to embrace technology: how else to deliver more with fewer resources?

“The demand that is really driving the diffusion of AI is demographic change,” Michael Schönstein, head of strategic foresight and analysis at the ministry’s Policy Lab Digital, told a Global Government Forum webinar in April.

Rather than following a predetermined set of rules, AI algorithms ‘learn’ how to achieve their mission by identifying connections in large data sets. At its best, AI can revolutionise service delivery – scanning a patent application to make an instantaneous adjudication without the need for human intervention, for example.
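To make that distinction concrete, here is a minimal, hypothetical sketch in Python – the features, the data and the use of scikit-learn are all assumptions for illustration, not details from the webinar. A hand-written rule encodes a decision directly; a learned model infers the boundary from labelled examples:

```python
# Illustrative sketch only: contrasts a hand-written rule with a model that
# 'learns' the same kind of decision from labelled examples. All data is invented.
from sklearn.linear_model import LogisticRegression

# Each row: [pages_in_application, days_since_filing]; label 1 = needs review
X = [[2, 10], [3, 12], [40, 300], [35, 280], [4, 15], [38, 310]]
y = [0, 0, 1, 1, 0, 1]

# Rule-based: an official writes the threshold by hand
def needs_review_rule(pages, days):
    return 1 if pages > 20 or days > 100 else 0

# Learned: the model infers its own decision boundary from the labelled data
model = LogisticRegression().fit(X, y)

print(needs_review_rule(30, 50))    # the rule fires on the page count alone
print(model.predict([[30, 50]]))    # the model weighs both features together
```

The point of the sketch is the shift in where the logic lives: in the first case an official writes it; in the second, it emerges from the data – which is precisely why the quality of that data matters so much.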

But the risks are also substantial. It can be hard to ensure that services are fully accountable and transparent if highly complex, fast-evolving algorithms lie at their heart. Development can be costly, requiring the assembly and standardisation of huge quantities of data – and any biases in that data can be reproduced in the resulting decisions. As a result, many senior civil servants are understandably nervous about adoption.

Cautious beginnings

Michael Schönstein, head of strategic foresight and analysis, Policy Lab Digital, Work & Society, Federal Ministry of Labour and Social Affairs, Germany

In Germany, Schönstein has been able to launch some services that are completely AI-driven: a bot that uses text, image and speech recognition to help businesses register new employees for the country’s social security system, for example. But there are limits to how fast he can move, he commented.

In many cases, German legislation requires a signature to be applied before a government decision can be taken, limiting where AI can be used. Works councils and trade unions must approve any new services in the areas of pensions and social security, complicating sign-off of proposed deployments. And even for processes where AI has been applied, a human confirmation step may still be required.

As a result, 80-90% of AI deployments by the Ministry of Labour “involve the improvement of individual steps within existing administrative processes,” Schönstein said. The largest single field of experimentation has involved the existing child benefit application process, rather than any greenfield services.

This approach made sense to the other panellists, who argued that governments should move forward cautiously, delivering practical, measurable improvements to existing services – often assisting human decision-making rather than replacing it entirely.

Small steps, real results

“Very often, when we see a [new] technology stack such as AI, it’s very alluring to just start playing with it and make predictions and all kinds of awesome recommendations,” said Dr Vik Pant, chief scientist and chief science advisor at Natural Resources Canada. “But really the question [should be]: how does this map to the priorities and plans of our department?”

Dr Vik Pant, chief scientist and chief science advisor, Natural Resources Canada

Pant leads an ‘accelerator’ within Natural Resources Canada. Borrowed from the world of startups and venture capital, the model brings interdisciplinary teams together to push forward a promising idea as fast as possible.

That might sound like the antithesis of gradual, pragmatic change. But Pant is keen to point out that the accelerator team works closely with the operational and frontline civil servants whose problems it is trying to solve – ‘co-creating’ solutions alongside subject matter experts. This ensures both that the AI tool closely matches the requirement and that service managers and elected leaders are familiar with its operation and characteristics.

“Ensuring full line of sight for our departmental decision-makers into the rationales, the way we make choices, the way we make decisions, how we evaluate projects, [means] we can get buy-in from all of the leadership within our department,” Pant told the online audience.

Nadun Muthukumarana, a data analytics partner at the event’s knowledge partner, consultancy Deloitte, was equally adamant that technologists must collaborate closely with policy and service managers rather than building AI products in splendid isolation. “A lot of applications of AI fail [because] somebody is doing it as a proof of concept or an experiment, but when you try to scale it up to delivering services to millions of citizens it doesn’t work,” he said.

He advises his public sector clients to consider using AI only where enterprise tooling exists, ensuring that algorithms can be embedded in software able to process the extremely high volumes of transactions required in many government services. Muthukumarana also emphasised the need for civil servants to cooperate across organisational boundaries, assembling data sets of sufficient scale and quality to support the effective use of AI.

Do no evil

Nadun Muthukumarana, data analytics partner, Deloitte

In the United States, elected representatives have the resources and structures to hold the executive arm of government to account – and this has led to heightened scrutiny of the use of AI. Taka Ariga, chief data scientist and director of the Innovation Lab at the US Government Accountability Office (GAO), explained that he’s leading investigations into a number of cases where AI has already been deployed by American agencies.

He compared the current capabilities of AI unfavourably to those of a small child: Google Translate can help him navigate museums and restaurants in Paris, he noted, but remains unaware of the context of conversations – meaning that its translations lack nuance. These constraints, he argued, should limit both AI’s deployment and the weight put on its results.

In many conversations with government digital professionals, he recalled, he’s been assured that their models have been designed to squeeze out bias. But in his view, it’s still not clear “how you operationalise ‘do no evil’” – creating systems that can both guarantee and demonstrate equity in outcomes. “From a GAO perspective, we are in the business of verification,” said Ariga. “We would love to trust those AI implementations, but as an oversight entity, we want to see evidence.”

Generating this evidence is becoming easier as the technology advances, commented Pant. “People always talk in terms of ‘black boxes’,” he said: algorithms whose operation has evolved to the point where decision-making becomes opaque to the system’s managers. But “just as much as there have been these advances in algorithms and models, there have been commensurate improvements also in explainability; in interpretability.” So even as AI systems become more complex, their developers are finding new ways to maintain oversight of how individual decisions are being made.
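Such tooling can be sketched in outline. The example below is purely illustrative – the panellists named no libraries – but it shows one common technique, permutation importance, for surfacing which inputs a ‘black box’ model actually relies on:

```python
# Hypothetical sketch of one explainability technique; nothing here comes from
# the webinar. Permutation importance measures how much a model's accuracy
# drops when each input feature is randomly shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")  # larger = the model leans on it more
```

Summaries like this – which inputs drive a model’s outputs, and by how much – are the raw material for the kind of explanatory documents Pant describes.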

Pant’s staff already produce documents explaining AI systems to senior civil servants and decision makers, he added, and these could also be distributed to regulators.

Domain-specific regulation

Taka Ariga, chief data scientist and director, Innovation Lab, US Government Accountability Office

Regulation looks set to dominate the debate around AI for some time, and there is some concern that premature and overly prescriptive rules could dampen innovation. When Schönstein recruited a specialist technology team and came up with proposals for several areas where AI implementations could be explored, he recalled, “that immediately made our political leadership a little bit nervous.”

The German government had already signed up to guidelines from global organisations governing the ethical use of AI, and his superiors were worried that the proposed experiments might run counter to them. So Schönstein’s team ended up working on two projects simultaneously – drawing up guidance on how to build systems that comply with those standards, even as they built new AI systems for use within government. “We’d have liked to do these sequentially, in an ideal world,” he said.

Schönstein suggested that in many cases it will be sufficient to apply existing laws to AI, rather than passing new legislation focusing on the emerging technology. “You’re not meant to discriminate when you hire people. That’s already the law, [so] what we need is to make sure the current law is applied when you use an AI-enabled recruitment system,” he said.
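A first-pass compliance check against such existing law can be simple in form. The sketch below is hypothetical – Schönstein described no specific audit, and the numbers are invented – but it illustrates one widely used heuristic, comparing selection rates across applicant groups (the basis of the US ‘four-fifths rule’):

```python
# Hypothetical audit of an AI-enabled recruitment tool: compare shortlisting
# rates across applicant groups. All counts and group labels are invented.
shortlisted = {"group_a": 40, "group_b": 18}
applied = {"group_a": 100, "group_b": 90}

rates = {g: shortlisted[g] / applied[g] for g in applied}
ratio = min(rates.values()) / max(rates.values())

print(rates)                    # {'group_a': 0.4, 'group_b': 0.2}
print(f"ratio = {ratio:.2f}")   # a ratio below 0.8 would warrant investigation
```

No such check proves a system lawful, but it gives managers and regulators a concrete number to interrogate – exactly the kind of evidence Ariga says oversight bodies want to see.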

But regulators worry that technological changes will render existing governance frameworks obsolete, said Ariga. “Too often, oversight entities are playing catch-up,” he commented. “There’s plenty of examples out there where we wait until a certain maturity of technology before we dive into the accountability implications.”

One way forward, said Muthukumarana, may be to vary regulatory scrutiny and compliance requirements between industrial sectors – tailoring oversight to the risk involved. Self-driving cars, which make thousands of life-or-death decisions every day, might need greater scrutiny than an AI deployed to produce transcripts of conferences, he commented.

“I think different industries will have domain-specific frameworks,” said Muthukumarana. “There might be some universal truths that can be incorporated. But a lot of these things will be developed on a domain-by-domain basis.”

Clarity on transparency

Underlying all of the issues discussed in the webinar – from legislation to oversight – lay the same trade-off: AI provides an opportunity to deliver better services faster, but it does not lend itself to traditional methods of government accountability.

The panellists’ solutions to this tension could be summed up in one word: transparency. Technologists need to find effective ways to explain their new methods and the benefits they can deliver – and not just within government, but also directly to citizens.

“Be transparent about why we are applying AI, how we apply AI, and what data sets we use to apply AI,” Muthukumarana urged. That way, he concluded, public servants can show people “how AI is actually improving some of the services you as a citizen are going to be receiving.”

This Global Government Forum webinar was held on 13 April, with the support of Deloitte. You can watch the full event via our events pages.
