Taming the tiger: national digital chiefs on the powers and perils of AI
At the Government Digital Summit, top digital leaders were as daunted by the risks around AI as they were excited by its potential to transform public services. In a fascinating debate, they explored how to realise its potential while dodging its dangers
“I’m wondering how not to be daunted in the face of the pace of change here. The AI companies are in a race, and we don’t know if it’s a race to the top or a race to the bottom – but we do know that, with that pace of change, it’s tough to maximise the potential at the same time as managing the risk,” said Nadia Ahmad, chief data officer and head of evaluation at Global Affairs Canada.
Since the 2008 financial crisis, a series of major shocks has exposed weaknesses in the complex technological, logistical and economic systems underpinning everyday life. When more than 50 senior digital leaders from 15 countries discussed the potential benefits and risks of artificial intelligence late last year at the Government Digital Summit, several expressed their fears that in some future crisis we might discover an Achilles heel in this emerging technology – and fall foul of new vulnerabilities unwittingly introduced through the adoption of AI.
“Through the pandemic, the war in Ukraine, some of the ‘just-in-time’ models that technology has enabled over the last 30 years have suddenly not been fit for purpose in those moments of crisis,” commented one departmental digital leader. “What happens if there’s a crisis, and things that we’ve relied on and built into the fabric of society are not up to the job?”
Ahmad had another obvious example: “Thinking back to the launch of social media, and our inability to predict the threats of that tool to democracy, to security, this exponentially more powerful tool could create even greater threats in that space,” she said.
Expressed by senior leaders who’ve spent their working lives developing and promoting digital technologies, these concerns must be taken seriously – and it will be public servants, such as these digital chiefs and their colleagues, who determine how and whether we address the risks.
“We are seeing significant acceleration in the experimentation, adoption and scaling of AI – and whether the changes are going to be for the good or not will depend on how well we do a few key things,” said Steven Maynard, a senior partner at event knowledge partner EY Canada. “The people in this room – who have responsibility for digital leadership within your respective governments – have a critical role to play in establishing the long-term foundations to be able to adopt this at scale.”
Realising the potential
If governments can roll out AI technologies safely, they’ll realise huge benefits. ‘Predictive’ AI systems, for example, can spot patterns in large datasets and model the likely impact of interventions, helping civil servants to target services and assess policy options. While the newly emerging technology of ‘generative’ AI – exemplified by ‘large language models’ such as ChatGPT – is dominating today’s headlines, Eric Hysen, chief information officer of the US Department of Homeland Security, argued that departments have barely begun to realise the potential of predictive systems “in spaces like anomaly detection and computer vision”.
He commented: “How do we take advantage of this moment that generative AI is having to get some much-needed investment in this other area?”
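To make that concrete, here is a minimal sketch of the kind of predictive anomaly detection Hysen refers to – an illustration of the general technique, using scikit-learn on invented data, not a depiction of any system discussed at the summit:

```python
# A minimal, illustrative anomaly detector: fit an Isolation Forest to
# synthetic case data and flag outliers for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical feature matrix: each row is a case, e.g. standardised
# claim amount and filing frequency.
routine_cases = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
unusual_cases = rng.uniform(low=-6.0, high=6.0, size=(10, 2))
X = np.vstack([routine_cases, unusual_cases])

model = IsolationForest(contamination=0.02, random_state=0)
model.fit(X)

# predict() returns -1 for likely anomalies and 1 for routine cases;
# flagged cases go to a human caseworker rather than being auto-decided.
flags = model.predict(X)
print(f"{(flags == -1).sum()} cases flagged for human review")
```

The design point, echoed throughout the summit, sits in the final step: the model flags cases for a person to examine, rather than deciding anything itself.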
Generative AI also presents huge opportunities for public servants, but – like its predictive cousin – demands careful handling. Chang Sau Sheong, deputy chief executive of Singapore’s Government Technology Agency, highlighted AI systems’ tendency to “hallucinate” – inventing an answer if they lack solid information on a particular topic. He explained that when deploying AI in a chatbot that would interact with the public, rather than allowing the AI to write answers from scratch, his agency “created thousands of packets of data verified by humans to be true. So instead of generating the text, the chatbot searches for the correct packet of data to present back to the inquirer.” This represents an “intermediate measure for us to start exploring” the capabilities and limitations of AI, he added.
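A rough sketch of the retrieval pattern Chang describes might look like the following – the packets, matching logic and wording here are all invented for illustration, and a production system would use semantic search rather than simple string similarity:

```python
# An illustrative retrieval chatbot: return only human-verified answers,
# never generated text. The packets and matching logic are invented;
# a real deployment would use semantic (embedding-based) search.
from difflib import SequenceMatcher

VERIFIED_PACKETS = [
    {"question": "How do I renew my passport?",
     "answer": "Apply online at least two weeks before you travel."},
    {"question": "What are the library's opening hours?",
     "answer": "Libraries are open 9am to 9pm daily."},
]

def answer(query: str, threshold: float = 0.5) -> str:
    """Return the closest verified answer, or decline rather than invent one."""
    def similarity(packet: dict) -> float:
        return SequenceMatcher(None, query.lower(),
                               packet["question"].lower()).ratio()
    best = max(VERIFIED_PACKETS, key=similarity)
    if similarity(best) < threshold:
        # Declining to answer beats hallucinating an unverified one.
        return "Sorry, I don't have a verified answer to that question."
    return best["answer"]

print(answer("When are the libraries open?"))
```

Because the chatbot can only return human-verified text – or decline to answer – it cannot hallucinate a response.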
Across all these applications, the common factor is that AI is used – in Hysen’s words – “as decision-support and augmentation for our people, not as a replacement for human decision-making”. Often, its greatest value is to give “the human an assistant that augments their capacity, so they can redirect their efforts towards things that only humans can do”, commented Maynard. “We believe in putting humans at the centre of all things around AI.”
While some applications may be very low-risk – where there’s no danger of a discriminatory outcome or an unjust case management decision, for example – in most cases, human oversight remains essential in order to maintain clear lines of accountability. There are, for example, obvious dangers around the use of ‘black box’ AI systems – whose decision-making is opaque – and the training of AI systems on data that includes historic biases that could skew decision-making, noted Chang Sau Sheong.
Addressing the dangers
Addressing these threats will demand strong, well-informed leadership, and careful training of both digital staff and the wider workforce. Given the wider social and economic implications of AI, “we’ve had this debate within our government as to whether we should build AI into our IT organisations, or stand up new AI organisations”, said one digital leader. That individual is firmly in favour of taking an integrated approach – and Stephen Burt, chief data officer of the Canadian Government, very much agreed. “I had a little shiver of horror when you talked about chief AI officers,” he said. “We had a similar conversation with folks who were asking who’s going to be in charge of this, and I reminded them that they all have chief data officers.”
Asked which new skills will be required among technology staff, Chang Sau Sheong highlighted the importance of understanding ‘natural language processing’ and the value of a mathematical background. Where governments want to develop their own AI systems, they’ll have to invest heavily in skills and recruitment. Hysen’s department, for example, built the capability to adapt an open source model to meet its requirements. On the other hand, in future they may require fewer technical skills in some fields: AI systems can take on aspects of work such as coding, noted Maynard.
Skills for all
More widely, non-digital staff will need a better understanding of the characteristics and capabilities of AI – strengthening their ability to ask the right questions and interpret the answers they receive. Clare Martorana, chief information officer of the US federal government, noted the importance of “prompt engineering: as a user you have to prompt correctly in order to get the correct output from these models. When we’re putting this technology in the hands of our employees, it is going to be something we’re going to have to rigorously train people on.”
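As a hypothetical illustration of Martorana’s point, compare a vague prompt with one that spells out role, audience, length and format – the wording below is invented, but the contrast is the substance of the training she describes:

```python
# A hypothetical before/after illustrating prompt engineering: the same
# request, first asked vaguely, then with role, audience, length and
# output format spelled out. All wording here is invented.
vague_prompt = "Summarise this policy."

structured_prompt = """\
You are a policy analyst writing for busy ministers.
Summarise the policy below in exactly three bullet points,
each under 20 words, in plain English with no jargon.

Policy text:
{policy_text}
"""

# Constraining the model's role, audience, length and format typically
# yields a far more usable output than the vague version above.
print(structured_prompt.format(policy_text="[policy text goes here]"))
```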
“Governments may also have to develop their capabilities in understanding ethical risks,” commented Gayan Peiris, head of data and technology at the United Nations Development Programme. “We need more technical talent. At the same time, the world needs more philosophers, more linguists, more theologians. They should play a role in shaping the use of AI; and ethics and safeguards should be ingrained into school curriculums now, to prepare society for what’s coming.”
Those skills will be important not just in deploying AI, but in regulating it. Many civil services are relatively well placed to address both challenges, argued Hysen.
“With recent technological advancements, like cloud, it felt like we in government were playing catch-up with the private sector, and following well-established best practices from private sector adoption,” he commented. With AI, on the other hand, “because we’ve invested in building our own ability to leverage agile software development, open source technologies, we have in-house technical expertise and we’re not relying on contractors”.
“This is the first time where I’ve seen a new technology start to bubble up where we’ve been able to jump on it just as quickly as other organisations, and it’s based on that strong foundation,” he continued. “We’re figuring it out alongside the rest of the world, and that represents a pretty remarkable point of maturity for the government digital movement globally.”
Regulation and risks
In terms of regulation, Canada’s Stephen Burt was confident in his ability to control the use of AI across the public sector.
“I think we’re in a good place, in terms of having a minimum policy set that allows us to evolve as the technology does,” he commented. “While I spend a lot of my time cheerleading and trying to get departments to take more risks, I have enormous powers to slow them down – that, fundamentally, is what central agencies are able to do to line departments.”
Singapore has established a framework to help civil servants assess risk in AI deployments. “It has two basic dimensions,” explained Chang Sau Sheong. “The first is how well the public will accept the technology, and the second is about the potential harm it might bring to the public sector.”
GovTech evaluates proposals for the use of AI, working with business owners to assess and manage risk – and these are quite subtle judgements. The risks are greater, for example, where an AI system is making decisions rather than providing advice; and where it is providing advice, the risks are greater if the recipient is a frontline caseworker rather than a digital professional familiar with the technology’s characteristics.
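A hypothetical encoding of such a framework suggests how these judgements might be made systematic – the two dimensions follow Chang’s description, but the tiers and thresholds below are invented for illustration:

```python
# A hypothetical encoding of a two-dimensional AI risk framework:
# public acceptance of the technology versus potential harm. The
# dimensions follow Chang's description; tiers and thresholds are invented.
from enum import Enum

class RiskTier(Enum):
    LOW = "proceed with standard review"
    MEDIUM = "require human-in-the-loop oversight"
    HIGH = "escalate for central approval"

def assess(public_acceptance: float, potential_harm: float) -> RiskTier:
    """Score both dimensions on a 0-1 scale and map them to a tier."""
    if potential_harm >= 0.7 or public_acceptance <= 0.2:
        return RiskTier.HIGH
    if potential_harm >= 0.4 or public_acceptance <= 0.5:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# An advisory tool aimed at digital professionals would score lower on
# harm than a system making decisions about members of the public.
print(assess(public_acceptance=0.8, potential_harm=0.3).value)
```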
The biggest challenge lies in regulating the use of AI within the private sector – and this is becoming urgent, said Maynard. “There are some potentially scary elements of AI. We have seen Elon Musk and other tech luminaries raising the alarm that the horses are out of the barn on this, and there’s perhaps not the control that’s needed,” he commented. “We really need to get serious.”
The European Union’s AI Act presents one way of addressing the risks, he added, setting out “some practical rules around, for example, how we can leverage historical data”.
Regulating such a fast-moving technology is not straightforward – but Burt had a way into the problem: “Don’t think about policymaking in the context of specific technology applications, but in how it affects citizens.”
In regulating the use of data, he noted that the government has set out rules that, for example, protect transparency in decision-making and guarantee people a right of appeal. With regard to AI, “we need to make sure we’re anchoring this in the same policy considerations”. There may be some specific fields that require additional rules, said Hysen – his department has published a policy on facial recognition – but he broadly agreed that government already understands the principles involved.
“We need to be evaluating any automated system against these criteria,” he said.
The limits of control
Governments must also recognise that not everybody will play by these rules.
“Throughout history, people have been more than willing to ignore guardrails in their quest for domination,” said Andre Mendes, chief information officer at the US Department of Commerce. AI provides an “enormously low entry barrier for massive computational power”, putting powerful new tools in the hands of authoritarian regimes.
US leaders are “mindful but encouraged”, commented federal government CIO Clare Martorana. “As innovators, we have an ethical duty to build and use technology responsibly – which includes proactively considering the potential impacts, both positive and negative, from the outset.”
“Governments probably should be a lot more front-footed in the legislative and regulatory area,” argued Maynard. “We don’t need to have perfect policy lined up in order to start moving in this direction. Starting at a smaller scale is helpful to build the muscle: doing what we can in a safe way while we’re learning, and staying active on the legislative and regulatory front to make sure no major harm is done.”
Meanwhile, though, those horses have left the barn.
“AI is fundamentally changing the work of government; it is fundamentally changing society writ large,” Maynard concluded. “It’s working, and it is going to change the world.”
While Government Digital Summit sessions are held in private, GGF produces these reports to reveal to our readers around the world the priorities and preoccupations of national digital leaders – checking before publication that participants are content to be quoted. Our four reports cover the four daytime sessions:
Practical plans: how to build a digital strategy that gets delivered
The shelves of governments around the world are groaning with digital strategies that laid out their ambitious goals – then collapsed on contact with reality. In this session, top leaders from around the world explored how to produce a strategy that generates real change.
The evolution of AI: fresh challenges and emerging opportunities
Digital leaders are as daunted by the risks around AI as they are excited by its potential to transform public services. In a fascinating debate, they explored how to realise its potential while dodging its dangers.
Winning the cyber arms race
Cyber criminals and hostile intelligence agencies present an ever-growing risk to your organisation’s systems, assets and reputation. Here, national IT chiefs identified the keys to security in a perilous digital world.
The battle of the data strategies
When ChatGPT was invited to pick a winner, a session on data strategies took on an unexpectedly competitive tone. Would the UK or Canada win this strategy slamdown?
We’d like to express our gratitude to our knowledge partners, EY and BlackBerry, whose support enabled us to provide this event at no cost to the public sector.