Aiming high with AI: making artificial intelligence ubiquitous across government

Governments are working hard to harness AI – but how can they move its use from novelty to the norm? At the Global Government Digital Summit, digital leaders discussed the fundamentals, from achieving workforce buy-in to gaining public trust
“How do we harness the benefits and opportunities of AI at scale, at an enterprise level, across all of our governments?” asked Kirsten Tisdale. “This isn’t about how we get started or find use cases. It’s about how we make AI ubiquitous.”
Introducing the first daytime session at Global Government Forum’s 2024 Digital Summit, Tisdale – EY Canada’s managing partner for government and public sector – was putting a stretching question to her audience: 58 top digital leaders from 24 nations and international organisations, who’d come to Ottawa to discuss and debate the challenges they face in common.
To draw out some answers to her question, Tisdale broke the group into three and set them to work on different ways into the problem – and when they returned, their spokespeople presented some genuinely fresh thoughts on the fast-moving but well-trodden topic of artificial intelligence. The digital chiefs also heard from Biren Agnihotri, EY Canada’s artificial intelligence and data and digital engineering technologies leader, who offered some novel perspectives on how best to develop and introduce AI systems (see box at the bottom of this report).
Tackling the topic of workforce skills, Mark Schaan, Canada’s deputy secretary to the Cabinet – artificial intelligence, emphasised the vast scale of the retraining and staff development required across government. Discussions of AI capabilities tend to focus either on building a cohort of digital specialists, or on raising the general digital literacy of the wider workforce – but as Schaan pointed out, there’s another facet to this question: the new skills that almost every public servant will require as AI technologies change the practices and processes of their daily jobs.
“What does AI mean for the meteorologist who now needs to work with a new, augmented set of tools that are fundamentally different to those they were using to predict weather a year ago? What does it mean for the agricultural scientist who can run very different models on food safety?” Schaan asked. “That disruption to work functions, to tasking and skilling, gets very complex very quickly. I wouldn’t underestimate how methodical we’re going to have to be to work through all of that.”

Similarly, discussions of the key groups that need a better understanding of AI typically home in on senior executives and frontline staff. However, Michael Wernick, formerly Canada’s clerk of the privy council, picked out a different demographic: “One of the forces – for positive change or for resistance – is the middle management layer,” he pointed out. “They own the workflows, they own the teams, they have an enormous impact on culture, and they range from early adopters to resisters.” His group’s discussion had, he explained, identified that an “explicit strategy about middle management” may be required to achieve lift-off for AI in government.
Wernick also highlighted the need to engage with another crucial interest group. “The unions are a factor in this: they are defenders of job models and classification systems and staffing processes and all sorts of things,” he commented, adding that nobody within his group “had a recipe for this, but we need some way of bringing unions into the conversation in a more constructive way”.
Ima Okonny, chief data officer at Employment and Social Development Canada, suggested one approach: “Sometimes we miss the narrative about the new jobs we’re creating,” she said, citing emerging roles in data ethics, fairness frameworks and data cleansing: “I think there’s a big opportunity to shift the narrative to more of a public sector values-based AI approach, which I think will help us build trust.” Certainly, civil service leaders will need to find a way to discuss the implications of AI with staff representatives: as Wernick commented, “you can’t have a conversation about public sector workforces if you’re not willing to say the word ‘union’.”
Read more: Boosting productivity with AI: The year ahead with Mark Schaan, Canada’s deputy secretary for AI
Changing perspectives
Third, Wernick mentioned the need to change the perspectives of people within the “external feedback loops” through which civil servants are scrutinised and held accountable. Groups such as parliamentarians, government auditors, project evaluators, public commissions and political reporters need a better understanding of the dynamics and characteristics of AI technologies, he said, or they’re “going to be stuck five, ten years behind the curve – fearing the worst, doing ‘gotcha journalism’ and ‘gotcha politics’.”
The wider public is also wary and uninformed when it comes to AI, commented another digital leader – and they recounted an experience that illustrates some of the difficult decisions facing public bodies as they begin adopting AI. “We implemented AI in our call centre, and citizens didn’t know whether they were talking to a human or AI,” they recalled. “When we turned it on, sentiment scores went up and call resolution times fell. It was a win from every technical and business operations perspective.”
However, “when AI became a big thing, political leaders began talking about this publicly – and sentiment scores dropped precipitously. Call resolution times increased, because people were trying to get to talk to a live human,” they said. When the public realised that they were dealing with an AI system, their trust collapsed. “So what did we do? We stopped talking about it, and all the scores reversed again: sentiment went back up. Our main takeaway was: during this critical time of adoption, how we talk about this matters – so be very thoughtful about that.”
Read more: New guidance issued to help UK government departments evaluate AI’s impact
In time, the digital leader concluded, people will grow familiar with AI – as they did with earlier technological revolutions. “We do see acceptance and adoption of AI in some demographics: those who were early adopters on mobile-enabled, cloud-based technologies and the internet,” they said. “With those technologies, there was a domino effect through other demographics – and we suspect we’ll probably follow the same path here.”
That process is already underway – for as Mark Schaan said, “the tiger is out of the cage”. The task facing government, he commented, is “to think about AI not as a technology that you’re helping society transform to, but as a deployed technology”.
AI is already becoming ubiquitous. For public servants, concluded Schaan, the challenge is to fulfil “government’s responsibility in terms of equipping society to be able to deal with its impacts, and in ensuring that we have the right guardrails and safety mechanisms in place to keep our citizens’ wellbeing intact as AI gets deployed”.
Five fresh perspectives on deploying AI
1. “Most of the time, proofs of concept die at that stage,” said Biren Agnihotri, EY Canada’s artificial intelligence and data and digital engineering technologies leader. This, he argued, tends to happen because proofs of concept are – as the name suggests – designed to test whether a planned system will function effectively, rather than to gather evidence that it will help leaders to realise their goals. In consequence, such pilot projects “often don’t lead to a valuable outcome, and they lose the organisation’s support”.
The solution is a “mindset shift: from proofs of concept, to proofs of value”, said Agnihotri. “We need to convert these science projects into economics projects”. Rather than developing “minimum viable products”, he argued, project teams should be building “minimum valuable products” – finding ways to demonstrate through models and pilots that rolling out a system will generate tangible benefits for all stakeholders: budget holders, staff and citizens.
2. “Wherever we go, the first thing that comes up is: ‘Show me the use cases’,” said Agnihotri. “And use cases are good, but they achieve only a transactional value. But if you want a transformational outcome, you have to look a step above that at the capability level.”
Rather than testing whether technologies can cut costs or time in a particular process, he argued, digital leaders should be exploring whether they can be used to equip organisations with powerful new capabilities that can be used to build multiple use cases with small tweaks. “Take the example of forecasting: as a capability, AI can be used for revenue forecasting, immigration forecasting, supply chain forecasting,” he said. “You can use it in different ways, but inside, 80% of the models and compute engine is the same.” Here too, Agnihotri argued, “a tectonic shift is required in thought processes, moving from building individual use cases in silos to capability-driven use cases”. (A minimal sketch of this shared-capability pattern appears after this list.)
3. In seeking to test out a new system or service, said Agnihotri, digital teams often create a stripped-down, bare-minimum product. But if they don’t ensure it’s compatible with enterprise-wide requirements, retro-fitting these later can be a complex and time-consuming task. “We start out bypassing the security and scalability aspects and end up spending a lot of time and energy making sure they’re production-ready,” he commented.
Even proofs of value should, he argued, incorporate compliance with organisational policies such as those governing log-in systems, responsible AI and cybersecurity.
4. “There are two aspects of AI: offensive and defensive,” said Agnihotri. “Offensive is the art of the possible, while defensive is about making sure we’re doing things responsibly. Particularly in the public sector, defensive comes first.”
Developments in AI technologies have, he said, addressed the “black box” problem that threatened governments’ ability to ensure accountability and transparency. AI-powered decision-making processes can nowadays be followed and audited, permitting public servants to explain their actions to citizens and stakeholders. However, new risks are emerging, such as ‘hallucinations’ in generative AI and the potential to spread misinformation. “These dimensions are evolving every day. You cannot put the genie back in the bottle, but you need a safety net around it,” he said.
5. “Traditional” deterministic or predictive AI systems, said Agnihotri, require clean, structured data – but much of governments’ data is neither. So predictive AI project teams often “spend 80% of their time cleansing the data”. Generative AI, by contrast, works well with unstructured data and can also be fed structured data; hybrid ‘retrieval-augmented generation’ (RAG) approaches combine both kinds of data to improve responses (see the second sketch below). “The beauty of it is that if your policy documents are in PDFs or emails, and your structured data is in an ERP, CIS or HR system or any relational database, you can combine the power of both datasets to get meaningful insights,” he said. “This was lacking in the past.”
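To make the capability-versus-use-case distinction in point 2 concrete, here is a minimal Python sketch. It is illustrative only: the Holt linear-trend smoother standing in for the shared model core, and the domain wrapper names, are assumptions rather than anything presented at the summit. The point is structural – one forecasting engine, several thin domain-specific wrappers that differ only in data and parameters.

```python
# A minimal sketch of a "capability-driven" forecasting engine, assuming a
# simple Holt linear-trend smoother as the shared model core. The domain
# wrappers below are hypothetical: each "use case" is a small tweak on the
# same capability, not a separately built system.

from dataclasses import dataclass

@dataclass
class HoltForecaster:
    """Shared capability: Holt's linear-trend exponential smoothing."""
    alpha: float = 0.5  # level smoothing factor
    beta: float = 0.3   # trend smoothing factor

    def forecast(self, series: list[float], horizon: int) -> list[float]:
        # Initialise level and trend from the first two observations.
        level, trend = series[0], series[1] - series[0]
        for y in series[1:]:
            prev_level = level
            level = self.alpha * y + (1 - self.alpha) * (level + trend)
            trend = self.beta * (level - prev_level) + (1 - self.beta) * trend
        return [level + (h + 1) * trend for h in range(horizon)]

# Thin, domain-specific wrappers reusing the same engine.
def revenue_forecast(monthly_revenue, months_ahead=6):
    return HoltForecaster(alpha=0.6).forecast(monthly_revenue, months_ahead)

def immigration_forecast(quarterly_arrivals, quarters_ahead=4):
    return HoltForecaster(alpha=0.4, beta=0.2).forecast(quarterly_arrivals, quarters_ahead)

def supply_chain_forecast(weekly_demand, weeks_ahead=8):
    return HoltForecaster().forecast(weekly_demand, weeks_ahead)

print(revenue_forecast([100, 104, 110, 115, 123], months_ahead=3))
```

The reuse is the point: adding an immigration or supply chain forecast means writing a few lines of configuration, not standing up a new model and compute pipeline.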
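And to illustrate point 5, a minimal sketch of the RAG pattern: unstructured policy text and structured records are retrieved together and handed to a generative model as grounding context. The naive keyword retriever, the sample data and the llm_complete() stub are all assumptions for illustration – a real deployment would use a proper vector store and an actual model API.

```python
# A minimal RAG sketch, assuming hypothetical sample data and a stubbed model
# call. The idea it demonstrates: flatten structured rows to text so that
# documents and database records share one retrieval step and one prompt.

import re

POLICY_DOCS = [  # stand-ins for text extracted from PDFs or emails
    "Travel policy: claims over $500 require director approval.",
    "HR policy: remote work requests are reviewed quarterly.",
]

ERP_RECORDS = [  # stand-ins for rows from a relational ERP, CIS or HR system
    {"employee": "A. Singh", "claim_id": 881, "amount": 742.10, "status": "pending"},
    {"employee": "B. Chen", "claim_id": 882, "amount": 120.00, "status": "approved"},
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query (illustrative only)."""
    q = tokens(query)
    return sorted(passages, key=lambda p: -len(q & tokens(p)))[:k]

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to whatever generative model the organisation runs."""
    return f"[model response grounded in {len(prompt.splitlines())} lines of context]"

def answer(query: str) -> str:
    # Flatten structured rows to text so both data kinds share one retrieval step.
    rows_as_text = [
        f"{r['employee']} claim {r['claim_id']}: ${r['amount']:.2f} ({r['status']})"
        for r in ERP_RECORDS
    ]
    context = retrieve(query, POLICY_DOCS) + retrieve(query, rows_as_text)
    prompt = "\n".join(["Context:"] + context + ["Question: " + query])
    return llm_complete(prompt)

print(answer("Does claim 881 need director approval?"))
```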
The invitation-only Global Government Digital Summit is a private event, providing a safe space at which civil servants with senior digital, data and AI roles in government can discuss and debate the challenges they face in common. GGF produces these reports to share some of their thinking with our readers – checking before publication that participants are content to be quoted.
Our four reports cover the four daytime sessions. The next three reports – on the cloud, digital credentials, and data sharing – will be published in the coming weeks.
Read more: A quiet revolution: driving efficiency in government using ‘pragmatic AI’