Trusting the process, trusting the product: how governments can win over the public on AI

By Jack Aldane on 26/05/2025 | Updated on 22/05/2025
Woman using a transparent tablet (photo: Freepik)

Trust in artificial intelligence (AI) may not have been the main issue on which Canada’s federal election was fought earlier this year, but it’s one that looms as large there as anywhere in the developed world. Like many of its counterparts around the world, the Canadian federal public service has been busy preparing for a world in which AI, whether generative or agentic, is likely to form the bedrock of many future digital public services.

Part of that preparation has been to create an AI strategy that seeks to embed Canada’s democratic values in its adoption of the technology. According to the government, that means using AI to uphold human rights, public trust and national security.

But trust must be earned – and a recent webinar hosted by Global Government Forum, featuring three voices from within the Canadian public service, explored how to win that trust both from the public service itself and from the wider public.

Trusting AI within government

Canada’s AI strategy took just over a year to complete and, as Jonathan Macdonald, director of responsible data and AI at the office of the chief information officer within the Treasury Board of Canada Secretariat, explained, much of that time went into “an extensive consultation campaign”. This involved roundtables with academics and research institutes in Canada, along with various bargaining agents, industry representatives, civil society and Indigenous groups. The feedback, like the outreach, was extensive, but it allowed the government to start on a truly collaborative footing.

Steve Rennie, director of data foundations and enablement at Agriculture and Agri-Food Canada, also shared his experience of building trust in the use of AI.

In 2023, Rennie’s team entered – and won – the Public Service Data Challenge, organised by Global Government Forum, with a generative AI chatbot that provided conversational information on government agricultural programmes. He said that this experience “gave us a mandate and a clear sort of validation of the work we’ve been doing”, adding that the product would never have launched were it not for the trust-building exercises that preceded and followed its development.

Communication was key, he said. It meant “looping in people early and often to let them know what it is we hope to do [and to] try to understand what challenges they saw, and how we could work with them to sort of resolve [them]”.

Asked about his team’s biggest ambition with respect to trust-building, Rennie said that “breaking down any barriers to understanding and comprehension of AI” was a priority. In short, accessibility is crucial to making AI-based public services friendly and trustworthy.

“You may not like to read a very long, lengthy text-based page. You might like shorter information. You might like bullet points. You might like rhyming [text]. There are so many different permutations, so many ways you can have the content presented to you using something like generative AI,” he said.

“[Accessibility] to me stands out as the most important aspect, especially as a government delivering services and providing information to Canadians… being able to have that content tailored to the individual person.”

Read more: Agricultural advice AI wins Canada’s Public Service Data Challenge

Winning trust from the public

Dr Saeid Molladavoudi, director of the centre for AI research and excellence (CAIRE) at Statistics Canada, added that while the opportunities AI presents to government are immense, the risks are equally daunting. He stressed that deploying AI “efficiently and responsibly” would be as important as adopting it.

“Downside risks are important, and it is important for the government…to mitigate [that] risk,” he said, adding that “AI literacy and AI capacity” would prove key to mitigating potential harms.

When it comes to adoption of a technology, there are three things to consider, Molladavoudi said. First, does the technology being put forward work? Second, is it legally and ethically sound? And third, do people understand the technology well enough to trust it?

“We can have the best technology in the world, but if the public doesn’t trust it, there is no point,” Molladavoudi said.

Molladavoudi said that one important way in which governments can build public trust in AI is to establish what he described as a “public AI registry for all AI projects that are within the government”. These projects would not only be visible to citizens but also “open to the public for consultation, for scrutiny”, with contact information to enable engagement.

Read more: Delivery driver: how the Canadian Data/AI Challenge makes data dreams come true

Trusting the process, and being free to fail

A member of the audience then asked speakers whether Canadian public servants tasked with building AI products felt free to take risks. This drew an important distinction between getting the public to trust the results of AI development and getting public servants to trust the process of developing AI.

Macdonald said that AI brought with it “fear of the unknown” for both the public and public servants, and that within government, he could observe a certain reticence to launch anything new in “an untested world where nobody wants to get it wrong”.

One way to overcome that fear is for teams to engage directly with users who are going to be most affected by AI systems. Macdonald described this as “working in the open and failing forward”.

“The more we put out there, the more we enforce trust. There is some reticence to talk about where things didn’t go according to plan.”

Macdonald then emphasised the extent to which even the best-laid plans are “really just roadmaps of what likely isn’t going to happen”. Recognising this in open conversations, both internally and externally, could therefore temper anxieties around risk and the repercussions of taking bold action.

Read more: Why public legitimacy for AI in the public sector isn’t just a ‘nice to have’

‘Never lose sight of the fact that we are talking about humans’

In the later stages of the conversation, it was suggested that public trust remained significantly harder for governments to secure than for private sector firms. In democratic nations especially, governments and private enterprises are both judged on their accountability, but while the latter may be forced to shut up shop for their misdeeds, governments endure even when ruling parties bow out.

For this reason, governments with any interest in maintaining legitimacy with the electorate must work especially hard to put the citizen first, even as the private sector moves fast and breaks things.

“The private sector is driven for profit, so if people pay for a service, they’re more willing to share their data to get what they’ve paid for,” Molladavoudi said, adding:

“Government is a different story. We are not for profit, and we have different mandates to serve.”

“It’s really important that we never lose sight of the fact that we are talking about humans,” Macdonald said.

“There are some competing forces here in AI. We’re seeing a lot of fast development and then these strong expectations of leaning in and adoption for these revolutionary gains in service delivery.”

He then said that good AI governance starts with the recognition that trust in government is already low, meaning teams within departments not only have to work very hard to build it but “can also lose it in a flash”.

“These risk calculations about trust are really quite central to our willingness and our appetite to really take these courageous steps forward,” he said.

Finally, Rennie noted that clear and honest communication from government about what it is trying to achieve with AI, whether directed at its own workforce or at the public, is what ultimately garners trust.

“It’s not necessarily about saying that you hit a home run every time you stepped up to the plate,” he said.

“Maybe you didn’t quite get there, maybe you had some shortfalls, but what did you learn, and how do you publicise that? I think that’s a huge piece to building trust, because it shows that you’re willing to admit when you didn’t quite get it right the first time, but that you had the capacity to learn something, and you were able to use that information to develop something that ultimately achieves your objective.”

The ‘Building trust in AI to help government deploy it’ webinar was held on 13 May. You can watch the full webinar here.

About Jack Aldane

Jack is a British journalist, cartoonist and podcaster. He graduated from Heythrop College London in 2009 with a BA in philosophy, before living and working in China for three years as a freelance reporter. After training in financial journalism at City University from 2013 to 2014, Jack worked at Bloomberg and Thomson Reuters before moving into editing magazines on global trade and development finance. Shortly after editing opinion writing for UnHerd, he joined the independent think tank ResPublica, where he led a media campaign to change the health and safety requirements around asbestos in UK public buildings. As host and producer of The Booking Club podcast – a conversation series featuring prominent authors and commentators at their favourite restaurants – Jack continues to engage today’s most distinguished thinkers on the biggest problems pertaining to ideology and power in the 21st century. He joined Global Government Forum as its Senior Staff Writer and Community Co-ordinator in 2021.
