AI deployment in government: laying the groundwork for success

In this webinar, experts discussed how best to implement artificial intelligence in the public sector, and how civil servants could ready themselves for an AI future
Governments around the world are moving at pace to develop and implement artificial intelligence, but these projects – and the resulting technologies – often hit avoidable stumbling blocks.
During a Global Government Forum webinar with knowledge partner SAS, public and private sector experts from the UK Department for Education, the Treasury Board of Canada Secretariat, SAS, Booz Allen and Warwickshire Police shared their advice on how to build a solid foundation for AI.
Ian Knowles, director of analysis at the UK Department for Education, kicked off the conversation with concrete examples of how the department is embedding artificial intelligence into its own processes.
AI is being used in policy consultations to analyse hundreds or thousands of responses, allowing teams to gauge sentiment early before they dig into the detail. The department also uses automation to sift, store and allocate emails, and large language models to draft responses (these are checked by people before being sent); it is piloting tools to draft meeting notes and summaries and to track actions; and it is considering using AI to tackle fraud and error.
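To make the consultation-analysis idea concrete, here is a minimal sketch of how early sentiment triage over responses might look. It assumes the OpenAI Python client and a hypothetical model choice; the department’s actual tooling and prompts have not been published, so treat this as illustrative only.

```python
# Minimal sketch: triaging consultation responses by sentiment before
# detailed analysis. Assumes the OpenAI Python client; model choice and
# prompt are hypothetical, not the department's actual setup.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_sentiment(response_text: str) -> str:
    """Label a single consultation response as positive, negative or mixed."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": "Classify the sentiment of this consultation "
                           "response as exactly one word: positive, "
                           "negative or mixed.",
            },
            {"role": "user", "content": response_text},
        ],
    )
    return completion.choices[0].message.content.strip().lower()


responses = [
    "I strongly support the proposal to extend the scheme.",
    "This change will harm small schools in rural areas.",
]
tally = Counter(classify_sentiment(r) for r in responses)
print(tally)  # e.g. Counter({'positive': 1, 'negative': 1})
```

As with the department’s email drafting, anything produced this way would still be checked by a person before informing decisions.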
The department is also beginning to adopt AI for use beyond its own four walls.
As Knowles described: “We’ve been on a journey, probably since 2023, of really building up our understanding and evidence of the spaces where we can make a difference [and] make progress on.”
One example of a tool launched as a result is the department’s lesson-planning assistant, which went live last September and is already used by 30,000 teachers, who say it saves them an average of three hours a week.
The tool provides teachers planning a lesson on a particular curriculum topic with related content drawn from a large dataset of existing, top-rated training and teaching materials.
“It’s an example of where you can really start to use [AI] to make a real difference to people’s lives and allow teachers to focus some more of their time on teaching children in the classroom,” Knowles said.
The department is “taking steps to gradually work out how we can provide that kind of central and targeted support [and] not prevent them from innovating in their own space,” he added.
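At the heart of a tool like the lesson-planning assistant is a retrieval step: given a curriculum topic, rank a corpus of vetted materials by relevance. The sketch below uses TF-IDF similarity from scikit-learn to keep the example self-contained; the department’s actual system is not public, and a production assistant would more likely use semantic embeddings. The corpus shown is invented.

```python
# Minimal sketch of the retrieval step behind a lesson-planning assistant:
# given a topic, surface the most relevant existing materials.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-in for a curated corpus of teaching materials
materials = [
    "Photosynthesis lesson plan with starter activity and plenary quiz",
    "Fractions worksheet for Key Stage 2 with worked examples",
    "Plant biology practical: investigating leaf structure",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(materials)


def top_matches(topic: str, k: int = 2) -> list[str]:
    """Rank stored materials by textual similarity to the requested topic."""
    topic_vector = vectorizer.transform([topic])
    scores = cosine_similarity(topic_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:k]
    return [materials[i] for i in ranked]


print(top_matches("photosynthesis in plants"))
```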
The department has made great strides in testing and deploying AI, but it hasn’t been easy. Knowles listed some of the pitfalls government teams can fall into, including realising too late that you don’t have the right training data, or can’t get access to it; not involving subject matter experts who “understand the system you’re trying to put a solution into”; and a lack of shared vocabulary across the public service.
“Too often you see someone really good on the tech side, someone really good on the business side, but they can’t actually talk the same language,” he said. “So having someone who can do that interpretation for both sides has been a crucial step in making this work.”
Pulling the threads together
Jonathan Macdonald, director for responsible data and AI in the Office of the Chief Information Officer, Treasury Board of Canada Secretariat (who was speaking before the Canadian election was called), ran through the principles set out in the government’s recently launched AI strategy for the public service. The aim is to spur good AI practices across government and mitigate some of the pitfalls Knowles described.
As Macdonald explained, the strategy was 13 months in development, seven of which were spent on consultation alone, which was, he said, “one of the keys to success”.
The strategy also takes inspiration from similar strategies around the world. Its principles revolve around human-centred design; the government’s readiness for AI adoption – including policy and infrastructure foundations; international interoperability, which Macdonald said is “really germane to the AI conversation”; and collaboration.
On the latter, he noted that “as we’re all aware, AI doesn’t respect policy boundaries… you see these impacts across human resources, finance, service delivery, the science and research sectors, and stakeholders are really interested to be part of the solution as well”.
Crucially, the strategy hinges on taking a responsible approach to AI, with aspects such as trust forming a “central feature of our policy instruments”.
Macdonald said there had been energy in the Canadian government for using AI, but that this energy had been channelled into “individual department-level projects pushing forward at breakneck speeds” rather than an inclusive cross-government effort. This disconnect had led to “downstream problems of a multiplicity of chatbots or similar functions that are being developed in isolation”.
The government’s new AI strategy acts, therefore, as a “rally point to coordinate federal efforts here in Canada” and is “pulling these threads together”.
The five steps
Iain Brown, head of data science, SAS Northern Europe, and adjunct professor at the University of Southampton, has a perspective from the private sector, public sector and academia – and therefore sees AI “from multiple angles”.
In his opinion there is “huge potential” for the use of AI in government – “but only when it’s approached in the right way”, he said, pointing to projects that have struggled to get going because departments “have jumped into the tech without laying the groundwork… to make it deployable”.
The key, he said, is to take a structured, step-by-step approach “to really realise the value of artificial intelligence” and to be “thinking of that proactively” before diving in to reap the rewards.
He suggested the following five steps:
- The first is to identify the right use cases for artificial intelligence, making sure it is being applied to the right challenges and those where it can have the biggest impact. The governments that have succeeded are those that have “well-defined problems to begin with”, Brown said. For example, tax authorities worldwide are using AI to identify fraudulent claims and make more accurate decisions to improve compliance while ensuring fairness.
- The second is thinking about how AI will be responsibly deployed and governed from the inception of the project, not once a solution has been built – which Brown said is too often the case. AI must be transparent and accountable and must maintain public trust; fairness checks should be embedded into models, ensuring explainability and “rigorous oversight from the outside”.
- The third is ensuring that AI technologies are trained on quality data, something Brown said is often overlooked: “Garbage in, garbage out… AI is only as good as the data you feed into it.” (A minimal illustration of such data checks follows this list.) Live data may also need to be fed in and analysed, said Brown, giving the example of SAS’s work with healthcare providers in Belgium that are using AI and predictive analytics – based on offline databases and real-time data – to anticipate hospital bed demand and improve resource planning. “You need to be adaptable to new data that’s coming in,” Brown said.
- The fourth is identifying the time-consuming processes that could be streamlined and optimised using AI, and where the efficiency gains lie – for example, using AI to undertake administrative tasks, freeing civil servants to do value-added work. On concerns about AI replacing jobs, Brown noted a statistic from the World Economic Forum that while AI is expected to displace around 85 million jobs globally by the end of this year, it will create 97 million new ones.
- And the fifth is scaling up across departments to drive innovation.
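On Brown’s third step, one simple way to guard against “garbage in, garbage out” is to run automated quality checks before any training happens. The sketch below flags missing values and duplicate rows with pandas; the threshold, column names and toy data are invented for illustration, and a real pipeline would add domain-specific validation rules.

```python
# Minimal sketch of pre-training data-quality checks.
# Threshold and toy data are invented for illustration.
import pandas as pd


def quality_report(df: pd.DataFrame, max_missing: float = 0.05) -> list[str]:
    """Flag basic data problems before the data is used to train a model."""
    issues = []
    # Columns with too high a share of missing values
    missing = df.isna().mean()
    for column, share in missing.items():
        if share > max_missing:
            issues.append(
                f"{column}: {share:.0%} missing (limit {max_missing:.0%})"
            )
    # Exact duplicate rows
    duplicates = df.duplicated().sum()
    if duplicates:
        issues.append(f"{duplicates} duplicate rows")
    return issues


# Toy example: a fragment of hospital-admissions data
df = pd.DataFrame({
    "ward": ["A", "A", None, "B", "B"],
    "beds_occupied": [10, 10, 12, None, None],
})
for issue in quality_report(df):
    print("WARNING:", issue)
```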
“None of this will be possible without investing in skills and AI literacy… If you invest in AI without investing in skills, it’s like buying a Ferrari without a driver, right? You’ve got this great capability, but no one to guide it,” Brown said.
Have a play ‘within your sphere of influence’
This point linked neatly with the advice of Stephen Russell, director of data, strategy and technology at Warwickshire Police in the UK and a member of the National Police Chiefs’ Council AI Committee.
Russell is “not a deep technologist”, in that he isn’t a coder or software engineer by background, but he saw AI coming and took it upon himself to “go on a bit of a journey” so that he would be ready when artificial intelligence inevitably converged with his role.
That meant working to gain a base understanding of what AI is, how it works and where it can be applied. He explained that he spent a few months using free resources to do that: listening to the AI Daily Brief and AI Applied podcasts, and reorientating his use of LinkedIn, following people and organisations that would keep him abreast of developments.
“Start by signing up to lots of things, listening, and then hone in,” he said.
He suggested that civil and public servants might also sign up for one-month trials of AI tools and “have a play with them” in their spare time – citing Replit as one tool he’d used to build an application.
“That showed me what works and what doesn’t. I think the only way you can get comfortable [with AI] is to start using the tools and have a play within your sphere of influence,” Russell said.
Creating a ‘protective shield against fear’
Bassel Haidar, vice president, Booz Allen, provided the live webinar audience with his overview of what they should and shouldn’t be doing when thinking about AI.
The human element, he said, is usually the most overlooked.
“We are now at an inflection point, you know, that ‘ChatGPT moment’. We’ve had AI for the past 30 years, right? We had machine learning algorithms, statistical models, but all of a sudden, when these things started talking credibly or writing credibly, everything changed right in the public eye. And even for somebody like myself, who’s been in this industry for 30 years, you can’t help but think ‘this is different, it feels different, it acts different’.”
At times of unprecedented change such as this, “our brains can’t really process it,” he said. “We go back to our reptilian brain. We’re thinking ‘is this safe, is this not safe?’ It’s the fight or flight response”.
Those who are “super adventurous” see AI as an opportunity and “want to jump in and try these technologies” but most of us are “either scared or on the fence”, he continued. Therefore, “really understanding where we live emotionally with this technology is important”.
He described what Russell had done – effectively his own personal AI training – as creating a “protective shield against fear”.
Like Russell, he recommended Replit, as well as the generative AI assistants ChatGPT and Claude; the code editor Cursor; the AI search tool Perplexity; and Napkin.ai, which creates graphics based on your text.
“For the audience who’s listening, this is what I would say: whether you’re in the government or you’re just a regular citizen, play with these tools, because these tools are coming, they’re coming in force, and you can’t avoid them.”
The ‘How to deploy artificial intelligence in government: a step-by-step guide’ webinar was held in partnership with SAS on 13 March 2025. Watch the webinar in full here and hear the panellists’ answers to a wide range of questions on topics including:
- Data security
- The importance of data sharing mechanisms when implementing AI
- The minimum amount of data an AI technology should be trained on
- Civil service training in AI tools
- Use of AI to create misinformation and disinformation and how to mitigate this
- Ensuring AI is deployed ethically and equitably
- AI’s ability to ‘augment’ workers rather than replace them
- Areas where AI might never be appropriate
- Risks of ‘shadow AI’ in organisations
- Build your own vs procuring existing platforms
- Benefits of AI vs environmental impact
- And more…