A quiet revolution: driving efficiency in government using ‘pragmatic AI’

Artificial intelligence is the subject of fervent discussion over a drink between friends in a bar, at the very top of government and everywhere in between. Often these discussions focus on highly publicised emerging AI tools that promise to be huge future disruptors. But what of the less shouty AI products that can be integrated relatively seamlessly into existing processes now?
During a recent Global Government Forum webinar with knowledge partner Unit4, public and private sector experts discussed what is known as ‘quiet’ or ‘pragmatic’ AI.
As Bryce Wolf, Unit4’s director of strategic growth, put it: “It’s not about dramatic overhauls or flashy technologies, it’s about integrating AI quietly and pragmatically to make everyday processes more efficient and effective.
“AI operates quietly in the background, reducing errors, saving time, providing insights… and enabling [teams] to tackle their challenges more effectively, optimise their resources and ultimately deliver greater value to the public. It’s transformative yet subtle, making a tangible difference without requiring a complete reinvention.”
Practical examples – from budget processing to consultation analysis
This approach is being advanced in Scotland, where AI tools are being used to automate routine tasks across a number of departments and professions, freeing civil servants up to do more valuable work that requires a human brain.
Examples shared by panellist Carolyne Thomson, senior AI policy officer in the Scottish Government, included artificial intelligence being used to scan forms submitted by members of the public and flag where required fields have been left blank or where signatures are missing; to transcribe meetings; and to help with consultation analysis.
It is also being used in budget processing to collect and collate data and populate the relevant documents, effectively using one system to streamline a process that had previously been completed much more slowly by several different ones.
Expanding on Thomson’s budget processing example, Wolf described various other ways that pragmatic AI can help government finance teams. It can automatically match invoices to purchase orders or flag inconsistencies, allowing finance teams to focus on strategic planning or stakeholder engagement, and it can also produce real-time insights through predictive analytics to enhance decision-making.
“Imagine a public sector finance team planning next year’s budget. Instead of relying solely on historical data and manual forecasting, they can use AI to model different scenarios. What happens if funding is reduced? How will inflation impact operational costs? This level of precision allows leaders to make more informed and proactive decisions even in uncertain economic conditions,” he explained.
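The scenario modelling Wolf describes can be sketched in miniature. The following snippet is a hypothetical illustration only – the budget figure, inflation rates and funding scenarios are invented, and a real forecasting tool would draw on historical data and far richer models:

```python
# Hypothetical sketch of budget scenario modelling. All figures and
# scenario factors below are invented for illustration.

def project_budget(baseline: float, inflation: float, funding_change: float) -> float:
    """Project next year's position: costs rise with inflation,
    while available funding shifts by funding_change."""
    costs = baseline * (1 + inflation)
    funding = baseline * (1 + funding_change)
    return funding - costs  # surplus (+) or shortfall (-)

scenarios = {
    "status quo":     {"inflation": 0.02, "funding_change": 0.02},
    "funding cut":    {"inflation": 0.02, "funding_change": -0.05},
    "high inflation": {"inflation": 0.06, "funding_change": 0.00},
}

for name, s in scenarios.items():
    gap = project_budget(1_000_000, **s)
    print(f"{name}: projected gap {gap:+,.0f}")
```

Running several such scenarios side by side is what lets leaders compare a funding cut against an inflation shock before either happens.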
Compliance, risk management, resource allocation
Compliance and audits, traditionally cumbersome activities, are also being transformed by AI tools embedded in enterprise resource planning systems, which ensure that financial transactions adhere to regulatory requirements and create automated audit trails. “This not only reduces the administrative burden, but it also improves transparency – it builds trust with oversight bodies and citizens,” Wolf said.
Risk management is another area where AI is, he added, “really starting to shine”. By analysing patterns in financial data, it can flag anomalies like a sudden spike in vendor payments or irregularities in expense claims before they become a bigger issue. “It helps to detect fraud, and for public finance departments managing large scale budgets, this kind of proactive monitoring is invaluable.”
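The kind of anomaly flagging Wolf describes can be illustrated with a toy example. This sketch applies a simple z-score check to invented monthly vendor-payment figures; production fraud-detection systems use far more sophisticated models:

```python
# Illustrative sketch only: flagging a sudden spike in vendor payments
# with a z-score check. The payment figures below are made up.
from statistics import mean, stdev

def flag_spikes(payments: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of payments more than `threshold` standard
    deviations above the mean of the series."""
    mu, sigma = mean(payments), stdev(payments)
    return [i for i, p in enumerate(payments)
            if sigma > 0 and (p - mu) / sigma > threshold]

monthly_payments = [10_200, 9_800, 10_500, 10_100, 9_900, 48_000]
print(flag_spikes(monthly_payments))  # flags the final spike
```

Even this crude check surfaces the sudden jump in the last month, which a finance team could then investigate before it becomes a bigger issue.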
And there are applications in resource allocation too. “We’ve got public sector organisations that operate with tight budgets and competing priorities. AI can help optimise how resources are distributed, identifying areas where funds are underutilised and where reallocations might have the greatest impact.” Artificial intelligence might highlight department funds that are sitting unused and at risk of expiring in time for corrective action, for example.
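The underutilised-funds example can likewise be sketched. In this hypothetical illustration – the department names, figures and slack threshold are all invented – each budget line's spend rate is compared against the elapsed fraction of the fiscal year:

```python
# Hypothetical illustration: flagging budget lines whose spending lags
# the fiscal year, so unused funds can be reallocated before expiring.
# All names and figures are invented.

def underused_lines(budgets: dict[str, tuple[float, float]],
                    year_elapsed: float, slack: float = 0.25) -> list[str]:
    """Flag lines whose spent fraction trails the elapsed fraction
    of the year by more than `slack`."""
    return [name for name, (allocated, spent) in budgets.items()
            if allocated > 0 and spent / allocated < year_elapsed - slack]

lines = {
    "Training":    (200_000, 30_000),   # 15% spent
    "IT upgrades": (500_000, 410_000),  # 82% spent
    "Travel":      (80_000, 20_000),    # 25% spent
}
print(underused_lines(lines, year_elapsed=0.75))
```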
‘Run fast and make things better – don’t break stuff’
Dr Giorleny “Gio” Altamirano Rayo, chief data scientist & responsible AI official at the US Department of State’s Center for Analytics, discussed how then-president Joe Biden had signed an executive order on safe, secure and trustworthy AI that effectively allowed the government to use AI – responsibly – wherever appropriate. [This order has since been revoked by President Trump, though departments remain able to use AI at their discretion].
Various departments, including the State Department, have drawn up their own AI strategies focusing on getting the technology, the communications, the ethics and the associated guardrails right.
In terms of practical applications, Altamirano Rayo touched on a number of tools utilised by the State Department including North Star, used for media summarisation, translation and social media reporting; DCT, which helps people to categorise different types of data; Summit, which allows those in big meetings like the UN General Assembly to sift through vast amounts of information in one central repository; and Co-drafter, which creates drafts using existing information.
Generative AI chatbot ChatGPT also featured in the conversation. A poll of the webinar audience found that 41% are using generative AI and 59% are not.
The US State Department, in partnership with the bureau responsible for public diplomacy, launched a project which taught people “how to use ChatGPT to accelerate and streamline their workflow and get them out of having to do certain tasks that were time-consuming, boring, quite frankly icky, and into the more creative work of having time to connect with people face-to-face,” Altamirano Rayo said.
What’s important when embedding AI tools, she emphasised, is to “make sure that you run fast and make things better, rather than run fast and break stuff”.
Lessons from Estonia
To help ensure this doesn’t happen, lessons from other countries can be invaluable.
Estonia has long been a frontrunner when it comes to digital government. Veiko Aunapuu, product manager for AI in the Department of Digital State Development, explained that the government has been using machine learning since 2018 and currently has around 150 AI use cases.
He shared some of the key lessons the Estonian government has learned that public servants around the world may wish to consider when implementing AI tools themselves:
- Being clear about what the problem is and what the best solution is for solving it, bearing in mind that the best solution may change over the lifetime of the project;
- Improving access to data, identifying where data quality is inconsistent, and taking steps to standardise it;
- Ensuring a legal system for overseeing AI is in place and that there is co-operation between public servants responsible for implementing AI, AI experts and subject matter experts, and legal professionals;
- Keeping up with the increasingly high expectations of end-users;
- Maintaining a keen eye on open source and reusability when developing AI products.
Addressing the barriers to AI adoption
The discussion moved on to blockers to AI adoption.
Like the US, Scotland has a national AI strategy in place. Nevertheless, as Thomson highlighted, the adoption of AI tools, whether ‘pragmatic’ or otherwise, can be a big cultural change for an organisation and its employees.
One of the challenges, she said, is to allay the fears of colleagues who are “absolutely terrified” of AI and think something will go “horribly wrong” the moment they use it, while reining in those who are, conversely, “gung ho” about it.
“An awful lot of the how we’re making it work is around clear guidance, clear policies [and] advice… and sessions like this, where I can learn from other governments,” she said.
Eduardo Ulises Moya Sanchez, director of artificial intelligence at Mexico’s Jalisco State Government, expanded on the need for watertight AI policies, focusing particularly on data governance, which he described as “the primary building block of AI”.
“We have the data,” he said, “but that isn’t always enough. We need to improve the data, label the data… to get good results with AI.”
Another hurdle that will need to be overcome, he added, is confusion among people and departments about what can be done with AI – a problem compounded in regional and local governments where small AI teams don’t have the capacity to field the many questions pouring in from public servants across government.
Here, what is crucial, he said, is to create and disseminate detailed AI guidelines that can be referred to when needed.
If you don’t know the question, you can’t find the answer
In the Q&A portion of the webinar, the panellists shared their thoughts on topics such as facilitating meaningful implementation of AI in the public sector without over-hyping the technology; identifying use cases; the importance of structuring data correctly; how to define a viable AI policy when technology is still emerging; how organisations can incorporate AI in their medium-term planning; and AI in policy development.
Panellists reiterated that the first step to embedding ‘quiet AI’ or ‘pragmatic AI’ – indeed any form of artificial intelligence – is to be clear what the problem is you want to solve.
Thomson recalled a colleague being told to bring in AI because “everyone else has got it, so we need to get it”, but, as she highlighted, “if there is no problem to be solved, then AI is not the thing that you need”.
If there is a problem, that’s where ‘quiet AI’ can shine, albeit modestly, behind the scenes.
As the US State Department’s Altamirano Rayo said: “When people have a process that’s very achy, very cumbersome, very time-consuming… it’s a real pain point, then they start looking for creative solutions.”
The ‘Quiet AI: how to cut through the noise to make artificial intelligence work in government’ webinar was held in partnership with Unit4 on 3 December 2024. Watch the webinar in full here.