Fighting AI with AI: GGF report explores how to tackle evolving public sector fraud threats

A new Global Government Forum research report has revealed some of the key approaches that governments around the world can use to tackle the evolving fraud threats they face.
The New Strategies for Evolving Public Sector Fraud Threats report examines the impact of public sector fraud and the growing role of artificial intelligence (AI) – both in increasing the threat public sector organisations face and in helping them respond to it.
The report highlights that public sector fraud is increasing in frequency, intensity and complexity – and that it causes financial losses that impact public services and undermine public trust in government agencies.
The report includes expert insight from public servants involved in tackling fraud and showcases research from SAS on public sector fraud threats. It discusses how AI is currently used in fraudulent activities, explores ways in which AI can help the public sector tackle fraud risks, and offers practical recommendations for successfully deploying AI-powered fraud solutions.
Read the report in full: New strategies for evolving public sector fraud threats
The impact of public sector fraud
The report emphasises that tackling fraud is a growing government priority, with recognition that maintaining trust in government requires strong anti-fraud action.
Shaun Barry, global director of SAS’ Risk, Fraud and Compliance Solutions division, said you can often see corruption “eating at public trust” in governments. “It steals from the public purse,” he said.
A poll of the 183 attendees at the GGF-hosted webinar found the risk governments face from fraud is well understood, with 53% saying they understand the fraud risk in their organisation very well and 19% saying they understand it quite well. Barry called these findings “very encouraging”.
However, the report also underscores growing concern about AI-powered fraud and the need for greater public sector confidence in the ability to detect and address it.
As Barry put it, there is a discernible trend of government leaders seeing fraudsters use AI – a trend that Dina Buse, deputy director of the financial market policy department in Latvia’s Ministry of Finance, said she was also observing, with fraud volumes rising alongside the rapid development of digitalisation and AI.
The key point, according to Barry, is that AI allows what look like traditional fraud threats to be developed more quickly and at far greater scale.
“It’s not a new threat, but it’s an increased volume because it’s so easy to generate realistic-looking fake documents and transaction data [using AI],” he explained. “There’s going to be huge volumes of data and attacks that come on government programmes around the world.”
Fighting fraud with AI
The research also highlights examples of how governments are tackling fraud with AI.
According to a SAS survey of 1,100 fraud professionals across the globe, conducted with Coleman Parkes, around half (52%) of respondents say their agencies are currently using AI technologies of some kind to address fraud, waste and abuse – and 28% are using generative AI specifically. Adoption of some specific AI technologies such as machine learning, network analysis, large language models (LLMs), and digital twins is currently low, but likely to grow significantly over the next two years: almost all agencies (98%) are using or are likely to use at least one of these technologies to combat fraud, waste and abuse by 2027.
Barry said that these findings show there’s “much innovation going on in governments around the world”.
He added: “Only half are using [AI] today, and almost all are saying in the next two years they are going to be doing that.”
The survey indicated that the top benefit officials expect to unlock from using AI in fraud detection is greater workforce efficiency (57%), followed by detecting more fraud (39%), better prioritisation of cases (38%) and quicker detection (37%).
“I think there’s a really interesting insight in there,” Barry said. “The survey respondents, I believe, are telling us: ‘I have more than enough fraud and caseload today; I’m not sitting around or my staff are not sitting around, twiddling our thumbs, doing nothing. We’ve got more than enough work, but what we don’t have is the ability to automate how we tackle those fraud areas.’
“That, I would suggest, is why the number one reason is greater workforce efficiency.”
He added that prioritisation and quicker detection are both more about making efficient use of resources than unearthing more fraud.
How governments are implementing AI fraud solutions
The survey respondents also flagged barriers that could hinder deployment of AI in the fight against fraud. The biggest barrier, according to respondents, is privacy and security concerns, followed by data quality and availability, then responsible use of AI and analytics.
The report shares insights on how governments can overcome these issues and how they are implementing AI fraud solutions.
Ellen Roberson, certified fraud examiner and global product marketing director, government and health care at SAS, said the report showed that the same technologies used to perpetrate fraud, waste and abuse – particularly AI – can and must be harnessed to prevent it.
“This report, developed in partnership with Global Government Forum, offers timely insights into how governments are responding to the growing complexity of fraud, waste and abuse,” she said. “It draws on the voices of public servants, fraud experts, and data scientists to explore the dual role of AI: both as a threat vector and as a powerful defence mechanism.”
Read the report in full: New strategies for evolving public sector fraud threats