Do we need a ‘What Works Centre’ for public sector AI?

By Imogen Parker on 27/05/2025 | Updated on 03/06/2025

Research has identified a need for evidence on AI’s efficacy and impacts if the technology is to be used fairly and safely in the public sector and deliver value for money. In this article, the Ada Lovelace Institute’s Imogen Parker argues that a dedicated centre exploring AI’s application in specific areas could be the answer

In recent months we’ve been reviewing what we’ve learnt from six years of research on data-driven and AI projects within public services.

Last month, I touched on findings from our first synthesis – Learn Fast and Build Things – and next month I’ll preview our findings from our second synthesis, examining public attitudes to AI in public services. This incorporates evidence from over 16,000 people we’ve engaged with over the last six years.

In this article, I want to touch on an idea that has cropped up in response both to evidence on the ground and to findings from the public on their comfort with AI: the need for evidence on AI’s efficacy and impacts.

Read more: Why public legitimacy for AI in the public sector isn’t just a ‘nice to have’

Evidence gaps

As we’ve repeatedly argued, the lack of evidence on AI’s efficacy, costs and impacts on services and people is a major obstacle to its successful deployment across the public sector, and to ensuring it is fair, safe and delivers value for money.

In our research synthesis, we highlighted that despite optimism about AI’s potential to deliver benefits, from productivity to performance, there is not yet systematic or comprehensive evaluation of AI tools in the public sector.

We lack evaluations of AI tools that assess performance and value for money, and that compare these to alternative approaches, to allow governments to target resources at scaling up tools that are net positive. We are also missing evaluations that account for the broader service and social context in which these systems are being deployed, and that take an iterative approach as the models and systems continue to develop.

A recent review from the UK Public Accounts Committee (PAC) echoed our concern, stating that “there is no systematic mechanism for bringing together and disseminating the learning from all the pilot activity across government”. The PAC recommended that the UK Department for Science, Innovation & Technology “set up a mechanism for systematically gathering and disseminating intelligence on pilots and their evaluation”.

Our forthcoming public attitudes review underscores how important it is to the public that, where AI is used, it is used effectively, with clear evidence and understanding of its impacts. This has been especially important in contexts where the public have been asked to use and adapt to new technologies in a short space of time, such as those rolled out during the COVID-19 pandemic.

Read more: ‘Radical reimagining’: lessons for the use of AI in public services and policymaking

Learning by example

There’s an existing model that could help build the evidence base, and public confidence in it: a ‘What Works Centre’.

What Works Centres (WWCs) already exist in areas like education and crime prevention, helping the government make decisions based on robust evidence. The longest-standing WWC – NICE (the National Institute for Health and Care Excellence) – plays a hugely influential role in the health system and, with hundreds of thousands of studies to draw on, is used by countries around the world.

A new centre focused on AI could build and review evidence on how AI tools perform in public services and support officials to make informed choices. It could include consideration of value for money or broader professional impacts, as well as narrower parameters like time saved on specific tasks.

Crucially, it could offer the blend of disciplinary expertise needed (including technical expertise that may not exist in other WWCs) to scope and evaluate novel and emerging AI tools across sectors, and to support frontline professionals in assessing the likely impacts of an AI tool.

Read more: Trusting the process, trusting the product: how governments can win over the public on AI

Starting with the problem

A What Works Centre for AI in the public sector should begin not with the technology, but with real public sector problems.

Could the adoption of specific AI tools speed up or improve benefits assessments? Could investment in AI reduce fraud more effectively than traditional methods?

Comparing AI-led approaches with alternative methods would help identify when and where AI can add real value, and pinpoint the specific conditions for success.

The centre could also host a repository of AI use cases as a place to document lessons learned, reduce duplication and share good practice across government.

Playing a supporting role

This new What Works Centre could complement and support the UK’s growing landscape of AI institutions and initiatives.

It could complement the Evaluation Task Force’s new annex to the Magenta Book on evaluating AI interventions, and contribute to the ‘learn’ aspect of the government’s ‘test and learn’ approach to improving public services.

It could interface with the independent Responsible AI Advisory Panel and bolster the work of the digital centre of government, supporting practitioners with access to best practice expertise to shape standards.

It could link into sectoral expert and representative bodies, as well as other WWCs looking at technology and AI questions, to corral and coordinate reviews of common tools, and to offer guidance on novel, rapidly emerging or general-purpose technologies that existing bodies may not feel equipped to evaluate.

Together, these efforts can move the UK from scattered pilots and trials, and disparate pockets of analysis, towards a more coordinated, evidence-led approach to AI in government.

Read more: New guidance issued to help UK government departments evaluate AI’s impact

Boosting AI adoption means boosting our evidence

The responsible use of AI in public services depends on more than just innovation. The values and ethos of the public sector mean we also need to know that AI tools are effective, fair, safe and good value.

A What Works Centre would help ensure that any public sector adoption leads to real, measurable improvements in people’s lives. And, depending on its level of independence, it could reassure members of the public concerned about the pace of adoption and the influence of the private sector, by setting out the state of the evidence and the balanced case for adoption.

If the UK is serious about AI opportunities in the public sector, then it must also be serious about investing in evidence.

Read more: AI in the public sector: from black boxes to meaningful transparency


About Imogen Parker

Imogen is Associate Director (Society, justice & public services) at the Ada Lovelace Institute. Imogen’s career has been at the intersection of social justice, technology and research. In her previous role as Head of the Nuffield Foundation’s programmes on Justice, Rights and Digital Society she worked in collaboration with the founding partner organisations to create the Institute. Prior to that she was acting Head of Policy Research for Citizens and Democracy at Citizens Advice, Research Fellow at the Institute for Public Policy Research (IPPR) and worked with Baroness Kidron to create the children’s digital rights charity 5Rights. She is a Policy Fellow at Cambridge University’s Centre for Science and Policy.
