‘Radical reimagining’: lessons for the use of AI in public services and policymaking

By Imogen Parker on 01/04/2025 | Updated on 01/04/2025
Image by NickyPe via Pixabay

Imogen Parker of the Ada Lovelace Institute distils six years of examining AI and data-driven technologies in the public sector – and more than 30 reports and research papers – into lessons for success

At the Ada Lovelace Institute, we have spent the past six years examining AI and data-driven technologies in the public sector, and at a time when interest in accelerating adoption is at an all-time high, we thought it would be instructive to look back across our back catalogue and synthesise our research into some essential cross-cutting lessons for policymakers.

It’s been a timely intervention. The day before we published our briefing, Keir Starmer, prime minister of the UK, made a speech in which he described AI as a “golden opportunity” for reforming the state and unveiled a new mantra for government adoption of technology: “No person’s substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard.”

And last week we saw MPs responding with their own scrutiny of the government’s agenda for AI in government. Launching their new report, the chair of the Public Accounts Committee warned: “The government has said it wants to mainline AI into the veins of the nation, but our report raises questions over whether the public sector is ready for such a procedure.”

Read more: UK PM says government needs to be ‘happier with innovation’ as he sets out reform plan

Lessons for success

We have examined the use of data and AI across the public sector, in healthcare, education, local government, social care and in cross-cutting work on transparency, accountability, biometric data and foundation models.

Our ‘lessons for success’ are drawn from more than 30 Ada reports and research publications, which span independent legal reviews, futures thinking, deliberative exercises and surveys of public opinions, landscape reviews, technical analysis, ethnographic case studies, and syntheses of expert views.

We believe these lessons can support government’s aim to accelerate the use of AI in policymaking and public services, and help ensure AI works for the sector and for the public it serves.

For those deep in the field, much of what we highlight won’t be novel. But I hope these lessons can be helpful for thinking carefully about what needs to be understood, evaluated and reenvisioned to make AI a success. They do not provide easy solutions; rather, they should prompt more holistic reflection on what AI means for the public sector, including the need for radical reimagining.

Contextualise AI

First, we have a collection of findings around the importance of contextualising AI. To enable the wide deployment of AI across the public sector, governments should adopt clear terminology, address data challenges and recognise that these technologies operate within complex social systems rather than in isolation.

Language and understanding are a problem for the public sector: the lack of clear terminology around ‘AI’ is inhibiting learning and effective use.

We also repeatedly see real-world demonstrations of the fact that AI systems are only as good as the data underpinning them and that these systems are not deployed in a vacuum. AI deployment needs to be considered as part of a sociotechnical system: focusing only on technical capabilities without consideration of the social context, which in the public sector means thinking about the professionals and publics affected, does not provide the right conditions for success.

Learn what works

Second, policymakers need to be able to learn what works. Making informed decisions about AI requires transparency, rigorous evaluation and improved procurement processes.

While that sounds simple, in practice there are a number of gaps and challenges. Drawing on our research, we conclude that the UK public sector does not currently have a comprehensive view of where AI is being deployed in government and public services, nor is there enough evidence on the effectiveness of AI tools.

Read more: New guidance issued to help UK government departments evaluate AI’s impact

Deliver on public expectations and public sector values

For the public sector to successfully benefit from AI adoption, it must earn and maintain public trust. Successful use of AI requires public licence. This requires developing systems that are not only technically sound but also ethically designed, properly governed and deployed with consideration for their broader societal impacts and alignment with long-term public service goals.

However, we found repeated evidence that inadequate procurement processes and gaps in AI governance are undermining the sector’s ability to ensure tools are safe, effective, fair and governed in line with public expectations for oversight and transparency.

Think beyond the technology

Finally, and this is perhaps the most interesting yet hardest lesson, we argue that radical thinking is needed from governments about the structures that will be required if AI is to have the profound impact that some anticipate.

We know that the adoption of AI will have wider societal consequences beyond the technology itself. The public sector will inevitably have to deal with the intended and unintended consequences of AI tools, regardless of their direct use within public services.

To tackle this, the state needs to proactively anticipate and manage the significant impacts of AI beyond its immediate application – for employment, trust in institutions and information, social inequalities and the environment, to name a few.

So governments and policymakers need to see AI not as an opportunity to automate the public sector, but to reimagine it. We welcome work to establish a long-term vision for public service transformation where AI follows rather than leads, one that is grounded in public and professional legitimacy.

Rather than ‘asking for faster horses’, governments should view AI as a potential catalyst for fundamental service redesign, placing the citizen at the centre of public service delivery rather than focusing solely on immediate efficiency gains or automating the status quo. Through meaningful engagement with the public and relevant professions, governments can develop a shared understanding between citizens, staff and wider society of where AI has the potential to help reimagine more relational, effective and legitimate public services.

Read more: What are we really talking about when we talk about AI?


About Imogen Parker

Imogen is Associate Director (Society, justice & public services) at the Ada Lovelace Institute. Imogen’s career has been at the intersection of social justice, technology and research. In her previous role as Head of the Nuffield Foundation’s programmes on Justice, Rights and Digital Society she worked in collaboration with the founding partner organisations to create the Institute. Prior to that she was acting Head of Policy Research for Citizens and Democracy at Citizens Advice, Research Fellow at the Institute for Public Policy Research (IPPR) and worked with Baroness Kidron to create the children’s digital rights charity 5Rights. She is a Policy Fellow at Cambridge University’s Centre for Science and Policy.
