UK government issues AI playbook to repair ‘broken public services’

The UK government has published its AI Playbook with the aim of giving departments and public sector organisations “accessible technical guidance on the safe and effective use of AI”.
Created by more than 50 experts from the Government Digital Service and the wider Department for Science, Innovation and Technology (DSIT), the playbook consists of input from over 20 government departments and public sector organisations, along with insights and peer-review from industry and academic advisers.
Peter Kyle, the government’s technology secretary, said that the playbook represented “a call to arms for tech specialists across the public sector” to use AI “at whiplash speed” in their organisations “so we can repair our broken public services together”.
The five-part playbook
There are five parts to the playbook. The first sets out 10 principles that all civil servants should follow when using AI in government. The second, entitled ‘Introducing AI’, explains the foundations of AI and generative AI, along with their applications, capabilities and limitations.
The third part of the playbook addresses building AI, offering corporate and technological guidance on whether AI is the right tool for the job, use cases to avoid in government, how to conduct user research, and how to buy and implement AI products.
The fourth part deals with the safe and responsible use of AI, including legal, ethical, security and governance considerations. And the fifth and final part – an appendix – contains case studies provided by teams that have implemented AI solutions in government departments and public sector organisations.
Each of the five sections contains a checklist with “practical recommendations to consider and actions to take when developing AI projects”.
The government said it would update the AI playbook “regularly”, and that it would also launch a series of ‘AI Insights’ publications covering “more specific aspects of AI” where in-depth discussion was not possible in the playbook.
Action plans, fresh guidance and a name change
Announcing the playbook, the government said that AI is “at the heart of the UK government’s strategy to drive economic growth and enhance public service delivery”.
The UK’s minister of state for science, research and innovation, Patrick Vallance, has since confirmed that the government will use the forthcoming Spending Review to make investments in artificial intelligence.
“The government has a clear focus on taking advantage of new technologies such as AI to improve public sector productivity and deliver a better user experience for citizens,” he said.
The publication of the playbook follows hot on the heels of the UK AI Opportunities Action Plan, announced in January by UK prime minister Keir Starmer, who said his intention was to “mainline AI into the veins” of the country.
The plan was developed by Matt Clifford, a tech entrepreneur and chair of the Advanced Research and Invention Agency who has since become Starmer’s adviser on AI opportunities.
In early February, not long after the action plan was announced, the UK’s Evaluation Task Force (a joint unit involving the Cabinet Office and Treasury) published guidance to government departments on how to assess the impact of AI tools regarding outcomes, processes and value for money for taxpayers.
The aim of the guidance is to enhance “the safety and confidence with which government departments and agencies can adopt AI technologies” and to ensure that public sector innovation keeps pace with the private sector.
The guide said that AI systems would likely need “more substantial evaluation” than other types of interventions and urged the use of Randomised Controlled Trials (RCTs) when testing a new AI product. Applying RCTs would, it said, produce “high quality evidence on the intended and unintended impacts of introducing these new technologies”.
However, the UK has recently toned down its emphasis on AI safety, having changed the name of the AI Safety Institute to the AI Security Institute.
Peter Kyle said the decision would not affect the institute’s core work, but added that its renewed focus would be on crime and national security, in a bid to ensure that citizens of the UK and its allies would be “protected from those who would look to use AI against our institutions, democratic values, and way of life”.