Buying AI: how public sector procurement can ensure AI works for people and society

At the start of this year we saw the remarkable moment of a popular TV show prompting government scrutiny of technology, the overturning of criminal convictions, and even the introduction of new legislation.
The Mr Bates vs The Post Office drama on the Post Office Horizon scandal brought home the human harm caused by faulty IT and by deference to computerised systems over individuals’ experiences.
For those working on data and AI in public services in the UK, the Horizon scandal has dominated conferences and discussions across the sector throughout 2024.
It has become comparable to the Cambridge Analytica controversy or the UK’s A-level grade algorithm in permeating the public consciousness and raising questions about the power of technology to affect society.
Within the Ada Lovelace Institute, it focused our attention on public sector procurement of AI tools and systems – something that was also increasingly coming up in discussions related to the new UK government’s agenda on public service reform.
Getting the procurement of AI right – in terms of utility, scope, safety and ethics – is vital for ensuring data and AI work effectively in the public interest and deliver value for money in terms of the public finances.
We’re seeing the acceleration of AI adoption within the public sector – much of which is developed by private companies – affecting those working in and interacting with public services.
Read more: AI in the public sector: from black boxes to meaningful transparency
Procurement as an opportunity
Procurement in the public sector provides a key opportunity to interrogate the quality and impacts of technologies, and hold suppliers to account for ineffective, unfair or unsafe tools.
Even though governance may lag behind technological innovation, the public sector can insist on checks, information disclosure and the incorporation of public sector values and ethics for the technologies being deployed in the public estate. Ideally, procurement should be a valuable checkpoint for ensuring that AI works for people and society.
While Horizon was a historic injustice, it involved comparatively straightforward technology. If – even fairly recently – the public sector was unable to ensure adequate reliability, scrutiny and redress in relation to essentially a set of networked tills, how well is it faring when it comes to the procurement of AI?
At Ada we’ve spent the last 12 months analysing the disparate patchwork of documents, guidance and legislation relevant to public sector (specifically local government) procurement of AI, and speaking to companies and decision-makers across local and central government to understand whether current processes are fit for purpose.
Read more: AI in public services – a political panacea?
Research findings
Our recent research goes into much more detail and depth, but the short answer is that they aren’t.
In one sense this isn’t a surprise. AI is complex, opaque, technical, under-regulated and evolving. It is much harder to get right than buying hardware or traditional forms of software.
We have a long list of recommendations specifically for UK policymakers, including the creation of a National Taskforce to bring diverse stakeholders together to try to work through some of these knotty issues.
But drilling down into the detail with those procuring and selling into the sector sharpened our understanding of some core shared challenges that have broader application and resonance beyond the UK.
Terminology is a problem
Our analysis revealed complex, overlapping and high-level terminology across different guidance – we found over 50 words relating in some way to the public interest, without definition or clarity about how to apply them. This places a significant burden on local services to untangle, for example, the 14 overlapping terms related to fairness and determine their practical implications.
We need to invest in the foundations
Issues like poor data and infrastructure were highlighted as barriers to compliance with statutory data and equality duties in the context of AI. We heard that it wasn’t always clear if data held about residents was appropriate for AI development or whether it was too sensitive to share. We heard examples of patchy race and ethnicity data simply being excluded.
Uncertainty about AI undermines scrutiny
A lack of regulation and trustworthy assurance leaves the public sector unable to scrutinise exactly what it is buying. Local government stakeholders told us they weren’t always clear on what AI could or couldn’t do, or able to get into the detail of models, making it hard to cut through the hype from suppliers and genuinely interrogate effectiveness and value for money.
Knowledge asymmetries disempower the public sector
There was a lack of shared knowledge about what AI tools were being used where, leaving it up to individual services to attempt to interrogate tools and suppliers in silos. The well-documented skills gap between public and private sectors has led to large tech companies stepping in to influence and shape decision-making and implementation.
Market failures limit public sector opportunities
Many stakeholders cited market concentration, with a few large suppliers pricing out small and medium-sized vendors, leading to market capture or vendor ‘lock-in’. We heard about the practical implications of this market power, with local government unable to hold suppliers to account or access important information.
As ever when working in this space, probing one issue brings a tangle of interconnected issues to the fore. Alongside a focus on specific tools and opportunities, governments will need to invest in these cross-cutting concerns to open up opportunities for AI to be transformative.
Getting procurement right should both raise the standard of AI in deployment and provide opportunities to practically pilot evolving mechanisms for transparency and evaluation, even as regulation is developed and embedded. It deserves to be prioritised if we are serious about the potential of AI to transform public services.