Why public legitimacy for AI in the public sector isn’t just a ‘nice to have’

In the latest of a series of exclusive articles for GGF, Imogen Parker of the Ada Lovelace Institute argues that if the public are not convinced that the government is using their data, and AI, in their best interests, it could have big implications – including for democracy itself
Last week I received some healthy challenge over whether, and why, it mattered that the public were brought along with AI. The person who raised it wasn't being dismissive. He was genuinely thoughtful about whether people cared about, or could understand, highly complex systems well enough to really make sense of them, and about how to weigh that priority against the others that government and politicians were dealing with.
He offered a provocation: that, in reality, questions of AI policy simply aren't a priority for most people, and that how government uses data probably isn't going to make people take to the streets. We considered aloud how to balance the desire for transparency, and for realistic conversations with the public about AI, against anxieties about distortion and the political fear of failing the so-called 'Daily Mail test' [where a project or policy receives backlash from readers of the British tabloid].
It's a fair challenge, and it's useful for those of us in the policy world to hear someone on the inside speak frankly about how to balance public needs with the reality of political incentives.
At the Ada Lovelace Institute, we will soon be publishing a synthesis of research on what the public thinks about AI in the public sector.
So, ahead of that forthcoming publication, let me lay out my case for why this type of research really matters and why the public sector needs to act with public licence in its use of data and AI.
Read more: ‘Radical reimagining’: lessons for the use of AI in public services and policymaking
Risk of citizens opting out
First, there's a straightforward argument that acting with public legitimacy is simply in line with public sector values. We expect public services to operate with public licence, and it's right that they meet higher standards because they deal with serious and important issues, from deciding who is eligible for IVF or expensive drug treatments to removing someone's liberty or taking a child from a parent.
In many parts of the public sector, you can’t ‘shop around’ if you aren’t happy with the approach: you can’t choose an alternative justice system or welfare state. The importance of acting with public legitimacy is fundamental and not unique to the adoption of new technologies.
There are also more instrumental reasons. Over the last six years we've seen that if people don't trust what's happening with their data, they are more likely to withdraw their consent, minimising the potential to realise the benefits of data and AI. For example, in the UK a public backlash against proposals for greater health data sharing (the General Practice Data for Planning and Research programme, or 'GPDPR') led to three million people opting out of sharing their data.
Interestingly, we've also found through our research that when frontline professionals don't feel that their use of AI is legitimate or proper (even if it is legal), this can lead to a reluctance to use the technology. For example, in our research with social workers using predictive tools for children's social care, one social worker told us: “One of the areas that we were a little bit reluctant with was, if OneView identifies the family, and we pick up the phone, what do we actually say to them? […] Nothing has actually happened to this family. There’s been no incident.”
We have also seen how students' protests in England against the A-level grading algorithm [a controversial system for grading pupils who couldn't sit exams during the COVID-19 pandemic] led to a number of other AI projects in education being shelved, owing to anxiety about the impact on public trust. High-profile examples of 'bad' AI, and their effect on public perceptions, may be one of the biggest risks to harnessing AI's potential.
Read more: UK government drops exam grading algorithm in the face of public anger
Growing mistrust of institutions
If people are unhappy about how services use their data and withdraw their consent, there's a real risk of knock-on consequences: harm to individuals and additional costs for services already in crisis. What starts as nervousness about how personal or sensitive information might be used can escalate into anxiety about how to engage with services, or even whether to engage with them at all.
Taking a step back, over recent years we have seen growing mistrust of institutions. And we've seen the implications of that, with more of those in power elected on populist and anti-establishment agendas.
The public's experience of democratic politics and public policy is not confined to the ballot box; it is also shaped by their interactions with public services. If those interactions are responsive, respectful, appropriate, transparent and empowering, that will go some way towards improving the public's view of, and trust in, institutions.
The decisions we make about how data and AI are used in the public sector are not trivial or merely technical; they are deeply significant for how people see and trust public institutions and those elected to serve them.
Read more: What are we really talking about when we talk about AI?