License to build: understanding what people think about public sector use of AI

Imogen Parker of the Ada Lovelace Institute argues that the public sector must consider people’s comfort with AI if it is to use the technology effectively. Here, she sets out six key lessons from the institute’s research on public attitudes – and outlines the risks if AI adoption outpaces evidence on public views
In a previous article for Global Government Forum I laid out the case for why public legitimacy for AI in the public sector isn’t just a ‘nice to have’. I argued that successful use of AI requires public license, and that moving out of step with public comfort can undermine the public sector’s ability to use AI effectively.
Let’s assume (boldly) that I convinced policymakers of this argument. That then raises the question: what do the public think about AI in the public sector?
Today (24 June), the Ada Lovelace Institute is publishing a synthesis of our research on public attitudes and expectations of AI in the public sector, drawing together what we’ve learnt about the answer to that question. The synthesis is based on research with around 16,000 people across four attitudinal surveys and 400 people in deeper qualitative studies.
While only a synthesis of our own work, we hope this summary will contribute to the broader evidence base, helping policymakers and practitioners understand public views on AI and ensure these tools work for the sector and the public they seek to serve.
Read more: Do we need a ‘What Works Centre’ for public sector AI?
I would, of course, encourage you to go and read the briefing in full. But here I’ll lay out the six key lessons we’ve learnt from our research on public services and AI.
1) Public perceptions of AI in the public sector are nuanced and context-dependent.
Citizens have nuanced views about AI applications rather than a single, overarching opinion on ‘AI’. People weigh the benefits, opportunities, risks and harms of specific applications within their particular contexts.
Even in discussions on a specific technology, like our in-depth public deliberation on biometric technologies, comfort levels varied across different applications, depending on the nature and rationale of their deployment, and how they were governed and assessed for proportionality.
Interestingly, across the two waves of our public attitudes survey, conducted two years apart, we have seen that while perceived benefits of specific uses of AI in the public sector – like cancer assessment or welfare eligibility – have remained broadly constant, concern levels have risen.
2) Experiences and demographics shape people’s expectations.
People’s understanding, trust and comfort with AI are affected by their personal characteristics, and their direct and indirect experiences of technology and the institutions using it. These experiences can exacerbate concerns about AI’s impact on existing inequalities.
For example, where people have experienced negative interactions with law enforcement, that can affect their views of biometric technologies. People who have experienced poor-quality digital tools in healthcare voiced concern over whether AI would be used appropriately or beneficially. Our qualitative research has found that people’s views around AI reflect their experiences of structural inequalities and distrust of power holders.
3) There are concerns about the power and profits of the companies supplying the technology used in public services.
Across our research, citizens have raised concerns about the role, motivations and impacts of private companies developing AI used in the public sector.
Anxiety about the private sector spanned discussions of different tools – from cancer prediction to welfare assessments – and included worries about data access and monetisation. These reservations also intersect with concerns about the adequacy of existing regulatory powers, and about public sector bodies sharing information with technology companies.
4) Support is conditional: the public want evidence, explainability and involvement.
A pronounced finding is the importance the public place on transparency and evidence. When asked to trade the two off, our research found that the public value human explainability above increased accuracy in AI tools, and they expect clear evidence on efficacy and impacts to justify use.
In our research on AI and public good, for example, participants reflected that AI is already used in many aspects of public services, but some felt these uses were intentionally hidden from view. As one respondent put it: “I think in a lot of ways it is used in our lives already, but probably in a slightly underhand way that we don’t always know about.” Much of the public participation research we have conducted has underscored the importance of evidence about the impact and effectiveness of AI, made available in formats that the public can review.
That research has also revealed entrenched worries that overreliance on technology in important services will result in a loss of compassion and ‘the human touch’. These worries can be especially acute in areas such as health and social care or immigration.
The public are also concerned that AI could reduce human involvement in high-stakes decisions that affect lives. In the 2025 Ada-Turing survey, ‘overreliance on technology’ was the most common concern cited by respondents about the use of AI in assessing cancer risk, assessing eligibility for welfare benefits, and facial recognition in policing. There is a desire for those affected to have meaningful involvement in shaping decisions about public sector AI.
5) Strong governance is a prerequisite for trust.
The public increasingly ask for stronger governance of AI, along with clear appeals and redress processes if something goes wrong. People are not convinced existing regulations are adequate to ensure that public sector AI works for everyone, supports public good and prioritises people over profit.
Nearly two-thirds (62%) of respondents to the 2023 Ada-Turing survey were in favour of ‘laws and regulations that prohibit certain uses of technologies and guide the use of all AI technologies’. By 2025, this had increased: 72% of respondents reported that laws and regulation would make them more comfortable with AI.
The public expect government and/or regulators to have a suite of powers to govern AI: for example, the power to stop the use of a product if it poses harm, to monitor risks posed by AI systems, and to develop safety standards. They would also like to see measures to ensure that AI systems are not the sole basis for decision-making.
6) Social impact matters: the public oppose uses of AI that could create a ‘two-tier society’.
A recurring theme in public attitudes towards AI in the public sector is concern that it will exacerbate inequalities. This is often expressed through fears that public sector use of AI could create a ‘two-tier society’. Across different contexts, the public are deeply concerned about the adoption or normalisation of technologies that could disadvantage certain groups in their access to, use of and experience of public services.
In our Citizens’ Biometrics Council, participants raised concerns that the normalisation of the technology could disadvantage people with disabilities or those whose bodies are deemed not to fit the norm.
When considering COVID-19 risk-scoring algorithms, citizen jurors also expressed a clear red line: ‘Technologies should not create a two-tiered society that disproportionately discriminates against or disadvantages certain groups.’
And in our research on AI-powered genomic health prediction, participants expressed concerns that disease risk-scoring could compound inequalities and be used as a basis for unfair discrimination.
Read more: G7 launches AI innovation challenge to transform public services
AI adoption risks outpacing evidence on public attitudes
The rapid and often invisible adoption of AI in the public sector means there is a risk that AI use could outpace clarity about public acceptability. At a time when AI is being offered as a solution to a wide range of public sector problems, we hope these findings – alongside the recommendations we make in the report – will help those in the public sector align with public expectations.
The findings also make the case for ongoing research and participation. Data gaps, as our and others’ research suggests, can both reflect and illuminate the inequalities that people experience when engaging with services. Heavy users of public services – including many people who are vulnerable – can be excluded from research processes and less visible in the evidence base. And the services they rely on are less likely to attract significant societal or media attention. These issues, combined with the opacity and complexity of AI, can raise barriers to an informed public debate. So more research and input are needed, in particular from those who experience structural inequalities, to shape how services evolve with technology.
Public legitimacy is crucial both for those engaging with public services and for wider democratic processes, civic engagement and people’s faith in the public sector and government institutions.
Read more: Projects shortlisted for Canada Public Service Data/AI Challenge