What are we really talking about when we talk about AI?

The phrase ‘artificial intelligence’ encompasses a wide range of different technologies and uses. The public sector should work to develop a shared language that describes the various possibilities, to cut through the jargon and make AI work across the public sector.
Last week I was invited to speak to a select committee – a group of MPs who scrutinise the work of government and help to hold it to account – who were considering AI in the UK public sector.
About halfway through the first evidence session, a member of the committee paused to ask if the experts could clarify exactly what was meant by AI and whether there was a clear explanation we could agree on.
It didn’t lead to a neat answer, even from the expert industry spokespeople on the panel at the time.
It’s widely accepted that there is no single agreed definition of what we mean when we talk about AI. In events and reports, the usual approach is to acknowledge that shortcoming and then describe AI as an umbrella term covering many different types of technology. Discussion often then reverts to ‘AI’ writ large, but good practice would be to be specific, wherever possible, about the actual technology being referred to.
What we talk about less are the implications of lacking a clear terminology or taxonomy for these different types of tools.
Arvind Narayanan and Sayash Kapoor begin their excellent book AI Snake Oil with this thought exercise:
“Imagine an alternate universe in which people don’t have words for different forms of transportation, only the collective noun ‘vehicle’. They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B.”
They point to how confusing any conversation about transport would be in this universe. Debates rage about whether vehicles are environmentally friendly for example – replete with facts which don’t clarify if they refer to bikes or planes.
Read more: Buying AI: how public sector procurement can ensure AI works for people and society
Mismatch between rhetoric and application
Of course, when it comes to AI we face an even greater challenge: the pace of evolution means new terms and buzzwords are emerging into, and fading from, the mainstream all the time, often with unhelpful connotations – see for example ‘hallucination’ and its anthropomorphic baggage. Just this week we’ve seen enormous interest in ‘distillation’ and its implications for the AI industry. Keeping up with this pace of change is a major challenge for policymakers and the public sector.
At the Ada Lovelace Institute, we’ve found in our own research on procurement that definitional challenges and high-level rhetoric leave the public sector without actionable or cohesive frameworks for decision-making about where to prioritise and pilot AI. The mismatch between rhetoric and application of specific tools hinders the ability of the public sector to evaluate AI or discuss its capabilities. For example, stakeholders involved in the procurement of AI told us that this lack of clarity often forces procurers without expertise in AI to fill in the gaps on their own to understand which type of technology offers the best solution for the issues they are trying to address.
This becomes a pressing issue as governments aim to speed up adoption: this month the UK prime minister announced plans to “mainline AI into the veins” of the nation. If the aim is to accelerate AI, there’s a risk that the capabilities of one model are used to raise expectations about completely different tools. Beyond that, there’s a risk of misplaced expectations about deploying the same type of technology in different contexts.
Read more: AI in the public sector: from black boxes to meaningful transparency
The UK’s Incubator for Artificial Intelligence has tried to offer greater specificity to ‘AI’ to help parts of the public sector understand where different types of AI could be applicable. The i.AI taxonomy links different types of user challenges in the public sector (public-facing services, fraud and error, matching and triage, casework management, data infrastructure) with corresponding technology solutions (from generative AI to databases and APIs).
On ‘matching and triage’ i.AI says, “for example, the same underlying optimisation techniques could be applied to prioritise what to connect to the national grid, solve timetabling problems such as scheduling appointments, or match people to services such as accommodation”.
From a purely technical standpoint, that may be right. But there is a lot more to unpack in these disparate use cases. From our own work on the ground, we’d argue the challenge goes beyond bringing technical specificity to debates: the success or failure of these tools will depend on how and where they are applied. Prioritising what connects to the national grid comes with a completely different set of considerations than matching potentially vulnerable people to accommodation services.
Read more: AI in public services – a political panacea?
Careful not to compare apples with oranges
Creating typologies of AI in public services – in ways that allow us to be sure we aren’t comparing apples with oranges – will require understanding the purpose of the technology, the underlying data, the context of use and the potential societal impact. AI used in diagnostics is held up as a success story in almost every event and announcement around AI. But its performance doesn’t just equate to the power of a specific technology – it relies on the data, the infrastructure and culture of the health system, the professionals working with the tool, and the trust of regulators and patients.
To extend the transport metaphor, giving someone a bike in Athens will disappoint if you assume it will deliver comparable public benefit to handing one to someone in the cyclists’ paradise of Amsterdam. The conditions for success go beyond the bike itself, into the bike lanes, the steepness of the hills, local confidence in cycling, the temperature and so on. So scaling success will require understanding local context, and considering what broader factors need to be brought in (around skills, policy, professions, practice, regulation, data and so on) to deliver success beyond a pilot.
As AI evolves, we will need to get comfortable using more precise technical terminology – what specific type of computational technique (and increasingly, specific industry partner) is actually being referred to. But we also need to find a meaningful way to discuss how technical and non-technical factors intersect. Acknowledging whether AI is people-facing, or back office, or in sensitive or high stakes areas, or in regulated sectors, should help the sector better understand context and replicability to support better decision-making.
To cut through the hype, the public sector needs to find a way to cut through the jargon: AI is too important to be left solely to those with narrow specialist expertise in the technology. As AI evolves, so does the sector’s need to develop a way of talking about AI that is appropriate for public services and the people they seek to serve; one that can meaningfully draw in the technology, people and processes that make up successful applications. Solving collective problems requires a shared language and understanding: it’s time to move beyond rhetoric about ‘AI’.
Read more: Making AI work for people and society