Regulating AI: how the UK’s watchdogs are working to keep up with technology

Artificial intelligence is both an opportunity and a challenge for regulators. Global Government Forum and CloudSource brought together a range of UK watchdogs to discuss how they are developing plans to regulate the technology – and how they are using it themselves
From streamlining NHS prescriptions and hospital discharges to improving training in the construction sector, the UK government has set out plans to use artificial intelligence to help deliver better services and improve public sector efficiency.
Achieving this outcome requires smart regulation. Ministers have asked Matt Clifford, a tech entrepreneur and chair of the Advanced Research and Invention Agency, to develop an action plan for how AI can drive economic growth and improve public services.
The plan is also expected to cover how to regulate the use of AI. This will build upon the work of the previous administration, which earlier this year set out plans for what it called a pro-innovation approach to AI regulation, with the aim of capitalising on the potential of “an extraordinary new era driven by advances in artificial intelligence”.
With so much interest in how to deploy artificial intelligence across government, Global Government Forum and knowledge partner CloudSource brought together senior figures from across UK regulatory bodies to discuss both how to regulate the development of AI in a way that builds public confidence, and the potential for its use in their own operations.
In a session held in central London – hosted under the Chatham House Rule, so that contributions from public servants can be reported without attribution, allowing attendees to speak freely – participants discussed the need for strategic approaches to AI regulation.
Where regulating AI is different – and where it is the same
Senior figures from the sector discussed how they are developing their own approaches to the use of AI.
A number of regulators said they have developed policy frameworks and product regulation for AI, and are using AI tools to improve outcomes.
In particular, participants discussed the delicate balance between promoting AI innovation and maintaining effective regulation.
As one speaker put it, there are issues related to the regulation of AI that reinforce existing work, and then there are new challenges that are unique to the technology.
They said: “The regulatory questions that we ask are: is it safe, does it work, and does it carry on being safe? I think there are unique challenges with demonstrating that for AI.”
This is the element that builds on traditional regulatory work; the genuinely new challenges centre on the potential for organisations to use AI without human oversight. “One of the things is the enormous potential that people perceive for AI in taking the human out of the loop,” one delegate said. “[But] we’re spoiling the party a lot of the time, saying we’re not quite ready for that.”
To tackle this problem, this regulator has “published a lot of guidance”, with the aim of creating a basis for oversight that can evolve, as opposed to regulations that carry legal force.
“We’re going to go for a guidance-heavy, regulatory-light approach as guidance can evolve, whereas it’s more difficult for regulations,” the delegate said.
The regulator has also produced guidance on good practice in the development of machine learning.
But, they concluded: “I think one of the things we recognise is that we still don’t know enough to really properly regulate it.”
Other attendees set out the approaches they are taking to develop their understanding of how AI can be used, with many implementing regulatory sandboxes where new approaches can be tested.
“I think the sandbox is really important,” one speaker said. “So, we create the guidance, we create the regulation, and then we constantly check our homework through working with industry in those sandboxes to make sure that we’re not stifling innovation and that we’re mitigating risk in the right way.”
Another speaker said their work on how to apply regulations to AI began well over a decade ago and made use of regulatory sandboxes and innovation advice services.
However, they also raised another issue, arguing that the discussion around AI “slightly rests on an implicit assumption that AI is not regulated and that, consequently, there are major gaps to be filled”.
This is not actually the case, they insisted. “It’s really important for us to bust that myth. It’s quite helpful that both the previous government and the current government have set out a regulatory approach which is built on the existing regulators employing the functions and powers in their domains to AI. But we’ve got to shift that external narrative about AI not being regulated and make it very clear that AI that’s deployed with bias, for example, is squarely within our regulatory territory. That is not an unregulated issue. It is a scary issue, it’s one that we need to rise to – but it’s not unregulated.”
This kind of approach, they said, would boost understanding of the role and existing powers of regulators in this area, and focus efforts on “the specific loopholes that we need to close off, the issues we really need to address, and how to elevate what regulators are already doing to govern AI”.
Setting the risk appetite
Attendees agreed that there will be different risk frameworks in different sectors, and one speaker said setting the risk tolerance for the areas they regulate will be “tricky, but not insurmountable”.
They added: “I think you need to be quite flexible and not apply one prescriptive rule for risk. That’s something that we’ve found has been quite important.”
They advocated a principles-based approach – one likely to differ from that of more prescriptive institutions such as the European Union.
The impact of divergence from the EU was also discussed at the session. Some attendees said the disparity was creating confusion, because regulation in Europe is not taking the same form as in the UK.
However, others said it gave UK regulators a competitive advantage in being able to make decisions at pace.
“We’re able to provide organisations with regulatory clarity and certainty because – even if they don’t necessarily like the answer – that’s the bit that people are pushing for,” one speaker said.
They highlighted an example where a regulated organisation had paused work when regulatory concerns were raised, acted on feedback and rolled out the product in the UK – all still faster than in the EU, where there remained an ongoing regulatory debate.
Attendees also discussed how implementing such an approach at scale across regulators would require greater collaboration between different agencies.
“People talk in a very crude ‘regulate or deregulate’ way, but actually most of my time is spent sitting between regulatory frameworks… and sometimes the two don’t come together,” one speaker concluded. “And there really isn’t a forum where that collaboration happens, and I think government is really, really poor at it. We all have different department masters but the observation I’m making is that it sounds like there’s a very optimistic view that we can come together and we can reconcile it.”
Many other attendees agreed with this point, and in his reflections on the roundtable, Ed Pearce, chief delivery officer at CloudSource, said it was clear there was an appetite across government to build unified approaches to regulation.
“It’s impressive to see how far regulators have come in developing their AI strategies and the steps they’re taking to collaborate and share insights,” he said. “There’s still much to be done, but it’s clear that the UK government is responding proactively with a blend of new and existing legislation.”
How regulators are using AI
As well as the regulatory challenges of AI, the session covered the opportunity for regulators to use the technology to become more efficient and streamline how they work.
Attendees came from organisations at different stages of adopting AI – what one attendee called “an inefficiency in the rollout of AI” across government regulators – with some departments banning its use, “which is quite an interesting position to be in, where you’re espousing growth”.
However, most organisations have been looking to provide their staff with AI tools and skills.
One speaker said they have been using the Microsoft Copilot AI tool “in pockets” and with some very basic training on how it works.
“I wouldn’t say we’re using it. I would say what we’re doing is trying it and testing it and seeing what we get from it,” they said.
Among the use cases being examined is the use of AI to help deal with freedom of information requests and other common requests to the watchdog.
Another speaker said that a major AI tool had recently been made available to staff and that “there’s been a lot of training that’s had to go alongside that because some people just haven’t come across it before, and they don’t know how to use it”.
There is a need to upskill staff working in regulation on the technology’s potential, they said, commenting: “There’s always brilliant opportunities for using it, but it’s a whole new world for a lot of people within an organisation or a department.”
Another attendee was further along in deploying AI, with their organisation having as many as two dozen use cases.
“We’ve found a little bit of a formula,” they said. “We can see now how we can apply it to different areas in the regulatory lifecycle.”
These use cases include training models to identify counterfeit products, and using past regulatory decisions to train a model to spot areas where action might need to be taken – but always with humans in the decision-making loop.
One delegate explained: “Assessors, when they’re evaluating information to make decisions, are taking in different information from different sources – sometimes 20,000-page dossiers – and having to assimilate it and eventually arrive at a conclusion. In many types of regulation, you could use that pattern, and that’s something that we’ve prototyped now.”
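As a purely illustrative sketch of that pattern – not a description of any regulator’s actual system – the pipeline below shows an AI step drafting chunk-level summaries of a long dossier, with an assessor approving or rejecting each draft finding. The summarise stub and all names here are hypothetical placeholders for whatever model a regulator might actually use.

```python
"""A minimal human-in-the-loop sketch of the dossier-assessment pattern
described above. Everything here is hypothetical: summarise() stands in
for an AI summarisation call that a real system would make."""

from dataclasses import dataclass

CHUNK_CHARS = 4_000  # roughly page-sized chunks; a real system would chunk by document structure


@dataclass
class DraftFinding:
    source_chunk: int
    summary: str
    approved: bool = False  # only a human assessor may set this


def summarise(chunk: str) -> str:
    # Placeholder for a model call: a naive extractive stand-in that
    # returns the first sentence of the chunk, truncated.
    return chunk.split(". ")[0][:200]


def draft_assessment(dossier: str) -> list[DraftFinding]:
    # Split the dossier and let the AI step draft one finding per chunk.
    chunks = [dossier[i:i + CHUNK_CHARS] for i in range(0, len(dossier), CHUNK_CHARS)]
    return [DraftFinding(i, summarise(c)) for i, c in enumerate(chunks)]


def human_review(findings: list[DraftFinding]) -> list[DraftFinding]:
    # The AI only drafts; nothing becomes a finding until the assessor
    # approves it, mirroring the "humans in the loop" point above.
    for f in findings:
        f.approved = input(f"Accept draft finding {f.source_chunk}? [y/N] ").lower() == "y"
    return [f for f in findings if f.approved]


if __name__ == "__main__":
    dossier_text = "Section 1. Findings and evidence follow. " * 500  # stand-in for a long dossier
    approved = human_review(draft_assessment(dossier_text))
    print(f"{len(approved)} findings approved by the assessor")
```

The shape matters more than the detail: the model condenses the material, but the conclusion remains a human decision.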
Other use cases attendees said they are exploring include using AI to redact information that is included in consultations but should not be made public – a task currently done manually.
“We have to go through a process of redaction, which is very, very labour intensive. So we are testing to see whether there’s an AI solution to that,” one participant said. “There are huge benefits if we can get it right – something that allows and guides a summation at the end of it all – and you will still need that human making the decision.”
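Again as a hedged sketch only – the participants did not describe their tooling – an AI-assisted redaction flow could propose candidate spans for a human to confirm before anything is removed. Here simple regular expressions stand in for the model’s proposals; the patterns and example text are illustrative assumptions.

```python
"""Illustrative sketch of AI-assisted redaction with a human making the
final call. The regex patterns stand in for a model that would propose
spans to redact; they are assumptions, not anyone's real rules."""

import re

CANDIDATE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),             # email addresses
    re.compile(r"\b0\d{2,4}[ ]?\d{3,4}[ ]?\d{3,4}\b"),  # rough UK phone-number shape
]


def propose_redactions(text: str) -> list[tuple[int, int, str]]:
    # Collect (start, end, matched text) spans the "model" suggests redacting.
    spans = []
    for pattern in CANDIDATE_PATTERNS:
        spans.extend((m.start(), m.end(), m.group()) for m in pattern.finditer(text))
    return sorted(spans)


def apply_redactions(text: str, approved: list[tuple[int, int, str]]) -> str:
    # Replace approved spans back-to-front so earlier offsets stay valid.
    for start, end, _ in sorted(approved, reverse=True):
        text = text[:start] + "[REDACTED]" + text[end:]
    return text


if __name__ == "__main__":
    submission = "Contact Jane at jane@example.org or 020 7946 0001."
    candidates = propose_redactions(submission)
    # The human reviewer confirms each candidate; removal is never automatic.
    approved = [c for c in candidates if input(f"Redact '{c[2]}'? [y/N] ").lower() == "y"]
    print(apply_redactions(submission, approved))
```

As with the assessment sketch, the AI narrows the search, while the redaction decision stays with the person.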
Ashish Kapoor, chief technology officer at CloudSource, said that the conversations around data and AI use cases in regulation had been truly insightful. “These discussions highlight how innovative technologies are essential for delivering seamless regulatory services,” he said. “In particular, the transformative potential of AI integration stands out, paving the way for more efficient and effective oversight.”
The ‘How UK watchdogs will regulate artificial intelligence – and how technology can unlock smarter regulation’ roundtable was held in London on 2 October by Global Government Forum and knowledge partner CloudSource.