AI regulation is coming. Here’s how governments can get it right

By Harry Farmer on 14/12/2021 | Updated on 12/12/2022

Many governments are beginning to consider what regulation is needed in the field of artificial intelligence. The Ada Lovelace Institute sets out what governments need to get right to harness the enormous potential of AI

2021 has seen a flurry of proposals for the regulation of artificial intelligence.

In April, the European Commission released a draft proposal for the regulation of AI. In August, the Cyberspace Administration of China passed a set of draft regulations for algorithmic systems, while in the United States the White House Office of Science and Technology Policy has called for the creation of an ‘AI Bill of Rights.’ In the United Kingdom, the government has committed to publishing a white paper setting out its intentions for the regulation of AI.

For countries and economic blocs still considering their regulatory responses to AI, the message of 2021 should be clear: the main questions of AI regulation are no longer ones of ‘if’, but ones of ‘how’.

With three of the world’s largest economies taking steps to bring AI systems under greater regulatory control, any country intending to take a laissez-faire approach to the technology runs a very real risk of finding itself out in the cold – its domestic AI standards misaligned with those required for trading with a very substantial proportion of the global economy.

But while AI regulation looks set to become unavoidable for most states, not all approaches to regulating this powerful, complex family of technologies are created equal. Some could provide governments with an invaluable set of tools to guide AI to develop in a societally and economically beneficial manner; others could create a dangerous false sense of security.

The question of how to get AI regulation right is one the Ada Lovelace Institute tackles in its recent report, Regulate to Innovate. The paper explores the challenges of regulating AI and the tools at policymakers’ disposal, and concludes with some recommendations for how the UK (and other regions) should approach this difficult task.

Though different governments’ exact approaches to AI regulation will have to balance geostrategic considerations, cultural and economic norms, and existing laws and regulations in different ways, the paper does point to some general considerations that will apply in most contexts.

AI regulation demands new, technology-specific rules, set out in statute

Firstly, minor reforms to existing regulations will not be enough to allow countries to reap the potential benefits of AI systems and business models – nor to guard against the substantial risks the technology is capable of posing. 

By virtue of their ability to develop and operate independently of human control, and to make decisions with moral and legal consequences, AI systems present novel challenges that existing legal and regulatory systems will struggle to accommodate.

Because AI is a general-purpose technology, these challenges are unlikely to be confined to specific regulatory domains, sectors or industries. To ensure a consistent regulatory response, it makes sense for governments to develop cross-cutting rules for the technology, and to set out the general approach individual regulators should take to common problems posed by AI systems.

Critically, the need for rules is not a problem that will fix itself over time. In the majority of cases, a coherent, just system of AI regulation cannot be expected to emerge organically through common law – a system that develops new laws slowly, after the fact, and in a manner that tends to favour powerful, incumbent interest groups.

To provide clear, comprehensive and timely guidance on AI systems, governments will need to set out new, cross-cutting rules for AI in statute. 

Regulatory capacity needs to be built

Secondly, the development and enforcement of cross-cutting, AI-specific rules will require significant regulatory capacity building and efforts to improve cross-regulatory coordination.

If governments are serious about creating robust regulatory rules for AI systems, they need to be prepared to support regulators to develop and maintain the technical, legal and ethical expertise to monitor and enforce adherence to such rules.

As well as building additional expertise, regulators are in many cases likely to need additional powers to audit, inspect and assess AI systems – a process that ideally needs to take place both before and after a system’s deployment.

Governments will also need to provide regulators with the permission and space to experiment with promising new regulatory tools and mechanisms, and to scale those proven to work.

When it comes to improving cross-regulatory coordination and allocating additional resources effectively across the regulatory ecosystem, there are various options governments could consider. Ultimately, however, any successful model will need a means of allocating additional resources efficiently – avoiding duplication of effort across regulators, and guarding against gaps and weak spots in the regulatory ecosystem. It will also need a way for regulators to coordinate their responses to applications of AI across their respective domains, to share intelligence effectively, and to conduct horizon-scanning exercises jointly.

AI regulation cannot be effective in a vacuum

A final consideration for regulatory policymakers is that, for all the importance of regulation, the impacts of AI systems may not always be visible to, or controllable by, policymakers and regulators alone. Regulation and regulatory intelligence-gathering therefore needs to be complemented by, and often coordinated with, extra-regulatory methods of governance, such as standards, investigative journalism, activism and the work of academics.

Governments looking to introduce robust AI regulation should therefore consider how best to support the background conditions required for these essential, extra-regulatory forms of scrutiny and accountability. Here, it may be useful to consider whether existing laws and institutions provide adequate protection and support to journalists, academics, civil-society organisations, whistle-blowers and citizen auditors to investigate and hold developers and deployers of AI systems to account.

Though many of these considerations may appear daunting, it is worth noting that the benefits of effective, robust AI regulation are likely to far outweigh the costs. Historically, the transformative impact of new general-purpose technologies, such as electricity and the internal combustion engine, has typically only been fully realised following the development of regulatory structures capable of incentivising and supporting genuinely productive uses of the technology, and providing confidence in its safety and efficacy. If we are to reap the enormous potential of AI, it is critical that policymakers get AI regulation right.
