Governments set out plan to ensure AI safety as US makes bid to lead standards

By Jack Aldane on 05/11/2023 | Updated on 05/11/2023
Leaders gather for a photo at the AI Summit in Bletchley Park
UK Government hosts AI Summit at Bletchley Park. Photo: UK Government

Government leaders from around the world agreed to ensure safe testing of next-generation artificial intelligence (AI) models at the first global AI Safety Summit, held at Bletchley Park on 1-2 November.

The accord, involving leading AI nations and private firms, outlined the role both parties would play in safely testing frontier AI models before and after deployment.

Speaking at the close of the summit on 2 November, UK prime minister Rishi Sunak praised the plan for forging a synergy of public and private sector expertise and oversight.

“Until now, the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their own homework, as many of them agree,” he said.

Ursula von der Leyen, president of the European Commission, commented: “At the dawn of the intelligent machine age, the huge benefits of AI can be reaped only if we also have guardrails against its risks. The greater the AI capability, the greater the responsibility.”

Days before the summit on 30 October, US president Joe Biden signed the first executive order from the federal government that directly regulates AI, describing it as the most “significant” action taken on AI by any government to date.

Building on the AI Bill of Rights set out to protect citizens from automated systems in 2022, Biden’s executive order will require tech firms to submit the results of tests performed on their AI systems to the US federal government before those systems are released.

The order's guidelines also cover the best ways to use 'red-team testing', a method that uses mock rogue actors to assess the true safety of AI models. Other guidelines relate to watermarking deepfakes in order to counter inauthentic content and fraud, and to identifying AI systems that have the potential to create bioweapons using life-threatening gene sequences and compounds.

Speaking at the AI Safety Summit as the US’s representative, vice-president Kamala Harris described the US government’s “moral, ethical and societal duty to make sure AI is adopted and advanced in a way that protects the public…and ensures that everyone is able to enjoy its benefits”.

The order aims to produce several outcomes, including new standards for AI safety and security, protection of American citizens’ privacy, protection of civil and consumer rights, and advancement of American leadership worldwide.

“As we advance this agenda at home, the administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” it stated.

Read more: President Biden sets out blueprint for AI Bill of Rights

More investment needed in ‘protecting the public’

The measures agreed at Bletchley Park, meanwhile, build on the Bletchley Declaration, which was signed by 28 countries and the European Union and published on the opening day of the safety summit.

Yoshua Bengio, a member of the UN's scientific advisory board who is nicknamed the 'Godfather of AI', is expected to lead the State of the Science report, a scientific assessment of the capabilities and risks of frontier AI that will inform future AI safety summits.

Commenting on the challenges ahead, Bengio said: “We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all.”

Governments are next expected to gather for talks on AI safety at a smaller-scale virtual meeting co-hosted by the Republic of Korea, due to take place within the next six months. The next in-person AI summit will be hosted by France in 2024.

Read more: Four in five Canadian public servants raise AI accountability concerns


About Jack Aldane

Jack is a British journalist, cartoonist and podcaster. He graduated from Heythrop College London in 2009 with a BA in philosophy, before living and working in China for three years as a freelance reporter. After training in financial journalism at City University from 2013 to 2014, Jack worked at Bloomberg and Thomson Reuters before moving into editing magazines on global trade and development finance. Shortly after editing opinion writing for UnHerd, he joined the independent think tank ResPublica, where he led a media campaign to change the health and safety requirements around asbestos in UK public buildings. As host and producer of The Booking Club podcast – a conversation series featuring prominent authors and commentators at their favourite restaurants – Jack continues to engage today’s most distinguished thinkers on the biggest problems pertaining to ideology and power in the 21st century. He joined Global Government Forum as its Senior Staff Writer and Community Co-ordinator in 2021.
