Governments set out plan to ensure AI safety as US makes bid to lead standards

Government leaders from around the world agreed to ensure safe testing of next-generation artificial intelligence (AI) models at the first global AI Safety Summit, held at Bletchley Park on 1-2 November.
The accord involving leading AI nations and private firms outlined the role both parties would play in safely testing frontier AI models before and after deployment.
Speaking at the close of the summit on 2 November, UK prime minister Rishi Sunak praised the plan for forging a synergy of public and private sector expertise and oversight.
“Until now, the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their own homework, as many of them agree,” he said.
Ursula von der Leyen, president of the European Commission, commented: “At the dawn of the intelligent machine age, the huge benefits of AI can be reaped only if we also have guardrails against its risks. The greater the AI capability, the greater the responsibility.”
Days before the summit, on 30 October, US president Joe Biden signed the first federal executive order directly regulating AI, describing it as the most "significant" action taken on AI by any government to date.
Building on the AI Bill of Rights, set out in 2022 to protect citizens from automated systems, Biden's executive order will require tech firms to submit the results of tests performed on their AI systems to the US federal government before those systems are released.
Its guidelines also set out the best ways to use 'red-team testing', a method that uses mock rogue actors to assess the true safety of AI models. Other guidelines relate to watermarking deepfakes to counter inauthentic content and fraud, and to identifying AI systems with the potential to create bioweapons using life-threatening gene sequences and compounds.
The order aims to produce several outcomes, including new standards for AI safety and security, protection of American citizens’ privacy, protection of civil and consumer rights, and advancement of American leadership worldwide.
“As we advance this agenda at home, the administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” it stated.
Read more: President Biden sets out blueprint for AI Bill of Rights
More investment needed in ‘protecting the public’
The measures agreed at Bletchley Park, meanwhile, build on the Bletchley Declaration, which was signed by 28 countries and the European Union and published on the opening day of the summit.
Yoshua Bengio, a member of the UN's scientific advisory board who is nicknamed the 'Godfather of AI', is expected to lead the State of the Science report, a scientific assessment of the capabilities and risks of frontier AI that will inform future AI safety summits.
Commenting on the challenges ahead, Bengio said: “We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all.”
Governments are next expected to gather for talks on AI safety at a smaller-scale virtual meeting co-hosted by the Republic of Korea, due to take place within the next six months. The next in-person AI summit will be hosted by France in 2024.
Read more: Four in five Canadian public servants raise AI accountability concerns