G7 leaders ‘take stock’ of AI amid calls for shared governance standards

By Jack Aldane on 25/05/2023 | Updated on 25/05/2023
G7 Hiroshima Summit

National leaders from the Group of Seven have vowed to work together and with other international partners to achieve “the common vision and goal of trustworthy AI”.

The development of inclusive artificial intelligence governance standards and interoperability was one of the topics of discussion at the G7 Summit in Hiroshima, Japan, last week.

Though the leaders of the seven nations – Canada, France, Germany, Italy, Japan, the UK and the US – and the European Commission acknowledged that their approaches to achieving trustworthy AI may be different, they said that technical standards should reflect “shared democratic values”.

The meeting also served as a chance for the G7 to “take stock of the opportunities and challenges of generative AI”, a subset of AI made mainstream by language models such as OpenAI’s ChatGPT and Google’s Bard.

Since their launch, such tools have proven highly effective at producing convincing content from prompts, stoking fears that they could be used to undermine national security and public trust.

The G7 leaders agreed to create a ministerial forum provisionally called the “Hiroshima AI process” by the end of 2023 to facilitate discussion on copyright and disinformation.

Speaking on the first day of the meeting, Ursula von der Leyen, president of the European Commission, said: “We want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin.”

In March, the EU Artificial Intelligence Act was passed. It includes a classification system that grades AI technologies by the level of risk they pose to people’s health and safety or their fundamental rights.

Read more: ChatGPT a threat to national security, warns Pentagon AI chief

Better regulate than never

Meanwhile, the US government is a step closer to creating a federal agency tasked with scrutinising digital platforms and AI after two senators reintroduced a bill in support of a ‘Federal Digital Platform Commission’.

Senators Michael Bennet and Peter Welch re-tabled the Digital Platform Commission Act of 2023, which according to a summary proposal would require five commission members to “hold hearings, pursue investigations, conduct research, assess fines and engage in public rulemaking to establish rules of the road for digital platforms to promote competition and protect consumers”.

The summary also stated: “This is not the first time a new sector of the economy has emerged to amass extraordinary and unregulated power. In the past, Congress has answered these developments by creating expert federal agencies empowered to provide timely, thoughtful, and durable regulations.”

The proposed agency’s five commission members would also be responsible for identifying platforms deemed “systemically important”, which could call for “extra oversight, reporting, and regulation” including “algorithmic accountability, audits, and explainability”.

“We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest,” Bennet said.

In recent weeks, senior US defense officials have weighed in on the risks associated with AI. For example, Lieutenant General Scott Berrier, director of the US Defense Intelligence Agency, admitted earlier this month that his agency had been slow to get in front of the challenges and opportunities of AI.

“We’re trying to be faster, we’re trying to be better,” he said at an Intelligence and National Security Alliance (INSA) event.

Craig Martell, the US Department of Defense’s chief digital and AI officer, said on 3 May that he was “scared to death” of unregulated generative AI, and of providers failing to “build in the right safeguards and the ability for us to validate the information”.

Similar fears drove the Future of Life Institute to publish an open letter in March urging a six-month pause on the training of any AI systems more powerful than GPT-4, OpenAI’s most sophisticated model to date.

Responding to the proposed pause, the Pentagon’s chief information officer John Sherman said: “If we stop, guess who is not going to stop? Potential adversaries overseas.” 

Read more: AI in the public sector: an engine for innovation in government


About Jack Aldane

Jack is a British journalist, cartoonist and podcaster. He graduated from Heythrop College London in 2009 with a BA in philosophy, before living and working in China for three years as a freelance reporter. After training in financial journalism at City University from 2013 to 2014, Jack worked at Bloomberg and Thomson Reuters before moving into editing magazines on global trade and development finance. Shortly after editing opinion writing for UnHerd, he joined the independent think tank ResPublica, where he led a media campaign to change the health and safety requirements around asbestos in UK public buildings. As host and producer of The Booking Club podcast – a conversation series featuring prominent authors and commentators at their favourite restaurants – Jack continues to engage today’s most distinguished thinkers on the biggest problems pertaining to ideology and power in the 21st century. He joined Global Government Forum as its Senior Staff Writer and Community Co-ordinator in 2021.
