OECD AI policy unit can help bolster national standards, report finds

By Mia Hunt on 14/05/2020 | Updated on 24/09/2020
In May 2019, 42 countries adopted the OECD AI Principles. (Photo by Leo Gonzales via Flickr).

The function of the OECD AI Policy Observatory as an international hub for artificial intelligence (AI) governance may help avert a race to the bottom in AI tech, according to a new report by the University of California Berkeley Center for Long-Term Cybersecurity (CLTC).

The report, Decision Points in AI Governance, takes an in-depth look at recent efforts to translate AI principles into practice, covering the tools, frameworks, standards and initiatives being applied at different stages of the AI development pipeline. It comprises three case studies, of which the OECD AI Policy Observatory is one.

The OECD AI Policy Observatory was launched in February 2020 as part of an international effort to establish shared guidelines around AI. It helps countries to implement the OECD AI Principles, adopted by 42 countries in May 2019, and is described as a “platform to share and shape public policies for responsible, trustworthy and beneficial AI”.

According to the report’s author, CLTC research fellow and AI Security Initiative (AISI) program lead Jessica Cussins Newman, the Observatory provides a potential counterpoint to “AI nationalism” and the idea of an international “AI race” – which can lead countries to accept lower standards in the pursuit of competitive advantage.

Dangers of a unilateral approach

Cussins Newman told Global Government Forum that intergovernmental collaboration on the development of AI principles is “incredibly important”, and that many countries’ national AI strategies list international collaboration among their key principles. Even so, AI nationalism continues to be an issue.

“A fragmented, unilateral approach may be dangerous if it leads to highly competitive conditions between countries which could encourage some actors to cut corners on issues like safety and accountability,” she said. “More generally, AI nationalism would mean that global AI standards have less sway. And a fragmented approach is also more likely to breed distrust, which could contribute to the weaponisation of AI technologies.”

Her report finds that despite challenges in achieving international cooperation, the OECD AI Policy Observatory demonstrates that governments remain motivated to support global governance frameworks for AI. “Evidence-based AI policy guidance, metrics, and case studies to support domestic AI policy decisions are in high demand, and the OECD AI Policy Observatory is poised to become a prominent source of guidance globally,” it says.

Common understanding

The report concludes that international coordination and cooperation on AI begin with a common understanding of what is at stake and what outcomes are desired for the future. It says the OECD AI Principles – which are being used to underpin partnerships, multilateral agreements, and the global deployment of AI systems – support that global collaboration.

However, it also notes that while stakeholders largely agree on high-level interests such as AI safety and transparency, “there will continue to be differences in the implementation of AI principles within different political and economic environments”.

Cussins Newman told Global Government Forum that her hope in the short term is that governments use AI principles “to provide structure and a foundation for ongoing efforts to develop AI strategies and policies, while in active consultation with their citizens and other stakeholders”. She said this should include developing procurement guidelines as well as testing, monitoring, and accountability practices for governments’ own use of AI technologies.

In the medium term, she hopes there will be “meaningful international standards and best practices for the implementation of AI principles that can more easily be integrated by governments”. And in the long term, she hopes that AI principles guide the realisation of “robust regulatory frameworks that enable the safe and responsible development and use of AI technologies throughout sectors around the world, including international law to prevent unacceptable AI uses”.

A number of governments are developing ethical AI frameworks, including Canada, Australia, New Zealand, the US and the UK. Meanwhile, the EU unveiled proposals earlier this year to promote the ethical, trustworthy and secure development of AI.

Tech executives including Elon Musk, the billionaire entrepreneur behind companies such as Tesla, SpaceX and Neuralink, and Sundar Pichai, the head of Google and parent company Alphabet, have urged governments to implement solid regulatory measures to protect the public from the potential harms of AI.

