Council of Europe starts work on legally binding AI treaty

The Council of Europe is working on a future legal framework to regulate the use of artificial intelligence (AI) across all 47 member states.
The Council’s Ad hoc Committee on Artificial Intelligence (CAHAI) held a three-day meeting on 6-8 July, attended by around 150 international experts. The purpose of the meeting was to draw up “concrete proposals on the feasibility study of a future legal framework on artificial intelligence based on human rights, democracy and the rule of law,” according to the Council.
Representatives from all 47 member states, including Russia, attended the online meeting alongside delegates from ‘observer states’ (USA, Canada, Japan, Mexico, the Vatican and Israel) and AI experts drawn from civil society, academia, and business. Other international organisations such as the EU, OECD and the UN will also contribute to CAHAI’s work on potential AI regulation.
Speaking after the meeting, the chair of CAHAI, Gregor Strojin, underlined the importance of working with a wide range of stakeholders active in the field of AI. “We need to be, on one side, up to date on the developments of the technology, and we also need to think globally and inclusively,” he said.
The first draft of the feasibility study is due to be presented at the next CAHAI plenary meeting in December 2020. The panel will then make a decision by the end of the year on whether a legally binding treaty should go ahead; if it does, it could be delivered within as little as two years, Politico reported.
Agreement – but not yet consensus
In order for the treaty to get the green light, all 47 member countries would need to approve it. Each country would then incorporate the new rules into its national legislation. There is “overwhelming agreement — but not yet consensus” emerging that such rules are needed, the Council of Europe’s director of information society and action against crime, Jan Kleijssen, told AI: Decoded.
In the meantime, the next stage is for two working groups – a policy development group and a consultations and outreach group – to begin work at the end of the summer. The policy development group will analyse existing AI applications and draw on the experts’ contributions from the meeting, “mapping risks and opportunities as well as the existing legal frameworks,” Strojin explained in a video after the three-day meeting.
They will then develop policy proposals to feed into a legal framework group, whose work should start at the beginning of next year. This group will either draft new materials or propose “soft law instruments,” he said. The work ahead is hard, Strojin added, because they will need “to propose solutions that will be acceptable to the member states”.
Parallel moves
In February the European Commission released a white paper proposing regulation to ensure the ethical, trustworthy and secure development of AI across the EU. The Commission’s focus is on protecting EU citizens while remaining competitive with the US and China, and it hopes to pass the first law in early 2021. But there is not yet consensus among all 27 member states, with the German government last week calling for elements of the proposals to be strengthened.
Strojin called for “especially close” cooperation and coordination with the EU, as well as other international organisations such as the OECD and the UN, saying each organisation had specific strengths to contribute to future AI regulation. “The added value of the Council of Europe is that it can prepare legally binding documents with mechanisms to enforce it in the field of human rights, the rule of law and democracy,” he said.