New EU regs set to control use of AI

By Elaine Knutt on 22/04/2021 | Updated on 22/04/2021
Painting the red lines on AI tech: the EU’s new draft AI regulations are designed to outlaw some applications of AI, while ensuring that risks are addressed in other uses. Picture by European Parliament

New draft regulations from the European Commission on artificial intelligence (AI) include provisions to outlaw AI technology that can be used to manipulate people’s behaviour or create Chinese-style “social scoring” systems, while implementing a system of strict regulatory oversight for a wide range of use cases deemed “high risk”.

The new draft is the EU’s attempt to foster the benefits of AI for EU citizens while controlling the risks to their private and working lives. The Proposal for a Regulation on a European approach for Artificial Intelligence takes a risk-based approach, with four levels of oversight depending on the AI application’s risk category: unacceptable, high, limited or minimal risk.

Use cases deemed high risk include those where the coding could reflect biases, such as systems that score students’ exams or control access to training or professional courses, and AI used in HR and recruitment settings that could end up disadvantaging applicants from minority groups.  

Facial recognition used for mass surveillance in public places is deemed high risk rather than unacceptable, with the EU arguing that it can be sanctioned where there is a compelling public safety argument – for instance, to foil an imminent terrorist attack or a child abduction.  

A global move

By tabling draft regulations on AI, the EU hopes to gain a first-mover advantage, setting the pace in the same way that the 2018 General Data Protection Regulation became a legislative template for countries worldwide – including Thailand, India and Brazil. The proposals could also influence the UK, which recently announced its plan to publish a strategy on AI.

The European Commission’s Margrethe Vestager, executive vice-president for a Europe fit for the Digital Age, said: “On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

The proposal is accompanied by an update to the Co-ordinated Plan on AI for member states, first published in 2018, and aligns with the European Strategy on AI and the European Green Deal, while taking into account new challenges brought by the coronavirus pandemic.

The EU wants to facilitate a single market for AI to inhibit the emergence of competing standards in the bloc, and to create the legal and regulatory certainty that will attract investors and employers. It also wants to avoid barriers or cost burdens for innovative companies developing AI solutions.

What’s in, what’s out

Unacceptable uses of AI are defined as those posing “a clear threat to the safety, livelihoods and rights” of people. Examples are software designed to “manipulate human behaviour to circumvent users’ free will”, which could arguably include distorting social media feeds during election campaigns, and systems that facilitate “social scoring” – a reference to the social credit system operating in China since 2020. China’s citizens can, for example, accumulate credits by volunteering in the community or donating blood, while facing sanctions if they collect parking tickets or dodge fares on public transport.

While remote biometric identification systems will in general be off limits to law enforcement agencies, the EU is proposing that defined and regulated exceptions will be possible – for instance, to search for a missing child or to locate a perpetrator of a serious offence. Permissions must be authorised by a judicial or other independent body, with strict limits applied on time, geographic reach and the databases to be searched.

Meanwhile, high-risk AI systems will be subject to “strict obligations” before they can be put on the market. Examples of high-risk applications include systems that determine individuals’ credit scores, assess their eligibility for welfare support or verify the authenticity of travel documents, as well as the AI used in robot-assisted surgery.

In these cases, evidence that the AI system’s safety and security meets EU requirements must be submitted to new national market surveillance authorities. The list of “strict obligations” includes risk assessments followed by measures to mitigate any risks identified; the use of high-quality datasets that avoid “discriminatory outcomes”; human oversight measures; and tests of robustness, security and accuracy.

Limited-risk AI systems, such as chatbots, will be subject to transparency obligations and voluntary codes of conduct, with users clearly told that they are interacting with intelligent software rather than a human.

Minimal-risk systems – such as AI-enabled games – will not be subject to any intervention.

Member states will have to legislate to implement the new regime, including “laying down effective, proportionate and dissuasive penalties for … infringement”. As with the GDPR, the European Commission is proposing fines for serious breaches of up to 20,000,000 euros or up to 4% of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
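
To make the cap concrete, the “whichever is higher” rule amounts to a one-line calculation. The minimal Python sketch below is illustrative only: the function name and turnover figures are hypothetical, and it assumes the GDPR-style fine structure described above rather than any final wording of the regulation.

```python
# Illustrative sketch of the proposed fine cap: up to 20,000,000 euros
# or 4% of total worldwide annual turnover, whichever is higher.
# Hypothetical figures only; not legal guidance.

def max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine under the GDPR-style cap."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A firm with 100m euros turnover: 4% is 4m euros, so the 20m floor applies.
print(max_fine(100_000_000))    # 20000000.0

# A firm with 2bn euros turnover: 4% is 80m euros, exceeding the floor.
print(max_fine(2_000_000_000))  # 80000000.0
```

On these assumptions, the flat 20m euro ceiling binds only for companies with annual turnover below 500m euros; above that threshold, the 4% share becomes the larger figure.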

Meanwhile, a newly created European Artificial Intelligence Board will develop EU-wide guidance on standards and implementation.
