Major survey highlights Europeans’ fears over AI

Less than 20% of Europeans believe that current laws “efficiently regulate” artificial intelligence, and 56% have low trust in authorities to exert effective control over the technology, according to a new survey from the European Consumer Organisation (BEUC). The findings have important implications for the governance and design of AI-powered public services, emphasising the need to address citizens’ fears over transparency, accountability, equity in decision-making, and the management of personal data.
The BEUC surveyed 11,500 consumers in nine European countries: Belgium, Denmark, France, Germany, Italy, Poland, Portugal, Spain and Sweden. It found that while a large majority of respondents feel that artificial intelligence (AI) can be useful, most don’t trust the technology and feel that current regulations do not protect them from the harms it can cause.
It also found that 66% of respondents from Belgium, Italy, Portugal and Spain agree that AI can be hazardous and should be banned by authorities. Only 18% of respondents in the five other countries strongly disagree that AI should be banned.
Those surveyed feel that AI will play an important role in many areas of their lives, especially when it is used to predict traffic accidents (91%), in health services (87%) or to assist with financial problems (81%). In addition, nearly half of those surveyed in Spain (44%) and Italy (45%) believe that AI will help to make the world more sustainable, and 44% of respondents in Portugal and 50% in Spain believe AI will contribute to increased life expectancy.
However, across the board, the majority of respondents stated they have low or medium levels of trust in AI, fuelled by concerns about abuse of personal data and its use in manipulating decisions.
For example, 60% of respondents in Belgium, Italy, Portugal and Spain say that AI will lead to more abuse of personal data, and consumers in the same four countries believe that companies are using AI to manipulate their decisions. In France, Denmark, Germany, Poland and Sweden, 52% believe this to be the case.
The majority of those surveyed – 83% in Spain, for example – think that consumers should be told when they’re dealing with an automated decision-making system, and the majority (78% in Italy and Portugal and 80% in Spain) agree that AI users should have the right to say “no” to automated decision-making.
Need for greater regulation
However, around 56% of respondents across all countries – peaking at 70% in Belgium – have low trust in authorities to exert effective control over AI.
The majority of respondents (60% in Belgium, Italy, Portugal and Spain) believe that it is not clear who is accountable when AI is insecure or causes harm, and 51% of those surveyed in the same four countries agree that AI will lead to unfair discrimination based on individual characteristics or social categories.
“In other words, when a consumer believes they have been harmed because of AI-based products or services they are not only unable to identify who’s responsible but also feel that they can’t rely on authorities to protect them,” said BEUC’s deputy director general Ursula Pachl.
“Having clear and strong rules, enforced by authorities with well-defined competences and able to exercise their powers, is a matter of credibility,” she added.
The BEUC’s report concluded: “While AI applications are already subject to European legislation – on e.g. data protection, privacy, non-discrimination, consumer protection, product safety and liability – existing rules are not fit to address the risks that AI poses and additional measures are needed. Existing legislation should be updated and new legislation should be introduced to strengthen consumer rights in AI to ensure they are adequately protected.”