Tech firms sign pact to counter AI deepfakes as elections near

By Jack Aldane on 19/02/2024 | Updated on 19/02/2024
Photo credit: PaftDrunk

Twenty of the world’s leading technology firms have signed a pact agreeing to counter the use of artificial intelligence to deceive voters in the run-up to elections taking place in numerous countries this year.

The electorates of more than 60 countries – including seven of the world’s 10 most populous nations (Bangladesh, India, the United States, Indonesia, Pakistan, Russia and Mexico), the UK and nine EU member states – have gone or are expected to go to the ballot box this year. In total, almost half of the world’s population will be entitled to vote in 2024.

The threat of mis- and disinformation around elections has intensified in recent months and years, as AI technologies become better at generating convincing ‘deepfake’ content centred on life-like depictions of heads of state and other public figures. Examples of deepfakes include AI-generated audio and video, as well as doctored images.

Companies that signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections at the Munich Security Conference on 16 February include Google, IBM, Amazon, Microsoft, Facebook and Instagram’s parent company Meta, OpenAI, and X (formerly Twitter).

A statement released at the event said signatories would be expected “to work collaboratively on tools to detect and address online distribution of [deceptive] AI content, drive educational campaigns, and provide transparency, among other concrete steps”. It added that the agreement includes “a broad set of principles”, among them “tracking the origin of deceptive election-related content” and raising “public awareness about the problem”.

Read more: Russia’s elections toolkit: dollars, disruption and disinformation

The pact detailed eight steps, or commitments, that the firms are expected to follow to reduce content designed to misinform users:

  • Developing and implementing open-source tools to “mitigate risks related to deceptive AI election content”
  • Assessing existing models for the risk of deception they pose
  • Detecting threatening content on their platforms
  • Addressing threatening content when it is found
  • Fostering cross-industry resilience to deceptive AI election content
  • Informing the public of how they address deceptive content
  • Engaging with global civil society organisations and academics, and
  • Supporting public awareness campaigns, promoting media literacy and “all-of-society resilience”

Dr Christoph Heusgen, chairman of the Munich Security Conference, said: “Elections are the beating heart of democracies. The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices.”

Read more: How intelligence and transparency can combat electoral interference

‘Amplified risks’

Christina Montgomery, vice president and chief privacy and trust officer at IBM, said: “Disinformation campaigns are not new, but in this exceptional year of elections – with more than four billion people heading to the polls worldwide – concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content.”

Kent Walker, president of global affairs at Google, said the accord “reflects an industry-side commitment against AI-generated election misinformation that erodes trust”.

A number of countries including the US and the UK have begun regulating AI or providing additional support for regulators since late last year in a bid to mitigate its risks.

In the US, for example, the federal government has begun implementing a requirement for major software developers to disclose the results of AI system safety tests, following an executive order signed by President Joe Biden in October 2023 that sought direct government regulation of AI.

The order requires tech companies to report the results of safety tests for their most powerful AI systems to the US Department of Commerce before those systems are released.

Similarly, the UK’s Labour Party (which polls suggest could win this year’s general election, supplanting the Conservative government) announced plans to replace a voluntary AI testing agreement with a statutory regime, in which AI developers would be compelled to share test data with officials.

The UK government announced earlier this month that it had allocated more than £100m (US$125.5m) in funding for better regulation, research and innovation in the field of AI, and to establish a partnership with the US on responsible AI.

Read more: A subtle opponent: China’s influence operations


