AI could scale up disinformation campaigns, researchers warn

By Josh Lowe on 27/05/2021 | Updated on 27/05/2021
Spreading disinformation: automation could help hostile actors scale up their disinformation campaigns. Credit: Pixabay

Hostile actors could use automation to launch highly scalable disinformation campaigns with less manpower than is currently required, researchers have warned.

A new report – Truth, Lies and Automation – analysed how GPT-3, an artificial intelligence (AI) system that writes text, could be used in disinformation campaigns.

The researchers concluded that “although GPT-3 will not replace all humans in disinformation operations, it is a tool that can help them to create moderate- to high-quality messages at a scale much greater than what has come before.”

Experts from the Center for Security and Emerging Technology (CSET) at Georgetown’s Walsh School of Foreign Service paired the GPT-3 AI system with human operators and tested it across six disinformation activities.

“Our study shows the plausibility — but not inevitability — of such a future, in which automated messages of division and deception cascade across the internet,” the authors note. “While more developments are yet to come, one fact is already apparent: humans now have able help in mixing truth and lies in the service of disinformation.”

Robot trolls

The report cites the Internet Research Agency (IRA), a “troll farm” used to spread Russian propaganda across social media, as an example of a contemporary government disinformation capability.

“In a way, the IRA mimicked any other digital marketing startup, with performance metrics, an obsession with engagement, employee reviews, and regular reports to the funder,” the researchers wrote.

“While the US discussion around Russian disinformation has centered on the popular image of automated bots, the operations themselves were fundamentally human, and the IRA was a bureaucratic mid-size organisation like many others.”

GPT-3, the researchers argue, gives hostile actors an opportunity to scale up disinformation operations because, when paired with humans, the AI can take on much of the labour. Indeed, when operated by a skilled human, GPT-3 fooled 88% of readers into thinking its text was written by a person.

GPT-3 is also capable of reproducing the content of harmful ideologies. Previous experiments have seen the AI effectively craft far-right extremist content when prompted with human-authored examples.

The researchers tested GPT-3’s ability to perform six tasks associated with “one-to-many” disinformation campaigns, where messages are spread to a wide audience.

The simplest of these — “narrative reiteration” — involves producing “varied short messages that advance a particular theme, such as climate change denial”. GPT-3 “excels with little human involvement” at this task, the researchers said.

More complex tasks include “narrative seeding”, in which new accounts of events are created that could form the basis of conspiracy theories. The researchers cited QAnon as an example. The AI “easily mimics the writing style of QAnon and could likely do the same for other conspiracy theories,” the researchers said, but at this stage “it is unclear how potential followers would respond”.

On the final task, “narrative persuasion”, the researchers did test audience responses, examining whether GPT-3 could change targets’ views on political issues.

“A human-machine team is able to devise messages on two international issues — withdrawal from Afghanistan and sanctions on China — that prompt survey respondents to change their positions,” the researchers found.

“For example, after seeing five short messages written by GPT-3 and selected by humans, the percentage of survey respondents opposed to sanctions on China doubled.”

Implications for governments

For a government or other organisation to use GPT-3 to mount a disinformation campaign, the researchers said, it would need three resources: the system itself, which is currently not publicly available; capable operators; and computing power and technical capacity.

While this rules out some actors from using such technology, the researchers believe deploying it is “well within the capacity of foreign governments, especially tech-savvy ones such as China and Russia”.

Governments seeking to resist such campaigns, the researchers said, might struggle if they focus on identifying AI-written text on social media, given the convincing nature of GPT-3’s outputs and the lack of metadata to distinguish it from human-authored text.

However, the researchers wrote, GPT-3 does not help disinformation campaigns to create the “infrastructure” they need, such as fake social media accounts.

“The best mitigation for automated content generation in disinformation thus is not to focus on the content itself, but on the infrastructure that distributes that content,” the researchers wrote.


