AI experts call for ‘bias bounties’ to boost ethics scrutiny

By Mia Hunt on 24/04/2020 | Updated on 24/09/2020
Experts recommend that developers offer financial rewards to people who discover and report bias in AI systems – addressing the risk that AI exacerbates existing race and gender prejudices. (Photo by Marco Verch via foto.wuestenigel.com).

Experts from the private sector and leading research labs in the US and Europe have joined forces to create a toolkit for turning AI ethics principles into practice. The preprint paper, published last week, advocates paying people for finding risks of bias in artificial intelligence (AI) systems – adapting a model used to check the security of new computer systems, in which hackers are paid ‘bounties’ for identifying weaknesses.

The paper also proposes linking independent third-party auditing operations more closely with government policy to foster a regulatory market, and suggests that governments increase funding for academic researchers so they can verify performance claims made by industry.

The 80-page paper, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, has been put together by AI specialists from 30 organisations including Google Brain, Intel, OpenAI, Stanford University and the Leverhulme Centre for the Future of Intelligence.

From principles to mechanisms

“In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, there is a need to move beyond [ethics] principles to a focus on mechanisms for demonstrating responsible behaviour,” the executive summary reads. “Making and assessing verifiable claims, to which developers can be held accountable, is one crucial step in this direction.”

The paper outlines 10 key recommendations, one of which is bias bounties. It notes that bounties for other areas, such as security, privacy protection or interpretability, could also be explored.

In the case of a bias bounty, a developer, government or other organisation would offer financial rewards to people who discover and report bias in AI systems – addressing the risk that AI exacerbates existing race and gender prejudices.
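
To illustrate what a bounty hunter might actually report, the sketch below computes one common fairness measure, the demographic parity gap, over a model's decisions. The model outputs, group labels and choice of metric are hypothetical assumptions for illustration; the paper does not prescribe any particular bias measure.

```python
# Illustrative sketch of a bias finding a bounty hunter might report.
# The decisions, group labels and choice of metric are hypothetical
# assumptions; the paper does not prescribe a specific bias measure.

def demographic_parity_gap(decisions, groups):
    """Return (gap, per-group rates) for positive-outcome rates.

    decisions: 0/1 model outcomes (e.g. loan approvals)
    groups: demographic label for each decision, in the same order
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + decision, total + 1)
    rates = {g: positives / total for g, (positives, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates


# Hypothetical model decisions for two demographic groups
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A"] * 5 + ["B"] * 5

gap, rates = demographic_parity_gap(decisions, groups)
print(rates)          # {'A': 0.8, 'B': 0.2}
print(f"{gap:.1f}")   # 0.6 -- the kind of disparity a bounty report could flag
```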

While the paper notes that bounties alone cannot ensure that a system is safe, secure, or fair – “some system properties can be difficult to discover even with bounties, and the bounty hunting community might be too small to create strong assurances”, it says – it asserts that bounties might increase the amount of scrutiny applied to AI systems.

It encourages developers to pilot bias and safety bounties for AI systems. In establishing a bounty programme, the paper recommends they consider setting compensation rates based on the severity of issues discovered; determining processes for soliciting and evaluating bounty submissions; and developing processes for reporting and fixing issues discovered via bounties.
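
As a rough illustration of those three elements, the sketch below models severity-tiered compensation and a simple submission-triage flow. The severity tiers, payout figures and statuses are hypothetical assumptions; the paper recommends the structure without specifying rates or workflows.

```python
# A minimal sketch of a bias-bounty programme's triage logic.
# Severity tiers, payout amounts and statuses are hypothetical
# assumptions; the paper names the elements (compensation by severity,
# submission evaluation, reporting/fixing) without prescribing values.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = "low"        # e.g. minor disparity in a non-critical feature
    MEDIUM = "medium"  # e.g. measurable disparity in model outputs
    HIGH = "high"      # e.g. systematic harm to a protected group


# Hypothetical compensation rates keyed to severity of the issue found
PAYOUTS = {Severity.LOW: 500, Severity.MEDIUM: 2_500, Severity.HIGH: 10_000}


@dataclass
class BountySubmission:
    reporter: str
    description: str
    severity: Severity
    status: str = "received"  # received -> validated -> fixed -> paid

    def evaluate(self, reproduced: bool) -> int:
        """Validate the report; return the payout owed (0 if rejected)."""
        if not reproduced:
            self.status = "rejected"
            return 0
        self.status = "validated"
        return PAYOUTS[self.severity]


# Example: triaging a single hypothetical report
report = BountySubmission(
    reporter="external researcher",
    description="Approval-rate gap of 0.6 between demographic groups",
    severity=Severity.HIGH,
)
print(report.evaluate(reproduced=True))  # 10000
```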

Auditing and computing power

The paper also recommends that third-party auditing be embraced to complement government regulation of AI; that governments establish clear guidance regarding how to make safety-critical AI systems fully auditable; and that they work with academia and industry to develop audit trail requirements.

One way to connect third-party auditing with government policies, and to secure funding, is via a regulatory market, the paper suggests. Under this model, government would “create or support private sector entities or other organisations that compete in order to design and implement the precise technical oversight required to achieve [requisite] outcomes”.

The paper also proposes that government take action to level the playing field when it comes to computing power – enabling academics to verify claims made by AI developers. Substantially increasing government funding of computing power resources for researchers in academia, it says, would help to increase scrutiny of commercial models; provide open-source alternatives to commercial AI systems; and enable the deployment of AI systems to test AI. “Governments could also build their own computer infrastructures for this purpose,” it adds.

Governments around the world, including those of Canada, Australia, New Zealand, the US and the UK, are developing ethical AI frameworks. Meanwhile, the EU unveiled proposals earlier this year to promote the ethical, trustworthy and secure development of AI.

Tech executives including Elon Musk, the billionaire entrepreneur behind companies such as Tesla, SpaceX and Neuralink, and Sundar Pichai, the head of Google and parent company Alphabet, have urged governments to implement solid regulatory measures to protect the public from the potential harms of AI.

