New report calls for New Zealand AI watchdog

By Natalie Leal on 04/06/2019
Watchtower: New report by researchers at the University of Otago urges New Zealand government to create an agency monitoring public sector AI (Image courtesy: Benchill).

A new report has called for the creation of a regulatory body to oversee the use of Artificial Intelligence (AI) by the New Zealand government.

Researchers from the University of Otago who worked on the Law Foundation’s ‘Artificial Intelligence and Law in New Zealand Project’ argue that a watchdog is required to protect the public from risks such as bias in the operation of predictive algorithms deployed by the government. 

The authors write: “The increasing use of these tools, and their increasing power and complexity, presents a range of concerns and opportunities. The primary concerns around the use of predictive algorithms in the public sector relate to accuracy, human control, transparency, bias and privacy.”

A new body for artificial minds

The authors believe that “some form of ‘top-down’ scrutiny is likely to be required if the benefits of predictive algorithms are to be maximised, and their risks avoided or minimised. To that effect, we have proposed the creation of an independent regulatory agency.”

The report goes on to propose several roles for a new oversight agency, including producing best practice guidelines; maintaining a register of algorithms used in government; producing an annual public report on AI deployment; and conducting ongoing monitoring on the effects of these tools.

The authors point to similar efforts in the UK and Australia to develop parliamentary scrutiny or launch national AI watchdogs. However, they write, “at this time we offer no detailed proposal as to the form it should take. At present, there are very few international examples from which to learn, and those which exist (such as the UK’s CDEI) are in very early stages.”

How bias sneaks in

Commenting on the report, Associate Professor David Parry, Head of the Computer Science Department at the Auckland University of Technology, said: “Unfortunately most decision-makers have very little understanding of how these algorithms work or what the results actually mean. Bias is caused by data selection, the right to opt-out of data collection, existing bias in decision making and inappropriate choice of algorithm.”

Dr Amy Fletcher, Associate Professor of Political Science and International Relations at New Zealand’s University of Canterbury, said: “As we become an ‘algorithmic society’, increasingly reliant upon Big Data, machine learning, and social media platforms, it is crucial that citizens understand both the possibilities and limitations of these tools.”

Global efforts

There is growing momentum across the world behind efforts to consider the ethics of AI use in public services. The French and Canadian governments are setting up a new International Panel on Artificial Intelligence (IPAI), and last week 42 countries signed up to five new OECD international principles guiding the use of artificial intelligence.

The non-binding OECD principles state that AI should be beneficial to citizens and the planet; should include appropriate safeguards to ensure a fair and just society; should be used responsibly, safely and transparently; and that those working with AI systems should be held accountable.

OECD Secretary-General Angel Gurría said: “These principles will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all.”

About Natalie Leal

Natalie Leal is an NCTJ qualified journalist based in the UK. She holds a BSc and Master's degree in Social Anthropology and writes about society, poverty, politics, welfare reform, innovation and sustainable business. Her work has appeared in The Guardian, Positive News, The Brighton Argus, UCAS, Welfare Weekly, Bdaily News and more.
