Human rights and AI: interesting insights from Australia’s commission

By Josh Lowe on 11/06/2021 | Updated on 11/06/2021
The report looks at how technological advances in areas like facial recognition and AI can be balanced with protecting human rights. Credit: PhotoMIX Company/Pexels

The conundrum is one that many governments face: how do you make the most of technological advances in areas such as artificial intelligence (AI) while protecting people’s rights? This applies to government as both a user of the tech and a regulator with a mandate to protect the public.

Australia’s Human Rights Commission recently undertook an exercise to consider this very question. Its final report, Human Rights and Technology, includes some 38 recommendations – from establishing an AI Safety Commissioner to introducing legislation so that a person is notified when a company uses AI in a decision that affects them.

We have rounded up some of the report’s recommendations for governments on how to ensure that greater use of AI-informed decision-making does not result in a human rights disaster.

Supporting regulation

A range of recommendations in the report relate to improving the regulatory landscape around AI technology.

The report particularly singles out facial recognition and other biometric technology. It recommends legislation, developed in consultation with experts, to explicitly regulate the use of such technology in contexts like policing and law enforcement where there is “a high risk to human rights”.

More generally, the report calls for the establishment of an independent, statutory office of an AI Safety Commissioner. This body would “work with regulators to build their technical capacity regarding the development and use of AI”.

The AI Safety Commissioner would also “monitor and investigate developments and trends in the use of AI, especially in areas of particular human rights risk”, give independent advice to policy-makers and issue guidance on compliance.

Alongside this, the report notes that the AI Safety Commissioner should advise government on “ways to incentivise… good practice [in the private sector] through the use of voluntary standards, certification schemes and government procurement rules”.

Explaining and involving

Several of the report’s recommendations focus on people who might be affected by AI. It calls for more public involvement in decisions about how AI should be used, and more transparency in indicating when a member of the public is affected by an AI-assisted decision.

For example, the report suggests legislation be introduced that would require any department or agency to complete a human rights impact assessment (HRIA) before an AI-informed decision-making system is used to make any administrative decision. Part of this HRIA should be a “public consultation focusing on those most likely to be affected”, the report says.

The report also notes that governments should encourage companies and other organisations to complete an HRIA before developing any AI-informed decision-making tools. As part of the recommendations, the authors suggest that the government appoint a body, such as the AI Safety Commissioner, to build a tool that helps those in the private sector complete the assessments.

In addition, the report recommends legislation “to require that any affected individual is notified where artificial intelligence is materially used in making an administrative decision” in government. There should also be equivalent laws binding private sector users of AI to do the same.

The report also says: “The Australian Government should not make administrative decisions, including through the use of automation or artificial intelligence, if the decision maker cannot generate reasons or a technical explanation for an affected person”.

Improving capacity

Other recommendations suggest that the Australian government improve its capacity for working ethically with AI-informed decision-making tools.

The government should “convene a multi-disciplinary taskforce on AI-informed decision making, led by an independent body, such as the AI Safety Commissioner”, the report says. Responsibilities should include promoting the use of “human rights by design” in AI.

In keeping with the theme of transparency, the report also recommends that centres of expertise, such as the Australian Research Council Centre of Excellence for Automated Decision-Making and Society, “should prioritise research on the ‘explainability’ of AI-informed decision making”.

About Josh Lowe
