Australia sets out proposals to protect public from dangers of emerging tech

By Mia Hunt on 23/12/2019 | Updated on 24/09/2020

Australia’s human rights commissioner has unveiled draft proposals – including drawing up a national strategy on new and emerging technologies, reforming laws, and commissioning an independent body to oversee ethical frameworks – in a bid to protect the public from the risks associated with AI and other technologies.

The proposals are included in a 232-page discussion paper published by the Australian Human Rights Commission on 16 December.

In a tweet announcing the launch of the paper, the country’s human rights commissioner, Edward Santow, described it as a “call to action to modernise our human rights protections in the new era of AI and emerging tech”. He said in an accompanying video that it aims “to put humans and human rights at the centre of how new technology is developed and used”.

In the report’s introduction, Santow wrote that “new technologies are changing our lives profoundly – sometimes for the better, sometimes not”, and that artificial intelligence (AI) is sometimes being used in ways that unfairly disadvantage people on the basis of their race, age, gender and other characteristics. “The risks affect us all, but not equally,” he said.

The discussion paper lists a set of 29 proposals aimed at tackling this and other risks posed by AI and other new technologies.

They include establishing a national strategy on new and emerging technologies that would “help us seize” economic and other opportunities “while guarding against the very real threats to equality and human rights”, and creating a new AI safety commissioner to “monitor the use of AI and coordinate and build capacity among regulators and other key bodies”.

Cost-benefit analysis

The paper also proposes that each use of AI by government should be accompanied by a cost-benefit analysis and public consultation before it is brought in, and that once a system is in place, people should be able to have an AI-led decision explained to them in a non-technical way.

Other proposals include introducing a moratorium on potentially harmful uses of facial recognition in Australia until effective human rights safeguards are enshrined in law; introducing legislation that creates a rebuttable presumption that the person who deploys an AI-informed decision-making system is legally liable for its use; and conducting a comprehensive review, overseen by a new or existing body, to identify uses of AI in government decision-making.

In addition, it proposes that professional accreditation bodies for engineering, science and technology, as well as education providers, consider introducing mandatory training and modules on ‘human rights by design’.

Protecting the vulnerable

Other proposals focus on protecting vulnerable people in society.

Santow said AI is revolutionising the Australian economy and society, but that “too often new technology is ‘beta-tested’ on some of the most vulnerable members of our community” and can cause serious harm, especially in high-stakes areas such as social security, policing and recruitment.

During the consultation that informed the paper, many people told him they were “starting to realise” that their personal information could be used against them, he said.

He gave the example of smart home assistants. While they can be used to improve the lives of people with disabilities, he said, “inaccessible technology can lock people with disability out of everything from education, to government services and even a job”.

He said laws apply to the use of AI “as they do in every other context,” but that the challenge is that “AI can cause old problems – like unlawful discrimination – to appear in new forms”.

“The Commission makes a number of proposals to ensure that products and services, especially those that use digital communications technologies, are designed inclusively,” he said.

Robodebt scandal

Santow argued that, following the ‘robodebt’ scandal, now is the time to set new rules on how emerging technologies are used.

The flawed debt-calculation system – used by Centrelink, the agency responsible for making social security payments – calculated people’s income over short periods, assumed that their earnings had remained steady throughout the year, and used that data to work out whether they had been overpaid benefits. The assumption was often false – but claimants were chased for these ‘debts’, and the onus was placed on them to prove that the claim had no merit.
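The flaw described above can be illustrated with a short sketch. Everything here is hypothetical: the function names, the fortnightly reporting period, the income cutoff, and the dollar figures are illustrative assumptions, not Centrelink's actual rules or code. The sketch shows only the general logic the article describes: annualising one short period's earnings and raising a 'debt' when that inflated figure exceeds a threshold.

```python
# Hypothetical illustration of the 'income averaging' flaw described above.
# All figures and names are invented for demonstration purposes.

FORTNIGHTS_PER_YEAR = 26

def annualised_income(fortnightly_earnings: float) -> float:
    """Assume one fortnight's earnings stayed steady all year."""
    return fortnightly_earnings * FORTNIGHTS_PER_YEAR

def alleged_overpayment(fortnightly_earnings: float,
                        benefit_cutoff: float,
                        benefits_paid: float) -> float:
    """Raise a 'debt' whenever the annualised figure exceeds the cutoff,
    regardless of what the person actually earned over the year."""
    if annualised_income(fortnightly_earnings) > benefit_cutoff:
        return benefits_paid  # full repayment demanded
    return 0.0

# A casual worker earns $2,000 in one busy fortnight, but their real
# annual income sits below a (hypothetical) $30,000 eligibility cutoff.
# Annualised, that one fortnight becomes 2000 * 26 = $52,000, so the
# system wrongly demands repayment of $5,000 in benefits.
debt = alleged_overpayment(2000.0, 30000.0, 5000.0)
```

The key point the sketch makes concrete is that the 'debt' depends entirely on the steady-earnings assumption: for anyone with irregular income, a single high-earning fortnight is enough to trigger a demand, which is why the onus then falling on claimants to disprove the claim proved so damaging.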

The Australian federal government conceded in a landmark court case last month that failings in the robodebt programme led it to unlawfully demand that citizens repay non-existent debts.

In the discussion paper, Santow wrote that law “cannot be the only answer”, pointing to a series of measures set out in the paper designed to help industry, researchers, civil society and government “to work towards our collective goal of human-centred AI”.

The Australian Human Rights Commission said the discussion paper “sets out a template for change” but that it is “written in pencil rather than ink”, and called on members of the public to contribute their views. This feedback, it said, would shape the final report, which is due for release in 2020.

About Mia Hunt

Mia is a journalist and editor with a background in covering commercial property, having been market reports and supplements editor at trade title Property Week and deputy editor of Shopping Centre magazine, now known as Retail Destination. She has also undertaken freelance work for several publications including the preview magazine of international trade show, MAPIC, and TES Global (formerly the Times Educational Supplement) and has produced a white paper on energy efficiency in business for E.ON. Between 2014 and 2016, she was a member of the Revo Customer Experience Committee and an ACE Awards judge. Mia graduated from Kingston University with a first-class degree in journalism and was part of the team that produced The River newspaper, which won Publication of the Year at the Guardian Student Media Awards in 2010.
