Australia issues guidance on how public servants can use AI in government

By Jack Aldane on 11/07/2023 | Updated on 11/07/2023

Australia’s Digital Transformation Agency (DTA) has issued government-wide interim guidance to public servants concerning the correct use of generative AI.

Its recommendations mark the government’s first bid to set parameters around the use of tools such as ChatGPT, Google Bard and Bing AI.

“Due to the rapid evolution of technology, there is a growing demand for guidance when government staff members assess potential risks involved in its use,” said Chris Fechner, the DTA’s chief executive officer.

The DTA’s rules aim to “support the responsible and safe use of technology”, “minimise harm” and “achieve safer, more reliable and fairer outcomes for all Australians”. They also try to “reduce the risk of negative impact on those affected by AI applications” and “enable the highest ethical standards when using AI”. Finally, they seek to ensure transparency as well as a basis on which to build “community trust in the use of emerging technology by government”.

The guidance advises public service leaders to monitor the use of these tools by first granting staff accounts permission to use them, and then logging that use in an official record.

“As you consider the potential uses of publicly available generative AI platforms like ChatGPT, Bard AI or Bing AI in your work, you should assess the potential risks and benefits for each use case,” the guidance said.

How public servants should use AI

The guidance provided three examples of practical use cases for generative AI where the risks involved can be managed under the provisions of the guidance itself.

These use cases include using the technology to check the development of a project plan, potentially using AI to help generate template slides for a presentation, and using AI to confirm technical requirements for a tender document.

In each case, the DTA has set out guidelines for the use, including refraining from entering any details of projects, or sensitive or classified information, into public AI platforms.

By contrast, the guidance said use cases that involve an unacceptable level of risk are tasks that include “large amounts of government data”, “classified, sensitive or confidential information”, “services [that] will be delivered, or decisions [that] will be made”, and finally, tasks involving “coding outputs [that] will be used in government systems”.

“We recommend agencies implement an enrolment mechanism to register and approve staff user accounts to access generative AI platforms. This should include appropriate approval processes through chief information security officers and/or chief information officers.”

The DTA also urged agencies to get staff to self-report any breaches resulting from the use of AI by “establish[ing] an avenue for staff to report any exceptions made to adhering to the guidance through your chief information security officer/chief information officer”.

Such breaches should be reported to the DTA periodically by email.

Read more: AI threatens two-thirds of civil service jobs, warns UK’s former government HR chief

Governments around the world explore AI rules

The DTA guidance is intended to be “iterative” and for government agencies to implement within their organisation. “APS staff should follow their agency’s policies and guidance on generative AI tools in the first instance,” it said.

The Australian guidance is the latest effort by a government to provide guidance for public and civil servants on how to use AI in their work.

Last month, the UK Cabinet Office released its own formal guidance on the use of generative AI tools to explore possible use cases and place limits where necessary.

Specifically, the guidance ruled out the use of generative AI to draft papers about policy changes.

“It is technically possible for one of these tools to write a paper regarding a change to an existing policy position. This is not an appropriate use of publicly available tools such as ChatGPT or Google Bard, because the new policy position would need to be entered into the tool first, which would contravene the point not to enter sensitive material,” it said.

It also prohibited using these tools to produce numerical analysis, though said that “it would be technically possible to use a publicly available tool to analyse a data set you are looking to present in a government paper”.

Read more: UK civil servants told to exercise caution around AI chatbot use

One in ten officials using AI in Canada

Public servants are already using AI in their work, according to Global Government Forum (GGF) research. In a GGF survey of 1,320 federal employees across the Canadian Public Service, more than 10% said they had used artificial intelligence tools such as ChatGPT in their work.

When asked if they have used AI tools like ChatGPT and Bard for work purposes, 11% of officials said they had – 8% sometimes and 3% often.

The survey also identified the areas where Canadian public servants are most positive about using AI.

Officials were keenest to use AI to process large amounts of data (61% were either excited or positive about the opportunity), and for real-time analysis and monitoring of public service delivery, for example traffic flow analysis, or improving healthcare services (48% either excited or positive).

However, public servants raised a number of strong concerns about the potential use of AI in some areas of public service delivery. Nearly half of officials (48%) said they were very concerned about the “accountability and responsibility for AI-based decisions and actions” in government. Over two-fifths of officials were concerned about both the potential over-reliance on AI leading to a lack of public service autonomy and decision-making capabilities (44%) and public servants’ lack of understanding and familiarity with AI hindering its use (41%).

Read more: One in ten Canadian public servants already using AI for work purposes


About Jack Aldane

Jack is a British journalist, cartoonist and podcaster. He graduated from Heythrop College London in 2009 with a BA in philosophy, before living and working in China for three years as a freelance reporter. After training in financial journalism at City University from 2013 to 2014, Jack worked at Bloomberg and Thomson Reuters before moving into editing magazines on global trade and development finance. Shortly after editing opinion writing for UnHerd, he joined the independent think tank ResPublica, where he led a media campaign to change the health and safety requirements around asbestos in UK public buildings. As host and producer of The Booking Club podcast – a conversation series featuring prominent authors and commentators at their favourite restaurants – Jack continues to engage today’s most distinguished thinkers on the biggest problems pertaining to ideology and power in the 21st century. He joined Global Government Forum as its Senior Staff Writer and Community Co-ordinator in 2021.
