Australia sets out standards for government AI alongside new collaboration tool

Australia’s Digital Transformation Agency has published an AI technical standard for the government, along with ‘GovAI’, a platform designed to foster collaborative use of the technology across departments.
The new technical standard – launched on 31 July – sets out technical requirements for AI systems across their full lifecycle, from initial design through to monitoring and decommissioning. The standard applies to in-house systems, those procured from the private sector, and pre-trained AI models and managed services.
Lucy Poole, the Digital Transformation Agency’s general manager of digital strategy, policy and performance, said DTA’s goal was to position Australia as “a global leader in the safe and responsible adoption of AI, without stifling adoption”.
She added that the standard had been designed “with public trust front of mind”.
It was developed with “extensive” research of international and domestic practices, and in consultation with the Australian Public Service (APS).
Poole said the standard “isn’t about adding more processes to its users” and that it was designed “to allow agencies to embed responsible AI practices into existing governance, risk and delivery frameworks”.
Read more: New Zealand launches its first national AI strategy
The lifecycle approach
The standard breaks down the lifecycle of an AI system into three parts – Discover, Operate and Retire – providing guidelines to help agencies use AI in a way that is “ethical, effective, and aligned with regulation” throughout.
The ‘Discover’ phase of the cycle covers the blueprint and design of a system, during which the standard advises agencies to assess “ethical risks, biases, fairness, government policies, human oversight and accountability structures”. This includes evaluating the quality, privacy standards, and security measures of the data used to build and train a system, as well as evaluating the “accuracy, reliability, and robustness” of the AI.
The standard also advises agencies to carry out “adversarial testing” to pinpoint underlying risks and “ensure compliance with guidelines”.
Read more: G7 launches AI innovation challenge to transform public services
The ‘Operate’ phase refers to the implementation of an AI system. The standard suggests that AI be embedded into the enterprise ecosystem, ensuring that it is compatible with a platform “based on user needs with safeguards against unintended outcomes”. It advises a focus on preventing “unauthorised access”, and on detecting “biases, data drift and unforeseen issues”.
The ‘Retire’ phase of the lifecycle describes the process of decommissioning an AI system in a controlled and compliant way when it is no longer needed.
“At every stage of the AI lifecycle, the standard helps agencies keep people at the forefront, whether that’s through human oversight, transparent decision-making or inclusive design,” Poole said.
She added that the “comprehensive” lifecycle approach, “combined with the flexibility to go above and beyond, complements the broader suite of AI resources available to the APS”.
Read more: Why public legitimacy for AI in the public sector isn’t just a ‘nice to have’
Harnessing the opportunities through collaboration
To coincide with the release of the AI technical standard, Australia’s minister for government services Katy Gallagher launched GovAI the following day.
GovAI aims to promote the use of AI in collaboration across agencies and features a sandbox for public servants to test AI processes, tools and training.
Gallagher said it would allow teams within government to identify productivity gains and advantages to service delivery.
“AI is increasingly becoming a feature of modern workplaces across Australia and the world, which is why the public service must be capable of harnessing the opportunities it provides while also maintaining public trust,” she said.
Read more: US federal government launches action plan to ‘win AI race’