OECD urges countries to gear up for ‘governing with AI’
From automating routine tasks to forecasting public health crises, governments around the world are already experimenting with artificial intelligence, research has revealed.
A new report and framework from the Organisation for Economic Co-operation and Development (OECD) aims to broaden understanding of responsible AI use in the public sector.
According to the OECD, there is a “global need for further knowledge sharing, exchange of good practices, and structured policy dialogue to understand the implications and steer a responsible use of AI in the public sector”.
“While the global debate on AI has tended to focus on governments’ role as regulators in shaping and responding to the application of AI, less attention has been paid to their responsibilities as users of AI,” the report states.
The report analyses 71 use cases of AI deployment across governments in 31 OECD member and accession countries, focusing on core government functions: internal operations, policy design, service delivery, and internal and external oversight.
It finds that about 70% of participating countries have used AI to enhance internal operations.
Operations, policy and oversight
The report highlights several examples of how governments are using AI. France has already started experimenting with a generative AI tool called ‘Albert’ to accelerate the daily tasks of public service workers. The Canadian government has begun using robotic process automation to “automate tedious tasks such as transferring information between systems, streamlining internal operations and increasing efficiencies of officers’ workflows”. In a bid to boost productivity, the UK government published a framework for using AI to aid the delivery of public projects and to support responsible experimentation.
In the field of policymaking, governments in countries such as South Korea have leveraged AI to forecast public health crises. The country’s Disease Control and Prevention Agency began using AI to “address… emerging infectious diseases” by analysing “medical… quarantine…and spatial data to develop policy responses” to these events. Finland’s AuroraAI programme also aims to identify “overly cumbersome” public services and help users navigate them to get support with key life events.
Several countries have used AI to counter various types of fraud. For example, the government of Spain has used AI to identify what the report calls “high-risk instances of potential fraud in grant and subsidies programmes”. Another example is Estonia’s Tax and Customs Board (MTA), which tested the capacity of AI systems to pick up on “incorrectly submitted VAT refund claims” as well as “companies or persons in need for inspection”.
Doing more with public support
The report says that if governments are to progress further with AI, it is essential to win public support and develop clear strategies.
Key actions, according to the report, include establishing a whole-of-government approach, embedding “participatory mechanisms” to empower citizens, rolling out data governance measures to ensure inputs remain trustworthy, and applying metrics and tools to scrutinise AI systems. Governments are also creating new institutions to ensure accountability, the OECD says.
The US federal government, for instance, created strict requirements for federal agencies to appoint Chief AI Officers. These officers are assigned “[responsibility for] coordinating the use of AI” across agencies, the report says. They are also tasked with setting up “AI Governance Boards, chaired by the deputy secretary or equivalent, to coordinate and govern the use of AI across the agency”.
The Australian government has also created an AI in Government Taskforce, jointly led by the Digital Transformation Agency and the Department of Industry, Science and Resources. The taskforce has been mandated to “develop guidelines and a governance approach on how to best enable the safe, ethical, and responsible use of AI in public service”. This includes measures to “improve risk management, skills and capability, technical use, and preparedness”.
AI framework for governments
The OECD has produced a framework to assist governments in the responsible use of AI. It is based on three policy questions and four policy measures. The three questions are what actions governments should take, who they should engage, and why these actions are necessary.
Policy measures include the extent to which governments are prepared to create an “enabling environment” for innovation in AI and to apply safe, human-centred guardrails to these innovations. They also include governments’ willingness to engage relevant stakeholders and to fulfil their goals of increasing productivity, responsiveness and accountability.
The OECD concluded that “more and better evidence of the impact of AI on governments will help ensure it is used for optimal impact”. It meanwhile urged governments and policy research experts to focus on “understanding, promoting and enabling the positive aspects of using AI, rather than only preventing the negative ones”, noting that “focusing mainly on risks might deter the deployment of high-benefit, low-risk uses of AI to improve public services”.
“Investments in capabilities and monitoring mechanisms are also acknowledged as critical tools for effectively deploying and overseeing the responsible use of AI”, says the report, adding that “there is a need for a more comprehensive, consistent and shared approach across public sectors”.