Oxford University launches AI and good governance commission

The Oxford Internet Institute has launched a new commission to advise world leaders on effective ways to use artificial intelligence (AI) and machine learning in public administration.
The Oxford Commission on AI and Good Governance (OxCAIGG) will bring together academics, technology experts and policymakers to analyse the AI implementation and procurement challenges faced by governments around the world and to set out best practice for policymakers and officials.
The commission’s goals include analysing the AI implementation challenges faced by democratic governments worldwide and identifying best practice for evaluating and managing the risks and benefits of using AI in public administration. It will also determine the policy guidelines agencies need in order to apply AI and machine learning in policy decisions, and make research, practice and policy recommendations to help government departments evaluate, procure and apply AI tools in public service.
To mark its launch, the commission has released a working paper, Four Principles for Integrating AI & Good Governance, which examines the use of AI by government agencies and outlines four significant challenges in AI development and application that must be overcome if AI is to be used as a “force for good” in government responses to the COVID-19 pandemic.
The paper underscores the urgent need for inclusive design, informed procurement, purposeful implementation, and persistent accountability in order to protect democracy. Issues raised in the paper include the need for training and specialised due diligence processes, the integration of automated decision-making into policymaking, inherent bias within training data sets, and the explainability of algorithms.
The commission will address these challenges in a series of reports over the coming months.
Professor Philip Howard, director of the Oxford Internet Institute, chair of OxCAIGG and the co-author of its inaugural working paper, said: “AI will have an important role to play in building our post-coronavirus world. The pandemic will certainly supercharge the pressure for widespread surveillance, data collection, and the use of AI to deliver more efficient public services. Innovative AI will need to be governed accordingly. Machine learning, coronavirus tracking apps, cross-platform data sets, and AI-driven public health research shouldn’t pose a risk to fundamental human rights and legal safeguards.
“The commission’s global agenda of research and policy conversation will focus on finding effective ways to help government officials evaluate, procure and apply AI tools for the benefit of public service.”