AI for all? Addressing the biases in automation

The development of artificial intelligence creates opportunities to automate public sector bureaucracy and improve the delivery of public services. However, concerns are growing that the use of AI algorithms could also perpetuate and exacerbate existing inequalities and disparities in society.
In particular, the United Nations Special Rapporteur on the rights of persons with disabilities, Gerard Quinn, has highlighted that artificial intelligence “hinges on the information and data provided to the machine”. When an artificial intelligence model is required to, for example, identify the best candidate for a job, it does so based on data about the candidates employers have deemed successful in the past – a process that his report said was “unlikely to account for the benefits of diverse candidates who do not conform to historical hiring norms, such as persons with disabilities”.
Data used to train artificial intelligence systems will also often be shaped by prior human decisions and value judgments. “If the human decisions that the data set represents are discriminatory, the artificial intelligence system will likely process new data in the same discriminatory fashion, thereby perpetuating the problem”, he added.
It is not just the UN that is raising concerns. The UK’s Equality and Human Rights Commission has announced that it is monitoring the use of artificial intelligence by public bodies to ensure the technologies do not discriminate against people, and the office of Australia’s merit protection commissioner has warned of ‘AI-assisted recruitment myths’, including the myth that all AI-assisted and automated tools are reliably unbiased.
This webinar looked at how governments can use artificial intelligence in hiring decisions and beyond in a way that reduces rather than exacerbates inequality.
Watch the webinar to find out:
- The risks and opportunities of using artificial intelligence in public services.
- The areas where AI can be best used in government – and where the risks are greatest.
- What governments can do to maximise the inclusive and equitable use of AI.
Panel
Jeremy Darot, Head of Artificial Intelligence, The Scottish Government

Jeremy is the Head of Artificial Intelligence at the Scottish Government. He is passionate about AI, particularly its potential to improve the lives of the people of Scotland, and in this role he is helping guide the country to become a leader in the development and use of trustworthy, ethical and inclusive AI. Jeremy’s experience spans more than 20 years across industry, academia and government, including aeronautics and astronautics at MIT, computational biology at the University of Cambridge and, most recently, the role of Head of Data Innovation at the Scottish Government.
Laura Carter, Senior Researcher, Public Sector Algorithms, Ada Lovelace Institute

Laura is a Senior Researcher and programme lead for Public sector use of data and algorithms at the Ada Lovelace Institute, and a PhD candidate in Human Rights Research Methods at the University of Essex where she researches gender stereotyping and discrimination in public sector data sharing. At the Ada Lovelace Institute she is currently working on an ethnography of data analytics in a UK local authority to be published in 2023.
Eduardo Ulises Moya Sanchez, Director of Artificial Intelligence, Jalisco State Government, Mexico

Ulises Moya is the Director of Artificial Intelligence at the General Coordination of Innovation in the State of Jalisco, Mexico – the first director of this area in public administration in Mexico. Some of his projects were selected by GPAI (2020) and the Global UNESCO IRCAI (top 10) for their responsible and ethical design. He holds a Ph.D. from CINVESTAV, a master’s degree in Medical Physics from UNAM and a bachelor’s degree in Physics from the University of Guadalajara, and is a level 1 member of CONACYT’s National System of Researchers. In 2019, he was awarded a Fulbright García-Robles grant to collaborate with the Quantitative Bioimaging Laboratory at the University of Texas at Dallas and the University of Texas Southwestern Medical Center. He has also collaborated on deep learning research and applications with the high-performance artificial intelligence group at the Barcelona Supercomputing Center.
Judith Peterka, Head, Artificial Intelligence Observatory, Policy Lab Digital, Work and Society, Federal Ministry of Labour and Social Affairs, Germany

Judith Peterka is head of the Observatory on Artificial Intelligence in Work and Society at the German Federal Ministry of Labour and Social Affairs. The AI Observatory aims to build the evidence base on the impact of AI on work and society and to shape policy development and the regulatory framework in this area. Its focus is on promoting the development and use of AI in a human-centred and responsible way.
Previously, she worked as an economic advisor in the UK civil service, including advising the UK Office for Artificial Intelligence. She has also worked in the OECD’s Education and Skills Directorate and as a Teach First maths teacher in a secondary school in East London.
Webinar chair: Richard Johnstone, Executive Editor, Global Government Forum

Richard Johnstone is the executive editor of Global Government Forum, where he helps to produce editorial analysis and insight for the title’s audience of public servants around the world. Before joining GGF, he spent nearly five years at UK-based title Civil Service World, latterly as acting editor, and has worked in public policy journalism throughout his career.