‘Across the public sector there is a sense of excitement but also worry, and both feelings are justified’: can governments ensure AI is equitable?

By Richard Johnstone on 07/08/2023 | Updated on 07/08/2023
Photo by Gerd Altmann via Pixabay

Concerns are growing about artificial intelligence and its ability to exacerbate inequalities. Global Government Forum convened a panel of public servants from around the world for a webinar on how governments can use AI in a way that is equitable for all

The development of artificial intelligence creates opportunities to automate public sector bureaucracy and improve the delivery of public services. However, concerns are growing that the use of AI algorithms could also perpetuate and exacerbate existing inequalities and disparities in society.

The warnings are proliferating. The United Nations’ Special Rapporteur on the rights of persons with disabilities, Gerard Quinn, said artificial intelligence “hinges on the information and data provided to the machine”: when an AI model is required, for example, to identify the best candidate for a job, it does so based on data about employees deemed successful in the past. However, Quinn said this was “unlikely to account for the benefits of diverse candidates who do not conform to historical hiring norms, such as persons with disabilities”.

The European Union is also raising concerns, with competition commissioner Margrethe Vestager warning that artificial intelligence could harm underprivileged citizens. Her concern is that people could be ‘judged’ on their race, gender or place of residence, for example, rather than being “seen as who they are”.

“If it’s a bank using [AI] to decide whether I can get a mortgage or not, or if it’s social services in your municipality, then you want to make sure that you’re not being discriminated [against] because of your gender or your colour or your postal code,” she said.

Read more: AI could deny vulnerable citizens public services, fears EU chief

To discuss this issue, Global Government Forum brought together a panel of public servants from around the world for a webinar on how governments could use AI in a way that was equitable and didn’t perpetuate existing biases.

Jeremy Darot, the head of artificial intelligence at the Scottish Government, said the wide availability of AI tools like ChatGPT had made this issue “urgent and concrete to a lot more people than ever before, including in government”.

There was a sense of excitement about the potential to use AI in the public sector to tackle bureaucracy and improve service delivery, Darot said. “Across the organisation and the wider public sector, there is really a sense of excitement, but also of worry, and I think both feelings are justified to some extent.”

Darot said the job of his central AI team in the Scottish Government was to “help translate this into an informed discussion and a coherent plan of action”.

Although the technology is evolving rapidly, AI faces many of the same issues that systems designed by humans have always faced, including decision-making bias.

“These individual prejudices have become entrenched in society as a whole, and government in particular, over millennia,” Darot said.  

“So this is not new. What’s new with AI is the risk of further entrenching those biases under a veneer of logic or objectivity and making it also a lot easier to discriminate on a large scale by automating decisions.”

Darot and his team in the Scottish Government are working to develop a plan for the use of AI in government. The team has developed and published an AI strategy for Scotland, built around the three pillars of trustworthiness, ethics and inclusivity, and has also created a Scottish AI register, which both provides public sector organisations with information on how to develop or procure an AI system and makes the development of AI for use in the public sector transparent.

“I think that transparency and dialogue really is absolutely essential to addressing bias,” he said. “What we’re doing in my team is to help test those tools and help develop best practice in this area.” There is also a Scottish AI playbook, intended as an open, practical guide to AI’s use in Scotland. Initially published as a wiki document, it aims, Darot said, to provide a one-stop source of information and best practice that everybody in the Scottish AI community can use and contribute to.
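The article does not say what each register entry records, but the broad idea is a public, structured record per AI system. As a rough sketch only, with entirely hypothetical fields and values (not the Scottish register’s actual schema):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIRegisterEntry:
    """One public-facing record in a hypothetical AI register."""
    system_name: str
    owning_organisation: str
    purpose: str                 # what the system is used for
    data_sources: list[str]      # datasets the model draws on
    human_oversight: str         # how people review its outputs
    bias_assessment: str         # summary of any fairness testing

# An entirely invented example entry
entry = AIRegisterEntry(
    system_name="Benefit claim triage assistant",
    owning_organisation="Example Council",
    purpose="Prioritise incoming claims for caseworker review",
    data_sources=["historical claims 2015-2022"],
    human_oversight="A caseworker approves every decision",
    bias_assessment="Outcome rates compared across protected groups",
)
print(json.dumps(asdict(entry), indent=2))
```

Publishing records like this is what makes the register a transparency tool: anyone can see which systems exist, who owns them, and what checks they have been through.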

‘By 2035, all jobs will be impacted by AI’

Judith Peterka, the head of the Artificial Intelligence Observatory in the Federal Ministry of Labour and Social Affairs in Germany, spoke about the development of the observatory, which is focused on understanding how AI is being used across the country.

The observatory forms part of the German government’s AI strategy, which is led by three departments – the Federal Ministry for Economic Affairs and Climate Action, the Federal Ministry of Education and Research, and the Federal Ministry of Labour and Social Affairs.

It is unusual for a labour ministry to have such a role, Peterka said, but it is vital because “we estimate that by 2035, all jobs will be impacted by AI”.

There is a lot of potential for AI to increase labour productivity and combat skill shortages, she said, but there are also risks around discrimination, and that existing inequalities in the labour market will be widened.

The observatory therefore analyses the impact of AI by monitoring how companies are using AI tools. It examines what happens when they are deployed, looking at metrics including labour productivity, worker satisfaction, and health and safety in the workplace, including through the use of randomised controlled trials.
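Peterka did not go into the mechanics of those trials, but the core analysis is a comparison of outcomes between workplaces randomly assigned to use an AI tool and those that were not. A minimal sketch with made-up numbers, assuming a simple two-group design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical outcome metric (e.g. a worker satisfaction score, 0-100)
# for workplaces randomly assigned to use an AI tool vs. not.
treated = rng.normal(loc=68, scale=10, size=40)   # used the tool
control = rng.normal(loc=64, scale=10, size=40)   # did not

# Difference in means, checked with a two-sample t-test; because
# assignment was random, the difference estimates the tool's effect.
effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"estimated effect: {effect:.1f} points (p = {p_value:.3f})")
```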

The observatory also promotes the responsible and human-centred use of AI through a network of 20 organisations working on AI in labour and public administration.

This group brings together government bodies such as the federal employment agency and the German federal pension insurance association to share information on how to use AI.

“These organisations are really big, and they process millions of applications and processes for German citizens every day,” Peterka said. “There’s a huge potential for the application of AI, for example, to make the services more citizen friendly, but also to speed them up. And there’s also, frankly, a problem with skill shortages in these organisations as well, so AI can play a role here.”

However, Peterka said care must be taken as the data involved in the work of these agencies is usually sensitive. “When people interact with the labour and social administration, it is in difficult circumstances in their life when they lose a job, or when they had an accident. So there is an increased responsibility to really process and apply AI in a human-centred and responsible way.

“This is why this network has been set up even before AI regulation is officially on the way. It is on the way at the European level, but even before that, we wanted to make sure that we adhere to standards here. The network has come together once a month for two years now to exchange experiences with the use of AI tools.”

This has led to the development of guidelines for the implementation and use of AI, and the observatory is funding AI projects to test them out.

One example Peterka discussed during the webinar was an AI project for the accident insurance fund for the construction industry. “They have identified that on contract construction sites, every third day a person is dying in Germany. This is despite there being a lot of data in Excel sheets about these construction sites, but there’s not enough members of staff to actually go to every single construction site and do an inspection.”

Using AI, the fund hopes to analyse the available accident data and target its inspections more precisely to improve safety.

“They can, on one hand, reduce costs and compensation, but more importantly, save lives,” Peterka concluded.
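The fund’s actual model is not described, but the shape of the problem is a familiar one: score each site’s accident risk from its records, then direct the limited pool of inspectors to the highest-risk sites. A hypothetical sketch with invented features and labels:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Invented site features, e.g. size, past incidents, days since
# last inspection (standardised). Real inputs are not disclosed.
X = rng.normal(size=(500, 3))
# Invented labels: 1 = a serious accident occurred at the site.
y = (X @ np.array([0.5, 1.2, 0.8]) + rng.normal(size=500) > 1).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Rank sites by predicted accident risk and inspect the top few,
# rather than trying to visit every site.
risk = model.predict_proba(X)[:, 1]
to_inspect = np.argsort(risk)[::-1][:20]
print("sites to prioritise:", to_inspect)
```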

‘The metrics are very good’: using AI to test for diabetic retinopathy

Eduardo Ulises Moya Sanchez, the director of artificial intelligence at the Jalisco State Government in Mexico, discussed the development of AI projects in Mexico, including an artificial intelligence based referral system in Jalisco for patients with diabetic retinopathy – a complication of diabetes caused by high blood sugar levels that damages the back of the eye and can cause blindness if untreated.

Jalisco is the Silicon Valley of Mexico, he said, and it has developed an AI-based system to screen for diabetic retinopathy.

This was needed because around 500,000 people in the state need the screening test annually, but there are only around 50 doctors who can provide it.

“For this task, we’ve implemented an artificial intelligence model in order to help the physician,” he said. Researchers trained a deep learning model on retinal scans to detect diabetic retinopathy and reduce the chance of vision loss for patients.

The system classifies the images into one of three levels, allowing patients to be referred on to the next stage of care.
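The webinar did not cover the model’s architecture. A three-level image classifier of this kind is often built by putting a three-way head on a standard convolutional network; the sketch below assumes that pattern, with the class names and every other detail invented:

```python
import torch
from torch import nn
from torchvision import models

# Hypothetical severity levels used for referral decisions;
# the article does not name the actual categories.
CLASSES = ["no referral needed", "routine referral", "urgent referral"]

# A standard CNN backbone with a three-way classification head.
# The real system's architecture is not described in the article.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

# One fake retinal image tensor: (batch, channels, height, width).
image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(image)
print("referral level:", CLASSES[logits.argmax(dim=1).item()])
```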

The model was trained only on publicly available images but has been tested on 1,000 local patients, which he said is important in making sure it can work in the real world.

The metrics of the model “are very good”, he said. “It’s very similar to a physician here in Mexico.”

Moya Sanchez and his colleagues have now shared details of the system in seven scientific papers in international journals and conferences, but he said it is “not a solved problem”. The governance around artificial intelligence in the health system needs to be improved, and researchers are now evaluating possible bias in the system.

‘Data analytics systems should be explainable in a way that’s accessible to everybody’

The final speaker was Laura Carter, a senior researcher in public sector algorithms at the Ada Lovelace Institute, an independent research institute based in London and Brussels.

She shared the high-level findings from an ethnographic study looking at how a local authority in England used a data analytics system to bring together and synthesise data from many different council systems. The research was conducted in 2020 but has not yet been published, so Carter was unable to name the local authority concerned.

She was able to highlight a series of findings around fairness, equalities, and discrimination in the use of data analytics systems. In particular, she highlighted that frontline workers were not able to trust the output of data analytics systems if they were not clear about how those outputs were generated.

“We found that where algorithmic outputs were explainable, this could be beneficial to frontline worker service delivery,” she said. But they were not being used “if these systems were not explainable or not explainable in a way that’s accessible to everybody involved in the system – not just the technical staff or management, but also to social workers, service providers, and the people who are seeking services”.
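The study’s system is not public, but “explainable in a way that’s accessible to everybody” usually means an output a social worker can read, not just exposed model internals. As a deliberately simple, hypothetical illustration, a linear model’s score can be unpacked into per-feature contributions stated in plain language:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Invented case features a frontline worker would recognise.
features = ["missed appointments", "open referrals", "months since contact"]
X = rng.normal(size=(300, 3))
y = (X @ np.array([1.0, 0.8, -0.5]) + rng.normal(size=300) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one case: each feature's contribution to the score,
# largest first, phrased as something a non-specialist can read.
case = X[0]
contributions = model.coef_[0] * case
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    direction = "raises" if c > 0 else "lowers"
    print(f"{name}: {direction} the priority score by {abs(c):.2f}")
```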

“We’re recommending that data analytics systems should be explainable in a way that’s accessible to everybody, and we’re also recommending that local authorities should complete an algorithmic transparency report” that covers their use of AI, Carter said – a recommendation that overlaps with Darot’s comments on the Scottish AI register.

Carter also said the research concluded that there was a need for stronger procurement guidance and the development of success criteria for the use of data analytics and algorithmic systems.

“We found that when implementing what can be quite new technology, failing to set success criteria in advance makes it very difficult to measure how much these data analytics systems are improving services, and to measure how much they’re making the lives of social workers and other frontline workers easier, and how much they’re improving outcomes for residents.”

Carter also called for more guidance for local authorities on how to procure AI systems.

“We’d love to see the Crown Commercial Service, which is the government department that manages procurement for public sector organisations, develop things like model contract clauses, and perhaps a model for an algorithmic impact assessment standard to help local authorities and other public sector organisations to procure, implement, and deploy data analytics systems in a way that works for their services and works for their residents.”

Following the presentations, the panel took questions from the webinar audience covering topics including:

  • Are AI developers, who are often private sector firms, engaged in helping to show how AI can be used equitably?
  • How can governments ensure that AI systems are transparent and accountable to the public?
  • Is anonymising data sufficient for feeding public sector data into AI APIs?

To learn all this and more, you can watch the full ‘AI for all? Addressing the biases in automation’ webinar on our dedicated events page. The webinar, hosted by Global Government Forum, was held on 27 June 2023.

About Richard Johnstone

Richard Johnstone is the executive editor of Global Government Forum, where he helps to produce editorial analysis and insight for the title’s audience of public servants around the world. Before joining GGF, he spent nearly five years at UK-based title Civil Service World, latterly as acting editor, and has worked in public policy journalism throughout his career.
