Algorithm and blues: avoiding the traps in new tech

By Adam Green on 30/06/2021 | Updated on 01/07/2021
Fine tuning required: algorithms can bring huge benefits – but poor design can create injustices and reduce transparency.

Algorithms can be immensely valuable in public services; but as the UK’s ill-judged A-levels algorithm demonstrated, badly designed systems can damage citizens and governments alike. At a recent GGF webinar, experts from around the world explored how to build public confidence and protect the public interest as civil servants adopt the technology. Adam Green reports

When Keith Stagner’s T-Impact consultancy was hired by the UK’s National Health Service to develop an algorithm to match donor organs with patients, success was judged by how many lives the software saved. For government bodies, the use of new technology often comes with such high stakes.

This can lead to “an underlying fear or anxiety about deploying algorithms in the public sector, where there’s such an emphasis on getting an ethically good outcome,” according to Dr Mahlet Zimeta, head of public policy at the UK’s Open Data Institute. In other words, no one wants to be responsible for deploying a robot which chooses one person’s life over another.

Dr Mahlet (Milly) Zimeta, Head of Public Policy, Open Data Institute (ODI)

Stagner and Zimeta were speaking at a Global Government Forum webinar in June; there, public and private sector practitioners from around the world debated how to overcome these anxieties and find ways to use algorithms for the public’s benefit.

Explain the machines – or ditch them

Algorithms have been present in the work of government for several decades, making predefined decisions according to a fixed set of business rules. Some of these are innocuous: an algorithm might, for example, determine whether all the required fields in a form have been completed before allowing the user to proceed. Others directly affect the lives of citizens: even prior to the pandemic, the UK government used algorithms to adjust school pupils’ exam results downwards to combat ‘grade inflation’.
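
By way of illustration, a fixed-rule check of this kind might look like the minimal Python sketch below. The field names and function are invented, not drawn from any real government form; the point is that the outcome follows mechanically from predefined rules, so the decision is straightforward to explain.

```python
# A fixed-rule 'algorithm' of the kind long used in government systems:
# the user may proceed only if every required field is filled in.
# Field names here are invented for illustration.

REQUIRED_FIELDS = ["full_name", "date_of_birth", "postcode"]

def can_proceed(form_data: dict) -> tuple[bool, list[str]]:
    """Return (ok, missing): whether the user may proceed, and which
    required fields are still empty."""
    missing = [field for field in REQUIRED_FIELDS
               if not str(form_data.get(field) or "").strip()]
    return (not missing, missing)

ok, missing = can_proceed({"full_name": "A. Citizen", "postcode": " "})
print(ok, missing)  # False ['date_of_birth', 'postcode']
```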

In recent years, out-of-the-box artificial intelligence (AI) solutions such as chatbots have been added. These add a layer of language processing to interpret a citizen’s questions, but still follow fixed business rules – perhaps directing the citizen to relevant resources, or connecting them to a human. Because outcomes follow transparent and immutable rules, such basic AI technologies also hold relatively little fear for governments.
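
To show how thin that language layer can be, here is a toy Python sketch of a rules-based router (the keywords, URLs and wording are all invented): a simple matching step interprets the question, and fixed rules decide where to send the citizen, with a human adviser as the fallback.

```python
# Toy chatbot router: a simple language-processing layer (keyword
# matching) on top of fixed business rules. Keywords and URLs are
# invented for illustration.

ROUTES = [
    ({"passport", "travel"}, "https://example.gov/passports"),
    ({"tax", "refund"}, "https://example.gov/tax-help"),
]

def route_question(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    for keywords, url in ROUTES:
        if words & keywords:  # any keyword present in the question
            return f"This page may help: {url}"
    return "Connecting you to a human adviser..."  # fallback rule

print(route_question("How do I renew my passport?"))
# This page may help: https://example.gov/passports
```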

According to the webinar speakers, though, the elephant in the room is machine learning: the use of complex algorithms that ‘learn’ how to perform a task, evolving over time so that it can be difficult to understand how they’ve come to particular conclusions. As such algorithms are increasingly deployed across governments, the focus is on how to make their outcomes comprehensible to citizens – and, indeed, whether they should be used at all if explanation is impossible.

Dr Vik Pant, Chief Scientist and Chief Science Advisor, Natural Resources Canada

“Explainability is not something you can do in hindsight. You need to be thinking about it full steam ahead when you’re designing the systems, long before they will ever be deployed,” said Dr Vik Pant, chief scientist and chief science advisor at Natural Resources Canada.

Crucially, this does not mean simply releasing the underlying code and analysis into the public domain, he said, but “making it so that any reasonable person can understand it.”

What’s reasonable?

This ‘reasonable person’ test is the holy grail that governments across the world are pursuing.

One route, explained by Natalia Domagala, head of data ethics at the UK’s Cabinet Office, is slightly indirect. She described workshops that have taken place across the country, in which citizens are presented with potential deployments of complex algorithms in public life – for example, to shortlist candidates for interview for a public sector job.

Natalia Domagala, Head of Data Ethics, Central Digital and Data Office (CDDO), Cabinet Office, United Kingdom

The workshops gauge citizens’ level of comfort with these applications, and ask them to point out concerns they would have. Citizens are also asked about their experience with algorithmic decision-making, such as credit scoring. The idea is to understand citizens’ level of comfort with the technology and inform how any future deployments might be explained.

In New York, an online public forum named the Data Assembly has been launched, in which citizens both hear about and offer feedback on algorithms the city has deployed, according to Stefaan Verhulst, co-founder of the Governance Laboratory at New York University.

The UK’s small-group workshops and New York’s mass public meetings are different routes to a common goal: encouraging “the level of AI literacy that allows for this kind of democratic deliberation to happen,” he added.

In some cases the conclusion may be that an algorithm should not be pursued due to lack of public confidence, several participants argued. “We have to begin with assessing if complex algorithms are the right solution to the policy challenge that we are trying to solve,” Domagala said.

Welcome challenges

In the summer of 2020, with schools shut by the pandemic, the UK government decided to dramatically expand its use of algorithms in the exams system – creating algorithms that, drawing on data on students’, classes’ and teachers’ previous performance, allocated exam results to students who hadn’t been able to sit their tests.

In what has gone down as one of the most controversial applications of technology in British life, many students – often from deprived backgrounds – received grades far below those expected, while private school children did disproportionately well. As our interview with the former education permanent secretary Jonathan Slater revealed, design flaws in the algorithm had created multiple inequities. After a public outcry, the government had to withdraw the algorithm’s results, and accept grades predicted by teachers for each student instead.
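
To see how a design of this kind can produce such inequities, consider a deliberately simplified Python sketch – emphatically not the actual 2020 model, which was far more complex – in which grades are allocated by ranking a school’s students and drawing from a quota based on the school’s historical results:

```python
# Deliberately simplified illustration (NOT the actual 2020 model) of
# rank-based grade allocation from a school's historical results.
# All names and data are invented.

def allocate_grades(students_best_first: list[str],
                    historical_grades: dict[str, int]) -> dict[str, str]:
    """Hand out a fixed quota of grades, derived from the school's
    past results, to students in rank order."""
    quota = [grade for grade, count in historical_grades.items()
             for _ in range(count)]  # e.g. ['B', 'C', 'C']
    return dict(zip(students_best_first, quota))

# A strong student at a historically weak school cannot be awarded
# a grade the school has never produced, however able they are.
print(allocate_grades(["Asha", "Ben", "Cleo"], {"B": 1, "C": 2}))
# {'Asha': 'B', 'Ben': 'C', 'Cleo': 'C'}
```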

But a story that many will see as a warning to be even more cautious about using algorithms in government arguably had a positive outcome. Technology was deployed, the public was exposed to detailed explanations of it and, when they were not satisfied, it was withdrawn.

“Accountability comes from being able to be challenged, being held to account on the effectiveness of the decision-making,” Dr Zimeta pointed out. In this case, the public definitely had their say.

Gently now

It is also by no means inevitable that all deployments will end in such bruising public encounters. Technology can be introduced incrementally, so that both governments and citizens develop a level of trust.

“You want to build confidence in the technology before you go and solve some really big and complex problem,” said Stagner, describing how T-Impact advises its government clients to approach projects.

Keith Stagner, CEO, T-Impact

Once results emerge from an algorithm, data scientists should also work closely with operational staff to understand whether those results match expectations. If they do not, technologists should investigate why before accepting and releasing them.

“You need to be vetting your results with domain experts – with the subject matter experts who have a deep knowledge and understanding about how these models are probably working in the real world,” Dr Pant said.

Or as Domagala put it: “Once the algorithm is in place, that’s where the real work starts. Evaluation is key.”

City, Country or Company?

Canada and the UK are two of the countries furthest advanced in the deployment of algorithms in government, yet Domagala from the UK’s Cabinet Office pointed out that her government does not have a formal definition of an algorithm. Asked whether government has a list of the algorithms operating across its departments, she replied: “I wish there was, because that would most certainly make my job much easier!”

This is a sign that regulation and governance are still emerging even in countries paying the closest attention. Given the speed of innovation and change in the field, this is no surprise. Meanwhile, city administrations – more flexible, and able to experiment at a smaller scale – are forging ahead, said Verhulst.

Stefaan Verhulst, Co-Founder and Chief Research and Development Officer, The Governance Laboratory (The GovLab)

“Cities have become the laboratories of innovation,” he said. And this makes sense, he argued: residents of cities are better able to engage with and understand technology deployed at a local level. “In cities, citizens have a direct kind of experience of how AI is being used. It’s more direct.”

But the increasing use of experimental algorithms at city level also comes with risks. One is that reality on the ground will outpace national governments’ ability to regulate, leading to a patchwork of different standards and regulations emerging at the city level. Another is inequality: cities with large tax bases can experiment with sophisticated algorithms, while poorer cities are left behind.

A third, related risk is that of “corporate capture”, said Verhulst: some large tech companies offer inducements to city governments in return for permission to experiment with services in a particular location. The city may receive a cash injection, or early rollout of a specific service – but it will also be exposed to risks, while the data generated may belong to the private company.

An ethical digital ecosystem

Dr Zimeta argued that, with big tech companies increasingly under scrutiny for their use of data, governments should be asking basic ethical questions about the use of algorithms, rather than simply viewing them as a means to cost savings and efficiency. “Could there be an obligation for the public sector to ensure that data is used for the public interest?” she asked.

In an ideal world, Zimeta said, governments’ creation of standards and operating protocols could provide a framework which private sector companies chose to adhere to: “There is an opportunity for the public sector to show leadership in its use of algorithms, and use that as a tool to influence or set public expectations for the private sector and civil society.”

Domagala argued that all parties will also benefit if governments succeed in making algorithms more transparent and explicable. “Data scientists could launch ambitious AI and data projects, knowing unintended consequences will be mitigated early on,” she said. “The public will have an opportunity to gain an understanding of how the algorithm works and interact with the system.”

A badly designed algorithm can do serious damage to public trust in governments’ ability to deploy advanced technologies. But a well-designed system of regulation and standards could minimise the number of badly designed algorithms, both improving the outcomes of individual deployments and, ultimately, building public confidence.

As long as governments have existed, they’ve struggled to keep up with the pace of change in the societies they oversee. They may already have lost the race with social media and big tech; but it’s not too late for them to get ahead of the game on algorithms.

GGF’s Deploying algorithms in government webinar was held on 8 June, with the support of T-Impact Ltd. You can watch the whole webinar via our event page.
