Tackling the robot bigots: how to implement Artificial Intelligence intelligently

By Matt Ross on 12/03/2020

The tech industry is excited about artificial intelligence. But in government, ‘learning’ algorithms can create big problems – producing discriminatory decisions, and threatening transparency and accountability. At the Government Digital Summit, leaders from 16 countries explored the risks inherent in AI – and mapped out a cautious path forwards. Matt Ross reports

“One of the things that constantly surprises me – and maybe it shouldn’t – is how much the public sector is willing to ‘lean in’ around emerging technologies,” said Chris Hayman, praising public servants’ enthusiasm for “experimenting and trying new things.”

And when public bodies introduce new digital systems, they can get impressive results. Hayman, who is Director of Public Sector for Amazon Web Services’ (AWS’s) operations in the UK and Ireland, cited work at the city-regional agency Transport for London: its decision to stream real-time data via ‘APIs’, he said, has allowed businesses to create slick travel advice apps; researchers to improve safety by identifying patterns in road accidents; and utility providers to predict the impact of planned roadworks.
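Transport for London’s open-data approach works by publishing live feeds that anyone can consume programmatically. As a minimal sketch of what that looks like in practice, the snippet below parses a payload shaped like TfL’s public line-status feed (served in reality from api.tfl.gov.uk); the field names follow the published Unified API, but the sample data and helper function are illustrative assumptions, not TfL’s own code.

```python
import json

# Sample payload shaped like TfL's open Line Status feed (illustrative data).
sample_payload = json.dumps([
    {"id": "victoria", "name": "Victoria",
     "lineStatuses": [{"statusSeverityDescription": "Good Service"}]},
    {"id": "central", "name": "Central",
     "lineStatuses": [{"statusSeverityDescription": "Minor Delays"}]},
])

def summarise_line_status(payload: str) -> dict:
    """Map each line name to its current status description."""
    return {
        line["name"]: line["lineStatuses"][0]["statusSeverityDescription"]
        for line in json.loads(payload)
    }

print(summarise_line_status(sample_payload))
# {'Victoria': 'Good Service', 'Central': 'Minor Delays'}
```

Because the feed is just structured JSON over HTTP, a travel app, a road-safety researcher and a utility planner can all build on the same few lines of parsing code – which is precisely the multiplier effect Hayman described.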

There are huge opportunities to improve service delivery using technologies such as artificial intelligence, robotic process automation and analytics, Hayman told his audience – who comprised 27 national and departmental digital leaders from 16 countries, gathered in London for GGF’s first Digital Summit. And he argued that introducing new technologies can also improve staff engagement: after AWS automated the handling of routine tasks at a public sector call centre, he said, “the call handlers were really happy, because they were only working on meaningful calls.”

Chris Hayman says adopting digital tech requires culture change and a willingness for organisations to “disrupt themselves”

But at the cutting edge, there’s always a risk of getting hurt: sharp tools need expert handling. And in fields such as project management, budgeting and governance, many traditional civil service working practices sit awkwardly with the techniques required to get the best out of digital technologies. Civil servants are, for example, very reluctant to risk wasting taxpayers’ money – but digital projects are best pursued by experimentation and iterative development; and that requires an acceptance that some of these experiments will fail.

Adopting innovation

Adopting digital tech requires culture change and a willingness for organisations to “disrupt themselves”, said Hayman, citing some of Amazon’s policy decisions. In allowing third party suppliers onto its platform, for example, the company was arguably “competing against ourselves” – but the reform ultimately helped grow the business. And to encourage experimentation, Amazon has delegated control of projects to delivery teams of six to eight people. “These teams have full autonomy, and they own the service from end to end,” he said. “There are no gatekeepers; nobody is signing off change requests.”

To avoid mission drift – when the focus on a project’s aims is lost during its development – Amazon asks team leaders to write their launch press release before they’ve written a single line of code, Hayman said. And to maintain service quality, it’s introduced a virtual version of the ‘Andon Cord’ central to Toyota’s ‘Lean’ manufacturing techniques. This allows any staff member to halt delivery of a service, he explained: “I have nothing to do with retail, but if I see a product description that’s misleading, I can pull it from the retail site.”

These reforms have obvious implications for project management – and Alwin Magimay, Chief Transformation Officer at the Project Management Institute (PMI), noted that the non-profit organisation has been “undergoing its own transformation” as it adapts to the demands of digital tech. PMI research has, he said, found that only a quarter of transformation projects produced tangible benefits in line with their aims. “So we’ve been thinking long and hard about why these transformations are so challenging.”

Working with academics and businesses, Magimay explained, PMI is developing a “transformation framework” to help guide organisations through digital programmes – building new operating systems, data insights and staff engagement around a “north star” headline goal. And it’s working on ways to combine traditional ‘Waterfall’ project management with the newer ‘Agile’ techniques favoured in the digital world – including the creation of an “enterprise or hybrid Agile”. Elements of Waterfall can be useful in the late stages of a project, he said: when helping to set up an online bank, “we built the entire bank using Agile methods until we hit the regulators and the internal audit department – and then we had to go through a Waterfall process.”

Yet civil servants shouldn’t be scared of pursuing pure Agile principles, argued Helen Mott, Deputy Director for Digital at the UK’s Ministry of Justice: “We’ve used Agile approaches to deliver some really huge programmes of work, even in a very heavily regulated space – often by embedding lawyers and policy team members in the team,” she said.

Learning about the machines

Some of the challenges around adopting emerging technologies in government, however, cannot be resolved simply by developing the right cultures and project management techniques. Machine learning (ML) provides a good example: intrinsic to the technology’s nature are new challenges around accountability, regulation, transparency and equity.

In some fields of the public sector, for example, regulators are charged with examining new tools and services – but as one delegate pointed out, a regulator’s approval can quickly become outdated if an ML algorithm evolves over time. “Our regulatory methods are ‘point in time’: we stamp it as approved and throw it out into the system,” they said. “But as soon as it’s out of the door, it changes.” And as ML systems adapt themselves, their decision-making can become opaque to their operators – posing challenges to systems of democratic accountability. One ML system, another delegate noted, has ‘learned’ how to identify subjects’ genders: “We don’t know how it’s doing that,” he commented. “There are things going on that we don’t understand.”

As Magimay pointed out, ML systems can also ‘learn’ to discriminate against particular groups if there are biases built into the datasets used to ‘train’ them. When PMI built an algorithm to aid the recruitment of engineering project managers, he recalled, “it became gender-biased. It looked for correlation, and correlated that most engineers are men so therefore they must make better engineers. But causation is completely different!”

PMI corrected for that error, he continued, “but then another proxy appeared; it’s quite a knotty problem.” So where training datasets include evidence of discriminatory outcomes – perhaps as a result of systemic or staff biases – the algorithm may replicate those inequities. And unfortunately, said Edward Hartwig, Deputy Administrator of the US Digital Service, an awful lot of datasets have this problem: “Using that historical data to make predictive decisions about the future of individual human beings is really dangerous,” he argued.

Edward Hartwig: “Using historical data to make predictive decisions about the future of individual human beings is really dangerous.”
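The proxy problem Magimay and Hartwig describe can be made concrete with a toy simulation. In the sketch below – entirely synthetic, illustrative data, not PMI’s actual system – a “model” trained to replicate historical hiring decisions never sees gender at all, yet still produces skewed outcomes, because a proxy feature correlated with gender drove the historical labels.

```python
import random

random.seed(0)

# Toy synthetic "historical hiring" data (made up for illustration).
def make_candidate():
    gender = random.choice(["m", "f"])
    # Proxy feature correlated with gender - e.g. membership of a
    # historically male-dominated professional society.
    proxy = 1 if (gender == "m" and random.random() < 0.8) or \
                 (gender == "f" and random.random() < 0.2) else 0
    skill = random.random()
    # Biased historical label: past recruiters favoured the proxy group.
    hired = 1 if skill + 0.5 * proxy > 0.9 else 0
    return gender, proxy, skill, hired

data = [make_candidate() for _ in range(10_000)]

# "Model": faithfully replicates the decision rule learned from the labels.
# Note it uses only proxy and skill - gender is never an input.
def predict(proxy, skill):
    return 1 if skill + 0.5 * proxy > 0.9 else 0

def hire_rate(gender):
    rows = [d for d in data if d[0] == gender]
    return sum(predict(d[1], d[2]) for d in rows) / len(rows)

print(f"male hire rate:   {hire_rate('m'):.2f}")
print(f"female hire rate: {hire_rate('f'):.2f}")
```

Dropping the protected attribute is not enough: as long as the proxy remains predictive of the biased historical outcomes, the disparity survives – which is why, as Magimay put it, “another proxy appeared” after the first correction.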

In the worst cases, he added, an ML system can end up “reinforcing the multiplied bias of entire generations of people that came before it.” And even when ML is deployed to guide civil servants’ decisions rather than automating the whole process, there’s a risk that “the machine bias backs up the human’s bias, and gives me justification for making the biased decision. Then the more we use it, the more we reinforce those biases.”

Exorcising the ghost in the machine

One response, Hartwig said, is to train operators carefully in how to spot potentially discriminatory decisions: “There’s a world in which you can train people to identify the bias in the data, supporting people as gatekeepers against it.” And another delegate suggested allowing ML systems to make automated decisions in service users’ favour, while requiring civil servants to decide cases which would disadvantage citizens.
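The routing rule the delegate suggested – automate outcomes in the citizen’s favour, escalate anything adverse to a person – can be expressed very simply. The sketch below is a hypothetical illustration of that asymmetric policy; the class and field names are assumptions, not any government system’s actual design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    outcome: str          # "approve" or "referred"
    decided_by: str       # "system" or "pending_human_review"

def route_decision(case_id: str, model_says_approve: bool) -> Decision:
    """Asymmetric routing: only favourable outcomes may be automated."""
    if model_says_approve:
        # Favourable to the service user: safe to automate.
        return Decision(case_id, "approve", "system")
    # Potentially disadvantages the citizen: a person must decide.
    return Decision(case_id, "referred", "pending_human_review")

print(route_decision("A-101", True))
print(route_decision("A-102", False))
```

The appeal of this design is that a biased model can still speed up the happy path, but can never, on its own, deny a citizen anything.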

Focusing on improving data collection and storage can also help squeeze out the risk of bias. In part, this involves strengthening the data that public bodies hold on individuals and their circumstances, said Chan Cheow Hoe, Singapore’s Government Chief Digital Officer: “Government has breadth of data; government doesn’t have depth of data,” he noted. “And for AI, you really need depth not breadth.” And in part, it means pushing for better data quality and the common formatting standards that support data exchange across government: “That prepares us for a world in which we can do greater things with that data, because if we have common standards we’re speaking a common language,” commented Hartwig.

Chan Cheow Hoe: “Government has breadth of data; government doesn’t have depth of data. And for AI, you really need depth not breadth.”

Where officials are concerned about bias or transparency in ML decision-making, said Ian Porée, Executive Director for Community Interventions in the UK’s HM Prison and Probation Service, the use of analytics and data-presentation software instead of ML can provide staff with “some additional augmented support, while still leaving the human being responsible.” This, his Ministry of Justice colleague Helen Mott noted, can simply involve “getting the information that people need in front of them in a consumable format, helping them to make decisions.” In the highly sensitive field of criminal justice casework, Porée added, staff need to be able to explain to clients and criminal justice professionals exactly why risk-based decisions have been taken – so “going further than decision-support tools feels like a big step to take, and we probably aren’t ready for that step.”

The adoption of technology standards and protocols can help civil servants to realise the potential – and avoid the risks – of ML schemes, commented Gordon Morrison, Director for EMEA Government Affairs at knowledge partner Splunk. Pointing to the World Economic Forum’s new AI Procurement Guidelines for Governments, he argued that standards can “ensure the supplier has used the right coding practice and AI methodology in designing that tool; that you always have a human being within the decision loop; and you continuously test the algorithm with synthetic data to make sure it’s doing what you expect it to do.”
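Morrison’s point about continuously testing algorithms with synthetic data can be sketched as an automated fairness regression check: generate synthetic cases for each demographic group and fail the check if outcome rates drift apart. Everything below – the model, the groups, the 5% tolerance – is an illustrative assumption, not a prescribed standard.

```python
def approval_rate(model, cases):
    return sum(model(c) for c in cases) / len(cases)

def parity_check(model, cases_by_group, max_gap=0.05):
    """Return True if approval rates across groups stay within max_gap."""
    rates = [approval_rate(model, cases) for cases in cases_by_group.values()]
    return max(rates) - min(rates) <= max_gap

# Synthetic cases with identical income distributions in both groups,
# so a fair model should treat the two groups alike.
synthetic = {
    "group_a": [{"income": i} for i in range(20, 80)],
    "group_b": [{"income": i} for i in range(20, 80)],
}

fair_model = lambda case: case["income"] > 50
print(parity_check(fair_model, synthetic))  # True: same rate in both groups
```

Run on a schedule against the live model, a check like this turns the ‘point in time’ approval problem one delegate raised into a continuous one: if the evolving algorithm drifts, the test fails before a regulator or citizen has to notice.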

The centre’s role

A number of national governments and international groups are currently developing AI standards. And central digital teams, said Kevin Cunnington – a former UK Government Digital Service (GDS) director general, now serving as head of the UK’s International Government Service – must take the lead in both shaping ML standards, and promoting their adoption across government. The GDS’s role is to “set the over-arching strategy; to set standards and provide reassurance against those standards; and to help people to develop capability in that space,” he said. “That’s the centre’s mission in the UK, and I suspect consistently across other governments as well.”

Kevin Cunnington says central digital teams must take the lead in both shaping machine learning standards, and promoting their adoption across government

And there may be another role here for the centre, commented Colin Cook, the Scottish Government’s Director of Digital: “There is a justifiable expectation that the central government digital function looks at future technology trends and provides guidance on how they might change cultures, the operating environment, and the opportunities for achieving national policy outcomes,” he said. “The call for some form of futurology function at the centre is quite common across Europe.”

The next wave of technologies will certainly throw up an entirely fresh set of challenges, responded Hartwig. “Twenty years from now, I think quantum computing is going to break all of our current standard encryption models – but it’s not a worry I face every day,” he commented. Right now, his biggest challenges revolve around keeping IT systems in operation, and improving data management on four fronts: “Are we collecting data in a responsible way? Are we stewards of data in a responsible way? Are we making responsible decisions based on that data? And are we doing it in a way that engenders public trust in the system?”

Humanising the robots

That trust element is crucial to public acceptance of new technologies – and HM Prison and Probation Service’s Ian Porée noted that this should apply to service users as well as the wider public. “The people who receive our services often tell us how inhumane some of our decisions feel on the receiving end,” he noted; and anything that makes decisions harder to explain is likely to exacerbate the problem. “If the person can’t understand how we’ve arrived at a risk-based decision, then the process is inhumane – and we should be in the business of supporting citizens,” he argued. “Maybe we need to pay more attention in government to being citizen-centric, listening more to service users.”

New technologies have enormous potential to improve the delivery of public services and help governments achieve their policy goals. But civil servants must move step by step, Hartwig said, ensuring that they have the right data and protections in place: “I worry that we’re trying to go from mainframes to AI without taking the responsible steps in between.” And Alison Pritchard, Interim Director General of the UK’s GDS, also argued for officials to test the ground carefully before moving forward. “We have a tendency to reach towards the extreme ends of these forms of technology,” she said. “That impacts on our ability to seek these transformational opportunities, because all of a sudden you generate all sorts of challenges.”

Public bodies are developing the ability to deploy “supervised machine learning”, she added – calling on ML applications for decision support, while ensuring that staff have full oversight of the source data and retain the responsibility for decision making. But the “deep learning” applications are “complex, and may not happen for quite a while,” she concluded. “Let’s not reach for the very end, and in the process miss opportunities to make progress in that safer space.”

This is the fourth part of our series of reports on Global Government Forum’s Digital Summit, held in London late last year. The first part covers delegates’ discussions on how to create central digital units; the second reports their debates on how to build digital skills and capability; the third examines the session on the use of data and identity verification; and the fifth and final chapter considers a new and more supportive role for central digital teams.

The Summit was an ‘off the record’ event, but we have secured delegates’ consent to publishing these quotes – allowing us to report on the discussions while protecting delegates’ ability to speak freely.

Global Government Digital Summit 2019 attendees

In alphabetical order by surname

Civil servants:

  • Caron Alexander, Director of Digital Services, Northern Ireland Civil Service, Northern Ireland
  • Chan Cheow Hoe, Government Chief Digital Technology Officer, Smart Nation and Digital Government Office, & Deputy Chief Executive, Government Technology Agency of Singapore, Singapore
  • Colin Cook, Director of Digital, The Scottish Government, Scotland
  • Kevin Cunnington, Director General of International Government Service, United Kingdom
  • Fiona Deans, Chief Operating Officer, Government Digital Service, United Kingdom
  • Anna Eriksson, Director General, DIGG, Sweden
  • Edward Hartwig, Deputy Administrator, U.S. Digital Service, The White House, USA
  • Paul James, Chief Executive, Department of Internal Affairs, and Government Chief Digital Officer, New Zealand
  • Lauri Lugna, Permanent Secretary, Ministry of the Interior, Estonia
  • Richard Matthews, Director of CTS and Lead for the Technology Transition Programme, Digital & Technology, Ministry of Justice, United Kingdom
  • Jessica McEvoy, Deputy Director for UK & International at Government Digital Service, United Kingdom
  • Simon McKinnon, Chief Digital and Information Officer, Department for Work and Pensions (DWP), United Kingdom
  • Helen Mott, Head of Digital, Justice Digital and Technology, Ministry of Justice, United Kingdom
  • Maria Nikkilä, Head of Digital Unit, Public Sector ICT Department, Finland
  • Iain O’Neil, Director, Digital Strategy, NHSX, United Kingdom
  • Tobias Plate, Head of Unit Digital State, Federal Chancellery, Germany
  • Ian Porée, Executive Director, Community Interventions, Her Majesty’s Prison and Probation Service, Ministry of Justice, United Kingdom
  • Alison Pritchard, Director General of the Government Digital Service, United Kingdom (Host)
  • Mikhail Pryadilnikov, Head, Center of Competence for Digital Government Transformation, Russia
  • Line Richardsen, Head of Analysis, Department of ICT Policy and Public Sector Reform, Ministry of Local Government and Modernisation, Norway
  • Francisco Rodriguez, Head of Digital Government Division, Ministry of the Presidency, Chile
  • Carlos Santiso, Director, Governance Practice, Digital Innovation in Government, Development Bank of Latin America, Colombia
  • Aaron Snow, Chief Executive Officer, Canadian Digital Service, Canada
  • Huw Stephens, Chief Information Officer – Information Workplace Solutions and Treasury Group Shared Services, HM Treasury, United Kingdom
  • Diane Taylor-Cummings, Deputy Director Project Delivery Profession, IPA, United Kingdom
  • Rūta Šatrovaitė, Head of Digital Policy, ICT Association INFOBALT, Lithuania (former civil servant)

Knowledge Partners:

  • Neal Craig, Public Sector Digital Lead, PA Consulting
  • Per Blom, Government and Public Sector Lead, PA Consulting
  • Natalie Taylor, Digital Transformation Expert, PA Consulting
  • Cameron J. Brooks, General Manager, Europe Public Sector, Amazon Web Services
  • Chris Hayman, Director of Public Sector, UK and Ireland, Amazon Web Services
  • Greg Ainslie-Malik, Machine Learning Architect, Splunk
  • Ben Emslie, Head of Public Sector UK and Ireland, Splunk
  • Gordon Morrison, Director for EMEA Government Affairs, Splunk
  • Adrian Cooper, Field CTO UK Public Sector, NetApp
  • Alwin Magimay, Chief Transformation Officer, PMI

Global Government Forum:

  • Matt Ross, Editorial Director
  • Kevin Sorkin, Chief Executive

About Matt Ross

Matt is a journalist and editor specialising in public sector management, policymaking and service delivery. He was the editor of Civil Service World 2008-14, serving an audience of senior UK officials; and the features editor of Regeneration & Renewal 2002-08, covering urban regeneration, economic growth and community development. He has also been a motoring and travel journalist, and now combines his role as editorial director of Global Government Forum with communications consultancy, marketing and journalism work for publishers, public sector unions and private sector suppliers to government.
