The promise and risks of artificial intelligence: Global Government Summit 2018, part 4

By Matt Ross on 15/05/2018 | Updated on 06/08/2019
The world becomes its own map: A combination of technologies, reinforcing each other, are allowing us to create a map of our world on a scale of 1:1 (Image courtesy: BCG).

Artificial intelligence and real-time data streams could dramatically cut costs and improve outcomes in the operation of government and public services. But as delegates discovered at the 2018 Global Government Summit, these technologies also bring threats around accountability, transparency, privacy and bias: to safely harness the power of emerging technologies, governments must get ahead of the game.

The Argentinian poet and author Jorge Luis Borges, recalled Miguel Carrasco, once wrote a short story “about an ancient kingdom, high in the Andes, whose leaders were obsessed with cartography. They aspired to develop maps with ever-greater fidelity and accuracy, and launched increasingly ambitious projects until, eventually, they began a project to map their kingdom on a scale of 1:1.”

This was, of course, “ridiculous”, he added: “The project was a complete failure.” But then Carrasco, a senior partner at Boston Consulting Group (BCG) and global leader of its digital government practice, ran a video pulled from an autonomous vehicle’s memory banks: combining film from its cameras with satellite photography, cartographic and GPS data, signage info and a satnav route, the vehicle identified and tracked moving objects and other hazards, weaving its way past swerving cyclists and jogging jaywalkers.

“Borges wrote the story to illustrate the futility of exact science,” commented Carrasco. “But as my colleague Philip Evans notes, this story serves as a useful analogy for what’s happening in the world around us: a combination of technologies, reinforcing each other, are allowing us to create a map of our world on a scale of 1:1.”

Merging technologies

Miguel Carrasco, Boston Consulting Group

Carrasco’s example focused on autonomous vehicles; but his goal was to demonstrate to delegates at the 2018 Global Government Summit, held in Singapore in February, how Artificial Intelligence (AI) and real-time data streams are set to become hugely valuable in public service delivery. Speaking to top civil servants from 11 countries, he explained how information can be brought together from an ever-growing range of sources.

In the case of autonomous vehicles, he said, these data sources include sensors and “user-generated” data – such as the road congestion and hazard information uploaded by drivers via the Waze app. And AI tech has reached the stage where machines can distinguish and identify individual objects, even in moving images or against a confused background.

The current revolution in data and AI is being driven by four key factors, said Carrasco: the dramatic falls in the price and size of sensors able to transmit real-time data; the arrival of cloud computing, making it “so cheap to store data that we can capture and maintain everything”; rapid progress in the development of algorithms and analytics, enabling people to process, interpret and analyse data; and the ability to exchange data with people on the move, via smartphones. “It’s the mutually reinforcing nature of these four things which is driving a rapid acceleration in what we can do as private individuals, corporations and governments.”

From description to prescription

Given these capabilities, explained Grantly Mailes – a BCG Associate Director specialising in digital technologies – AI is enabling us to move “from a descriptive to a prescriptive use of technologies: from describing what used to be, to what can be or is likely to happen.” The big tech firms publish many of their algorithms without charge, while AI processing power is available as a cheap utility service. And modern AI is able to ‘learn’, reducing the need to write detailed programmes and leading to continuous improvement.

With all this computing power on tap, he argued, “what makes the difference is data – and you can mash up government data with data that’s freely available, creating tremendous power.”

The duo went on to consider use cases – beginning with segmentation: the ability to split groups up into ever smaller cohorts sharing common characteristics. Given rich enough data, Mailes said, governments can now create a “segment of one customer, providing the ability to talk specifically to each and every person that you deal with: to understand which person is at risk, which health theme we need to worry about, how we educate individuals, where we target investment.”
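The mechanics of that “segment of one” idea can be sketched with a toy example: the finer the set of attributes you segment on, the smaller each cohort becomes, until every cohort contains a single person. The records and attributes below are invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical citizen records: invented data for illustration only.
records = [
    {"id": 1, "region": "north", "age_band": "18-25", "employed": False},
    {"id": 2, "region": "north", "age_band": "18-25", "employed": True},
    {"id": 3, "region": "north", "age_band": "26-40", "employed": True},
    {"id": 4, "region": "south", "age_band": "18-25", "employed": False},
]

def segment(records, keys):
    """Group records into cohorts sharing the same values for `keys`."""
    cohorts = defaultdict(list)
    for r in records:
        cohorts[tuple(r[k] for k in keys)].append(r["id"])
    return dict(cohorts)

# One coarse attribute -> two broad cohorts.
coarse = segment(records, ["region"])
# All three attributes -> four cohorts, each a "segment of one".
fine = segment(records, ["region", "age_band", "employed"])
```

With rich enough data the same logic runs in reverse of aggregation: each added attribute halves or quarters the cohorts until services can be targeted at individuals.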

Grantly Mailes, Boston Consulting Group, says AI is enabling us to move “from a descriptive to a prescriptive use of technologies: from describing what used to be, to what can be or is likely to happen.”

AI in action

In New Zealand, said Carrasco, the government has pulled together data from a range of agencies and identified “which cohorts of welfare recipients are costing public services the most over their lifetimes. That then allows you to prioritise and work out where to invest in early intervention, reducing subsequent lifetime costs. The future liability in welfare outlay has come down in the order of billions of dollars.”

The second use case centred on predictions. Just as the autonomous vehicle had learned to predict that a cyclist might veer into the road, “the ability to sense patterns at very large scale” can help governments to forecast changes in fields such as climate change, health, education and transport. In New York, Carrasco explained, BCG has fed transport and infrastructure data through many thousands of simulation models to “identify the weak points in the food supply chain: those that are vulnerable in the event of natural disasters and emergency situations. That enabled the city to address them.”

Third, they turned to recommendations – for AI can now churn through vast amounts of data to develop detailed, nuanced plans for distributing funds or delivering services. In Australia, said Carrasco, a new system is being developed that separates welfare applications which can be processed automatically from those which are more complex and might need reviewing.

In another Australian programme, “network optimisation models” were used to identify the mix of technologies required to hit the government’s commitments on rolling out high-speed broadband. Using data on properties’ locations, the existing network and the costs of different solutions – “millions upon millions of data points” – the system worked out how to hit a set download rate in every home with the minimum of cost and delay.
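The core of such a network optimisation model can be illustrated in miniature: for each property, choose the cheapest technology that still meets the download target. Everything below – the technologies, speeds, costs and distance model – is invented for illustration, not drawn from the Australian programme.

```python
TARGET_MBPS = 25  # hypothetical guaranteed download rate per home

def options_for(distance_km):
    """Hypothetical cost/performance model for three delivery technologies."""
    copper_speed = max(0.0, 100 - 40 * distance_km)  # degrades with distance
    wireless_speed = 30 if distance_km <= 5 else 0   # tower range limit
    return [
        # (technology, delivered Mbps at this property, cost per property)
        ("copper_upgrade", copper_speed, 600),
        ("fixed_wireless", wireless_speed, 1500),
        ("fibre", 1000, 4000),
    ]

def cheapest_mix(properties):
    """Pick, per property, the lowest-cost option meeting TARGET_MBPS."""
    plan = {}
    for name, distance in properties:
        viable = [(cost, tech) for tech, speed, cost in options_for(distance)
                  if speed >= TARGET_MBPS]
        plan[name] = min(viable)[1]  # cheapest viable technology
    return plan

plan = cheapest_mix([("house_a", 0.5), ("house_b", 2.5), ("house_c", 10.0)])
```

Scaled up to “millions upon millions of data points”, the same minimise-cost-subject-to-a-service-floor logic is what lets a model recommend a technology mix across an entire network.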

John Manzoni, chief executive of the UK Civil Service, Cabinet Office, UK.

Overcoming the obstacles

To realise such opportunities, however, governments will have to address a set of challenges, risks and obstacles.

There are the government ‘silos’ which inhibit collaboration and data sharing, and concerns around data quality – though Mailes commented that “the technology industry is beginning to work out how to deal with dirty data.” There are skills shortages – a real issue, given the tech firms’ tendency to hoover up as many data scientists and AI specialists as they can recruit – and the challenges around transitioning from legacy IT systems whilst maintaining services. And there’s both a lack of understanding of the opportunities, and a tendency among policymakers – even when opportunities have been identified – to be suspicious or sceptical about new technologies.

If policymakers are wary, the public is still more so – and Paul Huijts, Secretary-general at the Netherlands’ Ministry of General Affairs, was worried about cyber security. “I’m convinced of the astounding possibilities,” he said, “but the vulnerability of our society is increasing: everything is interlinked, and in many cases there aren’t even non-IT solutions to run operations any more.”

This is largely a human rather than a technological problem, replied Carrasco: “Eighty percent of cyber security failures come down to human error,” he said. “We need technical firewalls, but we also need to increase awareness in our organisations and people. They have to understand the ramifications of mistakes or not applying appropriate policies. Just as we’ve been educated over many years on occupational health and safety, we all need to be educated about data safety.”

Protecting privacy

Privacy is another, linked concern, said Andrew Kibblewhite, Chief Executive of New Zealand’s Department of the Prime Minister and Cabinet. “All the different countries here will have different understandings of the ‘permission space’ for what is effectively surveillance of people and the use of AI,” he pointed out.

Klen Jäärats, Director for European Union Affairs at Estonia’s Government Office, explained his country’s solution: giving citizens access to a digital platform through which they can control access to their data, and enabling them to receive alerts whenever public bodies make use of it. “Making it transparent enabled us to win people’s trust,” he said; if people can see the benefits of sharing data and have the right to say no, they’re generally happy to give their permission.

Even with these obstacles addressed, though, there are some challenges and risks unique to AI. As Mailes explained, AIs ‘learn’ using “training data” – so they can develop an “inherent bias” if that data is skewed or points in a particular direction. There’s a further risk that algorithms are “tuned such that they work well for the training data but fail in the real world; that’s something we need to be careful of.” And because AIs are combining data from multiple sources, it can become possible to identify individual people or organisations – even when their data has been ‘anonymised’ in each individual data set.
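The re-identification risk Mailes described can be shown with a toy join: two datasets, each harmless on its own, share “quasi-identifiers” (here postcode and birth year) that link them back together. All the records below are invented for illustration.

```python
# "Anonymised" dataset: names removed, but quasi-identifiers retained.
health_data = [
    {"postcode": "2000", "birth_year": 1980, "diagnosis": "diabetes"},
    {"postcode": "3000", "birth_year": 1975, "diagnosis": "asthma"},
]

# A second, public dataset carrying names alongside the same identifiers.
electoral_roll = [
    {"name": "A. Citizen", "postcode": "2000", "birth_year": 1980},
    {"name": "B. Resident", "postcode": "3000", "birth_year": 1975},
]

def reidentify(anonymised, public):
    """Join the two datasets on (postcode, birth_year)."""
    index = {(p["postcode"], p["birth_year"]): p["name"] for p in public}
    return {index[key]: r["diagnosis"]
            for r in anonymised
            if (key := (r["postcode"], r["birth_year"])) in index}

matches = reidentify(health_data, electoral_roll)
# Where each (postcode, birth_year) pair is unique, every record is re-identified.
```

Removing names from each data set is not enough: it is the combination of sources that makes individuals identifiable again.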

Paul Huijts, secretary-general, Ministry of General Affairs, Netherlands

Accountability in an age of AI

It was the sheer complexity of AIs’ calculations, though, that underpinned the risk which attracted most attention from the civil servants – for as Carrasco explained, as machines learn and develop their processes “through thousands of millions of iterations, they can come up with algorithms that we just can’t understand. We can see the inputs and the outputs, but we can’t see what’s happening in the middle.” These days, added Mailes, software engineers often receive guidance from AIs on how to develop their systems: “The algorithm says: ‘Here’s five things you need to do’. Pretty soon the algorithms will begin to do the coding for them.”

This rang alarm bells for John Manzoni, Chief Executive of the UK civil service, who drew a parallel with the financial services sector pre-2008 – another rapidly-growing industry built around products so complex that people didn’t know exactly what they were buying or selling. Governments are wary of regulating fast-growing, successful industries – but in a “system beyond human cognition,” he asked, “who’s accountable for that algorithm?”

Yaprak Baltacıoğlu, Canada’s Secretary of the Treasury Board, had another challenge: how to manage AIs’ handling of ethical decisions? Say an autonomous vehicle has to decide between hitting a pedestrian and driving into a wall, she said, risking its passenger’s life: “How is it going to make that call?”

Andrew Kibblewhite, chief executive, Department of the Prime Minister and Cabinet, New Zealand.

Transparency in technology

Similarly, one delegate raised the case of a tech giant which had recently said in a meeting that they “had a problem with hate speech and were developing algorithms to remove those comments. But then you’re defining what is hate speech, and I don’t know what standards they’re using: to us, that’s censorship.” Tech companies must be completely transparent about how they operate, he argued, but in some cases “they’re not really aware of their societal impact.”

“This is a dangerous place,” Mailes acknowledged, “where the smartest minds are moving so much faster than governments.” Carrasco noted that, in the end, it’s in the tech firms’ interests to build public confidence: “Organisations that breach people’s trust will ultimately pay the price,” he said.

One answer, Carrasco suggested, is to use AI to monitor and regulate AI – giving regulators the tools for this most complex of jobs. And within the public sector, he suggested, governments can ensure proper accountability when using AI by dividing decision-making processes into “white box and black box models”.

There are some types of decisions in which government doesn’t necessarily need to know exactly how a conclusion was reached, he argued, and here opaque “black box” AI might be acceptable; “but if we’re using AI for recommendations on parole or child protection decisions, we might need a white box model” in which civil servants can see exactly how decisions are made.
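What a “white box” looks like in practice is a decision aid whose every step is visible and loggable for audit. The sketch below is hypothetical – the factors, weights and thresholds are invented, not drawn from any real parole system – but it shows the property Carrasco describes: a reviewer can see exactly why each recommendation was made.

```python
def parole_recommendation(applicant):
    """Transparent, rule-based recommendation with an auditable reasoning trail."""
    score = 0
    reasons = []  # every contribution to the decision is recorded
    if applicant["completed_rehab_programme"]:
        score += 2
        reasons.append("+2: completed rehabilitation programme")
    if applicant["incidents_last_year"] == 0:
        score += 2
        reasons.append("+2: no incidents in the last year")
    if applicant["prior_offences"] > 3:
        score -= 3
        reasons.append("-3: more than three prior offences")
    recommendation = "refer for parole hearing" if score >= 3 else "defer"
    return recommendation, reasons

rec, trail = parole_recommendation(
    {"completed_rehab_programme": True,
     "incidents_last_year": 0,
     "prior_offences": 1}
)
# `trail` lets a civil servant audit exactly how the decision was reached.
```

A black-box model might predict more accurately, but it cannot produce this kind of trail – which is why the choice between the two depends on what the decision is.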

Klen Jäärats, director for European Union Affairs of the Government Office, Estonia.

Getting ahead of the game

Looking ahead, Mailes urged the public sector leaders to focus on citizens’ needs and look for opportunities to use AI for public benefit: “Let’s start with the proposition of how do we create public value, not how we protect existing data systems or worry about where data is held.” And Carrasco recommended focusing on use cases, developing technologies in short, iterative “sprints” to test ideas, engaging with the public to explain the potential benefits, and producing some “tangible value that will create the permission space to do more.”

To make this approach work, Carrasco argued, governments can look to the reforms embraced by some big tech firms. He mentioned ING Bank which, he said, has “transformed its way of working, breaking down the organisation from traditional functional hierarchies into customer-focused squads” – where small teams work in iterative sprints towards clearly defined outcomes.

It was clear from the conversation, though, that governments need to address both the potential benefits and the inherent risks flowing from our fast-expanding tech industries. “I’ve yet to see many governments thinking about the capabilities they have to put in place to get on top of this,” concluded Mailes.

“I’d have a cabinet minister responsible for data and AI, and a dedicated team in government,” added Carrasco. “Because the big tech companies are getting away from the regulators, and the danger is that governments are already behind.”

This is part 4 of our report on the 2018 Global Government Summit. Part 1 covered an analysis of the challenges facing governments by Singapore civil service chief Leo Yip, plus UK civil service Chief Executive John Manzoni’s explanation of Britain’s reform journey. Part 2 focused on New Zealand’s civil service reform journey, with Andrew Kibblewhite – head of the country’s Department of the Prime Minister and Cabinet. In part 3, two top Singapore officials set out their country’s public sector reforms. Part 5 explores how governments can reconnect with disillusioned sections of the population. And in part 6, top officials explored how governments can help protect social mobility and median incomes in an era of rapid technological change.

About Matt Ross

Matt is Global Government Forum's Contributing Editor, providing direction and support on topics, products and audience interests across GGF’s editorial, events and research operations. He has been a journalist and editor since 1995, beginning in motoring and travel journalism – and combining the two in a 30-month, 30-country 4x4 expedition funded by magazine photo-journalism. Between 2002 and 2008 he was Features Editor of Haymarket news magazine Regeneration & Renewal, covering urban regeneration, economic growth and community development; and from 2008 to 2014 he was the Editor of UK magazine and website Civil Service World, then Editorial Director for Public Sector – both at political publishing house Dods. He has also worked as Director of Communications at think tank the Institute for Government.
