How AI can grease the wheels of government

19/08/2024 | Updated on 19/08/2024
Image: Gerd Altmann from Pixabay

Making the most of artificial intelligence (AI) means different things to different organisations. For governments, the primary value AI represents is minimising friction and maximising efficiencies.

During a recent webinar with knowledge partner SAP, Global Government Forum brought together a panel of four experts to discuss how AI is already easing delivery across government, and its potential to do the same across the whole public sector.

Start small, scale up

Steven Hodson, head of automation at DE&S Digital at the Ministry of Defence (MoD), described the first “core element” of the MoD’s AI strategy as “a challenge to the business units around defence and within MoD to ensure that they were aware of and understood how to responsibly take advantage of the opportunities that AI can present”. A second element of the strategy is the establishment of the department’s Digital Exploitation for Defence Programme.

Starting small, the MoD was able to save the Defence Equipment & Support (DE&S) and Marine Operations teams thousands of hours using Microsoft AI Video Analyzer.

Hodson added that the department places a lot of emphasis on internal communications and training programmes for staff to learn about the basics of generative AI and AI more broadly. These communications include an explanation of the benefits and risks of new AI tools. Drawing on two use cases, Hodson described the department’s ‘Typhoon Planned Maintenance Optimisation’ project, which uses Large Language Models (LLMs) across more than 11 million free text data fields.

“These fields are sourced from records captured over the last 20 years from Typhoon maintenance records. The processing of this activity simply wouldn’t be feasible without the Gen AI tooling that’s currently available to us,” he said. 

The LLMs in this project review historic maintenance records and make optimised recommendations based on the data.
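To make the pattern concrete, here is a minimal sketch of how an LLM might be used to pull structured findings out of free-text maintenance entries. It is illustrative only: the prompt, the output fields and the `call_llm` helper are invented placeholders, not details of the MoD’s actual pipeline.

```python
# Illustrative sketch: extracting structured findings from free-text
# maintenance entries with an LLM. `call_llm` is a placeholder for whatever
# model endpoint an organisation uses; the prompt and fields are invented.
import json

EXTRACTION_PROMPT = (
    "You are reviewing an aircraft maintenance log entry. Return JSON with the "
    "keys: component, fault_type, action_taken, recurrence_risk (low/medium/high).\n\n"
    "Entry:\n{entry}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for a call to the chosen LLM service."""
    raise NotImplementedError

def extract_findings(entries: list[str]) -> list[dict]:
    findings = []
    for entry in entries:
        raw = call_llm(EXTRACTION_PROMPT.format(entry=entry))
        try:
            findings.append(json.loads(raw))
        except json.JSONDecodeError:
            # Keep the source text so unparseable responses can be reviewed by hand
            findings.append({"error": "unparseable response", "entry": entry})
    return findings
```

The value at this scale comes from applying the same extraction consistently across millions of records, with humans reviewing the recommendations that result.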

The second project, known as Intelligence Search, uses LLMs to provide staff with powerful search capabilities across extensive document sets.

“This was a huge win for us,” Hodson said. He explained that it is normally very difficult to find and process information across defence, but LLMs make this significantly easier.

“From the feedback [we’ve] received [on] the time taken to search for answers across comprehensive and disparate document sets…we’re seeing up to 90% time saved.”
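A common way to deliver this kind of LLM-backed search is retrieval-augmented generation: embed the document chunks, retrieve the passages closest to the query, and ask the model to answer only from them. The sketch below shows that general pattern under those assumptions; `embed()` and `call_llm()` are placeholders, not details of the MoD’s Intelligence Search project.

```python
# Illustrative retrieval-augmented search sketch; embed() and call_llm()
# are placeholders for whichever embedding model and LLM are in use.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector from the chosen embedding model."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Placeholder for the chosen LLM endpoint."""
    raise NotImplementedError

def answer(query: str, chunks: list[str], top_k: int = 3) -> str:
    vecs = np.stack([embed(c) for c in chunks])
    q = embed(query)
    # Cosine similarity between the query and every document chunk
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[::-1][:top_k])
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```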

A new era of government

Victoria Bew is head of strategy at i.AI (otherwise known as Incubator for Artificial Intelligence). Her team of 70 people is only six months old, and before the result of the UK general election was revealed on 5 July, it sat within the Cabinet Office at No.10 Downing Street. Now, Bew and her colleagues are preparing to move into the Department for Science, Innovation and Technology (DSIT).

She said the mission of i.AI is to “harness the opportunity of AI to improve lives, generate efficiency savings and deliver better public services”. She added that the UK’s newly elected Labour government wants to use AI primarily to improve lives and deliver better public services.

“It’s not all about taking cost out of the system at a personal level,” she stressed.

Bew’s team first gathers ideas, then quickly tests those ideas to see which fall into a category of problems that could be solved through AI. “We then look to move into scaling and deployment,” she said. None of i.AI’s products are in full deployment, Bew added, though two of them are in their ‘beta’ phase. One of these, dubbed ‘Caddy’, is already being used by Citizens Advice and is currently undergoing a randomised controlled trial to ensure it is working effectively to deliver the right outcomes.

Bew’s closing message underscored the fact that teams across the UK government are at the start of a new era of centralised power, which means civil servants have a unique opportunity to “drive change that potentially was not possible before and [to] use AI for the better of public services overall”.

David Dinsdale, industry value advisor at SAP, shared examples of how AI can improve public services. He pointed playfully to the fact that, at least for now, generative AI is known best for creative tasks such as being able to “paint your cat in the style of Van Gogh”.

“That’s an interesting thing to do, but it isn’t going to change necessarily that much in public services,” he added.

To demonstrate where AI can make meaningful change in this area, Dinsdale shared an annotated taxation query, marked “high priority” in status and “negative” under what SAP’s Communication Intelligence platform termed ‘sentiment’. After contextualising a query, the platform’s AI system generates a response, checking for biases in communication and offering ways to adjust the style to strike the appropriate tone. A response may be casual or formal, submissive or assertive, short or long. The system also suggests a ‘next best action’ and can trigger that action in the underlying systems. Whatever outputs are chosen, Dinsdale said, the point is to treat the AI-generated text as a good draft or suggestion, not to let AI take over completely.

“This is about how we deliver better public services to people in a more consistent and efficient way.”
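As a rough illustration of the triage-and-draft pattern Dinsdale described (classify sentiment and priority, then draft a reply in a chosen tone for a human to review), here is a generic sketch. It is not SAP’s Communication Intelligence API; the labels, prompts and helper names are invented.

```python
# Generic triage-then-draft sketch: not SAP's actual API; all names invented.
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Placeholder for the chosen model endpoint."""
    raise NotImplementedError

@dataclass
class Triage:
    sentiment: str  # e.g. "negative"
    priority: str   # e.g. "high"

def triage(query: str) -> Triage:
    raw = call_llm(
        "Label the citizen query below. Reply exactly as "
        "'sentiment=<positive|neutral|negative>;priority=<low|medium|high>'.\n\n" + query
    )
    fields = dict(part.split("=", 1) for part in raw.strip().split(";"))
    return Triage(sentiment=fields["sentiment"], priority=fields["priority"])

def draft_reply(query: str, tone: str = "formal and assertive") -> str:
    # The result is a suggested draft for a caseworker to review, never sent automatically.
    return call_llm(f"Draft a {tone} reply, free of biased language, to this taxation query:\n\n{query}")
```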

AI in London

Theo Blackwell MBE, chief digital officer for the Greater London Authority under the Mayor of London, began by saying that it wasn’t realistic to talk about how public sector authorities can advance AI without first offering citizens something substantive around digital inclusion. He said London remains very fragmented, and that until recently there wasn’t a collective space in which to talk about innovation at scale.

“We created the London Office of Technology and Innovation (LOTI), which I think is England’s only regional innovation collaboration body,” Blackwell said.

“Twenty-eight or 29 of the 32 boroughs are members of LOTI, [as] is the Greater London Authority. [LOTI] works with methods, standards and real-world projects and problems that are identified by local authorities. That could be everything from plan[ning] electric vehicle charging, through to specific work on AI itself.”

Blackwell noted that AI isn’t particularly new in London transport. “Back in 2019, we were really looking at predictive analytics working with the private sector to understand congestion emissions [and] road safety impacts, through to collaborations that we’re doing right now,” he said.

Transport for London also worked with Google Maps to update the algorithm of its way-finding app to incorporate safer cycling routes, Blackwell explained.

“Historically, Google Maps will tell you how to get from A to B, but it will then tell you to cycle on a busy road full of heavy vehicles and cars. We worked with Google Maps to direct people to the safest cycling route. That was trialled and developed in London and is now available worldwide for every city to adopt.”

London also continues to run AI trials for use cases such as transport fare evasion, to understand when and how it occurs, and councils are running pilots addressing issues such as localised flooding, fly-tipping, noise pollution, and damp and mould in social housing.

Blackwell also touched on the use of AI for facial recognition, which he said has not only been piloted but deployed by the police in London.

Mitigating risks

The discussion culminated in reflections on the risks associated with AI in government.

“A risk that we haven’t covered…is that AI is probabilistic,” Dinsdale said.

“We tend to deal in black and white: is the answer to this question ‘yes’ or ‘no’? We’re not very good at dealing with an answer [that] is 80% ‘yes’. What do we do with that? Understanding how that impacts a particular use case is important.”
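One practical response to that problem is to map a model’s confidence onto explicit actions, with everything in between referred to a person. The thresholds in this hypothetical sketch are arbitrary policy choices, shown only to make the idea concrete.

```python
# Hypothetical confidence-routing sketch: thresholds are policy choices, not fixed values.
def route(p_yes: float) -> str:
    if p_yes >= 0.95:
        return "auto-approve"
    if p_yes <= 0.05:
        return "auto-decline"
    return "refer to a human caseworker"

print(route(0.80))  # the 80% 'yes' case -> "refer to a human caseworker"
```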

For Bew, the biggest AI risk for government is dwelling too much on risk itself, to the neglect of positively framed action.

“The biggest risk is that we get washed up thinking about the risks. Sometimes perfect is the enemy of the good. There are some huge and existential risks we need to manage with AI, but they are not necessarily the same risks that are present with quite operational transformation projects within individual organisations.”

Hodson offered the view that risk evolves alongside technological innovation, and that governments should assess risk on a case-by-case basis, rather than as something static.

He concluded: “How do we continue to monitor model accuracy for LLMs? How do we continue to monitor the prompts? It’s risk identification, risk mitigation and constant monitoring. That, I think, is how we manage it.”


The webinar – How governments are using AI to become more efficient – was held with knowledge partner SAP on 9 July 2024. Replay it in full

Find out more from SAP: Four Ways AI Can Deliver Better Public Services (And The Three Guard Rails You Need To Know)  

Upcoming event: SAP will co-host a roundtable at Public Service Data Live in London on 19 September:
How AI is freeing up civil servants for more engaging work

About Partner Content

This content is brought to you by a Global Government Forum Knowledge Partner.
