Slick cities: How local authorities are using AI to tackle their most pressing problems

By Amanda Sweeney on 16/07/2025 | Updated on 16/07/2025
AI generated image from Blessing111 via Pixabay

AI offers huge potential for local and city governments to improve services for residents and to make operations more efficient. These public service organisations also often have the agility to move faster than their national counterparts, offering early indicators of success and of challenges.

To examine the potential for AI to deliver local services, Global Government Forum convened city and municipal government leaders from Canada, Denmark and the UK for a webinar in which they discussed how they are using AI to tackle their most pressing problems. The session focused on use cases, and on how local authorities are approaching AI policy to enable safe experimentation and innovation.

Eddie Copeland is director of the London Office of Technology and Innovation (LOTI), whose job is to facilitate collaboration on digital, data and innovation projects across the city’s 32 boroughs and the City of London.

“The disadvantage of having such a fragmented city is that it’s hard to do stuff at scale,” Copeland told the session. “The advantage for topics like AI is that we have 33 different test beds, and it’s been really pleasing to see London boroughs doing exactly what we’d want to see them doing – trying things, starting small, exploring together what kind of use cases work, and sharing that knowledge via LOTI so we can scale solutions much faster.”

He described the three different types of artificial intelligence in government: narrow, generative, and agentic. As he explained, narrow AI learns from historic data to do one thing very well, generative AI generates new content based on prompts, and agentic AI “is able to proactively and very autonomously do entire series of combined tasks together”.

He gave examples of use cases for each. So-called narrow AI can scan entire local areas in minute detail, allowing frontline workers to carry out inspections from their desks rather than on location. It has also been used to identify London advertising hoardings on which business rates are not being paid, allowing the recovery of funds due.

As for agentic AI, Copeland explained that it is being used by Islington Council to serve social housing tenants by enabling them to report maintenance problems via WhatsApp. The embedded AI responds to their messages – in 92% of the languages spoken in the area – then diagnoses the problem, and books an appointment for the issue to be viewed and fixed.
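The Islington workflow Copeland describes – a message comes in, the language is handled, the problem is diagnosed, and an appointment is booked – can be sketched as a chain of tool-like steps. Everything below is invented for illustration (the function names, keyword-based diagnosis and fixed appointment slots are stand-ins); a production agent would call an LLM and real translation and scheduling services:

```python
from dataclasses import dataclass

# Hypothetical sketch of an agentic repair-reporting pipeline: each step is a
# "tool" the agent invokes in sequence on an incoming tenant message.

def detect_language(text: str) -> str:
    # Placeholder: a real service would use a language-identification model.
    return "es" if "fuga" in text.lower() else "en"

def diagnose(text: str) -> str:
    # Placeholder: a real agent would ask an LLM to classify the fault.
    keywords = {"leak": "plumbing", "fuga": "plumbing", "boiler": "heating"}
    for word, category in keywords.items():
        if word in text.lower():
            return category
    return "general"

def book_appointment(category: str) -> str:
    # Placeholder: a real system would query a scheduling API for free slots.
    slots = {"plumbing": "Tue 09:00", "heating": "Wed 14:00", "general": "Fri 11:00"}
    return slots[category]

def handle_report(text: str) -> dict:
    """Run the full chain: language -> diagnosis -> booking."""
    category = diagnose(text)
    return {
        "language": detect_language(text),
        "category": category,
        "slot": book_appointment(category),
    }

print(handle_report("Hay una fuga en el baño"))
```

The point of the sketch is the structure, not the placeholders: the agentic quality lies in chaining several tasks autonomously rather than answering a single prompt.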

Read more: G7 launches AI innovation challenge to transform public services

Cutting the time that public servants spend on administrative work

Local government also plays a crucial role in managing and delivering social care services, and it is here that generative AI is being put to important use.

AI tools are used to transcribe the conversations between social workers and those under their care, enabling the social worker to “have a much more human conversation” than if they were having to type notes on a laptop throughout every meeting, as was usual practice before. In future, Copeland explained, the goal is for AI to learn from historic case notes and, for new cases with similar characteristics, suggest actions that have been successful in the past.

Using AI in social work is also being explored at North Yorkshire Council. As Mark Peterson, the council’s head of data and insight, pointed out, studies have shown that social workers in the UK spend up to 80% of their time on administrative work – what he described as a “genuinely terrifying figure”.

To address this, a team at the council have created the Children’s Social Care Knowledge Mining Prototype, which scans documents related to a case and produces a complex diagram of all of the child’s connections in their community and beyond.

Traditionally, such a diagram would have been limited to immediate family members. Now, they can map hundreds of connections – from schools and sports clubs to much more subtle interaction points – giving a social worker “loads of threads to pull” when investigating who might be able to help keep the child safe.
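The core mechanic of the prototype – scanning case documents for known people and places and grouping them by role – can be sketched very simply. The entity list, roles and case notes below are invented for illustration; the council’s actual system would use entity extraction over real records rather than a hard-coded lookup:

```python
import re
from collections import defaultdict

# Hypothetical sketch of document mining for a child's connection map:
# scan each case note for known entities and group matches by role.
KNOWN_ENTITIES = {
    "Oakwood Primary School": "school",
    "Riverside Football Club": "sports club",
    "Aunt Sarah": "family",
}

def mine_connections(documents: list[str]) -> dict[str, set[str]]:
    connections = defaultdict(set)  # role -> names found in the documents
    for doc in documents:
        for name, role in KNOWN_ENTITIES.items():
            if re.search(re.escape(name), doc, re.IGNORECASE):
                connections[role].add(name)
    return dict(connections)

notes = [
    "Attended Oakwood Primary School; teacher raised a concern.",
    "Weekends often spent with Aunt Sarah near Riverside Football Club.",
]
print(mine_connections(notes))
```

Each role bucket is one of the “threads to pull”: the output is a map from types of connection to the specific people and places mentioned across the case file.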

The council has also created a ‘policy buddy’, called Polly. This “really simple” LLM is trained on national children’s guidelines and local policies and has been built to answer social workers’ questions. “You might say ‘I’m a new social worker, I’m about to go on a first statutory revisit’ and it’ll give you a summary, bullet points of what you should do before the meeting, during the meeting and after, and what the key considerations are,” Peterson explained.

“Sometimes people might feel bad about going and asking the same person the same question multiple times. The technology doesn’t care. You can ask it 1,000 times if you want. There’s no judgment in it.”
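A tool like Polly typically works by retrieving the most relevant policy passages for a question before asking an LLM to summarise them. The article does not describe Polly’s internals, so the retrieval step below is a generic keyword-overlap sketch with invented snippet texts, not the council’s implementation:

```python
import re

# Hypothetical retrieval step behind a "policy buddy" style assistant:
# rank policy snippets by word overlap with the question, then (in a real
# system) pass the top matches to an LLM as grounding context.
POLICY_SNIPPETS = [
    "Before a statutory visit, review the child's plan and recent case notes.",
    "During the visit, see the child alone where appropriate and record their views.",
    "After the visit, update the record within the required timescale.",
]

def tokenize(text: str) -> set[str]:
    # Lowercase and split into word tokens, keeping apostrophes.
    return set(re.findall(r"[a-z']+", text.lower()))

def rank_snippets(question: str, snippets: list[str]) -> list[str]:
    q_tokens = tokenize(question)
    # Sort snippets by how many question words they share, best first.
    return sorted(snippets, key=lambda s: len(q_tokens & tokenize(s)), reverse=True)

top = rank_snippets("What should I do before a statutory visit?", POLICY_SNIPPETS)
print(top[0])
```

Real systems replace the keyword overlap with embedding similarity, but the shape is the same: the question selects the guidance, and the guidance constrains the answer.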

Where it gets “really interesting and exciting”, he said, is in the ability to tailor outputs to the person being supported. For example, a social worker might be working with an eight-year-old boy who’s interested in superheroes. The AI can write guidelines for the social worker in a superhero theme and in language that an eight-year-old would understand. “You’ve now got something super-tailored to that individual child that supports them through the process”.

Read more: Do we need a ‘What Works Centre’ for public sector AI?

Copenhagen’s aim to be carbon neutral – and Toronto’s ambition

Marie Hvid Damborg, head of digitalisation, technical and environmental administration at the City of Copenhagen, and Sonia Brar, chief technology officer at the City of Toronto, described both current and planned AI use cases in their cities.

One of Damborg’s focuses is the use of AI in advancing Copenhagen’s ambition to become a carbon neutral city and to reduce citizens’ consumption-based emissions by 50% by 2035. “We expect that data and AI is going to be important to achieving that goal,” she said.

The city authority is already using AI to help optimise energy use in public buildings, in urban planning, to analyse its vehicle fleet as it moves to more sustainable transport, and to simulate traffic flows to enable it to make decisions around public transport efficiency.

It is also exploring how AI can be used to nudge citizens to adopt more sustainable behaviours, with a small experiment related to waste sorting currently underway.

In Toronto, Brar explained that there are four main challenges the authority hopes to apply AI to: to help with traffic congestion management at a time when construction levels across the city are very high; to streamline the building permit process; to create interfaces in a resident’s language of choice so they can more easily interact with the authority; and to help people access the city’s wealth of recreational activities.

Read more: UK government develops AI tool to help meet housebuilding target

Responsible use of AI

All four panellists agreed that measuring the success of AI use cases requires starting by applying it to a small and specific task with a very clearly defined goal.

Of course, what is crucial for local governments and authorities is to use AI responsibly.

Each speaker emphasised the importance of having ethical guidelines in place to ensure AI is used for the greater good of the community.

In North Yorkshire, the approach is three-pronged. As Peterson explained, the council focuses on assurance, by devising and circulating policy and guidance and carrying out ethical impact assessments; on reassuring public sector workers about the use of AI, through communication, explainers and a cross-council approach; and on using AI to drive service change, focused on what works and on building AI literacy and confidence.

“One of the major pieces that sits in that assurance box is the ethical impact assessment, and that naturally has accountability, transparency and governance elements to it,” Peterson said.

“Human agency and dignity is absolutely critical for us,” he added. “It’s that question of: just because we can [use AI], should we? That’s got to be right at the forefront of any of the decisions that we’re making.”

In Toronto, the city authority hadn’t taken a stance on AI until recently. “There was some experimentation done a few years ago but it was quickly shut down because there were concerns around cyber, around privacy, around bias,” Brar explained.

To enable confidence around the safe use of AI and to promote innovation, the city has recently released guidelines for the use of generative AI.

“We’re responsible for how we apply AI and in making sure it’s valid, it’s accurate, it’s limited in bias, and that it really fulfils the mission that we’re here to accomplish,” she said.

“[The guidelines] really emphasise that human-centric approach and the areas that we really want to push ahead with, but balanced with some of the risks [involved].”

Old policies, new applications

A discovery in producing the guidelines was that the city already had robust policies in place governing data privacy, cyber, and the ethical and acceptable use of systems.

So, rather than creating new policy that could take years, and in an effort “to accelerate and to build on the foundation we already had for responsible use of tech”, it was decided that – where appropriate – existing policies could be applied to generative AI.   

The guidelines have, Brar said, “created a lot of excitement across the city, because now people feel like they have permission to really experiment and push the bar here because we’re looking for innovation”.

The City of Copenhagen has also developed guiding principles on how to use AI. “It’s about value, it’s about ethics, it’s about legality. We want to build both an AI-ready organisation and AI-ready infrastructure, better data, stronger platforms, and also increase digital confidence among our staff and leaders,” Damborg said.

To do the latter, the authority has developed its own GPT platform that enables its 46,000 employees to safely experiment with AI for daily tasks such as drafting and summarising documents.

Damborg explained that AI “has really brought digitalisation into the political spotlight” in Denmark. In 2022, the country appointed its first digitalisation minister, and it has since set a goal for AI to free up 30,000 full-time positions in the public sector by 2035.

Against this backdrop, one of the core issues around AI in the country at present is legal authority “so we have a national AI task force working on how to remove barriers in getting a clear legal basis for using responsible AI”, she said.

Considerations include ensuring that AI “follows public values” and that “AI isn’t the goal, but should help us reach the goal”.

As Copeland had said earlier in the discussion, there is a need to rethink processes to make the most of the potential of technology. “We cannot just take shiny new technology, bolt it onto the same old ways of working we’ve had in the public sector for 20 years and expect profound change. Let this be an opportunity to fundamentally rethink some of our service models, not just add AI and hope for the best.”

During the webinar, panellists took questions from the live audience covering topics such as:  

  • How to pick the right use cases
  • How to design an effective pilot or project so you can understand whether it’s working, and how you scale up
  • Measuring success
  • The environmental impact of AI
  • How local government and city authorities can overcome problems around poor quality, incomplete and biased data when training AI systems
  • And AI skills and capabilities in the public sector

Watch the ‘How cities are using AI to tackle their most pressing problems’ webinar in full here to hear their answers to these questions. The webinar was hosted by Global Government Forum and took place on 3 July 2025.

Sign up: The Global Government Forum newsletter provides the latest news, interviews and features on AI, data, workforce, and sustainability in government

About Amanda Sweeney

Amanda Sweeney is an intern at Global Government Forum for the summer of 2025, part of a study-abroad programme facilitated by Villanova University. She is a second-year student at Villanova University in Pennsylvania, United States of America. Amanda is heavily involved at Villanova, where she is part of “Blue Key”, a student-run tour guide organisation that is one of the largest programmes on campus. In addition, she is secretary of Villanova’s Pre-Law Society and co-runs Villanova’s Model United Nations organisation.
