AI takeaways from Innovation 2024; UK and US join up on AI safety; and more: GGF AI Monitor

Welcome to this month’s Global Government Forum AI Monitor
In this edition
- AI takeaways from Innovation 2024: Getting the public sector AI-ready
- Join senior leaders from across the US federal government at GovernmentDX
- OMB issues AI guidance for federal agencies
- US border control looks to AI to close detection capability gaps
- UK and US get joined up on AI safety
- Looking to train in AI as a public servant?
AI takeaways from Innovation 2024
Global Government Forum’s annual Innovation conference this year delved deep into AI in government. The first session on the topic began bright and early on Day 1, and featured Susan Acland-Hood, permanent secretary at the UK’s Department for Education; Victoria Bew, head of strategy at i.AI (Incubator for Artificial Intelligence), Cabinet Office and 10 Downing Street; and Clare Martorana, federal chief information officer in the Executive Office of the President, United States of America.
AI for show? Innovations in AI are making headlines all over the world, emerging as one of the biggest talking points across public and private sector organisations. But Bew was keen to stress to delegates that she and her team were not racing to adopt the technology for its own sake. She said government departments first must decide what impacts or social good they’re trying to achieve, then understand how AI can help them deliver that.
‘This is about driving impact’: “We’re also not here to use AI just because we can and it’s the new thing. Really clearly, we see our roles as leaders in AI and government adoption as about driving impact, about using innovation for social good and public purpose,” she said.
Be the tortoise, not the hare: The pace of change in AI meanwhile raises an important question, according to Acland-Hood. She asked exactly how much governments should seek to innovate when there is still so much information to gather on AI, its viability and its risk. She suggested that letting people experiment and report back was potentially a better way to go than expending energy trying to keep pace. However, this is a tricky balance to strike.
“It’s quite difficult to use some of our normal paradigms. We need to wait and see things play out and test and learn and then reflect back, but sometimes the thing you’re reflecting back is a version that went away iterations ago… so we’re wrestling with that.”
Back to basics: Martorana said her office has decided to “start at the beginning and build [a] semantic understanding of what is being done” in AI, creating a taxonomy for better data and insights around AI use cases across agencies. What can look like AI in action can, she said, turn out to be old-fashioned IT. This is designed to give her and her team the ability to learn fast while acknowledging that AI presents governments with “a really tough space” in which to operate.
“We’ve written the vision, we’re putting out the guidelines, and now we’re trying really hard to structure ourselves in a way where we can gain insights quickly from this vast ecosystem of 430 agencies that are all trying to learn something,” she said.
How well does AI know you? Martorana gave a top tip for those struggling to gauge the efficacy of current generative AI tools: look yourself up on them. When she did this, Martorana got back a description of herself that included a university she’d never attended and which attributed to her “a very significant role in the state of Pennsylvania”: a place she has never lived or worked in. “I use that example for everyone because it’s really easy for us to be intoxicated by the potential of AI, but we should not assume that the tech is perfect,” she said.
Getting the public sector AI-ready
In a later session, experts shared insights from public servants on how they have got their organisation ready to embrace the potential of AI.
Experimentation in Estonia: Former guest on the Leading Questions podcast and secretary of state for the Government Office of Estonia, Taimar Peterkop, said that much of the hard work on digitalising government in his country has already been done. Estonia now has a data ecosystem that is “secure and interoperable”. However, there is still room for improvement, particularly when it comes to best practice in AI. Peterkop reiterated the need for space to be created so public servants could feel comfortable enough to “play and experiment” with AI. Another focus is on clarifying both data protection and procurement rules: areas that Peterkop said are so complicated, people “are afraid of them”.
“We need to get over it and experiment, take risks and maybe sometimes break the rules,” he said. “Most innovation starts by breaking the rules, but we must be very careful there that we don’t break the trust that people have.”
AI readiness: Fariz Jafarov, chief executive officer at the Centre for Analysis and Coordination of the Fourth Industrial Revolution in Azerbaijan, broke down AI readiness into five categories: education, workforce training, cybersecurity hygiene, research, and innovation.
AI ‘works like a knife’: Jafarov used the metaphor of a knife to make the point that AI’s risks depend on who wields it. Whoever is holding the ‘knife’, he said, determines whether the AI is good or bad, which makes regulation critical. He added that governments must be able to act quickly on their decisions when using AI, and this requires public servants who are trained to analyse the data on specific use cases.
Read more on Innovation 2024:
- Civil servants must take ‘long-term stewardship view’, says outgoing UK Cabinet Office chief
- Incoming civil service COO Cat Little sets priority to empower Whitehall departments
- ‘Hope is not a strategy’: how to change the way civil servants think about risk
- Innovation 2024 as it happened – day 1 and day 2
Join senior leaders from across the US federal government at GovernmentDX
18 and 19 April 2024, Ronald Reagan Building and International Trade Center, Washington, D.C.
Hot on the heels of Innovation, Global Government Forum is bringing public servants from across the United States federal government and beyond together in Washington, D.C. to share insight on how to use technology to improve the digital experience of interacting with government.
This new event – GovernmentDX – will feature speakers from the White House, and from US federal government departments and agencies including the Department of State, General Services Administration, United States Digital Service, and the Office of Personnel Management. Plus, international speakers from Canada, Estonia, Germany, Singapore and the United Kingdom will share their best practice across sessions on people-focused services, such as delivering a better constituent experience; optimising government information for the search engine age; and how to get federal technology right for a modern government.
Find out more about the conference here.
OMB issues AI guidance for federal agencies
AI is everyone’s department: The US Office of Management and Budget has issued a policy to guide how federal agencies use AI, following President Biden’s executive order last year. In a LinkedIn update when the policy was launched, Clare Martorana said that federal agencies “have a distinct responsibility to identify and manage AI risks because of the role they play in our society”.
The guidance covers AI risk management and governance and outlines requirements for concrete safeguards and transparency measures. The policy isn’t only about risk; it also urges agencies to advance AI innovation and “responsibly experiment” with generative AI to address issues such as the climate crisis, public health and public safety. Expanding the AI workforce is an additional priority.
Live from the White House: At a livestreamed meeting at the White House on The Future of AI in Government on 28 March, Arati Prabhakar, director of the White House Office of Science and Technology Policy, said that 2023 was the year AI “surged into the public consciousness”.
Laying out the charge: Prabhakar described how the vice president of the US had “laid out the charge” by identifying AI as “the most consequential technology of our times”. “We know it’s going to get used for good and for ill. We know that it has enormous promise, and it comes with tremendous perils as well,” she added.
‘Where the rubber meets the road’: Prabhakar also said that with a chief AI officer now “in every part of government”, the US had the ability to translate its public service values into actions that “change people’s lives”.
“This is where the rubber is going to meet the road,” she commented. “These are the actions that will ensure that when we use AI for all the reasons we need to use it in government, that when we’re doing that, we are protecting individual rights, and we are protecting the liberties that are the core values of this country.”
US border control looks to AI to close detection capability gaps
Big Border is watching: In an example of how parts of the US federal government are looking to use AI, US Customs and Border Protection (CBP) is attempting to construct AI-powered border surveillance systems that can automatically scan people seeking entrance into the country.
Human monitors not enough: At the start of the year, CBP held a virtual ‘Industry Day’, and a key takeaway was that border agents have struggled to detect large numbers of border crossings. Agents work long shifts and rely mostly on information fed to them through computers, making optimal surveillance difficult to achieve, and officials said that without more assistance from AI, CBP would need to hire more staff. A document produced in 2022 and shared with delegates at the event outlined how autonomous solutions and enhancements were deemed “preferable” for conducting proper surveillance operations and would “reduce the number of personnel required to monitor surveillance systems”.
Human rights, AI wrongs: The use of AI and machine learning to curb migrant flows presents clear ethical challenges related to privacy and existing human rights law. Discriminatory outcomes are already a well-documented bug in AI-powered recognition tools, which leaves experts such as Eliza Aspen, researcher on technology and inequality with Amnesty International, “gravely concerned”, according to a report by The Markup.
Aspen said the organisation has “called on states to conduct human rights impact assessments and data impact assessments in the deployment of digital technologies at the border, including AI-enabled tools”. Such assessments, she says, should be used to “address the risk that these tools may facilitate discrimination and other human rights violations against racial minorities, people living in poverty and other marginalised populations”.
UK and US join up on AI safety
Across the pond policy: The UK and US governments have agreed to join forces in seeking better safety testing for powerful ‘frontier AI’ models. On 1 April, Michelle Donelan, UK secretary of state for science, innovation and technology, and Gina Raimondo, US secretary of commerce, set out a plan for collaboration between the two governments.
Building on Bletchley: Donelan said the agreement, which was published shortly after the OMB policy guidance, “puts meat on the bones” of existing cooperation between the UK and US AI Safety Institutes, both of which were established just a day apart around the world’s first AI Safety Summit, held at Bletchley Park last November.
More institutes incoming: The agreement commits the UK and US to developing similar partnerships with other countries. Donelan said many countries are “either in the process of or thinking about setting up their own institutes”, though she did not name them explicitly.
“We are going to have to work internationally on this agenda, and collaborate and share information and share expertise if we are going to really make sure that this is a force for good for mankind,” Donelan said.
Looking to train in AI as a public servant?
Check out some of GGF’s upcoming training courses:
How artificial intelligence can empower the civil service: Thursday 30 May
This interactive workshop is designed to introduce civil service professionals to the world of AI. The seminar will cover fundamental concepts of AI, its applications in the public sector, ethical considerations and practical tools. It aims to provide a clear understanding of AI, especially for those new to the field, and explore its potential in enhancing government services.
Deploying AI in the civil service: Wednesday 19 June
This workshop seminar is designed to provide you with a sound understanding of how to go about using and deploying AI in civil service departments and organisations. It starts with the fundamental questions and moves into how to take account of practical considerations in implementing AI strategies and operations, from legal frameworks and organisational compliance, through to the analysis of case studies and how these lessons can be applied in your own context. It will also look at what the future holds and how you can be prepared as civil servants to realise the benefits that AI can bring most fully.
Thanks for reading this newsletter and keep an eye out for next month’s edition, when we will be sharing insights from the GovernmentDX conference in Washington, D.C. – and find out more about the conference here.