Treat AI with ‘same spirit of urgency’ as climate change, says UK PM

By Jack Aldane on 12/06/2023 | Updated on 12/06/2023

UK prime minister Rishi Sunak urged world leaders to treat artificial intelligence (AI) with the same humanitarian concern as climate change during a visit to the White House on 8 June.

Sunak’s words come as the UK readies to host the first international AI summit this autumn, at which Western leaders are expected to weigh in on topics such as skills automation, AI-created misinformation, and the technology’s possible threat to life.

“We come together at COP to work multilaterally across multiple countries to bring down carbon emissions, to get funding to the countries that need it, to share research on how we can develop the green technologies of the future,” Sunak said. “We need to bring that same spirit of urgency, I think, to the challenges and opportunities that AI poses because the pace of technological change is faster than people had anticipated.”

US president Joe Biden said America was “looking to Great Britain to lead [the] effort” with a proposal for how a group of nations might be brought together to mitigate AI-associated risks.

“[AI] not only has the potential to cure cancer and many other things that are just beyond our comprehension, it has the potential to do great damage if it’s not controlled,” he added.

Australia sharpens AI oversight

Meanwhile, the Australian federal government has ramped up its regulatory ambitions around AI with a consultation paper published shortly after its chief scientist’s latest risk report.

The paper was released in June by the Australian Department of Industry, Science and Resources and sought what it called “system-wide feedback” on actions government should take to regulate and govern AI across Australia’s economy.

It gave a broad sweep of the country’s existing governance and regulatory framework – which includes an eSafety Commissioner, tasked with safeguarding Australian citizens online, and a set of national AI ethics principles – and called for input on whether further mechanisms were needed.

“While Australia already has some safeguards in place for AI and the responses to AI are at an early stage globally, it is not alone in weighing whether further regulatory and governance mechanisms are required to mitigate emerging risks,” it said.

Read more: ChatGPT a threat to national security, warns Pentagon AI chief

In March, Australia’s chief scientist Dr Cathy Foley published the Rapid Response Information Report, which tallied the risks and opportunities of generative AI. It warned that though governments globally had actively supported the development of “more than 630 ‘soft law’ AI governance programmes” between 2016 and 2019, their regulatory effectiveness was “debatable”.

“There is a growing recognition that a range of institutional measures and policies are likely to be required to mitigate public risks,” it said.

Drawing on the Australian Human Rights Commission’s 2021 Human Rights and Technology report, the consultation paper also identified algorithmic biases as “one of the biggest risks or dangers of AI”.

“Algorithmic bias involves systematic or repeated decisions that privilege one group over another,” it said, citing examples of discrimination such as where AI disproportionately targets a minority ethnic group when asked to predict repeat criminal offences; grading algorithms in education that favour students from higher performing schools; and algorithms used in recruitment that prioritise men over women.

Read more: US and EU launch ‘first sweeping AI agreement’

All eyes watching 

Many governments around the world have raised concerns about the future of AI in recent months. In May, leaders of the G7 – Canada, France, Germany, Italy, Japan, the UK and the US – met in Japan to reinforce their shared vision of a humane and trustworthy AI built on “shared democratic values”.

AI risk strategies adopted by countries such as Canada and the UK were acknowledged in Australia’s consultation paper. It outlined, for example, the “principles-based approach to classifying AI into risk categories” used in the Canadian government’s widely applied Directive on Automated Decision-Making. This measures risk on a four-category scale: low, where impacts are “reversible or brief”; moderate, where they are “likely reversible and short-term”; high, where they are “difficult to reverse and ongoing”; and very high, where they are considered “irreversible and perpetual”.

Citing the UK’s AI policy white paper published in March, the Australian consultation paper also listed the UK government’s principles for “responsible development and use of AI” across “all sectors of the UK economy”. These included safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

On 1 June, Philip Ingram, a former British military intelligence officer, told The Independent newspaper that the UK government should classify AI as an official threat to the country. He said that by adding AI to the National Risk Register, the government would be in a better position to tackle “bad actors” seeking to use AI to “change the way people think”.

A statement published by the Center for AI Safety two days earlier, on 30 May, secured dozens of signatories calling on global leaders to “mitigate the risk of [human] extinction from AI”. The statement drew close comparisons between AI and the societal risks posed by other existential threats such as pandemics and nuclear attacks.

Responding to the statement, Ingram said: “We didn’t start to think about mitigation of nuclear risks until post Hiroshima. It feels to me to be a pretty reasonable response to not make that mistake with AI.”

Read more: G7 leaders ‘take stock’ of AI amid calls for shared governance standards


