Report urges AI sustainability as energy concerns grow

The British government should invest in sustainable AI amid concerns about the amount of energy needed to power the technology, according to a report by the UK’s National Engineering Policy Centre (NEPC).
The report – produced by the Royal Academy of Engineering, which leads the NEPC, together with BCS (the Chartered Institute of IT), and the Institution of Engineering and Technology – brings into focus the intensive use of water and critical materials needed to power AI systems.
“These resources are directly consumed during the production, use and end-of-life management of the compute and infrastructure hardware that underpins AI systems and services,” the report explained.
“Consuming these resources can cause air pollution, water pollution and thermal pollution, as well as creating solid waste.”
Separate studies predict that the AI industry will consume as much energy as the Netherlands by 2027, and that by 2030 more than 20% of all electricity generated in the US will be consumed by the data centres needed to power AI.
The NEPC report outlines five steps to improve AI sustainability and promote the UK as an efficiency leader in the field. These steps include expanding environmental reporting mandates; addressing “information asymmetries” across the value chain; setting environmental sustainability requirements for data centres; reconsidering data collection, transmission, storage, and management practices; and leading the way with government investment.
The report also stresses the need for tech firms to submit accurate accounts of the electricity and water consumed in the running of their data centres. Two of the world’s biggest tech firms, Google and Microsoft, have reported year-on-year rises in water consumption for data centres since 2020.
By increasing energy demand, AI systems and services could make it harder to move to a decarbonised electricity system, the report added.
“There are actions that can be taken now, across the AI value chain, to better understand and reduce this unsustainable resource consumption and the related environmental impacts,” it said.
UN’s Guterres calls for more sustainable practices – and US and UK refuse to sign AI pledge
At the AI Action Summit, held last week in Paris, António Guterres, the UN’s secretary-general, urged national representatives to use AI’s power to close the gap between developed and developing countries, and drew attention to the need for more sustainable practices around training machine-learning models and building core AI infrastructure.
“It is crucial to design AI algorithms and infrastructures that consume less energy and integrate AI into smart grids to optimise power use,” he said. “From data centres to training models, AI must run on sustainable energy so that it fuels a more sustainable future.”
Other news from the summit includes the refusal of the US and UK governments to sign an international declaration on open, inclusive and ethical approaches to AI.
As well as emphasising the need for greater AI accessibility, safety and trustworthiness, the statement also highlighted AI sustainability concerns. Signatories to the pledge include China and India, as well as the European Commission.
In a speech to Summit delegates, US vice president JD Vance said that excessive regulation of AI could “kill a transformative industry just as it’s taking off”, and added that “pro-growth AI policies” should take priority over safety.
“We are developing an AI action plan that avoids an overly precautionary regulatory regime while ensuring that all Americans benefit from the technology and its transformative potential,” Vance said.
“The Trump administration is troubled by reports that some foreign governments are considering tightening the screws on US tech companies with international footprints. America cannot and will not accept that.”
The UK government said it had decided not to add its signature to the statement of agreement, citing concerns around national security and “global governance”.
UK drops ‘safety’ from AI institute
Discussion around AI’s impact on national and international security was also a focus of the Munich Security Conference, held from 14 to 16 February.
Peter Kyle, UK secretary of state for science, innovation and technology, announced at the event that the UK’s AI Safety Institute had been renamed the AI Security Institute.
Kyle described the decision as the “logical next step” for the institute, whose renewed focus will be on crime and national security and not on “bias or freedom of speech”.
“The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life,” Kyle said.