UK NHS to test AI systems for biases in healthcare

The UK’s National Health Service (NHS) is trialling a programme designed to identify algorithmic biases in systems used to administer healthcare.
The aim is to use Algorithmic Impact Assessments (AIAs) to identify and address decisions made by artificial intelligence (AI) systems that risk producing worse healthcare outcomes for patients because of their profile or background.
“By allowing us to proactively address risks and biases in systems which will underpin the health and care of the future, we are ensuring that we create a system of healthcare which works for everyone, no matter who you are or where you are from,” said Syed Kamall, under-secretary of state for innovation.
The body behind the pilot, NHS AI Lab, commissioned the Ada Lovelace Institute to produce a methodology for using AIAs. The institute has since published a research paper that outlines its methodology and aims to help developers and researchers understand the ways in which AI technologies can impact people, society and the environment.
Octavia Reeve, interim lead at the Ada Lovelace Institute, commented: “[These] assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit.”
Following Canada’s example
The Ada Lovelace Institute report cites the only model of AIA currently in use, which was mandated by the Treasury Board of Canada Secretariat’s Directive on Automated Decision-Making. Aimed at Canadian civil servants, it was created to manage standards for public sector AI delivery and procurement. So far, it has been used to complete four AIAs in Canada.
The AIA model consists of an online questionnaire divided into eight sections containing 60 questions on “technical attributes of the AI system, the data underpinning it and how the system designates decision-making”. Impacts are ranked on a sliding scale from ‘little to no impact’ to ‘very high impact’ across a range of concerns, from individual rights and health and wellbeing to economic interests and the surrounding ecosystem. Once completed, each assessment is exported and uploaded to the Open Canada website.
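The report describes the questionnaire’s shape rather than its contents, but the structure it outlines (sections of scored questions, a four-step impact scale, an exportable result) is easy to picture in code. The following is a minimal illustrative sketch only: the class names, the intermediate scale labels and the roll-up rule are assumptions for the example, not details of the Canadian model.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Optional
import json

class ImpactLevel(IntEnum):
    # Endpoints match the scale described above; the two middle labels
    # are assumed for illustration.
    LITTLE_TO_NONE = 1
    MODERATE = 2
    HIGH = 3
    VERY_HIGH = 4

@dataclass
class Question:
    text: str                              # a single questionnaire item
    concern: str                           # e.g. "individual rights", "health and wellbeing"
    answer: Optional[ImpactLevel] = None   # unanswered until the assessment is completed

@dataclass
class Section:
    title: str                             # one of the questionnaire's eight sections
    questions: list[Question] = field(default_factory=list)

def overall_impact(sections: list[Section]) -> ImpactLevel:
    """Roll answered questions up to a single headline level.
    Illustrative rule only: take the highest level recorded anywhere."""
    answers = [q.answer for s in sections for q in s.questions if q.answer is not None]
    return max(answers) if answers else ImpactLevel.LITTLE_TO_NONE

def export_assessment(sections: list[Section]) -> str:
    """Serialise a completed assessment to JSON, standing in for the step
    where each assessment is exported for publication."""
    return json.dumps({
        "overall_impact": overall_impact(sections).name,
        "sections": [
            {
                "title": s.title,
                "questions": [
                    {"text": q.text, "concern": q.concern,
                     "answer": q.answer.name if q.answer else None}
                    for q in s.questions
                ],
            }
            for s in sections
        ],
    }, indent=2)

if __name__ == "__main__":
    # Hypothetical example question, not taken from the actual questionnaire.
    sections = [Section("Impact assessment", [
        Question("Could the system's outputs affect patients' access to care?",
                 "health and wellbeing", ImpactLevel.HIGH),
    ])]
    print(export_assessment(sections))
```

In this sketch the headline level is simply the worst answer given; the real questionnaire computes its impact level from weighted risk and mitigation scores, so the roll-up shown here should be read as a placeholder.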
The report cautioned that AIAs were not intended to replace existing regulatory frameworks but to complement those already used in the UK.
“This [process] is… proposed as one component in a broader accountability toolkit, which is intended to provide a standardised, reflexive framework for assessing impacts of AI systems on people and society,” it said.
Through the pilot programme, the NHS is expected to support researchers and developers with information obtained through engagement with patients and healthcare professionals.
“Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI,” said Brhmie Balaram, head of AI research & ethics at the NHS AI Lab. “Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.
“The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.”
The report noted that it could not offer details on the Canadian government’s experience of the AIA process, nor any information about changes made as a result of the assessments.
“Policymakers may be disappointed to find that AIAs are not an ‘oven-ready’ approach, and that this AIA will need amendments before being directly transferable to other domains. We argue there is real value to be had in beginning to test AIA approaches within, and across different domains,” it said.
“Policymakers should pay attention to how this proposed AIA fits in the existing landscape, and to the findings related to process development that show some challenges, learnings and uncertainties when adopting AIAs.”
The new pilot complements ongoing work by NHS AI Lab’s ethics team on ensuring that datasets used to train and test AI systems are diverse and inclusive. Through its AI Ethics Initiative, the lab supports research and practical trials alongside wider efforts to develop and regulate AI-driven technologies in health and care, with the overall goal of preventing and addressing health inequalities.