Embracing AI: the interplay between people and intelligent machines

30/11/2023 | Updated on 12/12/2023
Photo by Pavel Danilyuk via Pexels

Fears about artificial intelligence have abounded in recent months, driven in part by new tools that have shown both great opportunity and potential for misuse. There are risks. But, as Caroline Payne of AI and analytics partner SAS argues, managed properly and with appropriate human oversight, AI is a force for good for the public sector – and its people.

“I remember in one particular week this summer, both the Pope and Ed Sheeran said something publicly about AI. That’s incredible. If you rewind a year or two ago, that would never have happened,” says Caroline Payne, head of customer advisory – public sector, at SAS UKI.

That one of the world’s biggest religious leaders and one of its top pop stars have joined the public debate on artificial intelligence demonstrates that AI – or the conversation about it at least – has hit the mainstream. It also shows that while AI represents a huge opportunity, there remain major concerns about how it might be used for ill rather than good, whether purposefully or not.

Addressing one of people’s primary fears – that AI will result in a great swathe of job losses – Payne says what will happen in the civil service, for example, is that officials will be freed up from working on manual repetitive tasks and have more time to focus instead on creative endeavours that only humans are capable of, and which will bring them more satisfaction.

Indeed, in Payne’s view, AI is an “augmentation technology” that if used well “makes something greater than the sum of its parts”, and which should be used, where appropriate, in conjunction with human oversight and decision-making.

“A lot of the doom and gloom around AI that you hear in the news is about AI replacing jobs. SAS’s position is that it’s not about doing that – it’s about augmenting whatever human processes and efforts are in place: automating the mundane tasks so people can concentrate on the value add,” Payne says.  

Whenever SAS talks to a public sector body about AI, the first question asked is always whether automation can be done safely. What an AI component does, Payne says, is sift through and make sense of data, processing large datasets and performing complex calculations that people can’t do easily. At the end of the process is the decision point, and that’s where people come in: the decision either needs to be taken by people or, if made by the system through some kind of intelligent decisioning process, referred for human review.
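The routing Payne describes can be sketched in a few lines. This is an illustrative pattern only, not SAS’s implementation: the model scores each case, clear-cut results are handled automatically, and anything in the “grey zone” is referred to a person. The threshold values are invented for the example.

```python
# Illustrative human-in-the-loop routing: automate the clear cases,
# refer ambiguous ones to a human reviewer. Thresholds are made up.

AUTO_CLEAR = 0.10   # below this risk score, the case is closed automatically
AUTO_REFER = 0.90   # above this, it is queued for investigation

def route_case(risk_score: float) -> str:
    """Return the next step for a case, given a model risk score in [0, 1]."""
    if risk_score < AUTO_CLEAR:
        return "auto-clear"
    if risk_score > AUTO_REFER:
        return "refer-to-investigation"
    # Scores in the grey zone always go to a person for the final decision.
    return "human-review"
```

The key design choice is that the system never takes a consequential decision on an ambiguous score; the grey zone is, by construction, a human responsibility.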

“It’s easy to say, ‘Okay, what’s the delineation?’ Think about the things that humans are good at,” Payne says. “Generally, they have common sense and intuition, they’re good at creative pursuits, they have empathy. Machines don’t. At present, when a machine or an AI model is designed, it’s typically for a specific task – it doesn’t currently have the versatility that humans do.”

AI in practice – six examples  

Payne gives examples of AI and analytics systems in use in the public sector around the world and how they interface with human colleagues.

One of SAS’s largest customers in the UK is HMRC (His Majesty’s Revenue & Customs). SAS effectively acts as the analytics cog in the department’s Connect programme, which manages tax return compliance, fraud and error.

HMRC estimates that the tax gap – the difference between tax owed to the government and what was actually paid – was £32bn (US$40bn) in 2020-21. SAS uses AI techniques to identify where the department should focus its efforts, automating the screening and assessing of tax returns so that officials can channel time and resources into more sophisticated areas like identifying proceeds of organised crime and sharing information with investigation units.

AI is also used at the Department for Work and Pensions (DWP) to manage the timely payment of pensions and benefits, for example, and to ensure the Winter Fuel Payment – which helps seniors cover the cost of energy bills – goes to eligible people. That had until recently been a manual process that involved collating DWP data with that of utility companies.

In defence, SAS has witnessed a surge in demand for systems that can optimise supply chains.

“There’s a view that wars have been lost because the infantry didn’t have warm boots on their feet and evidence that not having a regular supply of toothbrushes and toothpaste can have a big impact on soldier morale. There’s a piece about supporting generic supplies and making sure they’re in the right place at the right time,” Payne says.  

And then there are examples, such as in the Netherlands, of AI being used to screen for liver or lung cancer. Using image processing techniques, it’s possible to build a profile or image bank of what a healthy liver looks like vs what a liver with a tumour or anomaly looks like. AI can then use this data to detect whether a liver is healthy or not – checking the image along with key phrases in a person’s medical history that might signal a higher risk of cancer. If there’s a grey area, the scan is sent to a doctor for review. This helps to increase the speed and accuracy of tumour evaluations.  

The ‘digital twin’ – an AI equivalent of a real-life scenario, built using synthetic data – is another application that enables people to concentrate on other tasks. In healthcare, creating a digital twin of a particular hospital allows management and staff to run simulations and gauge how that hospital would perform should a certain situation arise.

“That’s a really powerful use of AI,” Payne says. “It means healthcare professionals don’t need to be diverted away from their usual duties to run physical simulations.”
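At its simplest, a digital twin is a model of the real system that can be run through “what if” scenarios in software rather than as a physical drill. The toy sketch below is purely illustrative – the bed count, arrival numbers and discharge rate are invented, and a real twin would be far richer:

```python
# Toy "what if" simulation in the spirit of a hospital digital twin:
# track bed occupancy day by day under a surge of admissions.
# All numbers are invented for illustration.

def simulate_occupancy(beds: int, arrivals_per_day: list[int],
                       discharges_per_day: int) -> list[int]:
    """Track occupied beds day by day; excess arrivals are turned away."""
    occupied, history = 0, []
    for arrivals in arrivals_per_day:
        occupied = min(beds, occupied + arrivals)          # admit up to capacity
        occupied = max(0, occupied - discharges_per_day)   # then discharge
        history.append(occupied)
    return history

# Gauge a surge scenario: 100 beds, a spike of admissions mid-week.
print(simulate_occupancy(100, [30, 40, 80, 60, 20], discharges_per_day=25))
```

Running scenarios like this against a calibrated model is what lets staff gauge capacity pressure without staging a physical exercise.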

Artificial intelligence is also being used to fight modern slavery and child sexual exploitation. It can be trained to trawl through huge amounts of unstructured data – case notes from police officers and social services, for example – and pull out key terms that might be indicators that an investigation is needed.

“What SAS can do,” Payne explains, “is to take all of the text-based data and overlay taxonomies and classifications around word meanings and sentiment, then AI can start to identify what the profile of a child at risk looks like.”

This can lead to more timely intervention and reduced backlogs and help to build a better understanding of the common factors in such cases so that preventative measures can be taken.

Essentially, “AI squeezes the time to get from raw data to an intelligence report, helping the police to protect children more quickly,” Payne says.  
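The taxonomy-overlay idea Payne describes can be illustrated with a minimal sketch. The categories, terms and threshold below are invented for the example and bear no relation to SAS’s actual models; the point is the pattern of grouping terms into a taxonomy and flagging notes that hit several categories:

```python
# Minimal sketch of overlaying a taxonomy on free-text case notes:
# notes that match terms from several categories are flagged for a
# human analyst. The taxonomy and threshold are invented examples.

TAXONOMY = {
    "absence":   {"missing school", "truant", "unexplained absence"},
    "household": {"overcrowded", "frequent moves", "unknown adults"},
}

def flag_note(note: str, min_categories: int = 2) -> bool:
    """Flag a case note if terms from at least `min_categories` appear."""
    text = note.lower()
    hits = {cat for cat, terms in TAXONOMY.items()
            if any(term in text for term in terms)}
    return len(hits) >= min_categories

# A flagged note triggers human review; it never drives an automatic decision.
```

A production system would add sentiment, word-sense handling and far richer taxonomies, but the human-review step at the end is the constant.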

Achieving responsible AI

The examples above represent just a small snapshot of the uses for artificial intelligence in the civil service and wider public sector.  

“A little while ago we were talking about whether organisations would be deploying AI. We’ve moved on from that now. AI is driving innovations. The conversation has moved on to how we do it in a responsible way,” Payne says.  

For SAS, this means developing and using AI tools in an ethical manner that reflects society’s values and doesn’t harm people. If this means turning down certain projects, it will.

Human centricity, accountability, transparency and explainability, privacy and security, and inclusivity are all vital if AI is to be used responsibly.

In terms of inclusivity, bias is a common problem with AI that will need to be overcome. If the data an AI model is trained on is biased, so too will be its outputs. There have been numerous examples of algorithms exhibiting bias. California’s public benefits algorithm denied Medicaid to foster children, while in Colorado, the same was true for pregnant women; a system used by hospitals and doctors to guide decisions about vaginal versus c-section births was found to be racially biased; and in Seattle, Somali grocers were disqualified from a food stamp programme on the basis of AI decisioning. (Note that none of these were SAS systems.)

Clearly, the risk of bias will need to be minimised if AI is to be trusted.

SAS manages, prepares and presents data in a way that means output is explainable, and it builds in governance, feedback loops and reviews to ensure models provide the right results and remain accurate.

Its SAS Viya platform is designed to enable trustworthy practices at every stage of the AI and analytics lifecycle from development to deployment. Its AI capabilities are designed in collaboration with diverse stakeholders and with human centricity in mind, and it incorporates appropriate levels of accountability, inclusivity and transparency, including features to promote human agency.

There is a clear business case for using AI responsibly. Global research and advisory firm Gartner predicts that by 2026, organisations that use AI in a way which is transparent, trustworthy and secure will see their AI models achieve a 50% improvement in terms of user acceptance, adoption and fulfilment of business goals.

To help ensure that companies and organisations are held to certain standards, regulation will be needed, with the U.S. Government Accountability Office, the UK Department for Digital, Culture, Media and Sport, the European Commission and others having called for the development of common standards.

The UK government hosted a global AI safety summit earlier this month, which may prove to be a starting point.

While we wait for regulations to be worked up and adopted, ensuring that there is an element of human oversight in any AI project or programme is a necessity. As Payne argues, it isn’t about AI at the expense of people, but AI and people – and using AI tools to create a whole that is greater than the sum of its parts.

This is a piece of partner content, produced by Global Government Forum on behalf of SAS.

About Partner Content

This content is brought to you by a Global Government Forum Knowledge Partner.
