A closer look at Singapore’s AI governance framework: insights for other governments

By Amit Roy Choudhury on 06/05/2021
Reflecting Singapore’s approach: many countries look to the nation’s AI governance framework to shape their own models. Credit: Sebastian Pichler/Unsplash

As officials across the world work to establish AI ethics codes, Amit Roy Choudhury argues that Singapore’s system provides a model for others at a time when common global standards are emerging

Governments around the world are rushing to formulate risk- and rules-based approaches to ensuring transparent and fair use of artificial intelligence (AI). Both Scotland and Brazil have recently launched AI strategies, while last week the European Commission published its draft regulations for AI.

Ethics must be applied to the design, development and use of AI to ensure that outcomes are explainable and not subject to unintended bias, says Dr Chong Yoke Sin, president of the Singapore Computer Society (SCS). “AI is only useful when applied ethically, and users and developers must remain aware of this,” she says.

One of the early movers in establishing guidelines to govern the use of AI was the island nation of Singapore. Its collaborative development process and practical principles mean the AI ethics code is enjoying local success, and it could help inform other governments' approaches at a time when globally acceptable ethical principles for AI governance are emerging.

How does Singapore’s model work?

Singapore introduced its Model Artificial Intelligence Governance Framework in January 2019 at the World Economic Forum (WEF) in Davos, and made important updates a year later at the same event. The two guiding principles of the framework state that: decisions made by AI should be “explainable, transparent and fair”; and AI systems should be human-centric (i.e. the design and deployment of AI should protect people’s interests including their safety and wellbeing).

These core principles are then developed into four areas of guidance. The first is establishing or adapting internal governance structures and measures to "incorporate values, risks, and responsibilities relating to algorithmic decision-making". The second determines the appropriate level of human involvement in AI decision-making and helps organisations decide their risk appetite. For example, the model includes a "probability-severity of harm" matrix.

The third area of guidance focuses on operations management and deals with factors that should be considered when “developing, selecting and maintaining AI models, including data management”. The final area shares strategies for communicating with stakeholders and management on the use of AI solutions.
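The "probability-severity of harm" matrix in the second area of guidance can be pictured as a simple decision rule: the higher the likelihood and severity of harm, the more human oversight a decision should receive. The sketch below uses the human-in-the-loop / human-over-the-loop / human-out-of-the-loop terminology from Singapore's framework, but the thresholds and the function name are illustrative assumptions, not part of the official model.

```python
# Illustrative sketch only: the Model AI Governance Framework describes a
# "probability-severity of harm" matrix for calibrating human oversight.
# The level names follow the framework's terminology; the simple two-level
# ("high"/"low") thresholds below are hypothetical assumptions.

def oversight_level(probability_of_harm: str, severity_of_harm: str) -> str:
    """Map a (probability, severity) assessment to a suggested degree of
    human involvement in the AI decision loop."""
    if probability_of_harm == "high" and severity_of_harm == "high":
        # A human actively makes or approves each decision.
        return "human-in-the-loop"
    if probability_of_harm == "high" or severity_of_harm == "high":
        # A human supervises outcomes and can intervene when needed.
        return "human-over-the-loop"
    # Low probability and low severity: decisions may be fully automated.
    return "human-out-of-the-loop"

print(oversight_level("high", "high"))  # human-in-the-loop
print(oversight_level("low", "high"))   # human-over-the-loop
print(oversight_level("low", "low"))    # human-out-of-the-loop
```

In practice an organisation would replace the coarse "high"/"low" labels with its own risk-assessment scales, but the matrix's core idea survives: oversight requirements scale with the probability and severity of harm.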

The framework translates ethical principles into pragmatic measures that businesses can adopt voluntarily, according to Singapore’s Minister for Communications and Information, S Iswaran. Organisations have a ready-to-use tool to help deploy AI in a responsible manner, he adds.

Adoption of the model

Chong, from the SCS, an IT industry body that worked with the Singaporean government to develop the AI governance framework, says that ethical AI can only be adopted through commitment, skills and culture, facilitated by a trusted ecosystem.

Stakeholder trust is crucial to ensuring confidence in the use of AI, adds Chong. "That can only happen if solution providers and deploying organisations ensure responsible use of AI to manage various risks," she says. This includes aligning internal policies, structures and processes with relevant practices in data management and protection, such as Singapore's Personal Data Protection Act (PDPA), she adds.

Uptake of the model is also essential. Singapore's framework can be adopted by any organisation that develops or uses AI. Chong notes that the SCS was "encouraged" when its research showed that 75 organisations had signed up to the model.

Indeed, large corporations, including global banks and technology companies, are using Singapore's model to validate their technology governance and risk management frameworks, according to Achim Granzen, principal analyst at research firm Forrester.

But smaller organisations may face a challenge in implementation, notes Granzen, as they usually only have limited resources – both in terms of frameworks and expertise – for technology governance. “This is the primary gap that must be filled – helping organisations without deep governance resources to apply the framework to ensure they deploy their AI responsibly,” he adds.

Collaborative design, practical steps

The Singapore model recommends a rules- and risk-based management approach to address the technology risk associated with AI, Granzen notes. This aligns with other global frameworks, for example, the EU’s recently published draft regulations.

“Ideally, this [ethical use of AI] should be a dimension added to corporate risk management frameworks,” says Granzen. “That will elevate the risk beyond IT and individual business units to the corporate level – and that’s where it must be addressed.”

While major global AI frameworks are broadly aligned in approach, Granzen says the collaborative development of Singapore's model, along with its comprehensiveness, is something other governments should take note of.

“The (Singapore) model framework contains exemplary ethics measures, a self-assessment guide, and two volumes of use case libraries. All has been developed in close collaboration between government agencies, corporations, major tech companies, and academia. This collaboration is a key strength of the Singapore approach,” he notes.

Global perspectives merge and emerge

As many AI governance initiatives build on common principles and approaches, common standards are beginning to develop, according to Granzen. “I believe we are starting to see something like ‘Generally Accepted AI Principles – GAAiP’ emerging,” he says.

He adds: “Similar to the Generally Accepted Accounting Principles (GAAP), such principles would form a global, cross-cultural core set. For AI, the core GAAiP are centred on fairness, explainability, and accountability, forming the basis of ethical, responsible use of AI.”

Singapore’s framework has been recognised as a firm foundation for the responsible use of AI and its future evolution, says Minister Iswaran. “We will build on this momentum to advance a human-centric approach to AI – one that facilitates innovation and safeguards public trust – to ensure AI’s positive impact on the world for generations to come,” he adds.

