Responsible innovation: six principles for the ethical application of data analytics

By Reggie Townsend on 31/10/2022 | Updated on 31/10/2022
Photo by geralt via Pixabay

The long-held belief that data and related technologies can unlock human potential will become reality only when data-driven systems are designed to expand equity and human agency, with appropriate guardrails in place to mitigate harms to the most vulnerable.

As we move forward with using our data, performing analytics, and deploying artificial intelligence, we need to work proactively to mitigate risks and lean into discomfort so that we ultimately build sustainable solutions that centre and promote equity and human dignity, agency, and wellbeing.

Designing data-driven systems responsibly requires asking not only what we can do, but also what we should do. It requires shifting our gaze to consider profits and people simultaneously as we ask questions like:

  • Is it possible to use data and analytics to mitigate harms to vulnerable populations around the globe?
  • In what ways can artificial intelligence help alleviate the impact of sexism, ageism, racism and other social ills?
  • How can we use technology to leave a more civilised, peaceful, and advanced world for future generations?

These are existential and complex questions, but they are the types of questions fuelling our ambition to leave a better world than the one we inherited. They are also core to the belief that access to quality data, at the optimal time, interpreted with compassion, is essential to ethical and human-centred innovation.

Often ethical dilemmas present tensions that are not solved; rather, they are navigated, managed and negotiated in the most productive, least harmful ways feasible. Managing tensions consistently and in a trustworthy manner requires an unwavering set of principles rooted in core beliefs and proven, effective strategies.

The ethical application of data analytics requires the following six principles:

Human Centricity

Promote human wellbeing, agency, and equity. Technology does not exist in a vacuum. It is inextricably linked to society and the natural world around us, with each influencing the other in unexpected ways. This understanding drives the primary principle of human centricity: data-driven systems should be designed to promote and preserve human dignity, agency, and overall wellbeing, all with an eye towards equity.

Inclusivity

Ensure accessibility and include diverse perspectives and experiences. Data-driven systems carry the potential to amplify societal biases and barriers at scale, exacerbate power imbalances and calcify the marginalisation of vulnerable populations. Intentional measures must be taken to ensure data-driven systems are built such that the resulting benefits and insights serve all members of our community. These systems should be accessible and reflect diverse perspectives and experiences. Inclusivity should show up in design, development, deployment, and decision-making.

Accountability

Proactively identify and mitigate adverse impacts. Accountability is a shared responsibility of all people and entities that interface with a data-driven system, from data collection to analytics to decision-making. Individuals and organisations on this spectrum of responsibility must recognise the role they play and proactively mitigate and remediate adverse impacts of decisions derived from such systems.

Robustness

Operate reliably and safely, while enabling mechanisms that assess and manage potential risks throughout a system’s lifecycle. Data-driven systems must produce consistent and accurate results and have clear guidelines for safe operation, which will in part require robust data and model governance. Data-driven systems should be carefully tested against a range of inputs and real-world scenarios to reduce unforeseen harms. If conditions do not support accurate and consistent output, safeguards should be put in place to minimise the potential for harm.
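The safeguards this principle calls for can be as simple as declining to score inputs that a model was never validated against. The short Python sketch below is purely illustrative and not drawn from the article: the feature names, ranges, and fallback policy are hypothetical assumptions, and a real deployment would pair such checks with fuller data and model governance.

    # A minimal sketch of a runtime safeguard: the wrapper scores only inputs that
    # resemble the data the model was validated on and abstains otherwise.
    # Feature names, ranges, and the fallback policy are hypothetical.

    from dataclasses import dataclass
    from typing import Callable, Optional


    @dataclass
    class FeatureRange:
        """Lower and upper bounds observed for a feature during training."""
        lower: float
        upper: float

        def contains(self, value: float) -> bool:
            return self.lower <= value <= self.upper


    class GuardedModel:
        """Wraps a scoring function with an out-of-range check."""

        def __init__(self, score: Callable[[dict], float],
                     ranges: dict[str, FeatureRange]) -> None:
            self.score = score
            self.ranges = ranges

        def predict(self, features: dict) -> Optional[float]:
            # Refuse to score inputs the model was never tested against;
            # returning None signals that the case needs human review.
            for name, rng in self.ranges.items():
                if name not in features or not rng.contains(features[name]):
                    return None
            return self.score(features)


    if __name__ == "__main__":
        model = GuardedModel(
            score=lambda f: 0.1 * f["income"] / 1000 + 0.5 * f["tenure_years"],
            ranges={
                "income": FeatureRange(10_000, 200_000),
                "tenure_years": FeatureRange(0, 40),
            },
        )
        print(model.predict({"income": 55_000, "tenure_years": 3}))   # scored as usual
        print(model.predict({"income": 950_000, "tenure_years": 3}))  # None: route to a person

In this sketch the system does not fail silently on unfamiliar inputs; it abstains and hands the case to a human, which is one way to keep operation within the conditions under which accuracy was demonstrated.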

Transparency

Explain and instruct on usage openly, including, at a minimum, potential risks and how decisions are made. The inner workings, limitations, and potential adverse impacts of data-driven systems and decisions need to be openly communicated. To foster trust, people must know when and why data-driven systems are being used and what to expect.
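As a purely illustrative complement, not drawn from the article, the short Python sketch below shows one way a system could attach a plain-language notice to every automated decision so that people know a system was involved and why it reached its outcome; the field names and wording are hypothetical.

    # A minimal sketch of decision transparency: each automated decision carries a
    # plain-language record of which system produced it and the main factors behind it.
    # Field names and the notice wording are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone


    @dataclass
    class ExplainedDecision:
        outcome: str
        system_name: str
        reasons: list[str]
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

        def notice(self) -> str:
            """A human-readable notice suitable for the person affected."""
            return (
                f"This decision ({self.outcome}) was supported by an automated system "
                f"({self.system_name}) on {self.timestamp}. "
                f"Main factors: {'; '.join(self.reasons)}. You may request a human review."
            )


    if __name__ == "__main__":
        decision = ExplainedDecision(
            outcome="application referred for manual review",
            system_name="benefits eligibility screener",
            reasons=["declared income above programme threshold",
                     "missing residency document"],
        )
        print(decision.notice())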

Privacy and Security

Respect the privacy of data subjects. Organisations must not only meet regulatory requirements but also respect the use and application of data about an individual or population. Active steps should be taken to mitigate malicious or accidental actions that interfere with the proper use of the system.

Establishing principles is the first step. The next challenge is to develop the discipline and courage to consistently execute according to those principles, especially at the speed of business.

Learn more about responsible innovation.

Author: Reggie Townsend

Reggie Townsend is the director of the SAS Data Ethics Practice (DEP). As the guiding hand for the company’s responsible innovation efforts, the DEP empowers employees and customers to deploy data-driven systems that promote human wellbeing, agency and equity in ways that meet new and existing regulations and policies. Townsend serves on national committees and boards promoting trustworthy and responsible AI, combining his own passion and knowledge with SAS’ more than four decades of AI and analytics expertise.

About Partner Content

This content is brought to you by a Global Government Forum Knowledge Partner.
