Google CEO calls for AI regulation

By Mia Hunt on 23/01/2020 | Updated on 24/09/2020
Sundar Pichai says sensible regulation must take a proportionate approach, “balancing potential harms… with social opportunities”. (Photo by Maurizio Pesce/flickr).

Sundar Pichai, the head of Google and parent company Alphabet, has “real concerns” about the potential risks of artificial intelligence (AI) and has called on governments to work together on regulatory measures. “International alignment will be critical to making global standards work,” he said this week.

Writing in the Financial Times, Pichai gave deepfakes – computer-generated clips that are designed to look real – and nefarious uses of facial recognition as examples of negative applications of AI, and said that “while there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone”.

He said good regulatory frameworks should consider safety, explainability, fairness and accountability, and that sensible regulation must also take a proportionate approach “balancing potential harms, especially in high-risk areas, with social opportunities”.

In many cases it is not necessary to start from scratch, he added, citing existing rules such as Europe’s General Data Protection Regulation which “serve as a strong foundation”.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” he said. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are “good starting points”. However, for newer areas such as self-driving vehicles, Pichai said governments should establish appropriate new rules that consider all relevant costs and benefits.

Public/private partnership

Pichai acknowledged that ensuring AI is safe should not be the burden of governments alone.

“Companies such as [Google] cannot just build promising new technology and let market forces decide how it will be used,” he said. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

He pointed to Google’s own AI principles, which were published in 2018. “These guidelines help us avoid bias, test rigorously for safety, design with privacy top of mind, and make the technology accountable to people,” he said. “They also specify areas where we will not design or deploy AI, such as to support mass surveillance or violate human rights.”

The tech giant is also testing AI decisions for fairness and conducting independent human-rights assessments of new products, according to Pichai, who added that the company has made the required tools and related open-source code widely available, “which will empower others to use AI for good”.

However, Google’s efforts have not all been plain sailing. The company launched its own independent ethics board in 2019, but shut it down less than two weeks later following controversy over its composition.

Concluding his piece in the Financial Times, Pichai added that Google wants to be a “helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs”, and said the company offers its expertise, experience and tools “as we navigate these issues together”.  

“AI has the potential to improve billions of lives, and the biggest risk may be failing to do so. By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do,” he said.

Elon Musk in agreement   

In November last year, billionaire tech entrepreneur Elon Musk warned that AI is a potential danger to the public and said governments should step in quickly to manage the risks.

“I would argue that AI is unequivocally something that has potential to be dangerous to the public and therefore should have a regulatory agency,” he said.

Musk once dubbed AI the “biggest existential threat” faced by humanity, and has long been a vocal supporter of regulating the emerging technology.

Many countries, including Canada, Australia and New Zealand, have recently moved to put measures in place that aim to protect the public from the potential dangers of AI. Earlier this month, the US proposed a set of principles that agencies should take into consideration when regulating AI. However, it is concerned that “over-regulation” could stifle innovation and is advocating a light-touch approach.

About Mia Hunt

Mia is a journalist and editor with a background in covering commercial property, having been market reports and supplements editor at trade title Property Week and deputy editor of Shopping Centre magazine, now known as Retail Destination. She has also undertaken freelance work for several publications including the preview magazine of international trade show, MAPIC, and TES Global (formerly the Times Educational Supplement) and has produced a white paper on energy efficiency in business for E.ON. Between 2014 and 2016, she was a member of the Revo Customer Experience Committee and an ACE Awards judge. Mia graduated from Kingston University with a first-class degree in journalism and was part of the team that produced The River newspaper, which won Publication of the Year at the Guardian Student Media Awards in 2010.

