US Department of Defense to consider ethical AI recommendations

By Mia Hunt on 05/11/2019 | Updated on 24/09/2020
Former Secretary of Defense Ash Carter (left) with Eric Schmidt, Google’s former chief executive and chairman of the Defense Innovation Board. (Photo by Army Sgt. Amber I. Smith, courtesy US Secretary of Defense/flickr).

The US Defense Innovation Board has unanimously agreed on five artificial intelligence (AI) ethics principles that it recommends the Department of Defense (DoD) consider when implementing the emerging technology.   

The board – which is chaired by former Google chief executive Eric Schmidt, and includes distinguished academics, think tank founders, and senior executives of companies such as Microsoft and Facebook – began with a list of 25 draft principles before settling on the final five, which outline that use of AI should be responsible, equitable, traceable, reliable, and governable.

The board recommends that humans should exercise appropriate levels of judgment and remain responsible for the development, deployment, use and outcomes of AI systems; that DoD should take deliberate steps to avoid unintended bias in the development and deployment of AI systems that would inadvertently cause people harm; and that AI systems should have an “explicit, well-defined domain of use, and the safety, security and robustness of such systems should be tested and assured across their entire life cycle within that domain of use”.

It also recommends that technical experts possess an appropriate understanding of the technology, development processes and operational methods of the department’s AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.

Finally, board members agreed that DoD AI systems should be designed and engineered to fulfil their intended function while possessing the ability to detect and avoid unintended harm or disruption. Should the technology display unintended behaviours, there should be safeguards in place that enable it to be shut down by a human or automated system.

Extensive study

In July 2018, DoD leaders tasked the Defense Innovation Board with proposing a set of AI ethics principles for the department's consideration. Board members met at a public meeting in Washington last week to vote on the recommended principles, following an extensive study that included numerous in-depth discussions with experts, interviews with more than 100 stakeholders, and monthly meetings in which representatives of partner nations participated.

The board’s draft document states that it consulted with “human rights experts, computer scientists, technologists, researchers, civil society leaders, philosophers, venture capitalists, business leaders and DoD officials”.  

“The valuable insights from the [board] are the product of 15 months of outreach to commercial industry, the government, academia and the American public,” said Air Force Lt. Gen. John N.T. ‘Jack’ Shanahan, director of the Joint Artificial Intelligence Center. “The [board’s] recommendations will help enhance the DoD’s commitment to upholding the highest ethical standards as outlined in the DoD AI Strategy, while embracing the US military’s strong history of applying rigorous testing and fielding standards for technology innovations.” 

The DoD’s AI strategy falls under the framework of the National Defense Strategy, which supports the research and use of AI as a warfighting tool. As part of this strategy, the department will take the lead in developing ethical AI guidelines.

Global consideration

The DoD’s steps towards ethical use of AI mirror those being taken by other countries.

Last month, Canada’s CIO Strategy Council published national standards for the ethical design and use of AI. Earlier this year, the Australian government launched a national consultation to gather feedback on proposals for an ethical framework to guide the use of AI. And in New Zealand, the government has been urged to set up an AI watchdog – with a report by researchers at the University of Otago warning of risks such as bias in the operation of predictive algorithms.

About Mia Hunt

Mia is a journalist and editor with a background in covering commercial property, having been market reports and supplements editor at trade title Property Week and deputy editor of Shopping Centre magazine, now known as Retail Destination. She has also undertaken freelance work for several publications, including the preview magazine of international trade show MAPIC and TES Global (formerly the Times Educational Supplement), and has produced a white paper on energy efficiency in business for E.ON. Between 2014 and 2016, she was a member of the Revo Customer Experience Committee and an ACE Awards judge. Mia graduated from Kingston University with a first-class degree in journalism and was part of the team that produced The River newspaper, which won Publication of the Year at the Guardian Student Media Awards in 2010.
