Agencies ‘don’t have the tools’ to head off ChatGPT threat to national security, warns Pentagon’s AI chief

The US Department of Defense’s chief digital and AI officer Craig Martell said he is “scared to death” of the potential for generative artificial intelligence systems like ChatGPT to deceive citizens and threaten national security.
Speaking on 3 May at the Armed Forces Communications and Electronics Association (AFCEA) TechNet Cyber conference in Baltimore, Martell, who heads the Chief Digital and Artificial Intelligence Office, said that ChatGPT's ability to fabricate plausible content such as academic essays meant it could mislead citizens.
“My fear is that we trust it too much without the providers of [a service] building into it the right safeguards and the ability for us to validate the information,” he said.
“I’m scared to death. That’s my opinion.”
Martell said that AI-generated influence campaigns could be used to sway Americans, and highlighted that ChatGPT's use to date showed how the technology might be deployed to spread disinformation at scale.
“[ChatGPT] has been trained to express itself in a fluent manner. It speaks fluently and authoritatively. So, you believe it even when it’s wrong… and that means it is a perfect tool for disinformation,” he said.
“We really need tools to be able to detect when that’s happening and to be able to warn when that’s happening. And we don’t have those tools. We are behind in that fight.”
Read more: UK civil servants told to exercise caution around AI chatbot use
Leveraging ChatGPT to best advantage
Martell took on his current position at the Pentagon last year after a lengthy private-sector career heading machine learning at firms such as ride-hailing service Lyft and file-hosting service Dropbox.
Martell’s fears were not echoed by all defence personnel. Lieutenant General Robert Skinner, director of the US Defense Information Systems Agency (DISA), spoke a day before Martell at the AFCEA conference and used generative AI to clone his voice at the beginning of his keynote address.
Skinner said he saw the potential of generative AI models to do harm as a “challenge” to be met by agencies that stood to gain by turning such models to their advantage.
“Those who harness [generative AI] and can understand how to best leverage it, but also how to best protect against it, are going to be the ones that have the high ground,” he said.
In the same week as the conference, US vice president Kamala Harris met with the CEOs of technology companies Alphabet (which owns Google), Anthropic (an AI safety and research company), Microsoft and OpenAI to discuss the government’s role in helping the private sector mitigate the risks around AI.
The meeting dovetails with the Biden administration’s plan to roll out a set of initiatives aimed at putting citizens first in the advancement of AI technologies. Key to its aims is a plan to invest US$140m in seven new AI research institutes. Over the coming months, the Office of Management and Budget (OMB) is also planning to issue guidance to federal agencies on how best to use AI tools.
In 2022, US president Joe Biden announced plans to create an AI Bill of Rights to protect citizens from automated systems “that threaten the rights of the American public”.
Biden said in April this year that AI could play an instrumental role in tackling disease and climate change, though noted concerns about the challenge it posed to both national security and economic stability.
Read more: President Biden sets out blueprint for AI Bill of Rights