The U.S. Department of Defense said on Aug. 10 that it had established a task force to analyze and integrate generative artificial intelligence (AI) tools, such as large language models, across the department.
According to the department, Task Force Lima “will assess, synchronize, and employ generative AI capabilities” across the department, ensuring the United States remains at the forefront of cutting-edge technology while safeguarding national security.
The task force was established under the direction of Kathleen Hicks, the deputy secretary of Defense, and will be led by Craig Martell, the department’s chief digital and artificial intelligence officer.
Mr. Martell said the United States will need “to identify proper protective measures and mitigate national security risks that may result from issues such as poorly managed training data” when adopting generative AI.
“We must also consider the extent to which our adversaries will employ this technology and seek to disrupt our own use of AI-based solutions,” Mr. Martell said in a press release.
Generative AI refers to AI algorithms that create new content “based on the data they have been trained on,” according to the World Economic Forum. The U.S. military aims to apply the technology across warfighting, business affairs, health, readiness, and policy.
Last month, U.S. Air Force Colonel Matthew Strohmeyer told Bloomberg that the U.S. military had conducted live assessments of generative AI models to gauge their viability for decision-making.
Colonel Strohmeyer said the AI tools could process “secret-level and classified data” within 10 minutes, a task that would typically take a human hours or days to complete.
“That doesn’t mean it’s ready for primetime right now,” he told the news outlet. “But we just did it live. We did it with secret-level data.”
Declaration on Responsible Use of AI in Military
Earlier this year, the United States launched an initiative promoting international cooperation on the responsible use of AI and autonomous weapons by militaries, seeking to impose order on an emerging technology that has the potential to change the way war is waged.
Bonnie Jenkins, the State Department’s undersecretary for arms control and international security, said the U.S. political declaration contains guidelines outlining best practices for the responsible military use of AI.
Ms. Jenkins said that advancements in this technology “will fundamentally alter militaries around the world,” as demonstrated by the Ukrainian army’s application of AI to analyze battlefield situations.
“As a rapidly changing technology, we have an obligation to create strong norms of responsible behavior concerning military uses of AI, and in a way that keeps in mind that applications of AI by militaries will undoubtedly change in the coming years,” she said on Feb. 16.
The U.S. declaration has 12 points, one of which highlights the need to “maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”
Military AI Increases Strategic Risk
The Center for a New American Security warned in a report that the use of artificial intelligence in the military, combined with ongoing tensions between the United States and communist China, increases the risk of a strategic catastrophe.
The report says that “the intensifying geopolitical rivalry between the United States and [China]” is combining with “the rapid development of artificial intelligence technologies, including for military applications.”
“Taken together, the emergence of military AI will likely deepen U.S.-China rivalry and increase strategic risks,” the report states.
The report examines the potential “pathways” through which military AI could undermine global stability or contribute to a new war, and it offers policy recommendations for avoiding such a catastrophic conflict.
Andrew Thornebrooke and the Associated Press contributed to this report. Article cross-posted from our premium news partners at The Epoch Times.