Report: Lethal Autonomous Weapons Could Intensify Wars; US Backs Limits

The emergence of lethal autonomous weapons could intensify military competition, according to a report published Oct. 31 by the Stimson Center. It follows a meeting of the United Nations First Committee in which the U.S. became one of 70 countries to favor limiting the weapons.

The authors of the policy brief “Bolstering Arms Control in a Contested Geopolitical Environment,” Michael Moodie and Jerry Zhang, advocate for reinforcing the world’s fracturing arms control framework.

Disruptive new technologies such as artificial intelligence (AI), “heightened competition” among world powers, and a “rapidly deteriorating security environment” have already exerted “stiff pressure” on the “global arms control regime,” according to the report, which notes that the United Nations Conference on Disarmament has been “unable to reach a single meaningful new agreement” for 20 years.

At the same time, the authors acknowledge “plausible scenarios” in which AI “plunges the world into a devastating war by error,” concluding that lethal autonomous weapons—employing AI, nanotechnology, and advanced sensors—“could exacerbate competition and make conflicts more destructive.” Already, “the risk grows that they will fall into the hands of terrorists, criminals, warlords, or other malign actors.”

After opposing a treaty to govern such weapons in 2021, the U.S. became one of 70 countries to endorse the Joint Statement on Lethal Autonomous Weapons Systems, delivered to the U.N.’s First Committee on Oct. 21. The statement urges the adoption of “appropriate rules and measures, such as principles, good practices, limitations and constraints” on autonomous weapons to help allay “serious concerns from humanitarian, legal, security, technological and ethical perspectives.”

Despite a lack of “concrete outcomes,” the statement praises the “important” work of a U.N. Group of Governmental Experts exploring the implications of lethal autonomous weapons. It stresses the need for “human beings to exert appropriate control, judgment and involvement in relation to the use of weapons systems in order to ensure any use is in compliance with International Law, in particular International Humanitarian Law, and that humans remain accountable for decisions on the use of force.”

In remarks to the U.N. Security Council on Nov. 3, Secretary-General António Guterres warned that the “world is transforming at breakneck speed” and that lethal autonomous weapons together with cyber warfare “are presenting risks we barely comprehend and lack the global architecture to contain.”

Former Deputy Secretary of Defense Robert O. Work said in a call with reporters in September that Western militaries “see AI primarily as a means to help humans make better decisions” and that autonomous weapons are not being “designed to supplant the human decision-maker.”

Work was vice chair of the National Security Commission on Artificial Intelligence, which completed its work in 2021. He now serves as a member of the Board of Advisors for the AI-oriented Special Competitive Studies Project and is listed as chairing the board of AI contractor SparkCognition Government Systems.

“In the U.S. conception, our AI systems will be able to create their own courses of action to complete a task assigned to them by a human and choose among them,” Work explained. “But we are staying far away from any system that could choose its own goals and choose among them.”

However, he acknowledged that a weapon’s ability to “set its own objectives” is “going to be central to competition.”

“We don’t know how authoritarian countries will view this. Perhaps they will assign more decision-making authority to machines than the West would be comfortable doing … and it might be a fruitful area for discussion among all the competitors.”

Arms control talks could help “make sure we don’t get to the most dangerous systems that I think of,” Work said. “And those are systems that might be able to unilaterally order a preemptive or a retaliatory strike. That would be extraordinarily destabilizing, and I think it would be in the interest of all competitors to stay away from those type of systems.”

In a speech to Air Force Academy cadets in February, the Space Force’s Vice Chief of Space Operations Gen. David D. Thompson said the U.S. will need machines that decide to kill, and that confronting the inherent ethical dilemmas “can’t wait.”

The Vatican’s Archbishop Gabriele Caccia, permanent observer of the Holy See to the United Nations, delivered a statement to the First Committee on Oct. 12 arguing that lethal autonomous weapons “cannot maintain compliance with International Humanitarian Law” if they separate “the unique human capacity for moral judgment from actions that could result in bodily harm or even death.”