NATO’s Plan to Grow Trust in Military AI

Western militaries—already “late to the party” in the creation of artificial intelligence—risk unforeseen consequences by adopting AI made for the commercial sector, said NATO’s David van Weel. 

That’s why the alliance is publicizing a new plan it hopes will get member governments involved in AI development from the start, both for security reasons and to “bridge a gap of distrust” in the technology.

Though he acknowledged that sharing the plan is a bit out of character for NATO, all 30 nations, including the U.S., have signed on.

“We are not known, at NATO, for publishing a lot,” said van Weel, assistant secretary general for emerging security challenges. “We try to keep secrets a lot.”

Van Weel introduced NATO’s AI strategy, published in October, during an American Enterprise Institute webinar Dec. 7. The webinar “Artificial Intelligence: Can We Go From Chaos to Cooperation?” accompanied the release of AEI’s paper, “Artificial Intelligence: The Risks Posed by the Current Lack of Standards.”   

As a “pervasive technology,” AI will “have an impact on everything we do,” said van Weel. Setting aside “the killer robot discussion,” van Weel dismissed the notion of excluding AI from all military uses: “The idea that AI would not be used for defense purposes is like saying that the steam engine, when it was invented, could only be used for commercial purposes, or electricity would not be supplied to the military.”

But being behind the private sector in AI development has left governments “in a situation where regulation comes after the broad use and misuse of technology,” van Weel said. “So we need to be early to the party and make sure that we understand new technologies, not to militarize them—no, but to understand the security and defense implications.”

Van Weel said military uses of AI should be regulated, but “you don’t want to over-regulate if you don’t know that you can defend yourself within the regulations that you’re proposing.” He offered the example of drone swarms “that collectively, powered by AI, are able to follow an intrinsic pattern—[to attack,] for example, our water supply or one of our cities. So how do we defend against them? Well, we can’t, frankly, because you need AI in that case in order to be able to counter AI.”

But even among peers, van Weel said, he encounters skepticism.

“I’ve been on panels quite a lot where people say, ‘Well, please, I don’t trust the defense use of artificial intelligence,’ and that’s something we need to address,” van Weel said. “We are a trusted user. We—NATO, all the 30 allies—we all subscribe to the democratic values. We all subscribe to the values our societies are built upon, and we’re there to protect them.” 

NATO’s strategy proposes six principles of responsible use of AI similar to the Defense Department’s Ethical Principles for Artificial Intelligence adopted in 2020, but with a plan to verify that the principles are followed. According to NATO’s list of attributes, military AI should be lawful; responsible and accountable; explainable and traceable; reliable; governable; and built with bias mitigation in mind.

To engender confidence in the principles, NATO has also proposed a new initiative. 

“Principles are nice, but they need to be verifiable as well, and they need to be baked in from the moment of the first conception of an idea up until the delivery,” van Weel said.

To that end, NATO wants to create test centers to verify new AI, co-located with universities throughout the alliance. This includes “existing test centers with knowledge, where allies that are thinking about co-developing AI for use in the defense sector can come in and verify, with protocols, with certain standards that we’re setting, that this AI is actually verified,” van Weel said.

“It’s not a world standard yet, but if the 30 nations, Western democracies, start out by shaping industry to adhere by these standards, then I feel that we are making an impact, at least in the development of AI and hopefully also in the larger world setting standards.”