The Wide-Reaching Impact of DOD’s New Ethical Principles of AI

BOULDER, Colo.—Think of the Defense Department’s new list of Ethical Principles for Artificial Intelligence as a starting framework to help guide the thinking of people who will ultimately make the hard decisions: what’s right or wrong, then how to teach a machine to tell the difference.

In a special public lecture Feb. 25 at the University of Colorado, Mark Sirangelo, a member of the Defense Innovation Board, or DIB, offered his personal take on the new principles. He spoke unofficially and in a private capacity, as part of his faculty role as entrepreneur scholar in residence.

Defense Secretary Mark Esper announced the list’s adoption Feb. 24: five short paragraphs representing 15 months of work by the DIB, an independent committee that advises the Secretary of Defense. Esper approved the DIB’s recommendations with some tweaks. Where the DIB said what “should” be, Esper’s version says what “will” be, and the DIB’s “human beings” became “DOD personnel.”

With the whole world just figuring out how to use AI—which Sirangelo defines as “any technique that lets a machine solve a task in a way that we would do as a human”—he thinks the ideas presented in the DOD’s list could have a wider effect.

“What happens in the military, what happens in Silicon Valley, what happens in the global markets around us will create the foundation that every business, every sector, every area—your medical world, your shopping, everything from your heart surgeries to how you go buy something in a store—will have an impact from this. So what happens behind the scenes, and how it’s created, and the foundation by which it’s created, are really super important.”

An Army veteran, Sirangelo was an entrepreneur and executive in the aerospace industry before joining the DIB.

Some insights from the lecture:

  • People will be accountable. The principles are intentionally broad, and people will have to apply them in many circumstances. As with past technological shifts, accountability stays with people: “You can’t basically say, well, the machine did it. The question is, who in the chain is then going to be responsible?”
  • The anti-bias principle is a requirement, not a suggestion. “‘Deliberate steps to minimize unintended bias’”—attitudes of programmers that end up reflected in AI—“means ‘you have to do this.’ It’s not just a question of ‘want to.’ You are required, as someone in this area, to take deliberate steps.”
  • Traceability requires transparency. It “means that we know what’s going on—that it’s not just a person or a small group of people—and that all that transparency is there so that if anything goes awry, or we need to change it [an AI application] or somebody needs to adjust it or audit it, it’s possible to do that.”
  • Reliability means meeting a threshold. “There’s going to be a standard set, and if it doesn’t work, you can’t use it. That’s a pretty high hurdle in these kinds of things.”
  • The private sector will have a say. How the DOD’s principles get implemented “will develop in concert with the people who are actually doing most of the work these days.”

The principles apply to AI used in war and in the day-to-day. Ethics take the form of “how we program our computers and what we do with them,” he said.

He offered an example of a DOD drone flying over another country searching for “the most wanted person in the world.” The U.S. hasn’t been able to locate the target for the last seven years, but that person has “killed thousands of people, [and is] likely to kill thousands more.”

“The drone identifies him. It’s never 100 percent but a high 90 percent probability that that’s XYZ in the front seat. Except he’s in the front seat of a bus with 10 kids in the back. Does the drone take an action, or does the drone not take an action?

“That’s the kind of programming ethical questions people have.”
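To make that dilemma concrete, here is a minimal, hypothetical sketch of how a confidence threshold and a collateral check might gate an autonomous action and hand the decision back to a person. Nothing here comes from DOD; the names, threshold values, and rules are illustrative assumptions only.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    target_confidence: float  # identification confidence, e.g., 0.97 for "high 90s"
    civilians_nearby: int     # e.g., the 10 kids in the back of the bus


# Illustrative policy knobs -- assumed values, not any published DOD standard.
CONFIDENCE_THRESHOLD = 0.99
MAX_ALLOWED_CIVILIANS = 0


def decide(detection: Detection) -> str:
    """Return 'engage', or 'defer_to_human' when either check fails."""
    if detection.target_confidence < CONFIDENCE_THRESHOLD:
        # Identification is uncertain: the machine never acts on its own.
        return "defer_to_human"
    if detection.civilians_nearby > MAX_ALLOWED_CIVILIANS:
        # Even a confident identification is blocked by the collateral rule.
        return "defer_to_human"
    return "engage"


if __name__ == "__main__":
    # The lecture's scenario: high-90s confidence, 10 children on the bus.
    print(decide(Detection(target_confidence=0.97, civilians_nearby=10)))
    # -> defer_to_human: the machine escalates rather than deciding.
```

In this sketch the hard call is pushed back to a person, which is one way the accountability idea in the principles could show up in code; where the thresholds sit, and who sets them, is exactly the kind of question Sirangelo describes.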