In F-16 Dogfight, AI and Human Pilots Are ‘Roughly an Even Fight,’ Says Kendall

The artificial intelligence controlling the F-16 that Air Force Secretary Frank Kendall flew in last week matched up well against an experienced human pilot in dogfights, Kendall said May 8. Kendall said he was surprised by surging media interest in his May 2 autonomous flight at Edwards Air Force Base, Calif., and shared new details on how the flight went and how it informed his thinking on AI. 

Speaking at the Ash Carter Exchange, a conference in Washington, D.C., Kendall said the flight demonstrated “within-visual-range engagements” against a manned F-16, piloted by an Airman with “2,000 or 3,000 hours of experience.” Three different versions of the AI agent were tested across roughly 10 to 12 engagement scenarios, with Kendall controlling when the AI took over. The AI was then able to maneuver the aircraft and could simulate an “engagement” with the adversary using short-range missiles or the F-16’s gun. 

“It was roughly an even fight,” Kendall said. “But against a less experienced pilot, the AI, the automation would have performed better.” 

Pilots with 2,000-3,000 flight hours are considered “senior pilots,” one step below the top rating of “command pilot.”

Kendall emphasized the AI is not yet ready to be deployed—but suggested it is well on its way to being so: 

  • “It’s making very good progress” 
  • “We’re on the right path, and we’re going to get to where we’re headed for” 
  • “It’s easy to see a situation where they’re going to be able to do this job, generally speaking, better than humans.” 

Kendall’s enthusiasm for AI, or automation, dates back to a classified book he wrote for the Defense Advanced Research Projects Agency between stints in the Obama and Biden administrations, he said. In the book, he sought to envision the future of warfare across different domains and kept returning to the theme of automation. 

“There are just inherent limitations on human beings,” Kendall said. “And when we can build machines that can do these jobs better than people can do them, the machines are going to do the job. That’s sort of the whole history of automation and industrialization over the last couple of centuries, quite frankly.” 

The Air Force’s Collaborative Combat Aircraft program seeks to field unmanned, autonomous “wingmen” that will fly alongside manned fighters, augmenting their combat power and complicating the defensive challenges for adversaries. For the Space Force, automation could mean AI monitoring and controlling satellites, jobs computers may be better suited to do than humans.  

But increased reliance on AI raises concerns about human control and whether autonomous weapons—so-called “killer robots”—can take action without active approval from a human. Kendall suggested there are ways to address those concerns while still embracing automation. 

“There’s a lot of discussion in the community about the need to regulate AI, to regulate completely full autonomy,” said Kendall. “We already have rules that govern how people apply violence in warfare. They’re called the laws of armed conflict. But what I think we need to do is figure out how to apply them to these types of issues. At the end of the day, human beings are still responsible for creating, testing, and putting those machines out and using them. So we have to figure out how to hold those people accountable to ensure that we have compliance with the norms we all agree to.” 

On that front, Kendall added, the U.S. may be at a disadvantage if adversaries opt to “turn the dial” on AI and prioritize lethality over minimizing collateral damage—making the software less cautious about whether a target constitutes a threat. 

Regardless, Kendall said the U.S. must embrace automation in some form, especially as the complexity and speed of threats grow, outstripping humans’ ability to process information and make decisions fast enough. 

“I think the future is becoming clearer,” Kendall said. “I think the only question that really may remain is who’s going to get there first? And what are the constraints we want to place on ourselves that will limit our operational effectiveness compared to our adversaries and how we manage our way through that.”