Unclear Definitions Put Success in Fielding Autonomous Aircraft at Risk

Successfully introducing new unmanned aircraft with varying degrees of autonomy into the Air Force is at risk as long as there is widespread disagreement about what an “autonomous” aircraft actually is, according to an upcoming paper from AFA’s Mitchell Institute for Aerospace Studies. The think tank urges the Air Force to set common definitions so that requirements-setters and engineers don’t work at cross-purposes.

The defense community is rapidly coming to “a consensus that unmanned aircraft will be essential to future force designs,” Mitchell researcher Heather Penney said in a discussion with reporters ahead of publication of a new paper, “Beyond Pixie Dust: A Framework for Understanding and Developing Autonomy in Unmanned Aircraft.” Autonomous aircraft can “affordably increase” the size of the air fleet, she said, which is essential because of greater expected wartime attrition “than we’ve experienced over a generation.” Autonomous aircraft will also offer new operational concepts that will present enemies with “operational dilemmas.”

But there are almost as many definitions of “autonomy” as there are people working in the field, she said.

“We need to be on the same page,” Penney warned. Without “a shared understanding across the entire enterprise” of what autonomy is and what it means for unmanned aircraft, she said, “we face a very real risk of failing in this endeavor.” Common understanding is needed to avoid development delays, failed acquisitions, late-to-need operational concepts, and resistance in the force to accepting and using autonomous systems.

She said the Air Force’s plans and requirements shop recognizes the need for a defined autonomy lexicon and is working on “a framework,” but “our development is further along than where they are, right now.”

Existing Pentagon methods of categorizing unmanned aircraft are based on how large and heavy the aircraft are, or the altitudes at which they fly, but don’t assess their degree of autonomous action, she noted.

Automation, Penney explained, describes how a washing machine works: it can be programmed to carry out a set series of tasks, but it can’t sense an imbalanced load, stop, and “re-balance itself.” An autonomous system, on the other hand, can sense and adapt to new conditions; she drew an analogy to the “R2-D2” robot in Star Wars: a machine able to assess, anticipate, and take unplanned action within the limits of its programming.

A “you know it when you see it” definition of autonomy won’t work, Penney asserted.

Mitchell “aligned our understanding of ‘automation’ with deterministic programming, and ‘autonomy’ with machine learning,” because that pairing “matches our expectations of behavior.” Operators tend to believe that automated systems are programmed with “fixed and highly scripted” behavior, “like an autopilot: predictable, rigid, repeatable … The same inputs yield the same outputs,” and the system cannot respond in real time to the unexpected. An autopilot can’t fly itself around a thunderstorm, she noted. Operators, by contrast, think of autonomous systems as independent, self-directed, and adaptive.
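
The distinction is easier to see in a toy example. The sketch below is not from the Mitchell paper; it is a minimal, hypothetical Python illustration of the behavior Penney describes, with made-up waypoint names. The “automated” controller replays a fixed script no matter what its sensors report, while the “autonomous” one revises its route when it detects a thunderstorm, but only within limits set by its designers.

    # Hypothetical sketch (not from the Mitchell paper): automation replays a
    # fixed script; autonomy senses conditions and adapts within programmed limits.

    from dataclasses import dataclass

    @dataclass
    class Conditions:
        thunderstorm_ahead: bool = False

    def automated_autopilot(waypoints: list, conditions: Conditions) -> list:
        """Deterministic: the same inputs always yield the same outputs.
        The scripted route is flown regardless of what the sensors report."""
        return list(waypoints)  # no reaction to conditions

    def autonomous_autopilot(waypoints: list, conditions: Conditions) -> list:
        """Adaptive: the system senses its environment and revises the plan,
        but only within bounds its designers allowed (here, a simple detour)."""
        route = []
        for wp in waypoints:
            if conditions.thunderstorm_ahead and wp == "CHARLIE":
                route.append("CHARLIE_DETOUR")  # hypothetical alternate waypoint
            else:
                route.append(wp)
        return route

    if __name__ == "__main__":
        wx = Conditions(thunderstorm_ahead=True)
        plan = ["ALPHA", "BRAVO", "CHARLIE", "DELTA"]
        print("Automated: ", automated_autopilot(plan, wx))   # flies into the weather
        print("Autonomous:", autonomous_autopilot(plan, wx))  # reroutes around it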

Penney said the Society of Automotive Engineers offers a good starting point in its standards for what constitutes automation and autonomy in driven and self-driven cars. On the SAE scale, which runs from Level 0 to Level 5, Levels 0-2 mean the human is driving and must supervise operation, even with features such as blind-spot warnings, automatic braking, lane centering, and adaptive cruise control engaged. At Levels 3 and 4, the human is not driving, and the vehicle can operate in most scenarios, with the human taking over at times. At Level 5, the car can operate safely under all conditions and doesn’t require human intervention.
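
For reference, the SAE-style scale Penney points to can be summarized in a simple data structure. The snippet below is an illustrative encoding, not part of the Mitchell framework or the SAE standard itself; the one-line descriptions paraphrase the levels as the article characterizes them.

    # Illustrative encoding of the SAE-style driving-automation levels (paraphrased).
    from enum import IntEnum

    class DrivingAutomation(IntEnum):
        LEVEL_0 = 0  # No automation: the human performs all driving tasks
        LEVEL_1 = 1  # Driver assistance: a single assist feature, human still driving
        LEVEL_2 = 2  # Partial automation: combined assists, human must supervise constantly
        LEVEL_3 = 3  # Conditional automation: vehicle drives in some conditions, human takes over on request
        LEVEL_4 = 4  # High automation: vehicle drives itself in most scenarios within its design domain
        LEVEL_5 = 5  # Full automation: vehicle operates safely everywhere, no human intervention required

    def human_must_supervise(level: DrivingAutomation) -> bool:
        """At Levels 0-2 the human is driving and must supervise, even with assists engaged."""
        return level <= DrivingAutomation.LEVEL_2

    if __name__ == "__main__":
        for level in DrivingAutomation:
            print(level.name, "- supervision required:", human_must_supervise(level))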

Using a similar model will help operators tell engineers what they need, and don’t need, the autonomous platform to do to achieve the desired effect. In some cases, Penney said, “less autonomy is the better solution.” The model will allow definition of levels of autonomy within categories of unmanned aircraft, and “levels of behavior within each category.”

Engineers can then take what operators need and break it down into functions, technologies, and data, so that both groups’ expectations match.

The standards Mitchell will propose apply not to weapons such as loitering munitions, but to unmanned aircraft that are expected to return from their missions, Penney noted.

Having a common lexicon is crucial, she said, because pilots who have to collaborate with autonomous machine wingmen will not trust them unless they understand exactly how these new systems will behave under a range of circumstances. Without a common lexicon, confusion across the Defense Department is almost guaranteed, Penney said. And in discussions with many parts of the community, lack of trust was consistently cited as the No. 1 risk to getting autonomous systems fielded.

Common definitions are also crucial to getting agreement between Congress, “on what it thinks it’s buying,” and what the Air Force’s “strategic planners envision, what operational warfighters need, and what aerospace engineers” expect to deliver. Differing expectations will thwart rapid development, Penney said, and that is a peril because adversaries are moving quickly on autonomous systems.

Senior leaders are also divesting systems and “collapsing” force structure “on the belief that these future systems will mature and field on time, and do the things they think they’re going to do,” Penney said. Mitchell wants those decisions to be informed by a true understanding of what unmanned aircraft can or will do.