New Report: How the Air Force Measures and Trains for Readiness Needs a Revamp

As the Department of the Air Force undergoes a sweeping “re-optimization” review focused on its readiness for great power competition, a new research report cites gaps in the department’s methods for measuring readiness and suggests advanced new simulators and related technologies could help close those gaps.

The report, published Oct. 19 by the federally funded RAND Corporation, sheds light on senior leadership’s concerns “that the current readiness assessment system is not providing sufficient insight into the capability of the force to meet future mission requirements because of the lack of quality outcome measurements in the readiness system.”

Those concerns about readiness seemingly culminated in a series of remarks by Air Force Secretary Frank Kendall in September.  

“If we were asked tomorrow to go to war against a great power, either Russia or China, would we be really ready to do that?” Kendall asked rhetorically during a livestreamed discussion with Chief Master Sergeant of the Air Force JoAnne S. Bass. “I think the answer is not as much as we could be, by a significant margin. And we’ve got to start spending a lot of time thinking about that and figuring out what we’re going to do about it.”  

Then, in his keynote address at AFA’s Air, Space & Cyber Conference a week later, Kendall declared that “we must ensure that the Air Force and Space Force are optimized to provide integrated deterrence, support campaigning, and ensure enduring advantage.” 

The re-optimization review, currently under way and set to produce recommendations by January, has five lines of effort, one of which is focused on “how we create, sustain and evaluate readiness across the Air and Space Forces,” Kendall said. 

The RAND report, commissioned by the Headquarters Air Force Training and Readiness Directorate, could inform the review effort and its recommendations. Relying on interviews with subject matter experts and Air Force leaders, RAND researchers homed in on issues with the department’s current readiness metrics and recommendations for improving the test and training infrastructure to close those gaps. 

“The [Air Force] is not measuring the most useful things to gain insight on the readiness of the force,” researchers concluded. “Legacy metrics focus on the ability of individual service members to conduct individual missions. But most National Defense Strategy missions require an integrated approach: Both USAF training requirements and how training is achieved need to change to capture more meaningful readiness metrics.” 

Researchers highlighted three interconnected issues with current readiness metrics: 

  • They fail to measure integration across services or even Unit Type Codes (UTCs), instead asking individual units and service members to assess their own readiness in isolation.
  • They don’t reflect how the Air Force actually presents forces; individual units are assessed for different missions or environments, but the service then has to aggregate and extrapolate for the force packages it actually uses to respond to situations. 
  • “Opportunities can be nonexistent or scarce for units to practice and demonstrate proficiency for certain capabilities,” researchers said, meaning leaders have to make educated guesses as to their forces’ readiness for certain missions. 

“Addressing these gaps is not a simple matter of adjusting the current training infrastructure,” the researchers wrote. “Qualitatively different capabilities are needed to scale, integrate, and present complex scenarios and environments, which could be scheduled across units to aggregate force packages and executed to align with readiness reporting cycles. Furthermore, to fully close the gap in readiness assessment, the capabilities must allow some form of data collection to capture necessary and interpretable readiness measurements.” 

The answer, the report suggests, is substantial investment in training infrastructure, particularly in simulators and “synthetic environments” that would allow the department to conduct more integrated training without massive, costly exercises, and to test troops’ readiness against threats and missions that might be impossible to recreate in the real world.

Such improvements would also provide more objective data on readiness, the report notes, helping leadership make more informed decisions. 

In discussions with leaders from Air Combat Command, Air Force Global Strike Command, and Air Mobility Command, researchers found a common requirement for more and better distributed mission operations training. Meeting that requirement would mean expanding simulator training to more communities, standardizing the simulators and synthetic training environments currently in use, and upgrading the IT infrastructure to support the connectivity such training demands—a frequent concern across many of the department’s technology efforts.

The Air Force’s main effort to address that demand is the evolving “Common Synthetic Training Environment,” which has been in development for several years. “This approach intentionally shifts the focus of training capabilities away from system-specific simulators to a modular, open architecture that directly supports integrated training across air platforms,” RAND researchers wrote. But they cited a range of technical hurdles, from scalability to realism to integration, that continue to challenge developers.

The report concluded with five recommendations for the Air Force to consider: 

  • Define readiness more broadly and add specific measures within that definition 
  • Create processes to determine readiness for force packages that go beyond individual unit commanders 
  • Add a field in the Defense Readiness Reporting System–Strategic where commanders can provide qualifying information about their subjective assessments
  • Establish a working group focused on data and measurement in support of the Common Synthetic Training Environment 
  • Factor in readiness assessment gaps when establishing priorities for training infrastructure