Air Force Eyes More Uses for AI—with Guardrails

The Air Force and other military services are deploying artificial intelligence tools in their IT networks and Security Operations Centers where personnel monitor cyber threats, officials said May 6—but they are leveraging the emerging technology cautiously even as some say it is ready to transform the very nature of warfare.

Col. John W. Picklesimer, commander of the 67th Cyberspace Wing, said AI is more than just a buzzword. Airmen are using it to counter data overload in the SOC, a pervasive problem in defensive cyber operations across both the government and the private sector. Small SOC teams can easily be overwhelmed by the volume of alerts, most of which are false positives or routine attacks defeated by automated defenses.

“We’ve engaged with a couple of our industry partners to bring AI … into a SOC location, pull the data feeds, and then let the AI actually analyze and provide some of those quick insights,” Picklesimer said during a panel discussion at AFCEA International’s TechNet Cyber conference. Personnel could then more easily triage reports and “go and dig deep” on the significant ones, he explained.  
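Picklesimer did not describe the tooling in detail, but the workflow he outlines—pull the data feeds, drop the noise automated defenses have already handled, and let a model rank what is left for a human—can be sketched roughly as below. The data shapes, threshold, and llm_score() helper are assumptions for illustration, not the wing’s actual system.

```python
# Illustrative sketch only: triaging SOC alert feeds so analysts can "go and dig deep"
# on the few reports that matter. The data shapes, threshold, and llm_score() helper
# are assumptions, not the 67th Cyberspace Wing's actual tooling.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # originating feed, e.g. IDS, EDR, firewall
    signature: str   # detection or rule name
    blocked: bool    # already defeated by automated defenses?
    raw: str         # full alert text for the model to read

def llm_score(alert_text: str) -> float:
    """Placeholder for an AI call returning an estimated severity between 0 and 1."""
    # A real system would prompt a model with the alert and its context;
    # a toy keyword check stands in here.
    return 0.9 if "lsass" in alert_text or "unsigned" in alert_text else 0.1

def triage(alerts: list[Alert], threshold: float = 0.8) -> list[Alert]:
    """Drop routine alerts already handled automatically, then surface high scorers."""
    candidates = [a for a in alerts if not a.blocked]
    return [a for a in candidates if llm_score(a.raw) >= threshold]

if __name__ == "__main__":
    feed = [
        Alert("ids", "port-scan", blocked=True, raw="routine scan dropped at the boundary"),
        Alert("edr", "credential-dumping", blocked=False, raw="lsass access by unsigned binary"),
    ]
    for alert in triage(feed):
        print(f"Escalate to analyst: {alert.source} / {alert.signature}")
```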

Picklesimer said the wing had also been using NIPRGPT, an experimental generative AI chatbot developed by the Air Force Research Laboratory and cleared to run on the military’s unclassified global network—Non-secure Internet Protocol Router Network, or NIPRNet.

But he said Airmen shouldn’t use the commercial tools NIPRGPT was designed to mimic, like ChatGPT and other large language model chatbots: “For day-to-day use, NIPRGPT is what we’re allowed to use,” he said.

The chatbot has proved useful for summarizing large volumes of information, he said, and for consolidating data from many different sources. For example, it can monitor who has signed up for various commander’s programs and what level of training they have.

“Are they signed up for the right programs? The right activities? How do you automate the tracking of all those different things across disparate systems?” Picklesimer said.
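As a rough illustration of the kind of cross-system tracking he describes, the sketch below merges signup and training records from two separate systems and hands the combined view to a chatbot for a summary. The record shapes and the ask_chatbot() stub are hypothetical; NIPRGPT’s actual interface is not shown here.

```python
# Illustrative only: consolidating signup and training records from disparate systems,
# then asking a generative AI assistant to summarize the gaps. The record shapes and
# the ask_chatbot() stub are hypothetical, not NIPRGPT's actual interface.

def ask_chatbot(prompt: str) -> str:
    """Placeholder for a call to a chat assistant."""
    return "(summary would appear here)"

# Exports from two separate systems, keyed by member ID.
signups = {
    "A123": {"name": "Amn Doe", "program": "fitness leader"},
    "B456": {"name": "SSgt Roe", "program": "safety rep"},
}
training = {
    "A123": {"training_level": "complete"},
}

# Merge the two views so the assistant sees one consolidated picture.
lines = []
for member_id, signup in signups.items():
    level = training.get(member_id, {}).get("training_level", "none on record")
    lines.append(f"{signup['name']}: program={signup['program']}, training={level}")

print(ask_chatbot("Summarize who is signed up but missing training:\n" + "\n".join(lines)))
```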

He told Air & Space Forces Magazine after the panel that he had not personally noticed NIPRGPT hallucinating—making up untrue but convincing-sounding answers—which is something commercial generative AI chatbots are known to do.  

Nonetheless, he said he was more comfortable with use cases that involved summarizing or pulling together a defined data set, rather than asking more open-ended questions.   

Last week, Lt. Col. Jose Almanzar, commander of the Space Force’s 19th Space Defense Squadron, said NIPRGPT had “helped tremendously in mission planning and reducing administrative actions and helping to standardize a lot of the appraisal writing and award writing and whatnot.”  

Col. Heath Giesecke, director of the Army’s Enterprise Cloud Management Agency, also emphasized AI’s utility for back-office tasks.

“On the business side, the Army has adopted a large language model, [and] across the force we’re looking at specific use cases,” he said. 

The Army’s AI is called CamoGPT, although the service says it is not strictly a generative AI chatbot. “CamoGPT is a machine learning platform that optimizes equipment maintenance, logistics, and supply chain management using data analytics and algorithms,” the service said in March.

Giesecke gave one example of a successful use case: “In the HR domain, we took a bunch of [position descriptions] and enabled our contractor, the hiring agency, to use an LLM to reassess and reclassify hundreds or thousands … in a single day,” he said. 
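A bare-bones sketch of that pattern—batch-classifying position descriptions with a language model—might look like the following. The classify_pd() stub and the job-series labels are invented for illustration, not the Army’s or the hiring agency’s actual workflow.

```python
# Rough sketch of bulk-reclassifying position descriptions (PDs) with an LLM, in the
# spirit of the Army HR example. classify_pd() is a stand-in for whatever model the
# hiring agency used; the labels here are illustrative only.

def classify_pd(text: str) -> str:
    """Placeholder LLM call that returns a proposed job-series label for one PD."""
    # A real pipeline would prompt a model; a toy keyword rule stands in here.
    return "2210 IT Specialist" if "cloud" in text.lower() else "0301 Misc. Admin"

def reclassify(position_descriptions: list[str]) -> list[tuple[str, str]]:
    """Run every PD through the classifier in one batch pass."""
    return [(pd, classify_pd(pd)) for pd in position_descriptions]

if __name__ == "__main__":
    pds = [
        "Administers cloud infrastructure and automates deployments for the command.",
        "Manages records, schedules, and correspondence for the directorate.",
    ]
    for pd, label in reclassify(pds):
        print(f"{label}  <-  {pd[:50]}")
```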

He added that an Army priority right now is that “we’re really trying to centralize the use of generative AI and make sure things aren’t being done on personal devices or personal accounts.” 

Col. Dennis Katolin, the assistant chief of staff for operations for Marine Corps Forces Cyberspace Command, said he wants to use AI for three primary tasks:

  • First, discovery: “something that’s able to discover new information.”
  • Second, inference: the ability to “infer from that data. What is going to happen, something that’s anticipatory.”
  • Third, “Synthesis: Generating a proposed solution to that problem set.”

Yet he also noted the danger of hallucinations.

“When it comes to AI, I have a bias that I think most of us up here do when it comes to artificial intelligence,” he said. “This group thinks it’s going to be great, that it’s going to accelerate everything. But I think it does introduce certain liabilities as well. There are times when ChatGPT has been incorrect. There’s times when AIs have been wrong.” 

He compared the process of getting to know the limitations of an AI to that of getting to know a colleague: “Do you trust that individual making a recommendation? ‘Sir, we’ve got to do this!’ Does Col. Katolin have a really good track record? Then you’d be like, ‘Hey, I didn’t check his homework. But he’s been on the money before.’”

It is a very different calculus if the colleague—or the AI—has been wrong 75 percent of the time, he said. 

“I think it does offer some risk, but I think that risk is mitigated by training with it, learning it … so we can build that trust and build that comfort level.” 

Katolin went on to say that the rise of cyber and information operations had had an impact on warfare unmatched since the introduction of military aviation. “From our perspective, there are five sort of truths [about information operations]: that information is inherently global, persistent, instantaneous. It compresses the levels of war and requires maneuver in all domains,” he said.