Pentagon Brings ChatGPT into Its Official AI Tool Set



The Pentagon’s adoption of generative artificial intelligence tools—including the recent addition of the world’s most popular model, ChatGPT—promises more efficient work for Department of Defense personnel but also poses risks unless users remain vigilant, experts told Air & Space Forces Magazine.

The department announced Feb. 9 that it was adding OpenAI’s ChatGPT to its GenAI.mil platform, which uses machine learning models trained on large data sets to function as chatbots, producing text, images, or software code based on unclassified information.

GenAI.mil launched in December with Google’s Gemini for Government and later added xAI’s government suite, based on its Grok model. The platform has already surpassed one million unique users.

The addition of ChatGPT may fuel further interest and growth. OpenAI is widely credited with launching the boom in generative AI, and ChatGPT remains the most popular tool of its kind. According to a January study of web traffic, ChatGPT accounted for nearly 65 percent of generative AI chatbot site visits among the general public, triple the share of Google’s Gemini.

Gregory Touhill, a retired Air Force brigadier general who now serves as the director of cybersecurity at Carnegie Mellon’s Software Engineering Institute, told Air & Space Forces Magazine that expanded access to AI is important because Airmen and Guardians need it to be competitive.

“I think it’s important for our Airmen today, we want our Airmen to be well prepared for the future, and the future is racing toward us now,” Touhill said. “AI is a tool that our Airmen and our Guardians can use to obtain decisive capabilities in the cyber domain.”

Touhill and SEI are currently working with the Pentagon and other agencies to develop risk management processes for using AI in government settings, he said.

Touhill added that he is confident AI can help service members automate or eliminate tasks, letting them do more, faster. More importantly, he said, AI could free Airmen and Guardians from routine work so they can devote more time to higher-order tasks.

Caleb Withers, a research assistant in the Technology and National Security Program at the Center for a New American Security, foresees AI benefiting prototyping, wargaming, research, and bureaucratic paper-pushing. “With wide adoption, I imagine these tools will quickly become some of the most used,” Withers said.

But Touhill and Withers cautioned that AI also poses real risks as its use grows. “The security challenge is the fusion of hardware, software, and wetware,” Touhill said, the last term referring to the humans using AI. “We don’t want our Airmen and Guardians disclosing information into a system not designed to process that information. Once it’s in, it’s part of the system; it’s not like you can back away and wipe it clean.”

Using official defense applications rather than open commercial tools is one step toward preventing information loss. Good training, common sense, and protocols will also help, Withers said, as will a skeptical approach to AI use.

“These systems are not yet fully reliable, and in some cases can be quite unreliable or fail,” Withers said. “There’s a risk of overconfidence in them.”

In June 2024, the Air Force Research Laboratory rolled out its own generative AI tool for the Air Force, called NIPRGPT. NIPR stands for Non-secure Internet Protocol Router, the network that carries the military’s unclassified internet traffic. Adoption was swift: 80,000 users in NIPRGPT’s first three months and 700,000 before the pathfinder program was shut down in December to make way for GenAI.mil.

The Army had also developed its own generative AI tool, known as CamoGPT. The two services briefly clashed last April when the Army blocked NIPRGPT from its networks, citing cybersecurity and data concerns. Chief Technology Officer Gabriel Chiulli of the Army’s Enterprise Cloud Management Agency told Air & Space Forces Magazine at the time that blocking NIPRGPT was part of a wider move by the Army to shift from experimentation with AI tools to full implementation.

“The block was focused on getting us to a governance framework for AI used in a production state,” he said. “We were trying to make sure we had the guardrails in place for how we’re doing AI for real.”

Now GenAI.mil is the Pentagon’s only approved generative AI platform, giving the military a single place to adopt the latest models tailored for government use.

Touhill called NIPRGPT a “bold initiative,” an early effort to get a specific large language model in the hands of Airmen and Guardians. “It’s like training on a simulator that actually has some live data,” Touhill said. “It was a good first step in getting Airmen and Guardians comfortable using AI.”

GenAI.mil is likely to expand to include other AI tools, Withers predicts, just as it recently added ChatGPT. Different generative AI models excel at different tasks, he said, and no single solution will let the DOD maximize AI’s potential across different problems.
