There are many use cases for different kinds of artificial intelligence in the Space Force, but the service is moving cautiously towards adoption, hampered in part by a disconnect with vendors, officials said May 1.
At the AFCEA Northern Virginia chapter’s Space Force IT Day in suburban Virginia, Lt. Col. Jose Almanzar had a blunt answer when asked how the unit he commands, the 19th Space Defense Squadron, is using AI.
“To make a long story short, we’re not,” he said.
However, he told the audience of defense industry contractors, “We do know how to spell AI, so that’s good.”
Joking aside, Almanzar said his squadron is looking at using NIPRGPT, a generative AI model cleared to run on the military’s Non-secure Internet Protocol Router Network (NIPRNet), an unclassified global network run by DOD.
NIPRGPT is an experimental chatbot developed by the Air Force Research Laboratory, which Almanzar said had “helped tremendously in mission planning and reducing administrative actions and helping to standardize a lot of the appraisal writing and award writing and whatnot.”
But the 19th, as one of the Space Force units responsible for tracking objects in space, has a big data problem, and it needs to use other kinds of AI to get after that, Almanzar said.
“Where we need help is, we have a lot of data,” he said, explaining that the squadron receives about 1 million observations a day from the service’s Space Surveillance Network, which comprises more than 20 different sensors in space and on the ground. That’s on top of a daily feed of commercial space domain awareness (SDA) data compiled by the service’s Joint Commercial Office, he said.
Validating data from new private sector sensors for inclusion in the Space Force’s definitive data catalog is very time-consuming, Almanzar said.
“Having AI tools to help our analysts in [Space Operations Command] and [Space Systems Command] adjudicate the information that these new sensors bring on so we can validate [it] and use it in our gold standard catalog would be extremely helpful,” he said.
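As a rough illustration of the kind of adjudication Almanzar describes, the sketch below compares a hypothetical new sensor’s observations of already-cataloged objects against catalog-predicted positions and flags whether the residuals fall within a chosen tolerance. The function names, thresholds, and numbers are all invented for illustration and do not reflect the service’s actual validation pipeline.

```python
# Illustrative sketch only: one way an automated check might compare a new
# sensor's observations of known objects against positions predicted from an
# existing catalog. Names, thresholds, and data shapes are hypothetical.
import numpy as np

def sensor_residuals(observed_positions_km, predicted_positions_km):
    """Per-observation miss distance (km) between what the sensor reported
    and what the catalog predicted for the same objects at the same epochs."""
    diff = np.asarray(observed_positions_km) - np.asarray(predicted_positions_km)
    return np.linalg.norm(diff, axis=1)

def assess_sensor(observed_positions_km, predicted_positions_km,
                  accept_threshold_km=1.0, accept_fraction=0.95):
    """Flag a candidate sensor as consistent with the catalog if most of its
    observations fall within a chosen residual threshold."""
    residuals = sensor_residuals(observed_positions_km, predicted_positions_km)
    within = np.mean(residuals <= accept_threshold_km)
    return {
        "median_residual_km": float(np.median(residuals)),
        "fraction_within_threshold": float(within),
        "consistent": bool(within >= accept_fraction),
    }

# Synthetic example: three observations with small offsets from prediction.
predicted = [[7000.0, 0.0, 0.0], [0.0, 7000.0, 0.0], [0.0, 0.0, 7000.0]]
observed = [[7000.3, 0.1, 0.0], [0.2, 6999.8, 0.1], [0.0, 0.4, 7000.2]]
print(assess_sensor(observed, predicted))
```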
Machine learning could also help with preparing Conjunction on Launch Assessments (COLAs), which the 19th provides to the FAA as part of the aviation regulator’s approval process for space launches in the United States.
COLAs are designed to ensure that a launch won’t collide with an existing satellite, but they take “hours upon hours upon hours,” Almanzar said. Safety assessments for on-orbit maneuvers, which check that satellites’ new locations are safe and their new orbits won’t cause collisions, are similarly time-consuming.
“If there’s ways that we can automate that and make it go faster,” he said, “how do we compress that timeline, especially in scenarios that we have had recently when a satellite in [Geostationary Earth Orbit, or] GEO blew up and generated a lot of debris? How do we get that data quickly and make sense of that?”
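As a rough illustration of the screening idea behind a conjunction assessment, the sketch below checks a sampled launch trajectory against a toy catalog and flags any object whose closest approach falls inside a chosen standoff distance. Every name, number, and data structure here is hypothetical; real assessments involve full orbit propagation and uncertainty analysis.

```python
# Illustrative sketch only: a toy version of conjunction screening -- find the
# closest approach between a planned trajectory and each cataloged object, and
# flag anything under a chosen standoff distance. Everything is hypothetical.
import numpy as np

def closest_approach_km(trajectory_km, object_states_km):
    """Minimum distance between time-aligned position samples (N x 3 arrays)
    of the launch trajectory and one cataloged object."""
    separations = np.linalg.norm(
        np.asarray(trajectory_km) - np.asarray(object_states_km), axis=1)
    return float(separations.min())

def screen_launch(trajectory_km, catalog, standoff_km=25.0):
    """Return cataloged objects whose closest approach violates the standoff."""
    violations = []
    for object_id, states_km in catalog.items():
        miss = closest_approach_km(trajectory_km, states_km)
        if miss < standoff_km:
            violations.append((object_id, miss))
    return sorted(violations, key=lambda item: item[1])

# Synthetic example: two objects, one passing within a few km of the trajectory.
trajectory = [[6800.0, 0.0, 0.0], [6900.0, 100.0, 0.0], [7000.0, 200.0, 0.0]]
catalog = {
    "SAT-A": [[6800.0, 5.0, 8.0], [6900.0, 105.0, 3.0], [7000.0, 195.0, 9.0]],
    "SAT-B": [[6800.0, 500.0, 0.0], [6900.0, 600.0, 0.0], [7000.0, 700.0, 0.0]],
}
print(screen_launch(trajectory, catalog))
```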
On top of all that, Almanzar pointed out, the Space Force had historical domain awareness data “going back to Sputnik,” which could be useful to train machine learning AI systems to spot anomalies in current orbital data.
“Ideally, what I would like for us to do with it is predictive analysis,” he said: “Predictive AI on patterns of behavior, patterns of life [in the data], helping us with orbit determination.”
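As a rough illustration of the kind of anomaly detection Almanzar envisions, the sketch below fits an off-the-shelf unsupervised model (scikit-learn’s IsolationForest, chosen here only as an example) to day-to-day changes in a synthetic object’s orbital elements, so that an abrupt, maneuver-like change stands out. The features and data are invented for illustration.

```python
# Illustrative sketch only: one generic way to mine a historical catalog for
# anomalous behavior -- fit an unsupervised model on day-to-day changes in an
# object's orbital elements so unusual changes (e.g., an unannounced maneuver)
# stand out. Feature choices and data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history: 365 daily samples of (semi-major axis km, inclination deg)
# with small noise, plus one abrupt altitude change standing in for a maneuver.
sma = 42164.0 + rng.normal(0.0, 0.05, 365)
inc = 0.05 + rng.normal(0.0, 0.001, 365)
sma[200:] += 5.0  # injected "maneuver"

# Features: day-to-day deltas of each element.
features = np.column_stack([np.diff(sma), np.diff(inc)])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(features)  # -1 marks the outlying days

anomalous_days = np.where(labels == -1)[0] + 1
print("Days flagged as anomalous:", anomalous_days)
```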
Part of the issue with AI adoption, his fellow speakers on a panel discussing data and AI said, is a disconnect with vendors.
“We absolutely have plans to leverage AI,” said Shannon Pallone, program executive officer for battle management command, control, and communications (BMC3) at Space Systems Command.
She said AI could bring immediate value in “helping [with] a lot of mundane administrative tasks. So, how can I start putting information that I had in the templates [for procurement documents]? How do I use it on the back end? How can I be auto generating documentation and … all the artifacts that I need to get an [Authority To Operate on DOD networks].”
But getting vendors to focus on those issues isn’t easy, she said.
“One of the biggest challenges I have is you all come in and you’re like, ‘Look at this cool AI stuff I’m doing!’ That does not solve any of my problems,” Pallone said.
In many vendor pitches, she explained, it isn’t clear, “what is it that your company brings to the table? Is it a large language model? OK, anybody can do that. Is it the data you trained on top of it? That might be more interesting, but is that data relevant to what I’m doing? Or is it just data you picked because it was readily available?”
Above all, vendors have to answer one question, she said: “How does [your product] help me get after the problems I’m trying to solve? How does it get after more space-specific problems? And unless I can see that last piece, I’m struggling to find where the value is.”
The Space Force’s Space Data and Analytics Officer Chandra Donelson added that vendors needed to go back to basics: “The first question is: What problem are we trying to solve? … And I cannot tell you how many times people walk into my office and they’re like, ‘Hey! We have a solution. Now let’s go look for problems across the Space Force that we’re able to get after with it.’ That is the wrong thing.”
Starting with the problem, she said, means vendors can then look for the right solution, even if it isn’t the latest buzzy concept.
“Once you identify the problem, maybe artificial intelligence is a solution. Maybe it’s something else. Maybe it is a specific type of artificial intelligence,” she suggested.
She also urged vendors to focus on their core strengths, since that is their value proposition and what makes them an attractive partner for the Space Force.
“In all aspects of our life,” she said, “choosing a partner is the most important decision you’re ever going to make. So when we choose our technology partners, those are the most important decisions that we have to make. So for our partners, I want you to also be realistic about what capability you can provide for the service. If you are not an AI company, do not try to become an AI company, just because that’s what’s selling right now. Do what you do very well, and let’s have some real, I would say, critical and crucial conversations about that.”
Experts caution that building trust with operators is vital to the acceptance of AI tools, and Col. Ernest “Linc” Bonner, commander of the Space Force’s Futures Task Force, said the service needed to be careful as it moved towards adoption.
“There needs to be a deliberate examination of both what the capability of the technology is at this time and how those things can be brought to bear, and what would be required for them to be brought to bear for the service, for our various missions,” he told an earlier session.
“AI has a lot of potential, and I think it’s still unclear where that’s going to take us. There’s certainly potential in terms of things like mission planning and generation of courses of action to facilitate that type of decision making.”
He said another use case was autonomous defense systems for the large-scale, low-Earth orbit constellations, “and I’m sure there are others that I haven’t even scratched the surface of.”