Space Force CTIO: AI Will Be ‘Game-Changer’ for Operational Space

Space Force Chief Technology and Innovation Officer Lisa Costa called artificial intelligence a “game-changer” for the service during a recent Mitchell Institute for Aerospace Studies webinar, highlighting its potential to enhance USSF’s operational capabilities and shape the global space race, while also acknowledging some of the hurdles still in the way.

A common concern for the U.S., China, and other space-faring nations is the quality and sheer volume of space data. But Costa expressed confidence that AI technologies such as machine learning and natural language processing could help solve that problem.

“Computer-based tagging of large amounts of information in real-time is possible, and, in fact, computers are much better at tagging and marking up data than humans are,” Costa said during the Nov. 8 event. “I believe this is going to be a real game-changer in terms of being able to use AI in operational space.”

AI-driven real-time tagging of vast information sets could surpass human capabilities in consistency and efficiency, Costa said. To get there, though, she emphasized the need for real-time training of large language models, which would enable effective control of various sensors and sensor webs.
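As a rough illustration of what such machine tagging looks like in miniature, the sketch below trains a toy classifier to label incoming text reports. The labels, reports, and model choice are hypothetical stand-ins, not anything the Space Force has described.

```python
# Illustrative sketch only: a toy classifier that tags incoming text
# reports. The labels, reports, and model choice are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical reports already tagged by human analysts.
training_texts = [
    "object maneuvered into geostationary transfer orbit",
    "routine station-keeping burn completed",
    "unexpected close approach detected near asset",
    "scheduled telemetry downlink nominal",
]
training_tags = ["maneuver", "routine", "conjunction", "routine"]

# Train once on analyst-tagged data, then tag new records as they arrive.
tagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
tagger.fit(training_texts, training_tags)

incoming = ["object maneuvered unexpectedly near the GEO belt"]
print(tagger.predict(incoming))  # e.g. ['maneuver']
```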

The Space Force previously limited usage of such models for official purposes, but Costa made clear at the time that the pause was temporary while the service worked through data security concerns. In the long term, she said, such models “will undoubtedly revolutionize our workforce and enhance Guardians’ ability to operate at speed.”

Before that long-term vision comes to fruition, the Space Force must deal with the aging infrastructure and technology it inherited when it stood up four years ago. These outdated networks, software, and systems may not be conducive to integrating advanced technologies like AI, an accumulation Costa referred to as “tech debt.” It includes the limitations of older GPS satellites, constellations running on different or outdated networks, and the difficulty of building advanced AI models on top of aging infrastructure.

“We’re working to modernize those capabilities, fundamentally looking at fixing the foundation,” Costa said.

Updating those foundational elements will require taking risks and being innovative. Costa said the goal is to bring in innovation and agility without compromising the reliability of crucial systems, as part of the service’s broader plan to modernize and transform digitally.

Another challenge unique to space operations is the risk of satellites reacting to AI-perceived threats that are not real, disrupting operations and wasting resources. Repositioning satellites in response to such phantom threats could also create space debris, endangering future missions.

Adversaries may also use AI for threat detection, raising concerns about security breaches and increased errors. The picture grows even more complicated because countries such as China are often opaque about their procedures and intentions in space. China’s on-orbit presence has grown dramatically since 2015, with a 379 percent increase in satellites.

“When China does not make available their TTPs [Tactics, Techniques, and Procedures] and their CONOPS [Concept of Operations] for mission operations … mistakes can be made,” Costa said. “We want to make sure that space is usable by everyone in the future.”

For responsible AI in space, Costa suggested human-in-the-loop (HITL) and human-on-the-loop (OTL) approaches. HITL ensures human oversight and control over AI by having someone directly involved in the decision-making who can intervene in the process if necessary. The OTL approach allows humans to monitor AI systems and make decisions based on the information the AI provides.
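A minimal sketch of the distinction, with hypothetical callbacks standing in for an operator console (this is not any actual USSF system), might look like this:

```python
# Minimal sketch (not any actual USSF system) contrasting the two
# oversight models, with hypothetical callbacks supplied by an
# operator console.
from typing import Callable

def hitl_execute(action: str, approve: Callable[[str], bool]) -> None:
    # Human-in-the-loop: the system defaults to inaction and does
    # nothing without explicit human approval.
    if approve(action):
        print(f"Executing: {action}")
    else:
        print(f"Blocked by operator: {action}")

def otl_execute(action: str, vetoed: Callable[[str], bool]) -> None:
    # Human-on-the-loop: the system defaults to action; a monitoring
    # operator can step in to veto before it proceeds.
    if vetoed(action):
        print(f"Vetoed by operator: {action}")
    else:
        print(f"Executing: {action}")

# Hypothetical usage: the same proposed action under each model.
hitl_execute("reposition satellite", approve=lambda a: False)  # blocked
otl_execute("reposition satellite", vetoed=lambda a: False)    # proceeds
```

The practical difference is the default: HITL blocks until a human says yes, while OTL proceeds unless a human says no.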

Another solution she touched on is a “Guardian AI” approach, which involves deliberately training an AI and managing its exposure to data. The idea opens the door to placing a measured level of responsibility and trust in the technology. Criteria for a Guardian AI would include the amount and types of data it has been trained on, how long it has been in use, and the level of trust it has earned. For this to be effective, though, people will need time to grow comfortable with the technology’s capabilities and to build trust through positive experiences and demonstrated reliability.
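One speculative way to read those criteria is as a gate on how much autonomy a system is granted. The sketch below invents thresholds, fields, and tiers to illustrate the idea of earned autonomy; nothing here beyond the three criteria comes from Costa’s remarks.

```python
# Speculative sketch: thresholds, fields, and tiers are invented to
# illustrate earned autonomy; only the three criteria come from
# Costa's remarks.
from dataclasses import dataclass

@dataclass
class GuardianAIRecord:
    training_examples: int   # amount of vetted data the model was trained on
    months_in_service: int   # duration of operational use
    trust_score: float       # 0.0-1.0, accumulated from verified outcomes

def autonomy_level(record: GuardianAIRecord) -> str:
    """Map a track record to an autonomy tier (illustrative thresholds)."""
    if (record.training_examples >= 1_000_000
            and record.months_in_service >= 24
            and record.trust_score >= 0.95):
        return "human-on-the-loop"   # acts alone, humans monitor
    if record.trust_score >= 0.80:
        return "human-in-the-loop"   # proposes, humans approve
    return "advisory-only"           # recommends, never acts

print(autonomy_level(GuardianAIRecord(2_000_000, 36, 0.97)))
# -> human-on-the-loop
```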

Awareness is growing nationwide of the need to adopt reliable, secure, and trustworthy AI. In October, President Joe Biden signed an executive order promoting responsible AI adoption across the government. Following the announcement, the Department of Defense said it anticipates collaborating with the White House and other national security agencies on a national security memorandum on AI, building on its ongoing responsible AI initiatives.