A U.S. Air Force F-22 Raptor and F-35A Lightning II fly in formation with the XQ-58A Valkyrie low-cost unmanned aerial vehicle over the U.S. Army Yuma Proving Ground testing range, Ariz., during a series of tests in December. The Valkyrie operates autonomously, taking its cues from the manned fighters with which it flies. Tech. Sgt. James Cason

Turning Up the Heat on AI

Jan. 19, 2022

The Pentagon battles its own inertia to make progress in artificial intelligence. 

Whenever Col. Tucker Hamilton encounters skeptics about artificial intelligence, he points back to the years before the Air Force adopted collision avoidance technology for its F-16 fighters. An Air Force F-35 test pilot, Hamilton is familiar with resistance. He’s spent the past two years leading the Department of the Air Force’s AI Accelerator at MIT, and in his eyes the parallels with the collision avoidance system are clear.

In the 14 years the military waited to require automatic collision avoidance in its fighters, at least 17 F-16 pilots “died from collisions that could have been avoidable with this technology.” The problem wasn’t the technology. It was just that “people didn’t trust it.”

Eventually the Automatic Ground Collision Avoidance System became a welcome tool, taking over for pilots who lost consciousness or misjudged terrain, and today no one thinks twice about it. “It took the ability of someone to trust the autonomy in order to be able to fly with it and feel comfortable with it,” Hamilton said of that system, introduced in 2014. Now he wants to apply the same approach to AI. “We want to make sure that we’re approaching the technology rightly and that we are making it so that society can trust in the outcomes,” he said.

Col. Tucker Hamilton leads the Department of the Air Force’s AI Accelerator at MIT. USAF-MIT AI Accelerator/courtesy

The accelerator teams its 16 Airmen and Guardians (12 active-duty, plus four reservists) with about 140 MIT researchers and focuses them on 10 projects. The work is “meant to further the science of AI, not just in some military sense,” but for a broad array of applications, Hamilton said. “AI is ubiquitous right now. Everything is being influenced by machine learning. So how do we, as a military, approach the technology?”

Investigations include AI that amplifies human decision-making, AI-assisted optimization of training schedules, and machine learning-enhanced processes for sorting and sharing data, among several others. When the Airmen and Guardians complete their time, they carry what they’ve learned back to their units.

Seeking the Edge 

The National Security Commission on Artificial Intelligence completed its work less than a year ago, citing deepfake videos, drones in the hands of “terrorists and criminals,” and a “gathering storm” of “foreign influence and interference” as threats to the United States. In response, the commissioners said, the U.S. “must prepare to defend against these threats by quickly and responsibly adopting AI for national security purposes.”

Convened in 2019 with 12 members appointed by Congress and three by the Executive Branch, the commission studied AI and related threats for more than a year and published its final report in March 2021. Chaired by former Google and Alphabet CEO Eric Schmidt, it concluded that the DOD’s digital innovation initiatives are “uncoordinated and under-resourced” and said the department should “embrace proven commercial AI applications” as “a critical first step” to building a “modern digital ecosystem” that could serve to “integrate AI across the organization.”

The commission brought together senior members of Congress and the U.S. national security establishment—along with leaders from India, Japan, Australia, South Korea, New Zealand, NATO, and the European Union—for a summit in July 2021. All agreed that China presented the greatest competition in AI. 

China, for its part, had declared in 2017 that it intended to be globally dominant in AI by 2030.

The commission advised what it called a “modest down payment on future breakthroughs,” telling Congress that DOD needed everything from “widespread integration of AI” to “a state of military readiness of AI,” all by 2025. And it wanted DOD to spend far more on AI, proposing an increase from $1.5 billion a year to $8 billion a year by that time.

“If anything, this report underplays the investments America will need to make,” wrote Schmidt and his vice chair, Bob Work, a former deputy secretary of defense, in their letter introducing the final report. Saying the money is meant to “expand and democratize federal AI research,” Schmidt and Work also said they “worry that only a few big companies and powerful states will have the resources to make the biggest AI breakthroughs.”

Schmidt was chief executive officer, then executive chairman, at Google and its parent company, Alphabet, from 2001 to 2017, then served as a technical adviser to Alphabet until 2020. Schmidt heads the list of investors in AI startup Rebellion Defense, founded in 2019 and reported to have been valued at $1 billion as of September.

600-Plus AI Projects

At the commission’s summit, Secretary of Defense Lloyd J. Austin III proudly noted that the Defense Department has more than 600 AI projects underway across the services and DOD agencies. Some commands are just trying to get a handle on their data—what they have and how to format it for future use, for things as simple as optimizing a schedule.

Other projects are advancing AI for more inherently military uses, such as the Air Force Research Laboratory Sensor Directorate’s $88 million contract with the University of Dayton to study AI and machine learning for autonomous systems.

Sgt. Shane Keahiolalo tests the new Battle Management Training NEXT system at Joint Base Lewis-McChord, Wash. BMTN uses a video game approach to teach battle management. Maj. Kimberly Burke/ANG

Like the DAF’s MIT AI Accelerator, DOD’s Joint Pathology Center wants to make a wider contribution—in its case, to medical research. The center houses the world’s most extensive repository of diseased tissue samples, largely in the form of slides. Its director, pathologist Army Col. Joel T. Moncur, envisions AI algorithms learning to predict a patient’s prognosis—whether a cancer patient, for example, could get by with just monitoring or would need aggressive treatment.

To train the algorithms, the center is scanning slides at high-power magnification—recording hundreds or thousands of digital images per slide—to link with information such as the person’s outcome. 
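
The workflow the center describes—pairing scanned slide images with each patient’s recorded outcome—is the standard setup for supervised learning. Below is a minimal sketch of what such a training loop can look like, assuming a PyTorch-style pipeline; the model, data loader, and labels are hypothetical placeholders, not the Joint Pathology Center’s actual system.

```python
# Minimal supervised-learning sketch: fit a classifier that maps scanned
# slide images to recorded patient outcomes. All names are hypothetical;
# this illustrates the general technique, not the center's pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_prognosis_model(model: nn.Module, slides: DataLoader,
                          epochs: int = 5) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, outcomes in slides:  # batch of slide images + outcome labels
            optimizer.zero_grad()
            loss = loss_fn(model(images), outcomes)  # penalize wrong predictions
            loss.backward()                          # compute gradients
            optimizer.step()                         # update model weights
    return model
```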

Even while recognizing the potential, Moncur said “privacy, security, and ethics” continue to take priority. Having first figured out how to manage the data, the center is speeding up the rate of scanning from 500,000 slides to more than 1 million slides a year. 

Austin, for the Pentagon’s part, promised the department wouldn’t “cut corners on safety, security, or ethics,” not believing “for a minute that we have to sacrifice one for the other.” With “some of our competitors” thinking that emerging technologies such as AI represent “an opening,” Austin said DOD had requested its “largest ever” budget for research and development.

The Pentagon already received an increase of $3 billion in the fiscal 2022 National Defense Authorization Act for science and technology research, including AI, and Congress required a new comparison of U.S. and Chinese research and development activities “on certain critical, military-relevant technologies.”

But without accepted technical and ethical standards in developing AI, it may not yet be ready for some military uses.

An analysis published in December by the American Enterprise Institute (AEI) found that “the international community faces disarray that stands to cause considerable harm to consumers, companies, and countries.” Broad implications include the exploitation of individuals’ data and the development of AI that is biased against certain groups of people.

The Washington, D.C., think tank’s author, Elisabeth Braw, a former senior fellow of the Atlantic Council who focuses, in part, on nonkinetic threats, describes the approach to standards by U.S. companies and the federal government as “laissez-faire,” while noting that “China spends massively on AI and eschews international standards while pushing heavily for de facto international acceptance of its own standards.”

Recognizing the problem, NATO chimed in separately in 2021, publishing new principles on appropriate uses of AI while its assistant secretary general for emerging security challenges went public with a cautionary message.

Adopting already developed AI for military purposes carries risks because most AI to date has been developed for commercial purposes, “then maybe a dual-use case later on in the process,” said NATO’s David van Weel in a webinar that accompanied the release of the American Enterprise Institute’s report.

“If you do not master it, if you are not there when the technology is being developed, and those developing it are not looking at the security impact of their technology,” van Weel said, “it means that governments, but also the defense sector, [are] always late to seeing what the potential impact of technology is.”

The AEI report pointed out that China not only spends the most on developing AI but that its researchers publish the most papers in the field and it files for the most AI patents; the report called China and the U.S. “the undisputed leaders in a fast-growing and, so far, little-regulated field.”

Trust Lies in Standards

Members of DOD and even NATO have acknowledged that without a window into its development, repurposing commercial AI for the military carries risks. “We are not known, at NATO, for publishing a lot,” according to van Weel. “We try to keep secrets a lot.”

But lagging behind the private sector has left governments “in a situation where regulation comes after the broad use and misuse of technology,” van Weel added. “So we need to be early to the party and make sure that we understand new technologies, not to militarize them—no, but to understand the security and defense implications.”

To build confidence in AI, NATO has proposed that governments join with universities to set up test centers “where allies that are thinking about co-developing AI for use in the defense sector can come in and verify, with protocols, with certain standards that we’re setting, that this AI is actually verified,” van Weel said. “Principles are nice, but they need to be verifiable as well, and they need to be baked in from the moment of the first conception of an idea up until the delivery.”

“It’s not a world standard yet, but if the 30 nations, Western democracies, start out by shaping industry to adhere by these standards, then I feel that we are making an impact, at least in the development of AI and hopefully also in the larger world,” he said. 

The Defense Advanced Research Projects Agency hopes its new public toolkit will help developers defend their AI against attacks, such as tricks that can fool a system.

Acknowledging the lack of visibility into how commercial AI is developed, DARPA’s David Draper asked: “How do you vet that—how do you know if it’s safe?” Draper is program manager for the newly available set of tools called GARD, for Guaranteeing AI Robustness against Deception.
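
The “tricks” Draper alludes to are typically adversarial examples: inputs perturbed just enough to fool a model while looking unchanged to a human. The sketch below shows one classic attack of that kind, the fast gradient sign method (FGSM); it illustrates the threat class only, not the GARD toolkit’s API, and the model, image, and epsilon value are assumptions.

```python
# Illustrative adversarial attack (FGSM): nudge an image in the direction
# that most increases the classifier's loss, so its prediction can flip
# even though the change is nearly invisible. Not part of the GARD toolkit.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, image: torch.Tensor,
                label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong is the model now?
    loss.backward()                              # gradient of loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()  # small worst-case step
    return adversarial.clamp(0, 1).detach()      # keep pixels in valid range
```

Defensive tooling of the kind GARD aims to provide typically stress-tests models against perturbations like these before deployment.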

Network Lacking

DOD will have to overcome “insufficient network architecture” and “weak data practices” to get to “a state of military readiness” of AI and machine learning, the national commission said. 

Moncur at the Joint Pathology Center, for example, recognizes that DOD needs some “common resources” for AI and that they’re being looked at. “Resources” include more than secure data storage: “We need to know whether or not there will be a computing environment that has sufficient power within the military to invite collaborators to operate and to develop within our secure environment.”

He suspects the solution “will probably be a mixture—sometimes inviting collaborators in; other times exporting data out.”

Either way: “To the degree that the military could invest in the high-capacity computing environment that’s necessary to process data, to train algorithms—I think that that would be very useful.” 

The national commission’s idea is for a “digital ecosystem” by 2025, made up of data repositories; prepackaged “environments” with tools for developing AI; a “marketplace” of AI resources, including software; and “pre-negotiated computing and storage services from a pool of vetted cloud providers.”

This was, in part, the concept behind JEDI—the Joint Enterprise Defense Infrastructure—which sought to create a common cloud environment for military operators. While JEDI fell by the wayside amid protests and legal wrangling, a variety of nascent cloud-enabled AI projects began to gain steam. Now DOD is pursuing a multi-cloud solution rather than the one-stop shop envisioned with JEDI. This Joint Warfighter Cloud Capability will provide the same kind of tools and pre-negotiated security and prices as JEDI, but it will let users work with technology offerings from a number of cloud service providers.

To lead the AI revolution in DOD, the Biden administration is creating a new position at the Pentagon and reorganizing some AI-oriented entities within the department.

Deputy Secretary of Defense Kathleen H. Hicks announced in December that DOD is replacing its Joint Artificial Intelligence Center (JAIC) with a new office and realigning the Defense Digital Service and chief data officer role.

The new Office of the Chief Digital and AI Officer “will serve as the successor organization” to the JAIC and “intervening supervisor” between the Defense Digital Service and Office of the Secretary of Defense. The chief data officer will continue to report up through the chief information officer but will be “operationally aligned” to the new office.

Airmen operate cyber systems using an enhanced communications flyaway kit during the Global Information Dominance Experiment 3 (GIDE 3) and Architect Demonstration Evaluation 5. Tech. Sgt. Amy Picard

The chief digital and AI officer job is effective Feb. 1, 2022. The person selected will “serve as the Department’s senior official responsible for strengthening data, artificial intelligence, and digital solutions in the Department,” Hicks said in a Dec. 8, 2021, memo.

Echoing some sentiments of the national commission, Hamilton at MIT said he wasn’t worried about the pace of algorithm development, but rather about the data architecture needed to develop and run AI.

“What you really need the money for is to develop the infrastructure that would allow and empower machine learning solutions,” Hamilton said. “The ability to share data securely and effectively across the DOD—across the government.”

Editor’s note: This story was updated at 9:21 p.m. Jan. 21 to include Col. Tucker Hamilton’s correct rank.