RELIABLE DATA NEEDED: Unmanned Aircraft System (UAS) operators assigned to Company D, 82nd Combat Aviation Brigade, 82nd Airborne Division conduct flight operations on April 8, 2024. Reliable data is essential for UAS operations to navigate, identify targets and engage. (Photo by Sgt. Vincent Levelev, 82nd Combat Aviation Brigade)
by Jennifer Swanson
A few years ago, I needed to meet with an industry partner whose office I was not familiar with. I typed the address into my GPS and dutifully followed the turn-by-turn instructions: bear right in a quarter mile, make a left at the stop sign and, finally, arrive. Unfortunately, the elementary school parking lot it had directed me to was not where I needed to be.
Navigation systems leverage huge data sets, including street maps, topography, business listings, traffic information and more, to generate responses to our queries. Unlike my experience that day, they are usually helpful, using the available data to select the best options for us as we find our way.
Today, these systems use machine learning algorithms to analyze data and detect needed updates. For example, algorithms can identify when a business has changed its location or when a new road has been built. We welcome these modern capabilities, as they help us know if there is a road closure, heavy traffic or a new coffee shop on our route.
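As a rough illustration of the idea (the names, coordinates and threshold below are hypothetical, not how any production navigation system works), a detector might flag a stored business record when recently reported coordinates drift away from it:

```python
# Hypothetical sketch: flag a map record for update when recent
# crowd-reported coordinates drift from the stored location.
from statistics import mean

stored = {"Main Street Cafe": (40.7128, -74.0060)}  # map database entry

# Recent user-reported check-in coordinates (hypothetical).
reports = {
    "Main Street Cafe": [(40.7301, -74.0011), (40.7296, -74.0014)],
}

DRIFT = 0.01  # degrees, roughly 1 km; an illustrative threshold

for name, (lat, lon) in stored.items():
    avg_lat = mean(p[0] for p in reports[name])
    avg_lon = mean(p[1] for p in reports[name])
    if abs(avg_lat - lat) > DRIFT or abs(avg_lon - lon) > DRIFT:
        print(f"{name}: location may have changed; flag for review")
```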
But as with any piece of technology, there are risks. If the data the application uses is stale or corrupted in any way, you can end up arriving at a restaurant on the day it’s closed, be directed toward the high-occupancy lane when traveling solo, or even end up at an elementary school when you are looking for a technology company.
In many cases, the risk of a mistake with your navigation system is low enough that we don’t worry about it. But what if the stakes were a bit higher? Suppose you were planning to host a VIP for dinner. In that case, you might check an alternative data source to confirm hours of operation (e.g., the restaurant’s website) instead of relying on the data that comes up in the map application. You might even call ahead for a voice confirmation. These are ways of mitigating the risks of the navigation system, simple safeguards to ensure your plans are successful.
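That safeguard can be stated as a simple rule: act only when independent sources agree, and escalate to a stronger check when they don’t. A minimal sketch, with hypothetical data:

```python
# Minimal sketch of the cross-check safeguard (hypothetical data):
# trust the plan only when two independent sources agree.
map_app = {"friday_hours": "closed"}
website = {"friday_hours": "5 p.m.-10 p.m."}

if map_app["friday_hours"] == website["friday_hours"]:
    print("Sources agree; make the reservation.")
else:
    print("Sources disagree; call ahead for voice confirmation.")
```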
AI-SPECIFIC RISKS
When we consider developing or deploying artificial intelligence (AI) capabilities for the military, there are risks that can’t be mitigated by calling the restaurant. Leveraging AI capabilities in the Army necessarily means that some systems provide sensitive and critical output, output that must be right. Consider an unmanned aircraft system developed to operate semi-autonomously to execute an attack. Even when its communications are jammed as it approaches enemy lines, it must continue operating to complete the mission. A system like this must rely on its internal data sets to navigate, identify targets and decide when to engage. The risk we considered previously, a poor quality or corrupted data set, could have catastrophic consequences in a scenario like this.
But it’s not just the nature of military scenarios that makes AI risks complex. AI capabilities have unique risks all their own. For instance, AI uses massive amounts of data, frequently from disparate sources, which increases the potential vectors for someone to pollute your data. Consider asking your GPS to find the closest coffee shop along your route. It’s entirely possible that company A is closest, but a competitor could poison the data to make it look like company B is closer. Of course, data poisoning by a peer adversary in a military context is even more troublesome.
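A toy sketch of that scenario (the names and distances are invented) shows how a single falsified record flips the answer without touching the ranking logic at all:

```python
# Toy data poisoning sketch (hypothetical names and distances):
# one falsified record changes the answer; the code never changes.

def closest(shops):
    """Return the shop with the smallest recorded distance."""
    return min(shops, key=shops.get)

true_data = {"Company A": 0.4, "Company B": 0.9}   # miles, ground truth
print(closest(true_data))       # Company A, the correct answer

# An adversary with write access understates a competitor's distance.
poisoned = {**true_data, "Company B": 0.2}
print(closest(poisoned))        # Company B, wrong but looks legitimate
```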
Another AI-unique risk relates to the black box nature of modern AI systems. Complexity has increased so much that we can’t discern how an AI system generates its output. The navigation system is relatively simple, as there is a finite number of routes to choose from. But the way large language models operate today obscures much of what happens under the hood, increasing the challenge of addressing risks. Checking enormous, crowdsourced data sets or peeling back the layers of highly complex neural network algorithms is extremely difficult, requiring a high level of expertise and perhaps even new technologies.
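The contrast can be made concrete. In the sketch below (all values hypothetical), the route choice is auditable because every candidate can be listed and compared, while a small neural network’s output, though easy to compute, offers no human-readable rationale:

```python
# Contrast sketch (hypothetical values): enumerable routes are auditable;
# a neural network's output emerges from weights with no readable meaning.
import numpy as np

routes = {"I-95": 42, "Route 1": 55, "Back roads": 61}  # minutes
print(min(routes, key=routes.get))   # auditable: compare three numbers

rng = np.random.default_rng(1)
W1 = rng.normal(size=(8, 4))         # weights carry no human-readable
W2 = rng.normal(size=(4, 1))         # explanation of the result
x = rng.normal(size=8)               # some input features
score = np.tanh(x @ W1) @ W2
print(float(score[0]))               # a number, but why this number?
```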
MAXIMIZING BENEFIT
Challenges aside, the Army must establish a framework for mitigating AI-related risks. The pace of advance on the battlefield doesn’t afford the U.S. the option of abstaining from developing AI-enabled overmatch capabilities. Going forward means understanding the risks and employing the appropriate measure of mitigation. Each mitigation adds expense and time, so it’s about balancing the need to develop with the need to secure.
For instance, a bank has a more secure lock on its vault than a child has on her piggy bank. The bank’s vault is vastly more expensive to build, and more time consuming to install and operate, but its protection is appropriate for the value of the contents. If we put the bank’s vault lock on the piggy bank, that would be a misuse of resources. Valuable or critical systems require more mitigations.
Every mitigation makes development and deployment more complex. Accessing the Secret Internet Protocol Router Network, or SIPRNet, requires a physical token; the process is cumbersome but required to maintain control. The key to maximizing value, though, is to use the fewest mitigations needed to reach a reasonable level of risk. That’s where an AI-specific risk framework comes into play.
AI LAYERED DEFENSE FRAMEWORK
As part of the AI Implementation Plan for the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)), announced in March 2024, the Office of the Deputy Assistant Secretary of the Army (Data, Engineering and Software) is building the AI Layered Defense Framework. The intent of the framework is to give program managers a toolkit to self-assess for AI-specific risks and to inform them of the relevant mitigations available to them. All AI capabilities would undergo the most basic mitigations (Layer 1), while more critical systems would get additional layers.
All AI capabilities fall into one of three layers of risk (a simple triage sketch follows the list):
- Layer 1—AI tools that have broad value and, if compromised, have limited potential for harm or hindrance of the Army’s objectives. The maximum benefit is achieved with the fewest controls. Using a navigation system to find lunch would fall in this category; we don’t need to invest heavily in mitigations here.
- Layer 2—AI software that offers strategic value to a more limited number of users. This layer will have more significant mitigation strategies to balance value and security. Key risks will be mitigated.
- Layer 3—AI models that provide high value and must not be compromised. Layer 3 will employ state-of-the-art defenses for the most valuable or critical capabilities.
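To make the triage concrete, a program office’s first-pass assignment might look like the sketch below. The criteria names are illustrative assumptions, not the framework’s actual rubric, which is still being developed:

```python
# Hypothetical triage sketch; "mission_critical" and "harm_if_compromised"
# are illustrative stand-ins, not official AI Layered Defense criteria.

def assign_layer(mission_critical: bool, harm_if_compromised: str) -> int:
    """Map an AI capability to a defense layer (1 = baseline, 3 = maximum)."""
    if mission_critical:
        return 3            # state-of-the-art defenses
    if harm_if_compromised == "significant":
        return 2            # added mitigations to balance value and security
    return 1                # baseline mitigations only

print(assign_layer(False, "limited"))      # finding lunch -> Layer 1
print(assign_layer(False, "significant"))  # strategic tool -> Layer 2
print(assign_layer(True, "significant"))   # targeting system -> Layer 3
```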
The AI Layered Defense Framework aims to incrementally increase security measures, from an open, accessible strategy to a highly secure approach with stringent controls, tailored to the unique sensitivity and importance of Army data and systems. It will serve as a thorough theoretical and practical foundation for mitigating adversarial risks to our systems and warfighters. Here, risk is broadly defined as the possibility that the occurrence of an event related to AI systems will adversely affect the achievement of the Army’s objectives. It’s a general statement that reflects the multitude of challenges the Army faces every day and the idea that many mission objectives must be achieved even when there are dangers. The risk is not the potential for harm or injury; those must be tolerated at some level. The risk is failing to achieve an objective.
THE UPPER HAND: Staff Sgt. Tessa Mehler, assigned to 2nd Squadron, 2nd Cavalry Regiment, waits for a UAS to land in her hand during Saber Strike 24 at Bemowo Piskie Training Area, Poland, April 15, 2024. To stay ahead on the battlefield, the Army must remain at the forefront of technological advancement. (Photo by Spc. Austin Robertson, 22nd Mobile Public Affairs Detachment)
While AI systems face the traditional cybersecurity risks associated with all computer systems, the AI Layered Defense Framework is concerned with building a comprehensive library of risks and mitigations unique to or inherent in AI systems: risks associated with the data used to train the system, with the system itself, with the use of the system and with the interaction of people and system. The framework is intended to be a flexible, structured and measurable approach to addressing AI risks prospectively and continuously throughout the AI life cycle. ASA(ALT) is interested in learning more about risks associated with traditional adversarial methods, such as data poisoning and model stealing; about emerging and future risks broadly associated with all branches of computer science; and about the potential for security disruption from theoretical advances in future technologies such as quantum computing.
Identifying risk is only the first step in developing and implementing industry-leading risk mitigation strategies and technologies. ASA(ALT) is committed to exploring computational methods for, among other things, detecting and removing “Trojaned” data among the vast public and crowdsourced data sets used to train models and detecting the creation of backdoors before deployment.
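One family of approaches in the research literature screens training data for statistical outliers before training. The sketch below assumes, optimistically, that implanted samples stand out from the clean distribution; real Trojan triggers are often crafted to defeat exactly this kind of check, so treat it as a conceptual illustration only:

```python
# Conceptual sketch of outlier screening for "Trojaned" training data.
# Assumes implanted samples are statistical outliers; real attacks are
# often designed to evade this, so this is illustrative, not deployable.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 8))    # legitimate samples
implanted = rng.normal(6.0, 0.5, size=(5, 8))  # simulated Trojan samples
data = np.vstack([clean, implanted])

center = data.mean(axis=0)
dist = np.linalg.norm(data - center, axis=1)
cutoff = dist.mean() + 3.0 * dist.std()        # simple 3-sigma rule

flagged = np.where(dist > cutoff)[0]
print(f"flagged {len(flagged)} of {len(data)} samples for human review")
```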
Figure 1 shows a cursory view of how data can be secured and safeguarded depending on system needs and risks, with strategies to balance value and security. Each system will have unique requirements and considerations that evolve over time.
CONCLUSION
It’s no secret that the character of warfare is changing rapidly. To maintain dominance on the battlefield of tomorrow, the U.S. needs to continue to lead in developing systems on the bleeding edge of technology. That means developing and fielding AI capabilities.
But development of AI capabilities comes with unique risks that require deliberate and appropriately scaled mitigations. The acquisition community has the responsibility to understand the risks and employ appropriate mitigations to ensure maximum benefit for the warfighter.
For more information about the AI Layered Defense Framework, go to https://www.army.mil/dasades or https://armyeitaas.sharepoint-mil.us/sites/ASA-ALT-DASA-DESPlaybooks/ (CAC-enabled).
JENNIFER SWANSON is the deputy assistant secretary of the Army for data, engineering and software (DASA(DES)). She leads the implementation of modern software practices, including agile software development, DevSecOps, data centricity and digital engineering across the Office of the ASA(ALT). She holds an M.S. in software engineering from Monmouth University and a B.S. in industrial and systems engineering from the University of Florida. She is a DAWIA Certified Practitioner in engineering and technical management and Advanced in program management.
ONLINE EXTRAS
Army Officials Plan to Reduce Cyber Risks of Artificial Intelligence | AFCEA International
DOD Releases AI Adoption Strategy > U.S. Department of Defense > Defense Department News