How new artificial intelligence tools are transforming the Army.
by Thom Hawkins
Prompt: Write the introduction to an article about how the Army is using generative artificial intelligence for military applications such as creating synthetic data for model training.
ChatGPT: “Generative artificial intelligence (AI) is a type of machine learning that involves creating new data or content using algorithms. It has the ability to generate synthetic data, images, text and even music, making it a powerful tool for a variety of industries and applications. In the military, generative AI is being used to create synthetic data for model training, simulations and other purposes. The use of generative AI in the military is an exciting development that has the potential to revolutionize the way that military operations are planned and executed. In this article, we will explore how the Army is using generative AI for military applications, and the benefits and challenges of this technology. So, let’s dive in and see how the Army is leveraging the power of generative AI to improve military operations.”
(ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022.)
BUILDING ROBUST MODELS
At a Massachusetts Institute of Technology laboratory, scientists are using generative AI to replicate what a potato feels like to touch. It’s not important that the subject is a potato. The point is the ability to simulate multimodal sensory information in a virtual environment. Understanding how an anti-tank mine looks and feels allows our Soldiers to recognize or manipulate it, even under low-visibility conditions.
The variation of individual organisms is key to the process of natural selection—if all things were the same, there would be no evolution—but these differences can be hard to distill into a single model that represents all objects in a particular class—i.e., all potatoes, or all anti‑tank mines. Generative AI can be used to simulate variation, broadening the perspective on what a potato can be—in terms of how it looks or, in this case, how it feels.
One of the challenges in training image recognition models is that there is a nearly infinite number of ways an object can appear in the context of its environment. It might be seen from various angles, or partially hidden behind other objects. Anticipating all of these variations is impossible, and finding representative images is laborious. Generative AI, however, allows us to expand on a smaller data set, introducing further variation so that the resulting model becomes more robust—more accurate at identifying the target under varied circumstances—because training on that variation helps the model learn which features actually determine whether an object belongs to a particular class. In other words, if we know what an anti-tank mine looks like partially buried, we (or our sensors) can identify one more accurately.
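To make the idea concrete, here is a minimal sketch of how a small set of labeled images might be expanded with synthetic variation before training. It uses classical augmentation from the torchvision library as a simple stand-in for a full generative pipeline (a generative model such as a GAN or diffusion model would produce entirely new images rather than perturb existing ones); the file name and the number of variants are illustrative only.

```python
# A minimal sketch: expanding a small set of labeled images with synthetic
# variation before training an image recognition model. Classical augmentation
# stands in for a true generative pipeline here; file names are hypothetical.
from PIL import Image
from torchvision import transforms

# Each pass through this pipeline yields a different plausible "view" of
# the object: rotated, reframed, re-lit and partially occluded.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),                 # seen from a different angle
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # partially out of frame
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # low-visibility lighting
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),                       # partially hidden behind clutter
])

source = Image.open("mine_001.jpg").convert("RGB")        # hypothetical source image
synthetic_batch = [augment(source) for _ in range(16)]    # 16 synthetic variants
# These tensors would then be added to the training set alongside real images.
```

Each pass through the pipeline yields a different plausible view of the same object: rotated, reframed, re-lit and partially occluded. Those variants join the real images in the training set, broadening what the model understands an anti-tank mine, or a potato, to be.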
GENERATIVE DESIGN
For more than a century, we’ve used a process called “design of experiments” or “experimental design” to determine the factors that contribute to the performance of a process or design. Experimental design varies certain features of a design, the predictor variables, to gauge their impact on one or more response variables—for example, how the length and width of a plane’s wings affect fuel usage. However, because there may be several predictor variables (length, width, etc.), and each may have many potential settings (12 meters, 13 meters, etc.), running a design of experiments can be complex and costly.
Two factors have changed this paradigm in recent years. The first is that advances in simulation allow digital engineering models to be assessed without building physical components. The second, more recent, factor is that computers can be used to rapidly generate and assess candidate designs, allowing a design of experiments to consider far more predictor variables and settings than previously possible. For the Army, we can apply this process to things like developing a hardware chassis that optimizes heat dispersion, or a program that identifies command post configurations that reduce the concentration of the radio frequency signature.
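As a simplified illustration of the computational approach, the sketch below enumerates candidate wing designs and scores each one with a simulated response. The fuel_use() function is a toy surrogate invented for this example; in a real generative design workflow it would be replaced by a physics-based simulation or a digital engineering model.

```python
# A minimal sketch of a computational design of experiments: enumerate
# candidate designs and score each with a simulated response. The surrogate
# model below is a made-up illustration, not real aerodynamics.
import itertools

def fuel_use(length_m: float, width_m: float) -> float:
    """Toy surrogate: fuel burn as an invented function of wing geometry."""
    drag = 0.8 * width_m ** 2
    lift_penalty = 50.0 / length_m
    return drag + lift_penalty

lengths = [10.0, 11.0, 12.0, 13.0, 14.0]   # predictor variable 1 (meters)
widths = [1.5, 2.0, 2.5, 3.0]              # predictor variable 2 (meters)

# Full-factorial design: every combination of settings, assessed in software
# rather than built and flown.
results = [
    (length, width, fuel_use(length, width))
    for length, width in itertools.product(lengths, widths)
]
best = min(results, key=lambda r: r[2])
print(f"Best candidate: length={best[0]} m, width={best[1]} m, fuel={best[2]:.1f}")
```

The same generate-and-assess loop scales to many more variables and settings than a physical test program could, which is what makes generative design practical.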
Generative design can be used for more than hardware—course of action development, for example, which Maj. Patrick Beaudry, a science and technology analyst for the Mission Command Battle Lab, says his organization is looking into. Generative design can also move beyond the planning phase and into problem solving during operations, allowing autonomous action in pursuit of a delegated objective.
OFFLOADING BUREAUCRACY
While applications like DALL-E 2 and Stable Diffusion have popularized AI‑generated images on social media, OpenAI’s generative pre‑trained models, trained on massive amounts of text from public documents and the internet, have both impressed and horrified users with their capabilities.
Much of the attention paid to generative AI has focused on the negative impacts—the end of the student essay, the loss of jobs for professional illustrators—but there are also positive impacts.
Generative AI can be used to develop document outlines (see box), draft sections of documents (such as a concept of operations) or correspondence, write abstracts or summarize documents, and even write code, reducing the amount of time and effort involved. The AI works best on requests where the output has a defined form, and the more context provided in the request, the more specific the result. The output is generally not yet reliable enough to deploy unsupervised, but the models can provide a useful first draft for a human to review and revise.
Prompt: Outline a project plan for a capability that detects cyber anomalies.
ChatGPT: Sure! Here is an outline for a project plan to develop a capability to detect cyber anomalies:
I hope this outline is helpful! Let me know if you have any questions or need further information.
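For teams that want to build this kind of drafting into their own tools, the same request shown in the box can be sent programmatically. The sketch below assumes OpenAI’s pre-1.0 Python client and a placeholder API key; the exact interface has changed across library versions, so treat it as illustrative rather than definitive.

```python
# A minimal sketch of sending a drafting request to a generative model,
# assuming the (pre-1.0) openai Python library. The API key and model name
# are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Outline a project plan for a capability that detects cyber anomalies.",
    }],
)
# The draft outline comes back as text for a human to review and revise.
print(response["choices"][0]["message"]["content"])
```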
One of the most impressive capabilities of generative pre-trained models is their ability to respond accurately to prompts. However, domain-specific applications, like Army operations, may require post-training of these models for more accurate responses. Beaudry explains that “even within the green‑suit Army, we speak different languages. If I, as an artilleryman, were to say that our objective was to ‘destroy the target,’ that creates a different impression in the minds of an infantryman or armor Soldier than it does to artillery, where artillery destruction is defined as one-third destroyed versus destroying everything using the infantry and armor mindset.”
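One common way to perform that post-training is to continue training a general-purpose language model on a domain corpus so that terms of art like “destroy the target” pick up the intended meaning. The sketch below uses the open-source Hugging Face transformers library; the base model (“gpt2”) and the corpus file (“army_corpus.txt”) are placeholders for illustration, not a description of any fielded Army system.

```python
# A minimal sketch of domain post-training ("fine-tuning"): continue training
# a general-purpose language model on domain-specific text. Model choice and
# corpus file are hypothetical placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for whichever base model is being adapted
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus of doctrine, orders and field manuals, one passage per line.
corpus = load_dataset("text", data_files={"train": "army_corpus.txt"})["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-adapted-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the adapted model now weights Army usage more heavily
```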
OpenAI has at least mitigated one of the early problems with generative text models. When a model is trained on such massive amounts of data, it becomes difficult to control the data for quality. Early models betrayed the ignorance of trolls in the data they consumed, producing, at times, horrifying output. Since that time, content guardrails (ChatGPT will refuse to write an ode to your armpit, but an earlobe is fine) as well as “charm school” training (reinforcement learning from human feedback, in which evaluators rate the model’s responses) have made these tools viable for general use.
CONCLUSION
Looking forward, generative AI has clear benefits for Army transformation, in both operations and the enterprise—in the hands of those with the experience, expertise and willingness to embrace new ways of working. That will make the potatoes feel great.
For more information, contact Thom Hawkins at jeffrey.t.hawkins10.civ@army.mil.
THOM HAWKINS is a project officer for artificial intelligence and data strategy with Project Manager Mission Command, assigned to the Program Executive Office for Command, Control and Communications – Tactical, at Aberdeen Proving Ground, Maryland. He holds an M.S. in library and information science from Drexel University and a B.A. in English from Washington College.