GENERATION GENERATION

February 6, 2023 | Army AL&T Magazine, Science & Technology

How new artificial intelligence tools are transforming the Army.

by Thom Hawkins

Prompt: Write the introduction to an article about how the Army is using generative artificial intelligence for military applications such as creating synthetic data for model training.

ChatGPT: “Generative artificial intelligence (AI) is a type of machine learning that involves creating new data or content using algorithms. It has the ability to generate synthetic data, images, text and even music, making it a powerful tool for a variety of industries and applications. In the military, generative AI is being used to create synthetic data for model training, simulations and other purposes. The use of generative AI in the military is an exciting development that has the potential to revolutionize the way that military operations are planned and executed. In this article, we will explore how the Army is using generative AI for military applications, and the benefits and challenges of this technology. So, let’s dive in and see how the Army is leveraging the power of generative AI to improve military operations.”

(ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022.)

REPLACED BY ROBOTS: Prompt: Army scientists writing an article. ChatGPT, a generative pre-trained transformer, is a chatbot launched by OpenAI in November 2022. (Images generated by DALL-E 2)

BUILDING ROBUST MODELS

At a Massachusetts Institute of Technology laboratory, scientists are using generative AI to replicate what a potato feels like to touch. It’s not important that the subject is a potato. The point is the ability to simulate multimodal sensory information in a virtual environment. Understanding how an anti-tank mine looks and feels allows our Soldiers to recognize or manipulate it, even under low-visibility conditions.

The variation of individual organisms is key to the process of natural selection—if all things were the same, there would be no evolution—but these differences can be hard to distill into a single model that represents all objects in a particular class—i.e., all potatoes, or all anti‑tank mines. Generative AI can be used to simulate variation, broadening the perspective on what a potato can be—in terms of how it looks or, in this case, how it feels.

One of the challenges in training image recognition models is that there are infinite ways an object can appear in the context of its environment. It might be seen from various angles, or partially hidden behind other objects. Anticipating all of these variations is impossible, and finding representative images is laborious. Generative AI, however, allows us to expand on a smaller data set, introducing further variation that makes the resulting model more robust, that is, more accurate at identifying the target under varied circumstances, because the model can better identify which features are significant in determining whether an object belongs to a particular class of objects. In other words, if we know what an anti-tank mine looks like partially buried, we (or our sensors) can identify one more accurately.
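As a minimal sketch of expanding a small data set, the hypothetical `augment` helper below uses simple geometric transforms (flips, rotations, occlusion) to stand in for the richer variation a full generative model would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, n_variants=4):
    """Generate synthetic variants of one image: flips, 90-degree rotations
    and partial occlusion, as a stand-in for generative augmentation."""
    variants = []
    for _ in range(n_variants):
        v = image.copy()
        if rng.random() < 0.5:
            v = np.fliplr(v)                   # seen from the other side
        v = np.rot90(v, k=rng.integers(0, 4))  # a different viewing angle
        # Occlude a random patch, like a mine partially buried in soil.
        h, w = v.shape
        r, c = rng.integers(0, h // 2), rng.integers(0, w // 2)
        v[r:r + h // 4, c:c + w // 4] = 0
        variants.append(v)
    return variants

base = rng.random((16, 16))        # one labeled training image
dataset = [base] + augment(base)   # expanded training set of five images
```

A real pipeline would feed the expanded set to a classifier; the point here is only that one labeled example becomes several plausible views.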

SNAKE IN THE GRASS: Generative AI expands on smaller data sets, introducing further variation to make the resulting model more robust and accurate, such as different angles of a P1ZAM munition obscured by grass. (Images generated by DALL-E 2)

GENERATIVE DESIGN

For more than a century, we’ve used a process called “design of experiments” or “experimental design” to determine the factors that contribute to performance of a process or design. Experimental design varies certain features in a design, the predictor variables, to gauge their impact on one or more response variables—for example, how the length and width of a plane’s wings affect fuel usage. However, because there may be several predictor variables (length, width, etc.), and each of those may have many potential settings (12 meters, 13 meters, etc.), running a design of experiments can be complex and costly.
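The wing example above can be sketched as a small full-factorial design of experiments. The response function and its coefficients below are purely illustrative, not engineering data:

```python
from itertools import product

# Hypothetical response model: fuel usage as a function of wing length
# and width in meters. Coefficients are illustrative only.
def fuel_usage(length, width):
    return 100 - 2.0 * length + 1.5 * width + 0.05 * length * width

lengths = [10, 11, 12, 13]  # settings for predictor variable 1
widths = [2.0, 2.5, 3.0]    # settings for predictor variable 2

# Full-factorial design: evaluate every combination of settings.
runs = [(l, w, fuel_usage(l, w)) for l, w in product(lengths, widths)]
best = min(runs, key=lambda r: r[2])  # design with the lowest fuel usage
```

Even this toy design requires 12 runs; adding predictor variables or settings multiplies the count, which is why physical experiments become complex and costly.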

Two factors have changed this paradigm in recent years. The first is that advances in simulation allow digital engineering models to be assessed without building physical components. The second, more recent, factor is that computers can be used to rapidly generate and assess models, allowing a design of experiments to consider far more predictor variables and factors than previously possible. For the Army, we can apply this process to things like developing hardware chassis that optimize heat dispersion, or a program that identifies command post configurations that reduce the concentration of the radio frequency signature.
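The generate-and-assess loop can be sketched as a random search over a design space. The scoring function below is a hypothetical proxy for heat dispersion (surface area relative to volume), not a real thermal model:

```python
import random

random.seed(1)

# Hypothetical proxy score for heat dispersion of a chassis design:
# more surface area per unit volume sheds heat faster. Illustrative only.
def heat_dispersion_score(length, width, height):
    volume = length * width * height
    surface = 2 * (length * width + length * height + width * height)
    return surface / volume

def generate_candidate():
    # Propose a random design within allowed bounds (cm).
    return (random.uniform(20, 60),
            random.uniform(10, 40),
            random.uniform(5, 20))

# The computer proposes thousands of designs and keeps the best scorer,
# far more than could ever be built and tested physically.
candidates = [generate_candidate() for _ in range(5000)]
best = max(candidates, key=lambda c: heat_dispersion_score(*c))
```

In practice the candidate generator would be a generative model and the score a full simulation, but the loop structure is the same.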

Generative design can be used for more than hardware. One example is course of action development, an application that Maj. Patrick Beaudry, a science and technology analyst for the Mission Command Battle Lab, notes his organization is looking into. Generative design can also move beyond the planning phase and into problem solving during operations, which allows autonomous action in pursuit of a delegated objective.

ADDITIONAL TRAINING: Generative pre-trained models are surprisingly accurate, but Army operations may need post-training of the models for more accurate responses. (Images generated by DALL-E 2)

OFFLOADING BUREAUCRACY

While applications like DALL-E 2 and Stable Diffusion have popularized AI‑generated images on social media, OpenAI’s generative pre‑trained models, based on massive amounts of text from public documents and the internet, have both impressed and horrified users with their capability.

Much of the discussion around generative AI has focused on the negative impacts, such as the end of the student essay and the loss of jobs for professional illustrators, but there are also positive impacts.

Generative AI can be used to develop document outlines (see box), draft sections of documents (such as a concept of operations) or correspondence, write abstracts or summarize documents, and even write code, reducing the amount of time and effort involved. The AI works best on requests where the output has a defined form, and the more data provided in the request, the more specific the result. The output is generally not to the point where it can be deployed unsupervised, but the models can provide a useful first draft for a human to review and revise.

Prompt: Outline a project plan for a capability that detects cyber anomalies

ChatGPT: Sure! Here is an outline for a project plan to develop a capability to detect cyber anomalies:

  1. Define the scope and objectives of the project: The first step is to clearly define the scope and objectives of the project. This may include identifying the types of cyber anomalies that the capability should be able to detect, the systems and networks it should cover, and the desired outcomes of the project (e.g., increased security, improved incident response times).
  2. Conduct a needs assessment: The next step is to conduct a needs assessment to determine the current state of the organization’s cybersecurity posture and identify any gaps or weaknesses that the anomaly detection capability could address. This may involve reviewing existing security controls, conducting risk assessments, and gathering input from stakeholders.
  3. Develop a requirements specification: Based on the needs assessment, the next step is to develop a detailed requirements specification for the anomaly detection capability. This should include functional and nonfunctional requirements, such as performance and scalability requirements, as well as any specific requirements for integration with existing systems and processes.
  4. Select a technology solution: Once the requirements specification has been developed, the next step is to select a technology solution that meets the requirements of the project. This may involve evaluating various vendor offerings, conducting proof-of-concepts, and comparing the cost and benefits of different options.
  5. Implement the solution: Once a technology solution has been selected, the next step is to implement it. This may involve installing and configuring the anomaly detection software, integrating it with other systems and processes, and conducting testing and validation to ensure that it is working as expected.
  6. Train and educate users: It is important to ensure that all relevant users are trained on how to use the anomaly detection capability and understand how it works. This may involve providing documentation, conducting training sessions, and establishing procedures for using and maintaining the capability.
  7. Monitor and maintain the capability: The final step is to establish ongoing monitoring and maintenance processes to ensure that the anomaly detection capability is functioning correctly and is updated with the latest threats and vulnerabilities. This may involve regular testing, software updates, and incident response procedures.

I hope this outline is helpful! Let me know if you have any questions or need further information.
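The pattern behind the boxed example, that more context in the request yields a more specific draft, can be sketched with a hypothetical prompt-builder. The helper and field names below are illustrative, not part of any real API:

```python
# Hypothetical helper: assemble a drafting prompt from structured fields.
# Each added field narrows the model's output toward the desired form.
def build_prompt(doc_type, topic, details=None):
    lines = [f"Draft a {doc_type} about {topic}."]
    for key, value in (details or {}).items():
        lines.append(f"{key}: {value}")  # extra context sharpens the result
    lines.append("Use a formal tone and a defined outline structure.")
    return "\n".join(lines)

prompt = build_prompt(
    "concept of operations",
    "a capability that detects cyber anomalies",
    {"Audience": "program managers", "Length": "two pages"},
)
```

The assembled prompt would then be sent to a generative model; the human reviewer still revises whatever comes back.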

One of the most impressive capabilities of generative pre-trained models is their ability to respond accurately to prompts. However, domain-specific applications, like Army operations, may require post-training of these models for more accurate responses. Beaudry explains that “even within the green‑suit Army, we speak different languages. If I, as an artilleryman, were to say that our objective was to ‘destroy the target,’ that creates a different impression in the minds of an infantryman or armor Soldier than it does to artillery, where artillery destruction is defined as one-third destroyed versus destroying everything using the infantry and armor mindset.”

OpenAI has at least mitigated one of the early problems with generative text models. When a model is trained on such massive amounts of data, it becomes difficult to control the data for quality. Early models betrayed the ignorance of trolls in the data they consumed, producing, at times, horrifying output. Since that time, content guardrails (ChatGPT will refuse to write an ode to your armpit, but an earlobe is fine) as well as “charm school” training, with evaluators rating the model’s responses, have made these tools viable for general use.

CONCLUSION

Looking forward, the potential for generative AI to be used by the Army, in both operations and for enterprise use, has clear benefits for transformation—in the hands of those with the experience, expertise and willingness to embrace new ways of working. That will make the potatoes feel great.

For more information, contact Thom Hawkins at jeffrey.t.hawkins10.civ@army.mil.

THOM HAWKINS is a project officer for artificial intelligence and data strategy with Project Manager Mission Command, assigned to the Program Executive Office for Command, Control and Communications – Tactical, at Aberdeen Proving Ground, Maryland. He holds an M.S. in library and information science from Drexel University and a B.A. in English from Washington College.


Read the full article in the Fall 2024 issue of Army AL&T magazine. 