
The U.S. Army Evaluation Center’s Army Evaluation Plan Ghostwriter team and the U.S. Army Operational Test Command’s Document Generator Workflow team shared the top prize for the third annual ATEC AI Challenge during the fourth annual ATEC Data Summit on Sept. 18, 2025. The Ghostwriter team, from left, is Anna Campbell, Matthew Pandullo, Mark Downes, Hasan Shahid, Maj. Daniel S. Bader, Brian Kelly, Jim Amato and Gabriele Chiulli. (Photo by Courtney Harris, ATEC Public Affairs)
SOLVING THE WRONG PROBLEM: LESSONS FROM THE ATEC AI CHALLENGE
by Maj. Daniel S. Bader
In 2025, my team within the Soldier Evaluation Directorate won the U.S. Army Test and Evaluation Command (ATEC)’s AI Challenge with a tool that could draft the first version of an Army Evaluation Plan (AEP). The tool worked—and worked well—but the experience left me questioning whether we had been solving the right problem. While our prototype made the early drafting process more efficient, the fundamental workflow behind it remained unchanged. We learned that ATEC, and the Army more broadly, must move beyond using artificial intelligence (AI) to automate legacy processes and instead design new workflows that fully leverage what AI can do today. Drawing on insights from the challenge, this article outlines a vision for an AI-enabled command and explains how the tools to build this future already exist.
THE WRONG PROBLEM: AI AS AN ACCELERANT, NOT A PATCH
The ATEC AI Challenge asked teams to use large language models (LLMs) to accelerate the workflow of their choosing. Our solution used AI to ingest source documents and produce a structured first draft of the AEP, eliminating hours of manual assembly work. When the competition concluded, however, I found myself unsettled. Drafting an AEP more quickly is helpful, but it does not address the deeper issue that AEPs become outdated as soon as they are signed. Every evaluator knows the experience: schedules shift, funding moves, test articles break and operational realities evolve. A static document cannot keep pace with a dynamic environment.
It became clear that we had built a better tool for a workflow that may no longer make sense in its current form. The real opportunity with AI is not simply accelerating familiar tasks; it is rethinking whether those tasks should exist at all. This realization extends beyond the challenge. Across the Army, we often use new technologies to reinforce old processes, digitizing forms or automating legacy work patterns. The true potential of AI is different. It allows us to redesign the workflow from the ground up—removing unnecessary steps and creating new pathways for insight, speed and decision-making. ATEC, given its complexity and mission, is positioned to lead this shift.
ATEC’S COMPLEXITY IS EXACTLY WHY IT NEEDS AI
ATEC is the Army’s independent testing organization, a description that understates the mission’s scale and complexity. Every acquisition program brings a unique set of characteristics: distinct technology, threat environments, operational requirements and testing conditions. No two programs follow the same pattern. In that sense, ATEC operates less like a standardized factory and more like a custom engineering shop where each “order” requires a tailored solution.
The bespoke nature of this work is not a weakness. It is exactly what makes ATEC an ideal environment for AI. Traditional procedural tools excel at repetitive and predictable tasks, but AI excels in environments defined by complexity, interdependence and constant change. Test scheduling, resource allocation, data integration, hazard development, instrumentation planning and requirements verification are all decisions that exist in a shifting landscape of constraints and dependencies. These are precisely the problems that modern AI systems are meant to address.
Instead of expecting humans to mentally track every variable and adjust plans manually, an AI-enabled system could help reason about these relationships continuously. It could highlight risks before they materialize, detect emerging patterns and recommend actions when conditions change. The nature of ATEC’s mission is not a barrier to AI integration; it is the justification for it.
THE REAL PROMISE OF AI: DYNAMIC PLANNING AND SCHEDULING
One of the clearest opportunities for AI lies in how we plan and schedule testing. Today, test events are often assigned based on long-standing categorical alignments: night-vision systems at Aberdeen, aviation at Redstone, chemical-defense testing at Dugway, and so on. These alignments make sense from a subject-matter expertise standpoint, but they are not optimized for speed, efficiency or cost across the entire enterprise.
If a cold chamber goes down at one test center, testing may pause even when an equivalent chamber at another center is available. No human staff can maintain real-time awareness of every facility, schedule and maintenance cycle; the calculation is too large for manual methods.
AI, however, can manage this effortlessly when connected to a shared data environment. As the ATEC Data Mesh matures, our test centers’ capabilities, availability and workloads can be made visible and queryable. We can visualize workloads, detect underutilized assets and identify scheduling options that human planners may not see. In such a system, an evaluation plan would no longer be a static document. It would become a living, continuously updated representation of the truth. As test data arrives, as schedules shift and as real-world conditions evolve, the plan evolves as well.
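The cold-chamber scenario above can be sketched in a few lines. This is an illustration only: the class, function and center names are hypothetical and stand in for whatever objects a shared data environment like the ATEC Data Mesh would actually expose.

```python
from dataclasses import dataclass, field

@dataclass
class Facility:
    # Hypothetical record of one test asset visible in a shared data environment
    center: str
    capability: str          # e.g., "cold_chamber"
    available: bool = True   # False if the asset is down for maintenance
    booked_days: set = field(default_factory=set)

def find_alternatives(facilities, capability, day):
    """Return facilities at any center that can cover a capability on a given day."""
    return [
        f for f in facilities
        if f.capability == capability and f.available and day not in f.booked_days
    ]

# Toy data: the cold chamber at one center is down; an equivalent one is open.
fleet = [
    Facility("Center A", "cold_chamber", available=False),
    Facility("Center B", "cold_chamber", booked_days={3, 4}),
]

options = find_alternatives(fleet, "cold_chamber", day=5)
print([f.center for f in options])  # ['Center B']
```

A query like this is trivial for a machine and impractical for a human staff once it spans every facility, schedule and maintenance cycle in the command, which is the point of putting the data in one queryable place.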
This is the true promise of AI in our workflow. It is not about writing documents faster. It is about maintaining a real-time understanding of reality across the entire command.
THE SOLUTION ALREADY EXISTS
When our team began developing the prototype for the AI Challenge, we deliberately selected existing enterprise software for our platform. We wanted to show that these tools already exist, are accredited and are funded.
ATEC does not need to procure additional AI tools to build the future of test and evaluation. The enabling technology is already here. With it, we can create a common data environment; build a flexible, object-driven model for representing test centers, programs, requirements, test articles and results; and apply native AI capabilities that reason across those objects. The same platform empowers Soldiers, analysts and evaluators to build meaningful tools without requiring deep software engineering expertise. I learned to use it the same way many Soldiers learn new systems: through late-night experimentation, online tutorials, and trial and error. The barrier to entry has fallen dramatically. Soldiers no longer need to wait for a specialized development team to build something for them. They can build for themselves, and they already are.
OVERCOMING RESISTANCE: MOVING FROM THEORY TO PRACTICE
None of this is to suggest that integrating AI into evaluation will be easy. New technology always brings questions and hesitation. A common concern is that AI is “only 80 percent correct,” yet so are most human first drafts. AI gives us the draft for free, which allows us to invest our time where judgment and experience matter.
Another concern is that AI might diminish the value of skilled writers, planners or evaluators. But the differentiator in our profession has never been writing endurance; it has been analytical rigor, communication clarity and mission understanding. AI enhances those qualities by freeing leaders from the administrative and mechanical work that often consumes their days.
Finally, some worry that AI might hinder our development by making us dependent on tools rather than our own reasoning. This risk is real, but manageable. The true danger lies not in using AI, but in misusing it. If we outsource thinking, we will stagnate. If we free our most valuable asset, our people, from repetitive work and redirect their energy toward analysis, planning and judgment, we will strengthen the force. The key is disciplined leadership, a clear understanding of what constitutes deep, meaningful work that develops our people, and the willingness to hand off the rest.
THE AI-ENABLED TEST AND EVALUATION COMMAND
With these ideas in mind, it becomes easier to imagine what a fully AI-enabled ATEC might look like. In this future, the entire workflow exists inside a dynamic, interconnected environment. Acquisition documents, test events, range schedules, hazard analyses, data streams, requirements and results are not scattered across spreadsheets or PowerPoints but are represented as structured objects linked together in real time.
When a new program arrives, the system compares it to historical analogues, identifying critical test events and likely failure modes. It drafts a plan, proposes facilities, generates hazard templates and estimates the needed test articles. Evaluators review, refine and override these suggestions, and the system learns from each interaction.
As testing progresses, data flows into the mesh. Requirements update automatically as evidence accumulates. Schedules adjust as conditions change. Leaders receive real-time visibility into program status without needing to send taskers or initiate email chains. When a test report is uploaded, the system evaluates the evidence and alerts evaluators to changes in requirement status. Evaluation reports evolve continuously rather than being assembled in a final, labor-intensive push.
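The idea of a requirement whose status is recomputed from linked evidence, rather than hand-edited in a document, can be sketched as follows. This is a minimal illustration under assumed semantics: the class name, the single numeric threshold and the status labels are all hypothetical simplifications of what a real requirements object in the data mesh would carry.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    # Hypothetical requirement object linked to test evidence in a data mesh
    req_id: str
    threshold: float               # minimum passing score (illustrative)
    evidence: list = field(default_factory=list)

    def add_evidence(self, score: float):
        """Attach a new test result; status is recomputed, never hand-edited."""
        self.evidence.append(score)

    @property
    def status(self) -> str:
        if not self.evidence:
            return "UNTESTED"
        return "MET" if min(self.evidence) >= self.threshold else "AT RISK"

req = Requirement("KPP-01", threshold=0.90)
print(req.status)        # UNTESTED
req.add_evidence(0.95)
print(req.status)        # MET
req.add_evidence(0.85)
print(req.status)        # AT RISK; evaluators are alerted, the plan updates
```

Because status is a derived property, uploading a new test report changes the answer everywhere the requirement is referenced, which is what makes a continuously evolving evaluation report possible.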
This is not a futuristic vision. It is achievable with systems the Army already owns. What is needed now is alignment, experimentation and leadership willing to empower Soldiers who are already building the early components of this future.
CONCLUSION
ATEC is already making great strides in this transformation. The ATEC Data Mesh provides the foundational backbone for the rapid, secure implementation of AI across the enterprise. Simultaneously, command emphasis on innovation, demonstrated through initiatives like the annual AI Challenge, has empowered our workforce to create tangible solutions. This has produced tools in use today at Yuma Proving Ground that are saving hundreds of man-hours in munitions testing, and separate applications at Redstone Test Center that have automated complex video analysis. These wins are early proof that investing in our people and infrastructure will deliver a decisive edge to decision-makers and, ultimately, the warfighter.
The most important lesson from the ATEC AI Challenge was that the hardest part of AI adoption is not technical but philosophical. We do not need new tools for the processes we have; we need new processes built around the tools we already possess. ATEC, with its mission, data environment and organizational structure, is uniquely positioned to become the Army’s first fully AI-enabled command. The path is clear, and the technology is in place. What remains is the willingness to move beyond incremental change and embrace a fundamentally different way of thinking about how we evaluate systems for the Army.
My hope is that this vision encourages leaders to explore what is already possible. The Army does not need more Soldiers studying AI in the abstract. It needs more Soldiers building with it. Because once we begin building, the question is no longer whether AI will change our work, but how quickly we can evolve our work to match what AI now makes possible.
For more information, email Maj. Daniel S. Bader at daniel.s.bader2.mil@army.mil.
MAJ. DANIEL S. BADER is a military evaluator with the U.S. Army Test and Evaluation Command. He holds an MBA from the University of Michigan and a B.S. in mechanical engineering and French language from the United States Military Academy. He is a DAWIA-certified Practitioner in program management and a U.S. Army Acquisition Corps officer.
