Been There, Done That: Preparation for Operational Testing

April 12, 2017 | Army AL&T Magazine, Commentary

Rapid and evolutionary acquisition bring some change to operational testing (OT), but some things will remain constant—like surprises.

by John T. Dillard, Col., USA (Ret.)

Many years ago, Dr. Philip E. Coyle III, the long-serving former DOD director of operational test and evaluation, cautioned program managers (PMs) about program risk in the new era of evolutionary acquisition, which fields capabilities in sequential increments.

MISSILE MISSION
The Arkansas Army National Guard’s 1st Battalion, 142nd Field Artillery team fires an ATACMS at White Sands Missile Range in July 2015. The system saw combat service in the Persian Gulf War, Iraq and Afghanistan and is still in production. (Photo courtesy of Arkansas Army National Guard)

What he suggested back then was that even with evolutionary acquisition, we would likely be fully testing each of the system capability blocks individually. In that vein, he presented some salient points about past programs’ readiness for what he called the binary or pass/fail environment of operational testing, and how PMs often “rush to failure.”

In the November – December 2000 issue of Program Manager magazine, Coyle wrote: “Over the past three years or so, the Army has seen that 80 percent of their systems have not met 50 percent of their reliability requirements in operational tests. In the Air Force, AFOTEC [Air Force Operational Test and Evaluation Center] has had to stop two-thirds of their operational tests because the systems were not ready.”

Sadly for all of us, the numbers have not improved much in the past decade or two. Coyle’s principal advice to program managers—paraphrased and reduced to its simplest terms here—was to:

1. Reschedule program test events as the program schedule slips, to avoid being ill-prepared and poorly positioned for an adverse event.

2. Test every operational environment in advance of OT via developmental testing (DT).

3. Fully load systems in DT, especially interoperable automated systems.

4. As they become clearly defined, plan on fully testing each of the program’s evolutionary requirements as time-phased capability increments.

5. Don’t skimp on DT.

6. Use modeling and simulation correctly: to interpolate, versus extrapolate, results.

7. Coordinate with operational testers early to address all needs and avoid conflict.

Most PMs know that much if not all of this advice is easier said than done. However, with a DOD 5000.02 framework that specifies an operational assessment (OA) before milestone C, followed soon after by an initial operational test and evaluation (IOT&E) in the production and deployment phase and perhaps even a follow-on operational test and evaluation, there should be ample opportunity to exercise Coyle’s advice. Thus we facilitate our own learning and confirm system performance while accomplishing our shared test objectives.

Fortunately, despite differing programs and technologies, requirements, etc., we can often learn from the experience of others. The difficulty, of course, is in being able to share knowledge that is sufficiently useful across unique acquisitions. Following are some useful examples of what Coyle was cautioning PMs about as he prepared to leave office in January 2001, from the perspective of a PM going through operational testing of a major weapon system.

INCREMENT BY INCREMENT
The Army Tactical Missile System (ATACMS) is probably one of the most distinctive major-system examples of an incremental development process. Born in 1986 from a Defense Advanced Research Projects Agency (DARPA) project called Assault Breaker, the missile was an extended-range weapon to be fired from an existing vehicular platform, the Multiple Launch Rocket System (MLRS). It would initially deliver about 1,000 M74 anti-personnel bomblets per missile, with preplanned incremental upgrades eventually enabling precision anti-armor submunitions. Thus, while a desired end-state capability was identified early on, all of the system requirements were not. Future increments would depend upon threat changes, technology maturation and user experience with the initial increments.

More than a program with preplanned product improvements, ATACMS was ahead of its time in current policy terms. Initially fielded just in time for the first Gulf War, the system continued in its evolutionary development through the 1990s, went to war again in Iraq and Afghanistan and is still in production today. The program gave rise to several advanced and unplanned variants, with capability increments managed and operationally tested as unique acquisitions. There were many operational realizations about ATACMS’ advancing capabilities along the way, not the least being a necessary clarification of joint service roles and missions as the Army extended ATACMS’ battlefield reach into U.S. Air Force mission areas.

On the tactical level, what we in the PM office found out about our own system’s Block I during IOT&E was quite surprising.

We tested the first incremental block operationally in spring 1990 in a three-month series of ground and flight exercises at White Sands Missile Range, New Mexico, with an entire field artillery battalion as the test unit, the 6th Battalion, 27th Field Artillery Regiment. The battalion became the first unit equipped and subsequently the first to use ATACMS in combat operations during Operation Desert Storm. This IOT&E of a major defense acquisition program was one of the most successful ever but still managed to provide the program management office (PMO) with plenty of surprises.

Our lessons learned from an extensive IOT&E were many and as relevant today as then. I’ll frame them in parallel with Coyle’s advice to PMs.

Rescheduling test events when necessary—Our PMO actually had to slip IOT&E by six months within a 48-month advanced development program being executed on a firm fixed-price contract. Driven by both DT missile reliability failures and subcomponent hardware availability, the delay did not cause an acquisition program baseline breach, but neither was it inconsequential.

ROAD TEST, PASSED
A Heavy Expanded Mobility Tactical Truck loaded with four missiles conducts mobility road testing and cargo handling on dirt roads at White Sands Missile Range in March 1990. Unlike with other elements of stress testing, the IOT&E for ATACMS marked the first time that the PMO transported the missiles on the truck, their designated prime mover, across rough terrain. However, DT environmental stress testing had been so rigorous that no related problems surfaced in IOT&E. (Photo by Tom Moore)

We were at the very end of the contract performance period. Our periodic operational test readiness reviews (OTRRs), which began about a year before the original start date, did not predict that the slip would become an imperative. No one wants to slip IOT&E until it is absolutely necessary, given the many organizations disrupted: the test unit, range personnel, OT agencies, user representatives, the contractor and others. The PMO had proposed a three-month delay to allow for completion of DT, but in fact we needed the entire six-month delay that the operational testers from the Army Operational Test Agency and DOD’s director of operational test and evaluation (DOT&E) “gave” us.

The lesson learned was that once a PMO has exceeded the allotted time, it might no longer be able to prescribe program events. It was also our first solid realization of the OT paradigm: The PM is no longer doing the testing; the PM’s system is being tested. That’s a big shift in both thinking and authority, and it shapes the activities that follow.

Test all operational environments in DT—It’s still impossible to schedule rain at White Sands Missile Range. Actually, given the sum of various range safety and availability constraints for a major range and test facility base, it can be difficult to schedule anything. We had launched only 27 missiles in DT, with just 15 more planned for IOT&E. At that point we had fired only in good weather. In fact, we had fired only on an azimuth of true north—i.e., in one direction—because of constraints at the firing range.

For environmental stress testing, we used various test chambers to the fullest extent possible to simulate heat, cold, fog, rain, vibration, etc., for weeks, but until IOT&E we had never actually transported the missiles on their designated prime mover, a Heavy Expanded Mobility Tactical Truck, across rough terrain. Fortunately, our DT environmental stress testing had been so rigorous that we had no related problems in IOT&E. We’ll probably never completely cover all of the operational variables in DT, but we have to try to minimize discovery in IOT&E by thinking critically about the spectrum of future environments and trying to include them.

Fully load the system in DT—Throughout DT, we sought to minimize variability in testing with fully charged batteries and comprehensive commercial equipment for circuit testing. Little did we suspect that run-down batteries would cause “ghost prompts” and other strange electrical phenomena, or that the simpler unit-level test, measurement and diagnostic equipment for missile testing would be a reliability and maintenance problem all through IOT&E. Nor did we fully consider tactical unit misfire procedures in a combat situation, having long been used to a tightly controlled DT range safety countdown sequence.

INSTRUMENTATION = COMPLICATION
Contractor support personnel from SAIC install instrumentation and wiring on a Multiple Launch Rocket System launcher, the vehicular platform for the ATACMS, at White Sands Missile Range in March 1990. There were nine of the platforms to be used during the system’s IOT&E. Instrumentation delayed the start of testing because of concerns about uncertified hardware being placed on the system. Lesson learned: Instrumentation was the single most important consideration that the Block I ATACMS program had neglected in development. PMs must plan for it well in advance to prevent testing delays. (Photo by Kenneth G. Schoultz)

Further, we were unable to fully load our computing hardware and software components until a few months before the test, a situation complicated by successive software releases all the way up to the final OTRR’s certification of readiness. We just ran out of time. A conscious effort to assemble an all-inclusive system-support package to accompany the test articles is thus another essential.

We got bitten by another foul-up as well: the package of spares sent out for OT must carry the same upgrades as any upgraded “black boxes” in the system. One of our boxes that hadn’t been upgraded with a circuit card modification had to be hastily swapped out as a field repair.

Test to the full requirements of each increment—A capability increment to be fielded to end users requires thorough verification and validation before handoff. To get the maximum benefit from DT and OT requires involving all stakeholders in joint test planning: users, PMs, DT and OT testers, system analysts and reliability specialists, contractors and others. This includes construction of the test matrix and laying down the ground rules to incorporate evolving configurations and various test scenarios.

There seems to be an inherent obstacle to learning everything about the systems we manage, even during DT: an aversion to “discovering” system failure. We don’t want to fail, so sometimes we intentionally don’t push the system, certainly not beyond what we know it will do or has to do. A shot at a target beyond estimated maximum range, for example, will not be attempted to ascertain system margin, because any miss will likely be scored as a failure. The same pitfall exists for other areas of testing, such as vulnerability or survivability.

Don’t skimp on DT—As the variability of events increases in OT, you will inevitably begin to discover new things about your own system, despite years of experience in its development.

Once, IOT&E presented us with an abnormally large area target—one suitable for firing multiple rockets, the platform’s initial and primary munition, but not individual missiles. So our system used a software algorithm to automatically shift the missile aim point and obtain a better sheaf (coverage) effect, one appropriate to the outsize target. We had overlooked the existence of this “Fendrikov algorithm” within the fire control system software during the entire development effort. Fortunately, after negotiation with the operational testers, we got permission to disable it in follow-on operational test launches.
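
The article doesn’t describe the internals of that aim-point logic, so the following is only a rough sketch of the general idea: distributing fire across an area target wider than a single munition’s effective pattern. The ring layout, function name and parameters below are hypothetical illustrations, not the actual MLRS or ATACMS fire control software:

```python
import math

def sheaf_aim_points(target_center, target_radius, pattern_radius, n_shots):
    """Spread n_shots aim points over an area target wider than one
    munition's effective pattern.

    Hypothetical sketch only: the article does not describe the
    "Fendrikov algorithm," so this invents a simple ring layout.
    """
    cx, cy = target_center
    if target_radius <= pattern_radius:
        return [(cx, cy)] * n_shots  # target fits one pattern: aim at center
    # Offset each aim point onto a ring so the patterns tile the target.
    ring = target_radius - pattern_radius
    return [(cx + ring * math.cos(2 * math.pi * k / n_shots),
             cy + ring * math.sin(2 * math.pi * k / n_shots))
            for k in range(n_shots)]

# For a single missile, any such shift moves the round away from the
# target center -- the kind of surprise the PMO hit in IOT&E.
print(sheaf_aim_points((0.0, 0.0), 600.0, 250.0, 1))
```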

PERFORMANCE NOT AS EXPECTED
A Soldier from the 6th Battalion, 27th Field Artillery Regiment, the test unit for the ATACMS’ IOT&E, operates the missile monitor test device, with which the ATACMS was supposed to be interoperable. However, not having received sufficient emphasis before OT, the device surprised the PMO by testing good missiles to be bad and bad missiles to be good. (Photo courtesy of the author)

During another launch, we experienced a safety delay because of animals in the impact area. The missile had already been initialized, and it remained activated and elevated in the launch position, which caused its inertial guidance set to degrade slightly. It was just another thing that hadn’t happened in over two years of DT, and it went beyond our system specification.

Being placed in a situation beyond any operational scenario we’d anticipated—one that limited us to only a few minutes in the firing position—showed us something new, however, albeit at the cost of an accuracy loss. Once again, we changed the ground rules for the rest of OT to re-initialize missiles if such a delay occurred.

Use modeling and simulation—Our investment in developmental hardware-in-the-loop simulation not only reduced the requisite sample size of live missiles, enabling a full-rate production decision based upon only 42 flights, but also served us in anomaly discovery. That brought home to a lot of us just how important our modeling and simulation investment was. The closer the model was to reality, the more we could learn about our own invention.
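
To see why the live-flight sample size was so precious, consider the classic zero-failure binomial demonstration. The short sketch below is my own illustration, not the program’s actual statistical basis, and the numbers are notional:

```python
def demonstrated_reliability(n_trials: int, confidence: float) -> float:
    """Lower reliability bound demonstrated by n_trials successes and
    zero failures at a one-sided confidence level: R = (1 - c)**(1/n)."""
    return (1.0 - confidence) ** (1.0 / n_trials)

# Notional arithmetic: 42 flawless flights demonstrate only about
# 94.7% reliability at 90% confidence; validated hardware-in-the-loop
# runs are what let a program explore the rest of the envelope.
print(f"{demonstrated_reliability(42, 0.90):.3f}")  # -> 0.947
```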

When missiles didn’t fly according to their predicted operational profile, even if they succeeded against the targets, we knew to investigate for a cause that might cascade or proliferate. (Of course, the model must not be of such high resolution that it actually incorporates the fault or deficiency!) Other unanticipated factors crept in as well. The most interesting discovery of accuracy loss was the result of an operational stack-up from the use of three different mappings of the Earth, called World Geodetic System spheroids, for three different elements of testing: target coordinates, firing point benchmark and onboard navigation system software.
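
As a rough illustration of that stack-up, the sketch below (my own, comparing only the WGS 72 and WGS 84 ellipsoid parameters and ignoring the datums’ origin and orientation shifts) converts the same latitude and longitude to Earth-centered coordinates on two different spheroids. The positions disagree by meters, an error that a precision weapon inherits whenever its inputs mix spheroids:

```python
import math

# Reference ellipsoids: (semi-major axis in meters, inverse flattening).
WGS72 = (6378135.0, 298.26)
WGS84 = (6378137.0, 298.257223563)

def geodetic_to_ecef(lat_deg, lon_deg, h_m, ellipsoid):
    """Convert geodetic coordinates to Earth-centered, Earth-fixed
    (ECEF) XYZ meters on the given reference ellipsoid."""
    a, inv_f = ellipsoid
    f = 1.0 / inv_f
    e2 = f * (2.0 - f)  # first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)
    return ((n + h_m) * math.cos(lat) * math.cos(lon),
            (n + h_m) * math.cos(lat) * math.sin(lon),
            (n * (1.0 - e2) + h_m) * math.sin(lat))

# The same notional point near White Sands, computed on each spheroid.
p72 = geodetic_to_ecef(32.9, -106.4, 1200.0, WGS72)
p84 = geodetic_to_ecef(32.9, -106.4, 1200.0, WGS84)
print(math.dist(p72, p84))  # a couple of meters of disagreement
```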

Coordinate with operational evaluators early—Instrumentation is the single most important consideration that our Block I program had neglected in development. The thirst for system data is unfortunately huge, and we collected more than anyone needed or analyzed.

However, in the minds of many, the need still exists to answer all possible questions that could arise from an OT. Conflict with operational testers can occur when they seek to recapture developmental or technical data that DT has already produced. Stakeholders have to draw the line somewhere. We felt we lost (broke) at least one missile because of dozens of firing circuitry interruptions to analyze a system subcomponent during OT—steps that inappropriately sought technical, specification-compliant data, DT-style, rather than proof of whether the system works in an operational environment.

The best way to ensure that instrumentation is reliable, does not interfere with the system’s operation, and yields valid data from the system is to “require” it in the specifications derived from the capability development document and capability production document.

The PM most likely will want to assign responsibility for this vital effort to the system contractor during development, lest the contractor later try to blame a system failure on nonsanctioned or uncertified hardware added to the product. Instrumentation must not corrupt the data as it flows through the system.

THE ULTIMATE TEST
One of the last of 15 ATACMS tested during IOT&E heads skyward at White Sands Missile Range on May 24, 1990. (Photo by George Baird)

Even the earliest coordination with other OT stakeholders still may leave some issues unresolved until testing begins. Selection of other-than-planned live-fire targets, additionally imposed target location errors and other late-breaking “requirements” should not come as surprises, but as the predictable result of players changing in the long life of a developing system. PMs don’t have to accept employment of the system outside the bounds of its expected operational use. But they should be ready for the “surprise” requests to do so, and anticipate how to handle them.

Case in point: DOT&E asked us to fire ATACMS off the side of the launcher instead of its prescribed operational mode, firing directly over the crew cab. It had never been done in DT, and the user community had no desire to do so, but the demand still came. We resolved the issue by promising to demonstrate the possibility and safety of this new mode after the IOT&E and to render a technical report once we had conducted our own pretest analysis of the factors involved.

The simple fact is, you—the PM and PMO—are the ones who care most about the outcome of OT and must resolve the anomalies that occur as the test incident reports are written. The tester simply wants to find and score the anomalies and move on.

Plan for the statutorily restricted roles of the system and supporting contractors during IOT&E. We cordoned ours off into a marked, private area, even requiring that they wear red baseball caps, which alerted troops to stay away from them. They kept a “hot mockup,” a spare MLRS launcher, in their area, which helped greatly in resolving anomalies; we could easily replicate the anomalies on the spot and feed information back to the PMO over the many days of testing. A daily journal and after-action review are good ways, not only for the executors of the OT but also for the PM’s on-site representative, to recap for the PMO what has happened and what is planned next.

CONCLUSION
Coyle’s advice to PMs still holds true, if we can just frame in our own minds how to apply it. The lessons we learned back then during the ATACMS tests are also timeless, as I have heard over the years from PMs for systems as diverse as underwater robotics, communications gear and ground vehicles. They have described their experiences with the OT community relationship, test range constraints, instrumentation demands, late-breaking ideas for testing changes, technical “discoveries” and the like.

Today’s PMs, take note if an operational test is going to occur in your program in the next several years: It’s always best to learn such lessons without the accompanying penalty of failure.


JOHN T. DILLARD, COL., USA (RET.) is the academic associate for systems acquisition management at the Graduate School of Business and Public Policy, Naval Postgraduate School (NPS) in Monterey, California. He began his Army service as a Ranger-qualified infantryman and master parachutist, serving in the 1st Infantry and 82nd Airborne divisions, and joined the NPS faculty in 2001 upon retiring from the Army after 26 years of service. He spent 16 of those years in acquisition, most recently as commander of the Defense Contract Management Agency, Long Island, New York. He has also served on the faculty of the U.S. Army War College and as an adjunct professor of project management for the University of California, Santa Cruz. He holds an M.S. in systems management from the University of Southern California and is a distinguished military graduate of the University of Tennessee at Chattanooga with a B.A. in biological sciences.

This article was originally published in the April – June 2017 issue of Army AL&T magazine.
