Been There, Done That: Strategic Acquisition

February 13, 2017 (updated August 30, 2018) | Army AL&T Magazine

Does DOD set PMs up for failure with impossibly complex mega-programs built on immature technology? After a career in uniform supporting Army acquisition followed by a second career teaching acquisition, a former PM takes a final pulse check and outlines four best practices for keeping programs big and small on track.

(The fourth in a series of commentaries by former program managers on the faculty of the Naval Postgraduate School)

by Michael W. Boudreau, Col., USA (Ret.)

I am recently retired from government service, after 28 years in the Army and then 20 years of teaching at the Naval Postgraduate School (NPS). I have been fortunate to see acquisition from the perspectives of a military user, a maintainer of Army equipment, a builder of M1 Abrams tanks, a staff officer in the Pentagon, a project manager (PM) and a teacher of acquisition management.

It comes as no surprise to this audience that defense acquisition is multifaceted, requiring intensive management and involving three systems: the Joint Capabilities Integration and Development System, which establishes requirements; the Planning, Programming, Budgeting and Execution process, which provides the funding; and the Defense Acquisition System, which executes the acquisition. Unfortunately, these three systems do not interoperate seamlessly. As if this were not enough of a challenge, the Office of the Secretary of Defense (OSD) and Congress frequently change the rules by which acquisition must be accomplished, as described by John T. Dillard of the NPS faculty in his 2003 paper, “Centralized Control of Defense Acquisition Programs: A Comparative Review of the Framework from 1987 – 2003.”

There is a longstanding and continuing trend of acquisition programs failing to achieve acquisition program baseline (APB) goals; that is to say, program managers often fail to meet important cost, schedule and performance aspects of the plans they agreed on with their superiors. The Government Accountability Office (GAO), formerly the General Accounting Office, has documented this trend thoroughly in multiple reports.

Given that defense acquisition is and will remain multifaceted, imperfect and evolving, must its future be completely and irremediably bleak? I suppose the answer to this question depends on whom you ask. Our government watchdog organizations, particularly GAO, can point to many examples of management mistakes. If you look at the three metrics of every program—cost, schedule and performance—you will see that over several decades, many acquisition programs have missed or will miss achieving APB goals in one or all three of these metrics.

Put another way, lots of defense programs cost more than they should, arrive late or don’t do what they’re supposed to. GAO detailed this situation in its 2015 and 2016 annual reports “Defense Acquisitions: Assessments of Selected Weapon Programs,” which rate specific programs on the attainment of “product knowledge” and describe program status in terms of cost, schedule, performance and risk.

Sometimes PMs sign up for cost or schedule goals that are unachievable; in October 2015, GAO characterized this as a systemic problem in which the acquisition process is “in equilibrium,” meaning that new programs are initiated with slender chances of completion on schedule and within cost. In many programs, technologies have not been ready to support mature, production-ready systems, leading to schedule concurrency—for example, simultaneously redesigning, retesting and manufacturing—which often brings delays, cost increases and then more delays. It is easy to paint a dismal picture of defense acquisition.

THE BLEAK
From my perspective, the elephant in the room is DOD’s propensity to launch “mega” programs that are beyond its ability to manage successfully. The department’s really large programs, such as the Army’s Future Combat System (FCS), the multiservice, multinational F-35 Joint Strike Fighter program and the Navy’s ­Gerald R. Ford–class aircraft carrier, each reflect enormous system complexity—multiple variants, multiple new technologies and large amounts of associated software—that continues to bedevil acquisition managers. These three programs are very different from one another, but each suffers (or suffered, in the case of FCS, which was terminated in 2009) from unmanageable complexity.

This is no criticism of the management teams that guided these very important programs. Rather, it’s a criticism of leadership decisions to enter into mega-programs that risk valuable funds and, because of their complexity, are unlikely to succeed on schedule and within cost. The challenges of system complexity include immature technology, both hardware and software, which may be most intractable in mega-programs but affects programs of all sizes throughout the military services.

Maturing technology may not keep pace with the development schedule of the warfighting system; the PM needs to acknowledge this risk and have management plans in place should the technology not be ready on time. That is, there need to be “plan B” alternatives and off-ramps for incorporating less risky hardware technology solutions in the event that the preferred technology stumbles, so as not to interrupt the completion schedule for the emerging system. GAO presented this recommendation, while not a new idea, to Congress in its October 2015 study “Defense Acquisitions: Joint Action Needed by DoD and Congress to Improve Outcomes.”

At present, the paths to improved outcomes for hardware versus software appear to lead in different directions. Technology development leading to advanced hardware solutions needs to be accomplished in the technology base before being handed over for incorporation into the emerging warfighting system. On the other hand, software must be developed or adapted uniquely for a warfighting system, using highly disciplined systems engineering processes.

This suggests to me that software development supporting a new system will normally require major up-front effort, with about half of the software development cost expended before the program’s milestone B, as described in the 2007 research report “Software Architecture: Managing Design for Achieving War­fighting Capability,” by Brad Naegle of NPS. It also suggests that software may be the pacing activity within hardware and software program developments—a fact reflected in many of the developmental programs in GAO’s 2015 “Defense Acquisitions” annual report.

MEGA PROJECTS, MEGA PROBLEMS
DOD has a bad habit of launching enormously complex projects that become both too big to fail and too big to succeed in anything approaching on-time, on-budget delivery, the author says. (Photo of NLOS-C by U.S. Army; photo of Joint Strike Fighter by Cliff Owen, Associated Press; photo of USS Gerald R. Ford by U.S. Navy)

THE HOPEFUL
Although many acquisition programs have struggled during their development, much progress has been made, particularly over the past 20 years, to help PMs successfully manage their programs. Certain established practices will help PMs and their teams understand programs more clearly and manage them more effectively. Here are four acquisition best practices and resources that are not new but can make a big difference for those who apply them conscientiously and with discipline. I offer no statistical data to support them, although some of these references contain supporting statistics.

Technology Readiness Assessment (TRA) Guidance, April 2011 (updated). Since 2001, DOD has used technology readiness levels (TRLs)—developed by NASA in the 1980s and then adapted by the Air Force Research Laboratory—in major programs, as GAO had long encouraged. Currently, DOD Instruction (DODI) 5000.02, Operation of the Defense Acquisition System, requires TRAs for major defense acquisition programs at the release of a developmental request for proposal (RFP), milestone B and milestone C.

DOD uses nine TRLs to describe the developmental progress of emerging systems as they pass through their prescribed milestones and phases. This common framework for technology development and common language to describe the waypoints are enormously useful to acquisition managers. Before the introduction of standardized TRLs, our understanding of the progress of developmental programs was significantly less clear; to characterize our progress, we used terminology that meant different things to different people. Today, the use of TRLs reduces the likelihood of misunderstanding whether a developing system has progressed to a specific intermediate milestone.

Manufacturing Readiness Level (MRL) Deskbook, Version 2.4, August 2015. The manufacturing readiness levels closely parallel the TRLs. Ten MRLs describe and guide progress in preparation for the manufacture of emerging warfighting systems as programs pass through their prescribed milestones and phases.

These manufacturing readiness metrics overlay the milestones and phases of the Defense Acquisition System, providing concrete measures of preparation and activity that culminate in full-rate production. Besides the 10 levels, the MRL Deskbook identifies nine areas of manufacturing risk that call for tracking through each of the MRLs. These risk areas, or threads and sub-threads, comprise activities that PMs must manage to ensure the thorough planning and careful monitoring of manufacturing. The threads and sub-threads are:

  • Technology and industrial base.
  • Design.
  • Cost and funding.
  • Materials.
  • Process capability and control.
  • Quality management.
  • Manufacturing workforce, including engineering and production.
  • Facilities.
  • Manufacturing management.

Knowledge Management. Since 1998, GAO has emphasized the importance of a shared understanding of critical knowledge by the PM, the intermediate acquisition chain of command and the acquisition authority at selected program decision reviews (such as milestone B) before allowing a developmental acquisition program to proceed to its next step. In 1998, three knowledge points began to take shape and have since become more detailed and useful, as shown in GAO’s 2015 “Defense Acquisitions” annual report. They are:

  • Knowledge Point 1: Technologies, time, funding and other resources match customer needs. Decision to invest in product development.
  • Knowledge Point 2: Design is stable and performs as expected. Decision to start building and testing production-representative prototypes.
  • Knowledge Point 3: Production meets cost, schedule and quality targets. Decision to produce first units for customer.

Shared knowledge at these three points is likely to improve risk reduction and give decision reviews greater confidence when considering whether to advance an acquisition program to its next developmental phase.

GAO is right about program knowledge point management. The definitions are clear, and the specific review points align easily to milestone B, the critical design review and milestone C. Although the terminology of knowledge point management and GAO’s specific recommendations have not carried over completely into DODI 5000.02, its companion document, DOD Directive 5000.01, is consistent with GAO’s intent, as in the following extract:

E1.1.14. Knowledge-Based Acquisition.
PMs shall provide knowledge about key aspects of a system at key points in the acquisition process. PMs shall reduce technology risk, demonstrate technologies in a relevant environment, and identify technology alternatives, prior to program initiation. They shall reduce integration risk and demonstrate product design prior to the design readiness review. They shall reduce manufacturing risk and demonstrate producibility prior to full-rate production.

The OSD policy guidance is clear, but not as specific as GAO recommends; in retrospect, acquisition leaders have a track record of too readily ignoring a lack of “program knowledge” and forging ahead optimistically, hoping that missing knowledge will somehow materialize when necessary. Ignoring knowledge points appears misguided, however; the defense acquisition landscape is littered with programs that did not have sufficient “knowledge” to support success at the next acquisition step but were authorized to move forward anyway.

Beyond poor test results, the outcomes have been program cost growth, schedule delays, warfighting systems that only marginally perform their missions, unexpectedly high maintenance and retrofit costs, unachievable readiness goals and even systems that have been produced but cannot be deployed because they are unsuitable or ineffective. GAO has described some of these problems in its ongoing study of high-risk programs.

In my opinion, the expectation within the acquisition community is that PMs typically push their programs forward unless their leadership tells them to halt. Therefore, if a program is not ready to move to the next developmental phase, the milestone decision authority has to be tough and disciplined, not approving advancement of the program to the next acquisition phase until it meets its knowledge requirements, to ensure a reasonable likelihood of success.

Reliability Growth. The OSD’s Office of the Director, Operational Test and Evaluation (DOT&E) and the Defense Science Board have clearly linked poor reliability of warfighting systems to higher sustainment costs. Research by DOT&E and the Defense Science Board pinpoints reliability and maintainability as integral parts of the systems engineering process that must be reported in connection with the systems engineering plan at milestone A, the decision point for the development RFP release, milestone B and milestone C. For Acquisition Category I programs, reliability growth curves showing the growth strategy must be part of the engineering plan and the test and evaluation master plan, to be tracked until the program achieves reliability thresholds as outlined in DODI 5000.02.

CONCLUSION
Hindsight is 20/20, as the saying goes. In retrospect, I would have applied the four best practices described here—technology readiness assessment, manufacturing readiness assessment, knowledge management and reliability growth—to my own program management during my Army career, if I had been aware of them at the time. Unfortunately, they had not yet become part of the DOD acquisition community’s collective body of knowledge.

I can say now, though, that I would advise any current or soon-to-be PM to use these best practices. They will put acquisition developmental programs on the right track for better outcomes.

For more information, go to the NPS ­Acquisition Research Program website at http://www.acquisitionresearch.net/page/view/home/.


MICHAEL W. BOUDREAU, COL., USA (RET.), was a senior lecturer at NPS from 1995 until his retirement from civil service in July 2016. While an active-duty Army officer, he was the project manager for the Family of Medium Tactical Vehicles within the Program Executive Office for Combat Support and Combat Service Support. He commanded the U.S. Army Materiel Support Command – Korea and the Detroit Arsenal Tank Plant. Boudreau is a graduate of the Industrial College of the Armed Forces, Defense Systems Management College and the Army Command and General Staff College. He holds an MBA and a B.S. in mechanical engineering from Santa Clara University.

This article was originally published in the January – March 2017 issue of Army AL&T Magazine.
