
AL-TR-89-013

AD:

Current Launch Vehicle Practice and Data Base Assessment

Volume I: Executive Summary and Report Body

Final Report for the Period April 1984 to September 1989

June 1989

Authors:
J. R. Fragola
L. Booth

Science Applications International Corporation
342 Madison Avenue, Suite 1100
New York NY 10173

F04611-88-C-0014

Approved for Public Release

Distribution is unlimited. The AL Technical Services Office has reviewed this report, and it is releasable to the National Technical Information Service, where it will be available to the general public, including foreign nationals.

Prepared for the

Astronautics Laboratory (AFSC)
Air Force Space Technology Center
Space Systems Division
Air Force Systems Command
Edwards Air Force Base, California 93523-5000

Foreword

This document represents the final report for Task 8, Propulsion System Reliability Development, of the Research Applications SETA program. The work was performed over the period 30 June 88 to 24 February 1989.
The Air Force Project Officer for this task was Mr. David Perkins, AFAL/VSAB. The SAIC Program Managers were Dr. Robert Long and Mr. William Haynes. The task leader at SAIC was Mr. Joseph R. Fragola. The other principal technical contributors at SAIC were Lewie Booth and Dr. Yu Shen. We appreciate the assistance of Larry Quinn of AFAL in administering this task and Ms. Zun-Yan Wang and Ms. Carol Heymsfield of SAIC in the preparation of this report.
This report has been reviewed and is approved for distribution in accordance with the distribution statement on the cover and on the DD Form 1473.

DAVID R. PERKINS
Project Manager

FRANKLIN B. MEAD, JR., PhD
Chief, Future Technologies Section

FOR THE DIRECTOR

ROBERT C. CORLEY, PhD
Deputy Director, Astronautical Sciences Division

CLARENCE J. C. COLEMAN, Capt, USAF
Chief, Advanced Concepts Branch

NOTICE
When U.S. Government drawings, specifications, or other data are used for any purpose other than a definitely related Government procurement operation, the fact that the Government may have formulated, furnished, or in any way supplied the said drawings, specifications, or other data, is not to be regarded by implication or otherwise, or in any way licensing the holder or any other person or corporation, or conveying any rights or permission to manufacture, use, or sell any patented invention that may be related thereto.

Astronautics Laboratory (AFSC)
Edwards Air Force Base, CA

TECHNICAL REPORT (FINAL)

TASK 8

CURRENT LAUNCH VEHICLE RELIABILITY PRACTICE
AND
DATA BASE ASSESSMENT
VOLUME 1: EXECUTIVE SUMMARY AND REPORT BODY

PRINCIPAL INVESTIGATOR:

Report No. AL-TR-89-013
Contract No. F04611-88-C-0014
CDRL 3003

Science Applications International Corporation
342 Madison Avenue, Suite 1100, New York, NY 10173

REPORT DOCUMENTATION PAGE (Form Approved OMB No. 0704-0188)

Report Security Classification: UNCLASSIFIED
Distribution/Availability of Report: Approved for Public Release; Distribution is Unlimited
Monitoring Organization Report Number: AL-TR-89-013
Performing Organization: Science Applications International Corporation, 342 Madison Avenue, Suite 1100, New York, NY 10173
Monitoring Organization: Astronautics Laboratory (AFSC), Edwards AFB, CA
Procurement Instrument Identification Number: F04611-88-C-0014
Source of Funding Numbers: Program Element No. 62302F
Title (Include Security Classification): Current Launch Vehicle Practice and Data Base Assessment. Volume 1: Executive Summary and Report Body
Personal Author(s): Fragola, Joseph R.; Booth, Lewie; and Shen, Yu
Type of Report: Final. Time Covered: from 84/04 to 89/09. Date of Report (Year, Month, Day): 89/06. Page Count: 202
Supplementary Notation: Volume 2 of this report contains Appendix B.
COSATI Codes: Field 21, Groups 07 and 08
Subject Terms: reliability, launch vehicle, propulsion development, engine failure correlation, variability control, reusability, risk management, reliability performance indicators, reliability growth analysis
Abstract: This report describes the effort to identify liquid and solid propulsion design parameters, development methodologies, and production/operations techniques related to the improvement of propulsion reliability for future launch vehicles. Six key recommendations are made. There is a section on current practice and data base assessment. This is followed by a section on reliability enhancing methods. Finally, there is a section on quantification and prioritization of methodologies. Volume 1 contains Appendix A, which includes an investigation of historical failure correlation factors using the SSME test and flight history as an example, a quick calculation of failure correlation factor versus engine out capability, a reliability analysis of current US launch vehicles, and a history of US launch vehicles. Volume 2 contains only Appendix B, which is a compendium of trip reports accomplished during the performance of this study.
Distribution/Availability of Abstract: Unclassified/Unlimited. Abstract Security Classification: UNCLASSIFIED
Name of Responsible Individual: David R. Perkins. Telephone (Include Area Code): (805) 275-5640. Office Symbol: AL/LSVF
DD Form 1473, JUN 86. Previous editions are obsolete. Security Classification of This Page: UNCLASSIFIED

TABLE OF CONTENTS

Executive Summary
  Objective
  Key Recommendations
  Background
1.0 (Task 1) Current Practice and Data Base Assessment
  1.1 Current Practice
    1.1.1 Current Practice Background
    1.1.2 Major Activities Constituting Infrastructures
    1.1.3 Current Infrastructure Activities Affecting Reliability
  1.2 Data Base (Historical)
    1.2.1 Data Collection
    1.2.2 Data Organization
    1.2.3 Representative Design Parameter Development
  1.3 Deficiencies of Current Aerospace Reliability Practice in Application to Advanced Launch System Needs
    1.3.1 FMEAs/FMECAs
    1.3.2 Quantitative Reliability Analysis
2.0 (Task 2) Reliability Enhancing Methodologies
  2.1 Lessons Learned
    2.1.1 Variability Control
    2.1.2 Reusability
    2.1.3 Performance Indicators
    2.1.4 Correlation Factors
    2.1.5 Correlation vs Engine Out
  2.2 Historical Data Analysis (Reliability Growth)
    2.2.1 Y. Shen's Methodology
    2.2.2 D. Lloyd's Methodology
    2.2.3 Curve Fitting
    2.2.4 Bayesian
    2.2.5 Classical Binomial
  2.3 Baseline Reliability Enhancement Methodology Identification
    2.3.1 Proposed Infrastructure Controls to Enhance Reliability
    2.3.2 Risk Management
3.0 (Task 3) Quantification and Prioritization of Methodologies
  3.1 Testing of Quantitative Methods
    3.1.1 Comparison of the Methods of Section 2.2
    3.1.2 PRACA/FRACA Trending
    3.1.3 Operating Characteristic Curves and Reliability
  3.2 Evaluation of Qualitative Methods
    3.2.1 Top Down Analysis
    3.2.2 Product Design FMEAs
    3.2.3 Manufacturing Interfaces
  3.3 Prioritization of Methodologies
Recommendations
References
Appendix A
  A.1 An Investigation of Historical Failure Correlation Using the Shuttle SSME Test and Flight History
  A.2 A Quick Calculation of the Effect of Failure Correlation on Engine Out Capability
  A.3 Reliability Analysis of Current US Launch Vehicles
  A.4 History of US Launch Vehicles
Appendix B (See Volume II)

EXECUTIVE SUMMARY
The Air Force currently has several ongoing propulsion technology development programs including the significant joint development with NASA of the Advanced Launch System (ALS). Previous investigation by the Air Force Astronautics Laboratory and others has indicated that launch vehicle reliability is perhaps the key driving parameter for development program success.
Given the key role played by reliability, AFAL requested that SAIC undertake a study of propulsion system reliability development. The objective of this study was to identify, and where possible quantify and prioritize, propulsion techniques related to launch vehicle reliability.
The study was to include visits to an engine manufacturer, a launch vehicle systems contractor, and NASA sites to develop information to supplement literature searches and independent research to provide a base of information sufficient to allow SAIC to:

* Assess Current Practice and the Resulting Historical Reliability Data Base
* Investigate Potential Reliability Enhancing Methodologies
* Quantify and Prioritize the Methodologies

The study results indicated that current launch vehicle reliability levels are on the order of 90 - 95%. This is substantially below future Air Force system requirements of 99 - 99.9%. Investigation into how these historical levels of reliability could be significantly improved resulted in the development of the following six key recommendations for the consideration of the Air Force and AFAL.
1. Failure Correlation Factors are key factors of interest to design decision makers. Specific studies, which address what factors have been achieved in the past and what design trades have been made to ensure the low factors quoted by contractors are achievable, appear to be lacking. The Air Force should consider requiring that such studies be undertaken.
2. Variability Control, especially of residual variability, may be the key barrier to high launch reliability achievement. The Air Force should consider requiring that some specific program for variability control be included in future propulsion technology development programs.
3. Reusability has been shown to have indirect, potentially negative, impacts on high reliability achievement. The trade-offs which exist between high reliability and reusability should be clearly identified and included in propulsion programmatic decision making.
4. Risk Management has been shown to have potential benefits in maintaining the high reliability of programs in other industries. The advisability of risk management being included as an integral part of propulsion system development should be considered.
5. Reliability Performance Indicators should be developed whose trend trajectories lead, or presage, the occurrence of reliability problems so that program management action can be taken prior to the development of reliability problems.
6. Reliability Growth Forecasting is important during the development of systems with high reliability requirements. This is especially true when program economics prohibit extensive development test flights. Reliability growth approaches should be investigated and applied as appropriate to propulsion system development programs.

OBJECTIVE
The objective of this effort was to identify, and where possible quantify and prioritize, liquid and solid propulsion design parameters, development methodologies, and production/operations techniques related to launch vehicle reliability.
KEY RECOMMENDATIONS
The following areas have been identified as having significant reliability impact. These areas each warrant further in-depth study if the high reliability goals of the Air Force advanced launch vehicle programs are to be achieved in an operational system.
1. Failure Correlation*
The percentage of failures which are likely to impact more than one engine in a multi-engine design is of critical design import. This percentage, or "failure correlation factor," must be well below 20% for reliability oriented design approaches such as engine out capability to be effective; the lower the factor, the more effective is this heuristically pleasing design option. Proposals for new engine designs quote extremely low factors (on the order of 3 out of 100) that do not seem consistent with other design parameters and are considerably lower than factors achieved on recent engine programs (such as the Shuttle main engine test program). Finally, there did not appear to be any significant confidence that such low factors would be achieved in practice.
Recommendation 1 - Failure correlation factors are key reliability parameters of interest to design decision makers. Specific studies which address what factors have been achieved in the past and what design trades have been made to ensure the low factors quoted in the reliability designs appear to be lacking. It is recommended that these studies be undertaken prior to the selection of any design alternative.

* The definition of failure correlation cited here is broader than that used traditionally by propulsion researchers.
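Appendix A.2 contains the study's own quick calculation of failure correlation versus engine out capability. As a separate, simplified illustration of why the factor must stay well below 20%, the sketch below computes vehicle loss probability for a hypothetical engine cluster under an assumed model in which each engine failure is either correlated (defeating engine out capability) or independent (survivable unless a second engine also fails); the engine count, per-engine failure probability, and the model itself are illustrative assumptions, not values or methods taken from this report.

```python
def vehicle_loss_prob(n_engines: int, p_fail: float, corr_factor: float) -> float:
    """Illustrative model: each engine fails with probability p_fail; a fraction
    corr_factor of those failures are 'correlated' and cause vehicle loss by
    themselves, while independent failures are survivable (engine out) unless
    two or more engines fail."""
    p_ok = 1.0 - p_fail
    # Survive if no engine fails, or exactly one engine has an independent failure.
    p_survive = p_ok ** n_engines + \
        n_engines * p_fail * (1.0 - corr_factor) * p_ok ** (n_engines - 1)
    return 1.0 - p_survive

# Hypothetical 7-engine cluster with a 2% chance of an engine failure per flight.
for c in (0.00, 0.03, 0.20, 0.50):
    print(f"correlation factor {c:.2f}: P(vehicle loss) = "
          f"{vehicle_loss_prob(7, 0.02, c):.4f}")
```

In this example a 20% correlation factor roughly quadruples the loss probability relative to the fully independent case, which is why low quoted factors need to be substantiated.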
2. Variability Control
Currently achieved launch vehicle reliability has been shown by this investigation to be substantially below the levels required of future systems. The investigation uncovered examples of reliabilities in other somewhat similar systems which routinely achieve 0.99, and some which approach 0.999. These reliabilities currently meet or exceed the reliability required for the Air Force operational launch system, and they appear to have been achieved through the tight control of variability. While it would be inappropriate to make any direct comparison between these systems and launch vehicles, it is also clear from a review of the failure data that the key barrier to higher reliabilities may be residual variability. A cursory review of somewhat similar systems such as turbines, and recent Air Force variability reduction studies, provide further support for this argument.
Recommendation 2 - Residual variability may be the key barrier to high launch reliability achievement. For this reason, it is recommended that investigations of variability control programs such as Taguchi methods or alternatives be directed at determining the applicability of the methods to the launch vehicle production process. It is further recommended that some specific program for variability control be included throughout all phases of the advanced launch system program.
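As a simple numerical illustration of the leverage residual variability has on reliability (a generic calculation added for illustration, not an analysis performed in this study), the sketch below computes the fraction of hardware whose critical characteristic falls outside an assumed tolerance band as the residual standard deviation is reduced; the nominal value, limits, and sigma levels are hypothetical.

```python
from statistics import NormalDist

def out_of_limit_fraction(mean: float, sigma: float, lo: float, hi: float) -> float:
    """Fraction of a normally distributed characteristic falling outside [lo, hi]."""
    dist = NormalDist(mean, sigma)
    return dist.cdf(lo) + (1.0 - dist.cdf(hi))

# Hypothetical critical characteristic: nominal 100 units, design limits 97 to 103.
for sigma in (1.5, 1.0, 0.5):
    frac = out_of_limit_fraction(100.0, sigma, 97.0, 103.0)
    print(f"residual sigma = {sigma:3.1f}: out-of-tolerance fraction = {frac:.2e}")
```

Halving the residual standard deviation in this example cuts the out-of-tolerance fraction by several orders of magnitude, which is the kind of gain a Taguchi-style variability control program is intended to capture.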
3. Reusability

Reusability is, on the surface, a design goal of significant program benefit. However, the benefits of reusability are significantly compromised if the reliability of an engine is adversely affected by the requirement. Besides the direct costs involved in developing a reusable design, there also appear to be significant indirect costs which are required to maintain reliability in a reusable design. For example, reusability by its very nature tends to decrease the production run. When production runs are decreased, investments in automated production equipment become less economical and the production process therefore tends to become more prototypical. Prototypical production, especially of complex equipment, increases the problems associated with variability control, and therefore substantial postproduction testing may be required to ensure high reliabilities. A good example of such an indirect impact of reusability was seen at the Rocketdyne SSME production facility in Canoga Park, California.
Recommendation 3 - Reusability has been shown to have indirect and potentially negative impacts on the achievement of high reliabilities at reasonable cost. The indirect impacts of reusability on reliability and cost through such mechanisms as variability control problems should be thoroughly investigated and the results of this investigation included in the programmatic decision making related to reusability.
4. Risk Management
Achievement of high operational reliabilities in such areas as nuclear power plant safety systems has been aided by a deliberately proactive program that attempts to identify the risks to reliable operation and to address them according to their importance. Such a risk management program has been investigated and recommended by NASA SRM & QA for future projects, but it is not clear whether a risk management program is planned for the acquisition of advanced launch systems.

Recommendation 4 - The Air Force should investigate the advisability of incorporating a risk management program as an integral part of any launch system program.
5. Reliability Performance Indicators and Trending
For high reliability programs it is important to identify, early on, symptoms of the process which presage deterioration in performance. This has been done in the financial community, in the commercial
aircraft community and in the nuclear power safety community by the development of a set of 'leading"
performance indicators and developing performance trends based upon the indicator trajectories through
time. If such a set of indicators could be developed and trended for advanced propulsion system development
programs, the indicator trajectories might provide early warning of problems arising during development
and operation. This early warning could provide the time required to institute corrective action before
actual program reliability performance is affected.
Recommendation 5 - The Air Force should develop as part of advanced propulsion system development
programs a set of potential indicators of programmatic reliability performance. This indicator set should be based originally on historical information, but later updated and validated as advanced propulsion system development program-specific information becomes available.

6. Reliability Growth Analysis

In all developmental systems a certain degree of reliability growth is to be expected. However, program managers need to know the pace of the expected growth so that they can determine if the program is likely to meet the operational reliability goals within developmental time constraints. An understanding of the growth process is therefore essential to the determination of the proper role to be played by history in the forecasting of future system reliability. If an historical failure has been analyzed and its cause determined, and suitable corrective action is implemented to prevent its recurrence, it is recognized that it would have its probability of occurring again diminished when it is utilized for predicting future performance. But by how much? The determination of how much each failure should be counted is important in order to establish the proper "calibration" for the reliability growth characteristic to be used to determine how well development is proceeding. Several approaches have been developed to address the estimation problem. Among those developed are the early works of Duane at GE, that of David Lloyd of TRW, and the approach explored by Dr. Yu Shen of SAIC as part of this study. In addition, Bayesian approaches may show promise for improved growth forecasting.
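The Duane model mentioned above lends itself to a compact calculation. The following sketch (an illustration of the generic Duane log-log fit, not the Lloyd or Shen formulations developed in this study) fits the growth line to a set of hypothetical cumulative failure times and reports the growth slope and a projected instantaneous MTBF.

```python
import numpy as np

def duane_fit(failure_times):
    """Fit the Duane reliability growth model, in which the cumulative failure
    rate N(t)/t follows K * t**(-alpha); log(N(t)/t) is therefore linear in
    log(t). Returns (alpha, K) from an ordinary least squares fit.
    failure_times are cumulative test times at which failures occurred."""
    t = np.asarray(failure_times, dtype=float)
    n = np.arange(1, len(t) + 1)            # cumulative failure count N(t)
    slope, intercept = np.polyfit(np.log(t), np.log(n / t), 1)
    return -slope, np.exp(intercept)        # growth slope alpha, constant K

# Hypothetical cumulative test times (seconds) at which failures occurred.
times = [200, 450, 900, 1800, 3500, 7000, 15000]
alpha, K = duane_fit(times)
inst_mtbf = 1.0 / ((1.0 - alpha) * K * 20000.0 ** (-alpha))
print(f"Duane growth slope alpha = {alpha:.2f}")
print(f"Projected instantaneous MTBF at 20,000 s of test time = {inst_mtbf:.0f} s")
```

The fitted slope is the "calibration" referred to above: a slope near zero says corrective actions are buying little, while slopes in the 0.3 to 0.6 range are commonly quoted for aggressively managed test-analyze-and-fix programs.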
Recommendation 6 - Reliability growth forecasting is important during the development of systems with high reliability requirements such as ALS. Accurate growth forecasts allow program managers to determine early on if reliability requirements are likely to be met. This is especially true when program economics prohibit extensive development test flights. Approaches exist to allow forecasts to be generated; however, further development is required before a reasonable growth forecast can be developed for advanced propulsion systems. It is therefore recommended that the concept of reliability growth be further developed and applied as appropriate to propulsion system development programs.

BACKGROUND

The Air Force has several ongoing propulsion technology development programs that are aimed at new launch vehicle applications. A fundamental goal for any new launch vehicle is low cost. One element of cost that is receiving increasing levels of national attention is the cost of unreliability. This issue was highlighted by the recent series of catastrophic launch failures. These failures included two Titans, a Delta, an Atlas, and a Shuttle. All were lost in a period of 2 years. Historical data bases indicate that in general launch vehicle reliability against catastrophic failure is approximately 0.92. This value is dominated by propulsion system failures and is unacceptably low for any future launch vehicles.

The traditional methodologies for the development of propulsion systems have involved the use of traditional manufacturing methodologies coupled with traditional design methodologies that assume some measure of safety factor in the design process. The traditional issue that was fundamental to launch vehicle applications was that the vehicle payload capability was highly sensitive to the mass properties. Hence, margins were decreased to the maximum extent possible during the design phase. There remains a distinct developmental transition to flight weight hardware in most aerospace developments. Reliability was only infrequently evaluated as a secondary concern. Point estimate techniques for estimating application reliability were employed rather than rigorous statistical testing. Manufacturing process control was instituted after development in order to qualify vehicles for manned flight or higher confidence of success following catastrophic failures. It is apparent that in order to achieve higher levels of reliability in propulsion systems, and hence in the launch vehicles, alternative development approaches need to be explored.
There have been several suggested approaches to achieving higher reliability. Design for reliability philosophies include redundancy techniques and higher design margins. Process control advocates point to human error contributions to failure and article to article variations, proposing that more automated production and higher levels of quality control and non-destructive testing will achieve the desired reliability. It is fundamentally assumed that design engineers should be more aware of ultimate reliability and producibility issues as they pursue designs. Inevitably, the greatest stumbling block to achieving higher reliability goals is the limited funding available for development and qualification programs and the historical reliability approach perspective, which consigns probabilistic techniques to only the topmost levels of program analysis and evaluation. While history has shown it to be true that in the ultimate design reliability not only costs nothing but will produce significant cost benefits, this is not true in the near term design development phase. Here reliability tasks increase, at least initially, the cost, and they do so in an environment where funding is scarce and where reliability needs must compete with other more visible programmatic needs (such as performance upgrades). In such an environment of new program development within strict resource constraints, reliability resources can be eroded in favor of programmatic needs considered more immediate unless investments in reliability are "fenced in" early and not confused with management reserves.

1.0 (TASK 1) CURRENT PRACTICE AND DATA BASE ASSESSMENT

1.1 Current Practice
1.1.1 Current Practice Background
Corporations involved in the design, manufacture, test and operation of propulsion systems generally have infrastructures that result from specific government agency requirements. Those controls which exist within any given infrastructure that have an impact on reliability also exist largely due to government requirements. At the highest level these controls consist primarily of Failure Modes and Effects Analyses (FMEAs) and Problem (or Failure) Reporting and Corrective Action Systems (PRACAs/FRACAs). Although these controls have had a positive impact on reliability, the impact, because it is often somewhat indirect, is not readily measurable. Thus, it is difficult to ascertain quantitatively that spending a given amount of resources on FMEAs or PRACAs will in fact pay off. In addition, there are, at least in the initial phases of program development, few financial incentives for "better" reliability even though there are substantial downstream benefits from investing in reliability. Furthermore, even if there were such incentives, it would be difficult for manufacturers to know where to spend scarce resources to obtain the best reliability returns. This is primarily due to inconsistent historical reliability data bases.
The problem is further compounded by the constraint of sample size on the measurement of achieved reliability in highly reliable systems. In other words, to demonstrate that a given reliability has been achieved at a reasonable confidence level, a large number of systems must be tested. It is obvious that such an approach is not practical from a cost/schedule standpoint. This is not to say that presently achieved reliabilities are inadequate to satisfy the previously and currently existing requirements. In fact, the attained reliability of any propulsion system is generally based on relatively small sample sizes and the underlying assumption that each propulsion system firing is independent of all others. This simply means that while we may not know precisely (from a reliability standpoint) where we are now, we do know where we are well enough to understand that we are far from the high reliability goals desired of future propulsion systems.
Rather, the relevant question is not where we are now, but how an improvement in reliability can be achieved. Because of the relative nature of this question, it may turn out that accurately predicting improvements is easier than measuring attained reliability.
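To make the sample size constraint mentioned above concrete, the classical success-run relation gives the number of consecutive failure-free firings needed to demonstrate a reliability level at a given confidence. The sketch below is a standard textbook calculation added here for illustration; the reliability and confidence values are examples, not program requirements quoted from this study.

```python
from math import ceil, log

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Consecutive failure-free trials needed to demonstrate `reliability` at
    `confidence`, from the success-run relation reliability**n <= 1 - confidence."""
    return ceil(log(1.0 - confidence) / log(reliability))

for r in (0.95, 0.99, 0.999):
    n = zero_failure_sample_size(r, 0.90)
    print(f"Demonstrating R = {r} at 90% confidence requires {n} failure-free firings")
```

At the 99.9% level the required number of failure-free firings runs into the thousands, which is the cost/schedule impracticality referred to above.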

1.1.2 Major Activities Constituting Infrastructures


Due to funding limitations it was not possible to revisit the sites contacted during the initial visits. The revisits would have been used to gather additional detail on transportation and storage practices and on manufacturing, and to form a clearer picture of the detailed approaches taken by liquid and solid rocket manufacturers.

However, based on the initial visits taken, six major activities have been identified in the life cycle of a propulsion system: design, manufacturing, test, transportation, storage, and operation. The infrastructure that has evolved has centered on design, manufacturing and test. Activities related to transportation, storage and operation tend to be restricted to problem correction rather than a planned effort to anticipate problems.
Design - The design activity primarily involves the creation of a system that meets the specified requirements of a contract. Typically, a design is generated and progresses through a design review process usually consisting of preliminary, critical, and final design reviews. Because of the lack of a detailed historical data base for propulsion systems, the review of reliability requirement achievement during these reviews is currently based upon the manufacturer's engineering judgement or a qualitative review of the design. Specific failure modes whose elimination or mitigation is pursued in the design are based upon the manufacturer's recommendations, which are judgementally based and therefore difficult to tie to the probability that the implemented design will achieve successful operation.
Manufacturing - Once the transition is made from design to manufacturing, responsibility for reliability shifts primarily to quality control of the characteristics required to produce the design, exercised through prescribed process specifications, tolerance limits, and inspections.

Test - Tests are primarily individual evaluations used to assess the functional adequacy or the potential design margin of a given design. In this way they can directly indicate that a propulsion system performance specification, such as thrust to weight ratio, has or has not been achieved, but they only indirectly indicate a lack of reliability achievement.
This is especially true of new designs. These tests do not usually involve enough runs, time, or numbers of systems to produce a statistically significant indication of system reliability capability. When failures do occur, they may have been induced by consciously over extending the design limits. In fact, the tests may be intended to determine design weaknesses through test failure so that the failures can be examined and corrective action taken to improve the design. These tests therefore may not always provide useful information concerning the assessment of system reliability capability, although they certainly do produce information useful to reliability improvement.

Transportation - Transportation activities can have obvious negative impacts on reliability due to the influence of shock, vibration, humidity, and thermal transients. These and other environmental factors may act independently or synergistically to decrease reliability. Controls are in place dictating packaging and handling requirements, primarily through specifications. Unfortunately, not all problem (or failure) reporting and corrective action systems feed back problems that occur because of inadequate packaging and handling requirements. Such a closed loop system would provide a mechanism for rewriting of specifications.

Storage - Like transportation, storage activities can also have a negative impact on reliability. This is true not only from the standpoint of environmental conditions, but storage time as well. When rocket booster dependent programs experience a delay, then all limited life items become factors affecting reliability.

Operation - The operating time for booster rockets is a matter of a few hundred seconds, with the proviso that some of the rocket engines or solid booster casings are reusable. Achieved reliability is measured classically by using operating data and applying statistical distributions such as the binomial. As with the testing activity, when failures occur, the devices are reexamined and corrective action is initiated, followed by retest. Since the corrective action taken obviously is intended to eliminate failure mechanisms and thereby improve reliability, it is difficult, if not impossible, to use a classical approach to measure reliability achievement in developmental systems with high reliability goals and limited operating histories.
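As an illustration of the classical binomial measurement referred to above (the flight record below is hypothetical, not an actual vehicle history), the following sketch computes a point estimate and a one-sided lower confidence bound on reliability from a success/failure count.

```python
from math import comb

def prob_at_most_k_failures(n: int, k: int, r: float) -> float:
    """P(at most k failures in n independent firings), each succeeding with probability r."""
    return sum(comb(n, j) * (1.0 - r) ** j * r ** (n - j) for j in range(k + 1))

def lower_confidence_bound(n: int, failures: int, confidence: float) -> float:
    """Classical one-sided lower confidence bound on reliability: the value of r
    at which observing this few failures becomes just barely plausible."""
    lo, hi = 0.0, 1.0
    for _ in range(60):                       # bisection on r
        mid = (lo + hi) / 2.0
        if prob_at_most_k_failures(n, failures, mid) > 1.0 - confidence:
            hi = mid
        else:
            lo = mid
    return lo

n_firings, n_failures = 50, 2                 # hypothetical record
print(f"Point estimate: {(n_firings - n_failures) / n_firings:.3f}")
print(f"90% lower confidence bound: {lower_confidence_bound(n_firings, n_failures, 0.90):.3f}")
```

The gap between the point estimate and the bound is exactly the small-sample limitation discussed above, and the bound itself loses meaning once corrective actions change the design between firings.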

1.1.3 Current Infrastructure Activities Affecting Reliability

Although there are some specific differences between prime contractors and major subcontractors, in general the controls affecting reliability which are the responsibility of the reliability discipline are Reliability Predictions, Failure Modes and Effects Analyses (FMEAs) and Failure Reporting and Corrective Action systems (FRACAs). The quality control discipline has a direct impact on reliability but is not normally a part of the reliability discipline. "Lessons Learned" is often a semi-formal approach to reliability improvement and, when used, it is as likely to be found in the design group as the reliability group. The FRACA system provides a closed loop means of correcting problems. In their present form, FRACAs are not structured to quantify reliability or to become a proactive part of measured (quantified) reliability enhancement.
Figure 1. Existing infrastructure controls intended to produce reliability.

Figure 1 illustrates the six infrastructure activities as they relate to reliability activities. The most used reliability activities are:
* FMEAs
* Reliability Predictions/Trade-offs
* PRACA/FRACA Systems
* Measurement of Achieved Reliability

What FMEAs are and how they are used is described in the following section under "Reliability Engineering Analysis." Reliability Predictions/Trade-offs are also discussed in the same section under "Quantitative Reliability Engineering Design Tools," along with other tools that are available. PRACA/FRACA Systems are discussed in detail in Section 1.1.3.2.

Measurement of Achieved Reliability, due to its complexity, is treated separately in Section 2.2, "Historical Data Analysis (Reliability Growth)," and in Appendix A.3, "Reliability Analysis of Current US Launch Vehicles."
The purpose of Figure 1 is to illustrate the limited use of presently available reliability engineering techniques and tools as well as the limited use of information from activities such as transportation and storage.
It is clear, based on the information gathered to date, that no single company has utilized all the tools and techniques available to reliability engineers on any given project, nor has the information from transportation and storage been fully utilized. The fact that all the resources of reliability technology have not been utilized is not a result of negligence on the part of manufacturers. Often they may not be provided with specific requirements to address all these issues by their government customers and are not normally funded to conduct these types of analyses.
Although not directly related to launch vehicle reliability, a recent example of how the storage activity can affect reliability is given by the recently launched TDRSS spacecraft. After the Challenger accident the spacecraft spent an extra 2 1/2 years on the ground. Deterioration was suspected in the bolt cutter ordnance and for this reason a reliability study was conducted by the contractor. The study resulted in the determination that the bolt cutters required replacement. The successful launch of TDRSS is now a matter of record. Total credit for this success cannot be taken by the individuals involved in this reliability analysis, but a significant contribution was made to this success as a result of diligent ordnance and reliability engineers taking the initiative and going beyond typical practice. The only way to make such protection "routine" is to expand current reliability practice so as to create an infrastructure such as the one depicted in Figure 8 in Section 2.3.
Reliability Engineering Analysis - There are a number of tasks that are specifically related to reliability, as shown in Figure 2. It is not the purpose of this report to fully describe each technique and design tool but to highlight those most commonly used in the rocket industry. The two most commonly used methods for reliability analysis are:
* Failure Modes and Effects Analysis (FMEA) or Failure Modes, Effects and Criticality Analysis (FMECA)
* Quantitative Reliability Engineering Design Tools such as Predictions or Trade-offs

Failure Modes and Effects Analysis is a "bottom up" method intended to identify, classify and document
failure modes and their effects as well as possible corrective actions or compensating or mitigating
provisions.
The purpose of an FMEA is to:
1. Assist in selecting design alternatives with high reliability and high safety potential during early
design phase.
2. Ensure that all conceivable failure modes and their effects on operational success of the system have
been considered.
3. List potential failures and identify the magnitude of their effects.
4. Develop early criteria for test planning and the design of the test and checkout systems.
5. Provide a basis for quantitative reliability and availability analyses.

Figure 2. Reliability engineering tasks typically used in design. (The figure maps client requirements, new designs and design alternatives, reliability data, and failure reporting and corrective action into reliability engineering design tools. These are divided into quantitative analysis, comprising comparative analysis (engineering trade-offs, sensitivity and optimization studies) and absolute analysis (apportionment and prediction/measurement of achieved reliability), and qualitative analysis (FMEAs, critical items lists, fault tree diagrams, failure analysis, root cause, criticality ranking), leading to engineering decisions such as recommended design alternatives, maintainability recommendations, preventive maintenance programs, spare parts provisions, and recommended test intervals.)

6. Provide historical documentation for future reference to aid in analysis of field failures and consideration of design changes.
7. Provide input data for trade-off studies.
8. Provide a basis for establishing corrective action priorities.
9. Assist in the objective evaluation of design requirements related to redundancy, failure detection systems, fail-safe characteristics and automatic and manual override.
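A sketch of how the purposes above translate into a working record follows; the field names and the turbopump example are hypothetical placeholders, not a reproduction of any contractor's or standard FMEA worksheet format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FmeaEntry:
    """Minimal FMEA/FMECA worksheet row (illustrative fields only)."""
    item: str                      # component or function analyzed
    failure_mode: str              # how the item can fail
    cause: str                     # postulated cause
    local_effect: str              # effect at the item level
    system_effect: str             # effect on the mission or vehicle
    severity: int                  # criticality category (1 = catastrophic)
    detection_method: str          # how the failure would be detected
    compensating_provisions: str   # design or operational compensation
    corrective_actions: List[str] = field(default_factory=list)

def critical_items(entries: List[FmeaEntry], worst: int = 1) -> List[FmeaEntry]:
    """Derive the critical items list from the worksheet."""
    return [e for e in entries if e.severity == worst]

worksheet = [FmeaEntry(
    item="Turbopump bearing", failure_mode="Seizure",
    cause="Loss of coolant flow", local_effect="Pump shaft stops",
    system_effect="Loss of engine thrust", severity=1,
    detection_method="Shaft speed and vibration monitoring",
    compensating_provisions="Redline shutdown logic")]
print([e.item for e in critical_items(worksheet)])
```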
When considering reliability analysis of a design, one usually thinks of all the analytical steps leading to an estimate of the reliability of a given item. A complete analysis requires comprehensive input data that include material properties, design details and component failure rates; however, it is not necessary to wait until all of these are known before much can be determined about the reliability of the design.
Failure Mode, Effects and Criticality Analysis (FMECA) is essentially similar to a Failure Mode and Effects Analysis, but in this case the criticality of the failure is analyzed in greater detail (and may in some instances be quantitatively evaluated) and assurances and controls are described for limiting the likelihood of such failure. The four fundamental facets of such an approach are (1) Failure Identification; (2) Potential Effects of the Failure; (3) Existing or Projected Compensation and/or Control; and (4) Summary of Findings.
The most hazardous pitfall is the potential of mistaking form for substance. If the project becomes simply a matter of filling out the FMEA forms instead of conducting a proper analysis, the exercise will be ineffective. For this reason, it might be better for the analyst not to restrict himself to any prepared formalism. Another point: if the system is at all complex, it is risky for a single analyst to imagine that he alone can conduct a correct and comprehensive survey of all system failures and their effects on the system. When applied to complicated systems, these techniques call for a well coordinated team approach.
Comparative Analysis and Absolute Analysis are the two general types of quantitative reliability engineering design tools.
Comparative Analysis - When alternative designs for achieving given (or desired) levels of reliability are under consideration, characteristics of each design are expressed quantitatively as a means of comparing the relative reliability of each design alternative. For this particular type of analysis, failure and repair data need not be exact since the purpose is to compare alternatives rather than to obtain estimates of absolute values.
There are three types of comparative analysis commonly undertaken:
* Trade-offs
* Sensitivity Studies
* Optimization Studies

Trade-offs, among various design alternatives, are conducted so that the alternatives with the best Benefit to Cost Ratio may be selected. The Benefit/Cost Ratio is determined by incorporating the effects of reliability factors, installation and operating costs, degraded modes of operation, etc. Trade-offs involve achieving the proper balance among reliability, performance, and cost.


Sensitivity analysis involves the variation of input parameters to mathematical models in order to assess the relative effect of component characteristics and data accuracy on a given system's reliability. The results are used to identify areas where improvement in design will have the greatest potential impact on reliability.
Optimization studies carry the concept of sensitivity analysis one step further by varying the input parameters until a set which appears best from a reliability perspective within the system constraints is obtained.
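A minimal numerical sketch of such a sensitivity study follows; it assumes a simple series reliability model, and the subsystem names and reliability values are hypothetical rather than data from this study.

```python
from math import prod

# Hypothetical subsystem reliabilities for a series propulsion system model.
baseline = {"turbopump": 0.995, "combustion chamber": 0.999,
            "nozzle": 0.998, "controller": 0.990}

def system_reliability(rel: dict) -> float:
    """Series model: the system succeeds only if every subsystem succeeds."""
    return prod(rel.values())

def sensitivity(rel: dict, delta: float = 0.001) -> dict:
    """System reliability gain from improving each subsystem by `delta`,
    one at a time; used to rank where design improvement pays off most."""
    base = system_reliability(rel)
    return {name: system_reliability({**rel, name: r + delta}) - base
            for name, r in rel.items()}

for name, gain in sorted(sensitivity(baseline).items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} +0.001 -> system reliability gain {gain:.5f}")
```

Repeating the variation over ranges of values and searching for the best combination within the system constraints turns this same sketch into the optimization study described above.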
Absolute Analysis involves the use of numerical results of an analysis in an absolute sense ("Design A has a reliability of 0.90"). It results in a "stand alone" number, not a "relative comparison." The two types of absolute analysis are:
* Apportionment
* Prediction

Apportionment is used when a specific level of reliability is prescribed. For instance, a client may require a certain percent increase in the reliability of an existing propulsion system. The procedure (simplified) is as follows; a sketch of the allocation step appears after the list.
1. Apportion the reliability of the system to each subsystem based on past performance.
2. Identify those subsystems which have the least desirable reliability performance. Include all factors which affect this performance such as random failures, common cause failures, distribution of downtimes, human reliability, etc.
3. Determine what corrective measures may be taken to increase the reliability of each subsystem.
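The allocation step (item 1 above) can be sketched numerically. The rule shown here, which apportions the system goal in proportion to each subsystem's historical share of unreliability, and the subsystem values themselves are illustrative assumptions, not the apportionment method prescribed by this report.

```python
from math import exp, log

def apportion(past_reliability: dict, system_goal: float) -> dict:
    """Allocate a series-system reliability goal among subsystems in proportion
    to their historical share of unreliability (-log R), so the weakest past
    performers receive the largest share of the allowed unreliability. The
    allocated values multiply out to the system goal."""
    weights = {k: -log(r) for k, r in past_reliability.items()}
    total = sum(weights.values())
    return {k: exp(log(system_goal) * w / total) for k, w in weights.items()}

# Hypothetical past subsystem performance and a 0.98 system reliability goal.
past = {"booster stage": 0.95, "upper stage": 0.97, "avionics": 0.99}
for name, r in apportion(past, 0.98).items():
    print(f"{name:15s} allocated reliability {r:.4f}")
```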
Prediction requires utilizing mathematical models, input data, and probability theory for predicting reliability; taking design actions based upon the predictions; measuring (or gaining new knowledge) and then repredicting; and acting again or remeasuring continually throughout a program of development or operation.
Problem Reporting and Corrective Action Systems - "Failure Reporting and Corrective Action" (FRACA) as well as "Problem Reporting and Corrective Action" (PRACA) are the two types of reporting and corrective action systems that presently exist in the rocket industry. The FRACA system is required by the Air Force and the PRACA system is required by NASA. Although these two systems may differ in minor detail, the intent and the methods used by manufacturers to carry them out are very similar. The following is a description of the system at a typical manufacturer.
Company XYZ maintains a closed-loop failure reporting and corrective action system to ensure investigation of the cause of failures and to provide appropriate corrective action and failure recurrence control. The FRACAs place emphasis on analysis of failure data to provide early detection of defects. Subsequent investigation and corrective action attempt to find and correct failure causes early in the build cycle in order to minimize costs associated with higher level failures.

FRACAs incorporate the following features:

1. Use of a failure report form which provides a failure description, analysis and corrective action, as well as basic information including hardware name, operational level, type and environment; hardware identification number; date of failure; and name of responsible unit engineer and failure reporting engineer.
2. A project failure reporting procedure or RAM program plan section which defines:
- The level at which failure reporting begins.
- The types of anomalies for which failure reports will and will not be written.
- The flow of hardware and paperwork associated with failure analysis.
- The responsibilities of the R&M and CA organizations.
3. The completed failure reports incorporate the corrective action implemented both immediately (e.g., part removed and replaced) and long term (e.g., engineering order to implement design change).
4. Every failure report requires a close out.
5. The program/project maintains a current list of all failures and the status of those failures.
Basic terminology used in FRACAs is as follows (a minimal record sketch follows the list):
1. TRS - Test Record Sheet - Running log of spacecraft area test events; initiated by test inspector.
2. SQUAWK (Log) - Narrative which records spacecraft or space propulsion system area assembly and test problems; initiated by test inspector.
3. TDR - Test Discrepancy Report - Records test failures at various levels of assembly and test; initiated by test inspector.
4. TFR - Test Failure Report - Records the problem descriptions, failure analyses, and corrective actions; initiated by reliability engineer.
5. RAR - Reliability Analysis Report - Computerized output of combined information from TDR and TFR.
6. FRB - Failure Review Board - Joint meeting of Contractor/Customer personnel to review and close out failures.
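A minimal record structure capturing the closed-loop features and close-out requirement described above might look like the sketch below; the field names are illustrative and do not reproduce the TRS/TDR/TFR forms themselves.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FailureReport:
    """Illustrative closed-loop failure report record."""
    report_id: str
    hardware_name: str
    hardware_id: str
    operational_level: str            # e.g. component, subsystem, system test
    environment: str
    date_of_failure: str
    description: str
    responsible_engineer: str
    failure_analysis: Optional[str] = None
    immediate_action: Optional[str] = None   # e.g. remove and replace
    long_term_action: Optional[str] = None   # e.g. engineering change order
    closed_out: bool = False                 # set after Failure Review Board closeout

def open_reports(reports: List[FailureReport]) -> List[FailureReport]:
    """Current list of failures not yet closed out (feature 5 above)."""
    return [r for r in reports if not r.closed_out]
```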
Sequence of Activities - A typical flow of failure reporting paperwork and the associated hardware is
shown in Figure 3.
Although FRACA/PRACA systems are intended to be a "cradle to grave" system, manufacturers tend to
emphasize manufacturing (using Q.C. as the control and corrective action system) and test (using the
process of Figure 3 as a corrective action system). This is primarily because these are the two areas over
which they have complete control. Feedback from the customer (except, of course, for catastrophic
failures) is often inconsistent.


"OLO
F

TOP
CLOSED

ThO On
90UAWK

yes3

40
HARDWARE
T-;tFAILURE
rp~q
7

COMPLETE
?

MAN FOR PR3

YES

FAILURE

I11PIATTD

Figur3.

INVEM

w
ES
YD/F/A

CLOE

YE

R14

For example, if a failure occurs and the equipment is pulled for repair, the paperwork often does not state why the equipment failed. In the case of the SSME, a recent review of non-conformance reports (UCRs) indicated that only 20% were included in the NASA PRACA system, according to one contractor. 80% were excluded by the reporting requirements. These requirements are intended to limit the reporting to serious problems and to prevent the system from becoming overwhelmed by problems of a minor nature. Such a system serves well to aid in serious problem tracking and close out, but can sometimes eliminate the detailed background information required for definitive problem analysis and root cause determination. In the case of the SSME, such investigations required use of the UCRs combined with the contractor's (Rocketdyne) in-house problem tracking systems. Thus, the problem is two-fold: not all problems are reported, and those that are reported are not always adequately described.
Comments - Failure reporting is most effective when viewed as an engineering activity rather than as a bookkeeping function. Opportunities exist for failure reporting personnel to enhance screening effectiveness, identify potential trends, and minimize costly downstream anomalies. Increased computerization of FRACAs allows for rapid information dissemination and less time spent on routine paperwork tasks, as long as the increased use of computers is not achieved by sacrificing detailed problem descriptions.
The FRACAs begin with procurement and continue through receiving inspection, manufacturing, test,
launch-site activities and mission operation. Control of discrepancies found in receiving and in-process
inspection, all non-test discrepancies and Material Review Board (MRB) activities are primarily the
responsibility of Quality Assurance and are described in the Quality Assurance Program Plan. Reporting
of parts and materials problems (including Alerts, etc.) is the responsibility of Parts, Materials and
Processes (PM&P) personnel. Test discrepancy control is primarily the responsibility of the reliability
organization. In the course of performing this function, a reliability engineer may encounter conflicting
priorities within the project in assuring that proper failure analysis and corrective action occur in
response to test discrepancies. Examples include:
1. Manufacturing personnel want units repaired and out of their hands.
2. System Integration personnel want units back into stores or back into their hands.
3. The unit engineer wants a test discrepancy to be due to a manufacturing defect or a parts problem,
and he may now, due to the passage of time, be assigned to a new project.
4. The project manager doesn't want to spend any more money on the situation.
5. The project engineer believes whatever the unit engineer tells him.
6. The system engineer is worrying about link performance or something of the sort.
In the face of these conflicts, the reliability engineer's objectives must prevail. The Failure Review
Board exists to help assure that each failure is properly closed out. Satisfactory closeout of a failure will
occur when:
1. A failed unit is fixed and has passed the test which it failed.
2. The probability of the problem recurring in the unit is negligibly small.
3. The problem has been shown not to exist in any other unit.

A computer system is often used to record and track test discrepancies from the time of occurrence
through Failure Review Board closeout and beyond. The computerized system provides:
1. A reporting vehicle for alerting Quality Assurance, Reliability, Engineering, Manufacturing, Test
and Program Management of failures and need for action.
2. A permanent record of the cause, significance, effect, and corrective action for each failure.
3. A vehicle for requesting remedial action of the procurement, design, manufacturing, test and handling
organizations.
4. A retrieval system for identifying failure trends, providing status summaries and locating historical
failure information.
While PRACAs/FRACAs perform well in the failure tracking and problem close out system mode for which they were intended, they were not designed to be reliability data bases even though they may contain information considered for this latter purpose. It should therefore not be surprising that PRACAs often lack the information required for reliability analysis and prediction. The reasons for this vary, but the primary reason is as follows. PRACAs are intended primarily to keep reliability management and program management informed that serious problems have been identified and are being attended to. Including minor problems or supplemental information which is not critical to management tracking (such as the part exposure time at failure) may overload management, and therefore this information is screened out of the system by the reporting requirements. While this may be desired from a problem tracking standpoint, it eliminates the precursor information essential to a reliability data base. For example, the SSME PRACA system only includes 20% of the UCR information which would be required for a reliability data base, and it includes almost no exposure-time-at-failure information.

1.2 Data Base (Historical)

1.2.1 Data Collection
The objective of this subtask was to collect the material necessary to understand the present state of design, the current manufacturing techniques and the operational parameters of solid and liquid propulsion rockets. Collection methods included visits to NASA and Air Force sites responsible for solid and liquid propulsion rockets. Collection methods also included visits to the sites of rocket manufacturers and users and access to in-house publications, technical and public libraries for text books, reports and articles on rockets.
Trip reports (see Volume II: Appendix B) documented the names of contacts made, insights gained through formal or informal question and answer sessions with these contacts, the type of information obtained (hard copy reports, historical data sets and for which rockets and time frames) and the type of process viewed during facility tours (production, maintenance, design). Information gathering focused on the retrieval of sets of historical rocket launch and test performance data, textbook discussions of the physical attributes of solid and liquid rockets and subtypes, and studies conducted to evaluate design and performance tolerances of individual or collective rocket performance parameters. The output of this task was a set of rocket characteristic and performance data.
1.2.2 Data Organization

The data gathered from the site visits and the information collection process described above were organized to facilitate their use. For hardcopy material and site trip reports, a filing system was constructed, separating solid from liquid rocket data, then categorizing by rocket use (booster, strap-on, orbit adjust, Payload Assist Module), followed by sorts on fuel type and rocket type. The Data Summary Sheets that follow were constructed to allow at-a-glance review of the data available on the various rocket types in these rocket use and fuel type categories. Historical data on rocket tests and launches were organized by entering them into a computerized data base system, dBase III+, when the data were available, to allow the data to be more easily stored and sorted.
1.2.3 Representative Design Parameter Development
Using the information gathered and organized, a candidate design configuration was selected for solid and liquid rockets as a baseline case. This baseline was used to establish a structure of rocket mission and performance characteristics which also defines a structure for data entry and storage. The rocket mission data vector, or the column headings for a data table, reflects data categories from historical performance sets, such as date of test/launch, success or failure, rocket designation (name, production lot), and type of mission (R&D, space mission). Rocket performance vector entries were determined by the technical literature search and site visit discussions citing rocket attributes such as fuel, oxidizer, thrust and diameter. The baseline structure which these vectors constitute was expanded and defined further as more insight was gained into the characteristics which drive rocket reliability.
Following the Data Summary Sheets is a matrix containing the reliability history of U.S. launch vehicles, tabulated in Table 1 and Table 1a. The details of how this matrix was generated are contained in Appendix A.3.
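As a sketch of the kind of storage and sorting this organization supports (the records below are fabricated placeholders, not entries from the study's dBase III+ files), the mission data vector can be held as simple tabular records and summarized by vehicle:

```python
import csv
from io import StringIO

# Placeholder rows following the mission data vector described above:
# date of test/launch, designation, production lot, mission type, outcome.
raw = """date,vehicle,production_lot,mission_type,outcome
1987-03-26,Example LV-1,L07,Space Mission,failure
1988-09-05,Example LV-1,L09,Space Mission,success
1989-06-14,Example LV-2,L02,R&D,success
"""

records = sorted(csv.DictReader(StringIO(raw)),
                 key=lambda r: (r["vehicle"], r["date"]))

summary = {}
for r in records:
    stats = summary.setdefault(r["vehicle"], {"flights": 0, "successes": 0})
    stats["flights"] += 1
    stats["successes"] += r["outcome"] == "success"

for vehicle, s in summary.items():
    print(f"{vehicle}: {s['successes']}/{s['flights']} successful")
```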

1.3 Deficiencies of Current Aerospace Reliability Practice in Application to Advanced Launch System Needs
Current aerospace reliability practice has not been able to effect the high reliabilities specified for Air Force advanced launch systems. Current practice, as it seems from the investigations undertaken as part of this effort, is oriented toward the production of launch vehicle systems whose range of achieved
reliability is upper bounded at 95%, and these levels have been achieved only after significant development
programs over which significantly lower reliabilities were the norm (80% - 90%). Many of the
deficiencies in current practice are a product of the developmental history of aerospace reliability
technology and its resulting evolution rather than direct misapplications of reliability techniques. It has
taken almost 30 years for a systematic reliability discipline to be developed since its early beginnings in
the Titan and Apollo programs. At the time of its creation, the US and world industrial base was quite
different. Failures of small electronic components, used in great numbers in complex
aerospace designs, tended to defeat the best efforts of system designers and render expensively
developed systems embarrassingly useless. In the case of early launch vehicles, national prestige and the
credibility of ICBM deterrence required that these problems be eliminated quickly. Electronic systems
were the roots of aerospace reliability, especially in an era when quantitative information was completely
unavailable (if not unheard of). This tended to influence reliability technology development toward the
generation of techniques which could help quickly to improve the performance of systems without
undertaking the long term development of more reliable individual devices. Papers which touted the
development of reliable systems from less reliable devices, the initiation of qualitative investigatory
techniques such as FMEAs, and the use of redundancy to shore up the areas of weakness graduated from the
academic classroom of the 50's and early 60's to become the industrial practice of the late 60's and early
70's. Finally, they became institutionalized in the late 70's and 1980's.
While exposure of component functional failure effects through FMEAs and their elimination through
redundancy works, and works well for electronic systems where weight and operational constraints are
minimized and the effect of a single failure is to some degree localized, the usefulness of this approach has
always been limited in propulsion systems.

17

DATA SUMMARY SHEETS

[The Data Summary Sheets record, for each of the 28 solid rocket motors and 17 liquid rocket engines
surveyed, the following 29 characteristics: (1) user agency; (2) manufacturer; (3) designation (stage or
motor); (4) engine or motor weight (lb); (5) propellant weight (lb); (6) stage number; (7) oxidizer/fuel;
(8) mixture ratio (O/F); (9) coolant; (10) length/diameter (ft/ft); (11) thrust, sea level (lb); (12)
thrust, vacuum (lb); (13) chamber pressure (psia); (14) specific impulse, sea level (sec); (15) specific
impulse, vacuum (sec); (16) total burn time (sec); (17) nozzle expansion ratio; (18) nozzle exit area
(ft x ft); (19) engine cant angle (deg); (20) case material; (21) case segment number; (22) thrust vector
control (T.V.C.); (23) thrust coefficient Cf; (24) nozzle discharge coefficient Cd g; (25) engine cycle;
(26) mass discharge rate (lb/sec); (27) engine cost; (28) engine reliability; and (29) vehicle name.

The solid motors include the UTC UA 1205 and UA 1207 Titan boosters; the Thiokol Castor 4, 4A and 4B
strap-ons (TX526-2, TX-780), STAR 48 (PAM-D) and TE-M-364-series upper-stage motors; the Hercules GEM
and X259; the Space Shuttle solid rocket motor; the UTC Algol 3A; the Scout upper-stage motors (Antares,
Altair); and the UTC SRM-1 and SRM-2 (IUS) motors. The liquid engines include the Aerojet LR-87-AJ-11,
LR-91-AJ-11, LR-87-AJ-5, LR-91-AJ-5, AJ10-138 (Transtage) and AJ10-118K; the Rocketdyne MA-5 (with the
LR-89/LR-105 booster and sustainer variants) and RS-27; the Pratt & Whitney RL10A-3-3A and RL10A-3-3B
(Centaur); the TRW TR-201; and the Bell 8096 (Agena). Associated vehicles include the Titan 34D/III/IV,
Titan 2 SLV, the Delta series, Atlas E/G/H and Atlas/Centaur, Scout SLV, Burner 2/2A, Conestoga, IUS and
the Space Shuttle/PAM upper-stage combinations. Engine and motor reliability estimates are listed where
available.]

TABLE 1A: RELIABILITY OF THE THOR/DELTA FAMILY

[Tabulates, for the Thor and Delta vehicles and for the combined Thor/Delta family (data collection
period 1957-87), the launch success ratio (mean with 5% and 95% confidence bounds) together with
reliability estimates by stage (Stage 0 through Stage 4) and by subsystem (Propulsion, Guidance, Flight
Control, Structure, Electrical, Separation, Other or Unknown).]

TABLE 1B: RELIABILITY OF THE TITAN FAMILY

[Tabulates the same quantities for the Titan I, Titan II, Titan III and Titan 34D vehicles and for the
combined Titan family (1959-87).]

TABLE 1C: RELIABILITY OF THE ATLAS FAMILY

[Tabulates the same quantities for the Atlas A, B, C, D, E, F, SLV, G and H vehicles, the Atlas/Centaur,
and the combined Atlas family (1957-88).]

TABLE 1D: RELIABILITY OF THE SATURN FAMILY

[Tabulates the same quantities for the Jupiter, Juno, Saturn I, Saturn IB and Saturn V vehicles and for
the combined Saturn "family" (1958-75).]

TABLE 1E: RELIABILITY OF THE SCOUT FAMILY

[Tabulates the same quantities for the Vanguard and Scout vehicles and for the combined Scout "family"
(1957-88).]

TABLE 1F: RELIABILITY OF THE SPACE SHUTTLE

[Tabulates the same quantities for the Space Shuttle (STS) over 1981-88.]

In fact, the use of this currently institutionalized qualitative system of reliability techniques can lead
designers and decision makers to make incorrect decisions even if correctly applied, as is demonstrated
below. Finally, it appears that the currently institutionalized reliability technology base, because of its
qualitative nature, will be unable to address precisely those residual reliability-related issues, such as
residual variability reduction, risk management and human reliability, that limit launch systems to their
current operational reliability levels.
Some examples follow. They fall into two broad categories: those arising from FMEAs/FMECAs and those
arising from quantitative reliability analysis.
1.3.1 FMEAs/FMECAs
FMEAs/FMECAs are structured to detect single point failures. When single point failures are identified
they are either controlled or compensated for by use of redundancy.
Redundancy and Correlation Factors - When applied to electronics, redundancy can be a very effective
way to enhance reliability. However, as Section 2.3.1.2, "Product Design FMEAs," points out, even
electronics can be susceptible to "common cause" or "correlation" failure. These are the types of failures
that can negate the benefits of redundancy due to a single event. Product Design FMEAs have proven beneficial
in reducing vulnerability to correlated failures in electronics systems and may prove to be beneficial in
the analysis of propulsion systems. Nonetheless, propulsion systems, like any high energy system, are
inherently more vulnerable to correlated failures. This is supported by the study of the Shuttle main engine
development history which is summarized in Section 2.1.4 and provided in detail in Appendix A.1, "An
Investigation of Historical Failure Correlation Factors Using the Shuttle SSME Flight History as an
Example."
Controls and Variability - When redundancy, for whatever reason, is not an option when conducting
an FMEA, the failure mode is "controlled" either by designing the failure mechanism directly out of the
system or by placing more stringent controls on manufacturing and/or testing. Designing a failure
mechanism out is usually not a viable option because it requires a physically different way of obtaining the
same function. Thus, manufacturing or testing is the most practical way of constraining the failure mode.
The only problem with this approach is that if methods are not in place to measure the effects in terms of
reduced variability, there is no way to measure the impact on reliability.
Reusability - Another potential problem with FMEAs is that they tend not to be "living" documents in
the sense that if a system is reused or is reusable, the FMEA is not structured to handle the potential results.
For instance, weld failures on the Space Shuttle Main Engine can result from thermal cycling and fatigue
through reuse. The FMEA is not structured to conveniently handle this situation.
"Bottom Up" Methodology - As has been previously discussed, FMEAs/FMECAs are "bottom up" methodologies and as such are not designed to list all potential malfunctions of a system, only those which
propagate from known failure modes of components within the system. Without a comprehensive way of anticipating system or subsystem malfunctions in a global sense, the analyst can never be comfortable that
the FMEA/FMECA is exhaustive. A "Top Down" methodology as described in Section 2.3.1.2 would help
overcome this "Bottom Up" obstacle.

35

1.3.2 Quantitative Reliability Analysis


In order for quantitative reliability analysis to be effective the three following constituents must be
present:
1. Meaningful Reliability Data/Issues
2. Proper Reliability Analysis Tools
3. Risk Analysis and Management Capabilities
Meaningful Reliability Data/Issues - For the current generation of launch vehicles, the historical data
set (see Appendices A. 1 and A.3) appears to be both meaningful and capable of addressing the key reliability
issues. To be meaningful, the reliability data must:
1. Be complete for both success and failure.
2. Have failure causes consistently identified.
3. Have chronologies of failure history established.
4. Have design change chronologies established.
In order to be effective, however, the following issues must be resolved:
1. How relevant is history in predicting future performance in a developmental system?
2. How is historical reliability growth to be accounted for?
- old failures less than new?
- How are design changes factored in?
3. What effect does hold down time just prior to launch have on prevention of failures which otherwise
would occur after launch?
These issues can only be addressed by applying the appropriate quantitative reliability models using
a properly developed and structured historical data set.
Quantitative Reliability Analysis Tools Specifically for Propulsion Systems - Until now, the only
quantitative methodologies available for propulsion systems which address the developmental nature of
such systems have been traditional reliability growth methods (such as the Duane approach and Weibull
methods) and D. Lloyd's methodology (see Section 2.2.2). Even if these methodologies were adequate in
addressing overall launch vehicle reliability, three other areas should be considered in order for a
quantitative reliability analysis to be fully effective.
They are:
• Estimation of Stage Reliability
• Estimation of System Reliability
• Estimation of Engine or Motor Reliability
A method of estimating launch vehicle reliability is summarized in Section 2.2.1 and all four methods
are described in detail in Appendix A.3, 'Reliability Analysis for Current US Launch Vehicles".

36

Risk Assessment/Management - Section 1.3.1 has described the limited value of FMEAs/FMECAs in the
quantification of reliability. Although they are useful in constructing logic models (see reliability
techniques, Figure 2), strictly speaking they can only be used to quantify consequences. For instance, they
can be used to quantify total number of welds whose failure could cause loss of an engine, cluster, stage,
or vehicle (consequences), but this approach does not provide the analyst with the quantitative risk
discriminating information required of a decision making tool. A decision making tool allows the analyst
to rank individual weld failures, for example, with other sources of propulsion system failures in order
to determine where to best expend resources. If a decision is made to expend the funds, the funds must
be dedicated or "fenced off" and made distinct from management reserve funding. Even well developed
criticality ranking techniques do not do the job sufficiently, because they develop rankings only at the
subsystem or lower levels; the system-level rankings they do produce are typically only relative. This
approach can give the impression that a thrust vector control system
single failure is just as important as other propulsion system elements such as a heat exchanger or turbopump, even though the latter may have several orders of magnitude higher failure probability. The solution
to this problem is to use the quantitative reliability analysis tools of Section 1.3.2.2 in conjunction with
Risk Analysis/Assessment techniques as described in Sections 2.3.1.3 and 2.3.2.
Figure 8 (Section 2.3) shows the relationship of risk management and assessment to infrastructure
controls that have an impact on reliability.

37

2.0 (TASK 2) RELIABILITY ENHANCING METHODOLOGIES


2.1 Lessons Learned
This section is concerned with lessons learned either as a direct result of plant visits or from a related
analysis.
2.1.1 Variability Control

Variability control was highlighted as a reliability enhancer at Hercules (West Virginia) and
McDonnell Douglas (Huntington Beach, CA), as noted in Appendix B.
The Hercules trip indicated that solid rocket motors can achieve high reliabilities (>.999) and maintain
these reliabilities over reasonable production runs (as many as 1,000 units/year), if the proper
reliability considerations are included in the design and development phases of the program and the proper
process controls are in place, and if the proper test program remains in place. The process control system
must be able not only to detect penetrations of the Upper Quality Limit (UQL) and Lower Quality Limit
(LQL), but also trends toward unacceptable quality. These trends must be thoroughly investigated and tied
to causes, the causes addressed, solutions derived and implemented, and control mechanisms directed at
controlling key process parameters verified as being reestablished.
2.1.2 Reusability
Reusability is, on the surface, a design goal of significant program benefit. However, the benefits of
reusability are significantly compromised if the reliability of an engine is adversely affected by the
requirement for reuse. Besides the direct costs involved in developing a reusable design, there now appears
to be a significant indirect cost required to maintain reliability in reusable design. For example,
reusability by its very nature tends to decrease the production run. When production runs are decreased,
investments in automated production equipment become less economical and the production process
therefore tends to become more prototypical. Prototypical production, especially of complex equipment,
increases the problem of variability control and therefore substantial post production testing may be
required to ensure high reliabilities. A good example of such an indirect impact on reusability was seen at
the Rocketdyne SSME production facility in Canoga Park, California.
2.1.3 Performance Indicators
For high reliability programs it is important to identify early on symptoms of the process which
presage deterioration in performance. This has been done in the financial community by the development
of a set of "leading" performance indicators and developing performance trends based upon the indicator
trajectories through time. If such a set of indicators could be developed and trended for advanced propulsion
system development programs, the indicator trajectories might provide early warning of problems arising
during development and operation. This early warning could provide the time required to institute
corrective action before actual program reliability performance is affected.
2.1.4 Correlation Factors (See Appendix A.1)
Given the current state of rocket engine technology, there exists a finite probability of catastrophic
engine failure during a vehicle launch. A catastrophic engine failure is considered one in which the engine
does not shut down in a controlled manner and includes uncontrolled fire, explosion, breach of the pressure
boundary, shrapnel, complete loss of fuel or oxidizer supply, or a combination of these. Given that an engine

38

has failed catastrophically in flight, an immediate concern is for other critical hardware in the vicinity
of the failed engine. For vehicles configured with multiple engines in a cluster, the question becomes
whether the catastrophic failure of one engine will result in the catastrophic loss of the entire engine
cluster.
In the present study, the correlation between a catastrophic failure of a Space Shuttle Main Engine
(SSME) and the propagation of that failure to include the entire SSME three engine cluster has been
developed based upon the SSME Test History.
Conclusions - In the development of future launch vehicles, the potential benefit of engine-out
capabilities must be weighed against the risk that if an engine fails in an uncontrolled manner, it will
result in the loss of the entire engine cluster. This study evaluated the SSME which is flown in a three engine
cluster. No uncontrolled SSME failures have occurred in flight. Only a limited amount of ground testing has
actually been done in a three engine cluster and although failures have occurred, none have propagated to
involve the entire cluster.
However, the test data evaluated here indicate there is a reasonable probability, approximately 17%,
that an uncontrolled SSME failure will propagate to the adjacent engines, given that an uncontrolled failure
occurs. The 95% confidence interval on this propagation probability is 4% to 41%.
A summary of the results of the data review is given in Table 2.
2.1.5 Correlation vs. Engine Out Capability(See Appendix A.2)
A preliminary correlation factor vs engine out capability study was conducted using the following
assumptions:
• Smaller engines are more reliable than larger ones.
• Increased plumbing due to a larger number of engines decreases reliability.
The results of the study indicate that a four engine configuration would be the most reliable if correlation
factors are not taken into account.
When correlation factors are between 20 and 27%, the four engine configuration is no better than a
single engine configuration. Section 2.1.4 indicates that the 95% interval for correlated failure is 4 to
41%. Therefore, there is a substantial probability that correlated failure on an engine design which is
comparable to the SSME could negate engine out capability.
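To make the trade concrete, the sketch below compares a four-engine cluster having one-engine-out capability against a single engine. The per-engine reliabilities are assumptions chosen for illustration, not the values of Appendix A.2, and the plumbing penalty is simply folded into those assumed numbers; only the structure of the comparison follows the study described above.

```python
# Illustrative comparison of engine-out capability vs. correlated (cluster-wide) failure.
# All reliability numbers here are assumptions for illustration, not values from Appendix A.2.

def cluster_success(n_engines, p_fail, corr):
    """Mission success probability for an n-engine cluster with one-engine-out capability.

    p_fail : probability that any one engine fails during the mission
    corr   : probability that a failure is uncontrolled and propagates to the cluster
    """
    p_ok = 1.0 - p_fail
    # Success if no engine fails, or exactly one fails benignly (contained) while
    # the remaining engines continue to operate.
    return p_ok ** n_engines + n_engines * p_fail * (1.0 - corr) * p_ok ** (n_engines - 1)

single_engine_r = 0.9788   # assumed reliability of one large engine
p_small_fail = 0.02        # assumed failure probability of each smaller engine

for corr in (0.0, 0.20, 0.25, 0.27, 0.41):
    r4 = cluster_success(4, p_small_fail, corr)
    print(f"corr = {corr:.2f}:  4-engine R = {r4:.4f}   single-engine R = {single_engine_r:.4f}")
```

With these assumed numbers the four-engine cluster is the most reliable configuration when there is no correlation and drops below the single engine near a 25% correlation factor; different assumed engine reliabilities would move that crossover point.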

[Table 2. Summary of the SSME test-history data review used to estimate the probability that an
uncontrolled engine failure propagates to the remainder of the three-engine cluster.]

2.2 Historical Data Analysis


"Historical Data Analysis" is intended only to acquaint the reader with the various analytical options
presently available. In fact, as is discussed in Section 3.0 (Comparison of the Methods of Section 2.2),
there is insufficient information, as well as limited time and resources available to the study, to make
a thorough comparison of methodologies. Further studies are, however, recommended as stated in the
Recommendations Section.
2.2.1 Y. Shen's Methodology (Reliability Analysis for Launch Vehicles)
The performance history of any launch vehicle can be considered as having two time periods, the early
development period and the stable performance period. During the early development period, the
unreliability of a launch vehicle is generally high and unstable. After a 'failure analysis and fix" process
in combination with technical and design improvements, the unreliability of a launch vehicle goes down
and stabilizes.
This effect of early transient behavior followed by stable reliability behavior is indicated in Table 1A
for the Thor/Delta family and Table 1B for the Titan family. In both cases, oscillating reliability
histories are observed early on with later stable performance. It is also interesting to note that Titan I
appears to have never reached stability and the Delta, being based on the significant Thor history, reached
a stable, high level of reliability very quickly.
These historical reliability growth curves are developed according to the following method.
The maximum-likelihood estimator (failure ratio) for unreliability can be defined as

U = F/L

where F is the cumulative failure number, L is the cumulative launch number, and F is a function of L.
The simplest way to estimate the average unreliability of a launch vehicle is then

Ua = F/L                                                      (1)

where Ua is the estimated average unreliability, and F and L are the cumulative failure and launch numbers.
As was mentioned before, the reliability growth effect must be considered to get a more realistic
estimate of the unreliability. In the present model, the growth-corrected average unreliability is defined as

U = Ua - ΔU                                                   (2)

where ΔU is the correction caused by the reliability growth effect and can be written as

ΔU = ΔF/L,   or   ΔF = ΔU x L                                 (3)

where ΔF is the correction cumulative failure number.
Averaging both sides of equation (3) over the launch history gives

ΔU = (1/N) Σ(i=1..N) [ΔFi/Li]                                 (4)

Substituting equation (1) and equation (4) into equation (2),

U = F/L - (1/N) Σ(i=1..N) [ΔFi/Li]                            (5)

The unreliability of the launch vehicle at the nth launch can then be approximated as

Un = Fn/Ln - (1/n) Σ(i=1..n) [ΔFi/Li]                         (6)

where Li is the cumulative launch number at the ith launch, Fi is the cumulative failure number at the ith
launch, and ΔFi is the corresponding correction failure number. The reliability Rn at the nth launch is

Rn = 1 - Un = 1 - Fn/Ln + (1/n) Σ(i=1..n) [ΔFi/Li]            (7)

The values of average reliability are then obtained from equation (7).
The confidence limits on these estimates are constructed as follows. Let n be the launch number and
x = n x Rn the success number. The 5th percentile confidence limit is given by

R(0.05) = x / [x + (n - x + 1) F0.95(2n - 2x + 2, 2x)]                                        (8)

and the 95th percentile confidence limit is given by

R(0.95) = (x + 1) F0.95(2x + 2, 2n - 2x) / [(n - x) + (x + 1) F0.95(2x + 2, 2n - 2x)]         (9)

where Fγ(n1, n2) is the 100γth percentile of the F-distribution with n1 numerator and n2 denominator
degrees of freedom.
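As an illustration of equations (8) and (9), the sketch below evaluates the 5th and 95th percentile limits using the F-distribution quantile from scipy. The launch and success counts used to exercise it are assumed for illustration, not taken from Table 1.

```python
# Sketch of the 5th/95th percentile confidence limits of equations (8) and (9).
from scipy.stats import f as f_dist

def success_ratio_limits(n, x):
    """n = number of launches, x = number of successes (0 < x < n)."""
    f_lo = f_dist.ppf(0.95, 2 * (n - x + 1), 2 * x)           # quantile in eq. (8)
    r_05 = x / (x + (n - x + 1) * f_lo)
    f_hi = f_dist.ppf(0.95, 2 * (x + 1), 2 * (n - x))         # quantile in eq. (9)
    r_95 = (x + 1) * f_hi / ((n - x) + (x + 1) * f_hi)
    return r_05, r_95

# Assumed example counts: 180 successes in 189 launches
print(success_ratio_limits(189, 180))
```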

TABLE 3: AN EXAMPLE OF A TEST SEQUENCE PERFORMED ON A SOLID ROCKET, ITS RESULTS AND RELIABILITY
COMPUTATION USING D. LLOYD'S METHOD

[For each of 25 tests the table lists: the test number N; the number of months after the start of the
test program*; the result (S = success, F = failure); the current discounted value of each failure mode
f1 through f5; the sum of the failure values; the reliability estimate R = 1 - (sum of failure values)/N;
and remarks. Failures occur at tests 3 (f1, case burnthrough), 8 (f2, TVA failure), 11 (f2 recurs,
carried as f3), 16 (f4, specification violation) and 17 (f5, second specification violation). After the
corrective actions and the specification change described in the notes below, the discounted estimate
reaches R = 0.983 at test 25, compared with a raw success ratio of 20/25 = 0.80.]

* Number of months after start of test program, not length of test.


Notes:

Test no. 4: failure from test no. 3 (f1) is not yet diminished because corrective action is not implemented until test no. 5;
f1 continues to diminish in all subsequent tests since it does not recur.
Test no. 9: failure from test no. 8 (f2) is not diminished because the thrust vector actuator (TVA) subsystem is not
"hooked up" until the fix is implemented and successfully tested in test no. 10.
Test no. 11: failure from test no. 8 (f2) recurs; therefore, the fix implemented in test no. 10 is not considered successful, and
both TVA failures are reinstated as full failures.
Test no. 12: the TVA is not tested while the failure mode is undergoing engineering analysis; therefore, f2 and f3 are not diminished.
Test no. 13: successful test of the new TVA fix applies to both failures (f2, f3); therefore, the values of both failures are diminished.
The TVA failure does not recur in the remainder of the example and, therefore, both failure values continue to diminish.
Test no. 16: a small performance anomaly occurs; however, it is outside current specification limits and, therefore, must
be considered a failure (f4).
Test no. 17: same as test no. 16 (f5).
Test no. 18: corrective action for f4 and f5 is to change specifications/conditions (with customer approval). With this
change, tests 16 and 17 become "non-failures" and f4 and f5 immediately become zero.
Test nos. 19-25: all are successful, demonstrating a lower probability of failure for the f1, f2, and f3 failure modes.

43

For a complete discussion of this methodology, see Appendix A.3.

2.2.2 D. Lloyd's Methodology (Taken from Reference 14)

D. Lloyd developed a method for estimating and forecasting reliability from attribute data, using the
binomial model, when reliability requirements are very high and test data are limited. Integer data,
specifically numbers of failures, are converted, using this approach, into non-integer data. The rationale
is that when engineering corrective action for a failure is implemented, the probability of recurrence of
that failure is reduced; therefore, such failures should not be carried as full failures in subsequent
reliability estimates. The reduced failure value for each failure mode is the upper limit on the probability
of failure based on the number of successes after engineering corrective action has been implemented. Each
failure value is less than one and diminishes as successes continue. These numbers replace the integral
numbers (of failures) in the binomial estimate.
In Lloyd's research, this method of reliability estimation was applied to attribute data from the life
history of a previously tested system, and a reliability growth equation was fitted. It was then "calibrated"
to allow reliability projections to be developed for a new, similar system. In this way, the model allows
management to discern early on whether the system's ultimate reliability requirement will be met and,
if so, when it is likely to be achieved. By comparing current estimates of reliability with the expected value
implied from the model, a reliability growth forecast can be obtained by extrapolation.
An example application of Lloyd's method to a solid rocket program is shown in Table 3. As can be seen,
the methodology predicts a significantly higher success ratio (.983 vs .80) than would be obtained without
considering growth.
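The discounting rule behind Table 3 can be sketched as follows. The failure values in the table are consistent with carrying each corrected failure mode at an upper confidence limit on its recurrence probability, f = 1 - (1 - C)^(1/s), with C = 0.90 and s the number of successes since the fix; that confidence level is inferred here from the tabulated values rather than stated in the text.

```python
# Sketch of the failure-value discounting used in Table 3 (D. Lloyd's method).
# C = 0.90 reproduces the tabulated values (0.900, 0.684, 0.536, ... for s = 1, 2, 3, ...)
# and is an inference from those values, not a constant quoted in the report.

C = 0.90

def failure_value(successes_since_fix):
    s = successes_since_fix
    if s == 0:
        return 1.0                          # an unverified failure counts in full
    return 1.0 - (1.0 - C) ** (1.0 / s)     # discounted value after s successes

def lloyd_reliability(n_tests, successes_since_fix_per_mode):
    """R = 1 - (sum of discounted failure values) / N."""
    total = sum(failure_value(s) for s in successes_since_fix_per_mode)
    return 1.0 - total / n_tests

# Final row of Table 3: 25 tests, with f1, f2 and f3 fixed and demonstrated over
# roughly 21, 13 and 13 subsequent successes (f4 and f5 eliminated by the spec change).
print(lloyd_reliability(25, [21, 13, 13]))   # about 0.983, versus a raw ratio of 0.80
```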
2.2.3 Curve Fitting (Polynomial)
Polynomial trends are of the form

Y = A + BX + CX^2 + DX^3 + ... + JX^k

The straight line is a special case, having only the first two terms on the right of the equality sign. With
three terms on the right, the polynomial is of quadratic form, and so forth. Typical forms are shown in
Figure 3. Generally speaking, it is unwise to fit a high-degree polynomial to the data because doing so almost
assures the mixing of trend and cycle. Also, a glance at the figure below will show that none of the
polynomials, other than the straight line, can be extended or projected very far without going off the page.
Keep in mind that only a portion of the curve is used to represent the trend.

44

[Figure 3. Typical forms of some trend equations: the polynomial Y = A + BX + CX^2 + DX^3; the
exponential log Y = A + BX; the modified exponential Y = A + BC^X; the Gompertz curve Y = A(B^(C^X));
and the logistic Y = 1/(A + BC^X).]


Of course, a polynomial can be forced to fit the data quite closely by adding enough terms. A well known
theorem in algebra states that a polynomial of degree k can be passed through k+1 points in a plane.
Accomplishing this, or anything near to it, does not contribute any information about trend. This becomes
evident when it is recalled that 1 degree of freedom is lost for error for every parameter that is estimated
from the data. Thus, if there are n observations and n degrees of freedom are lost in fitting a polynomial
of degree n-1, there are 0 degrees of freedom left for error.
All polynomials can be fitted utilizing the method of least squares.
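For example, a low-degree least-squares fit can be obtained directly with numpy.polyfit; the success-ratio series below is assumed for illustration only.

```python
# Least-squares fit of a low-degree polynomial trend to an assumed (illustrative)
# launch success-ratio history.
import numpy as np

launches = np.arange(1, 21)
ratio = 0.95 - 0.30 * np.exp(-launches / 6.0)   # assumed growth-like series

coeffs = np.polyfit(launches, ratio, deg=2)      # quadratic trend, fitted by least squares
trend = np.polyval(coeffs, launches)
print(coeffs)
print(float(np.max(np.abs(trend - ratio))))      # worst-case residual of the quadratic fit
```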

2.2.4 Bayesian (Reference 15)


Suppose a propulsion system is being built with a 0.95 reliability requirement at the 90% confidence
level. The system goes through a number of tests: component, environmental, subsystem, system, extended
time, etc. There are failures which are corrected (permanently, it is hoped). A final configuration is
attained. It is also assumed that the project is at least 50% sure that a 0.95 reliable system has been
achieved. If thirteen tests are run with no failures, has the 0.95 requirement been met? The classical
binomial approach (see section 2.2.5) would indicate that the requirement has not been met.
This problem is typical of today's work in the aerospace industry: few systems, few tests, compressed
schedules and high reliability requirements and costs. The limited number of samples for test permit no
failures since even one failure would imply an intolerably high failure rate. Indeed, all "hi-rel" programs
have "failure recurrence prevention" systems. All failures are "fixed" and "closed". These activities,
in effect, imply that at time of "buy off," no failures should occur on qualification or demonstration tests.
Hence, any solution to the reliability demonstration problem should, as a practical matter, address itself
to zero failures and few trials.

45

Bayes Theorem, in the continuous case, states:

Prob(R ≤ p ≤ 1) = ∫[R,1] gn(r|p) w(p) dp / ∫[0,1] gn(r|p) w(p) dp                   (1)

Here,
R       = lower (Bayesian) confidence limit of the true reliability, p;
r       = observed number of failures in n trials;
gn(r|p) = the conditional probability density function of r given p; and
w(p)    = the a priori frequency function of p.

In the binomial case,

gn(r|p) = C(n,r) p^(n-r) (1-p)^r                                                    (2)

Here C(n,r) = the number of combinations of n things taken r at a time.

It is assumed that the engineer is capable of assigning a probability, P (degree of belief), to the event
that the required reliability, or more, has been attained prior to test. It is also assumed that this prior
belief declines linearly to zero at p = 0 and at p = 100%. Thus, w(p) takes the form of the triangle
distribution as follows:

w(p) = 2(1-P)p / R^2          for 0 ≤ p ≤ R                                         (3)

w(p) = 2P(1-p) / (1-R)^2      for R ≤ p ≤ 1                                         (4)

Here, P = prior probability of having the required reliability, R.

That w(p) does have the proper values can be seen by obtaining the required heights at R and
multiplying these frequencies by the bases R and (1-R) of the triangles of (3) and (4). Then for the left
hand interval, (0, R), we have at p = R:

w(R) = 2(1-P)/R,     Area over (0, R) = (1/2)(R)[2(1-P)/R] = 1 - P

Similarly at p = R for the right hand interval, (R, 1), we have

w(R) = 2P/(1-R),     Area over (R, 1) = (1/2)(1-R)[2P/(1-R)] = P

Note: The discontinuity at R is of probability measure zero.

Inserting (2), (3), and (4) in (1) yields, after cancellation and simplification,

Prob(R ≤ p ≤ 1) = 1 / { 1 + [(1-P)(1-R)^2 / (P R^2)] x [∫[0,R] p^(n-r+1) (1-p)^r dp / ∫[R,1] p^(n-r) (1-p)^(r+1) dp] }     (5)
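Equation (5) is straightforward to evaluate numerically. The sketch below integrates the binomial kernel against the triangular prior with scipy; the binomial coefficient cancels between numerator and denominator, as in the derivation above.

```python
# Numerical evaluation of equation (5): posterior probability that reliability >= R,
# for r failures in n trials and the triangular prior of equations (3) and (4).
from scipy.integrate import quad

def posterior_prob(n, r, R, P):
    """P is the prior degree of belief that reliability >= R."""
    def likelihood(p):                  # binomial kernel of eq. (2); C(n, r) cancels
        return p ** (n - r) * (1.0 - p) ** r

    def prior(p):                       # eqs. (3) and (4)
        if p <= R:
            return 2.0 * (1.0 - P) * p / R ** 2
        return 2.0 * P * (1.0 - p) / (1.0 - R) ** 2

    num, _ = quad(lambda p: likelihood(p) * prior(p), R, 1.0)
    low, _ = quad(lambda p: likelihood(p) * prior(p), 0.0, R)
    return num / (num + low)

# The Figure 4 example: about 13 zero-failure tests with P = 0.5 for R = 0.95
print(posterior_prob(13, 0, 0.95, 0.5))   # roughly 0.9
```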

Figure 4 graphically displays equation (5). Note that in this case, thirteen tests with zero failures
are adequate to demonstrate a reliability of 0.95 at 90% confidence (given a 0.5 on the Bayesian Prior
scale).
While there can be no doubt that Bayesian methods, as can be seen from this example, can provide
significant test reduction to demonstrate a reliability requirement, performing the analysis requires the
development of a prior distribution which is, at least to some degree, subjectively based. Also, Bayesian
approaches are highly sensitive to the prior distributions used. If no meaningful estimate of the prior
probability of success can be made, none of the above conclusions apply. Particularly, one must be wary
of consistent optimism or pessimism when records of success do not support the prior probabilities.
For example, if optimism about a new design is guarded and feasibility tests are few or non-existent,
then the analysis is driven towards a rectangular prior (equally probable prior intervals), and the
results are just as unfavorable (in terms of the large number of tests required) as they are for the binomial
distribution. In other words, since one cannot be over 0.5 on the prior scale, 11 tests are required with
zero failures to be .90 reliable at 90% confidence, the same as the binomial. This defeats the purpose
of the Bayesian approach.

47

[Figure 4. Number of trials with 0 failures required to achieve 90% confidence that reliability R has
been attained when a Bayesian prior is used.]

The following are two examples of applying Bayes Theorem.

[Figure 5. The prior and posterior distribution in example 1.]


Figure 5 portrays the results of applying Bayes Theorem to estimate the unreliability of the material
(LX-13, or Extex), which is an extrudable high explosive used in a variety of systems (Ref. 15). As can
be seen, the posterior distribution is not much different from the prior distribution. In this case, the
present observed data (failure numbers, test numbers) is relatively small compared with the previous
data, and the prior distribution is given great weight in the final unreliability estimation.
[Figure 6. The prior and posterior distribution in example 2.]


Figure 6, on the other hand, portrays the results of applying Bayes Theorem to estimate the annual
pump unreliability for pressurized water reactors (PWRs) in commercial operation in the United States
(Ref. 15). It is observed that the posterior distribution is much less diffuse than the prior distribution
as a consequence of incorporating the observed data. In this case, the present observed data set is large
and it is given much weight in the final estimation.

49

2.2.5 Classical Binomial Approach


The "traditional" approach to reliability demonstration in a go-no-go type environment is the well
known Binomial distribution.
Stated mathematically the Binomial Distribution is as follows:

1 - C = Σ(x = 0 to N-S) C(N,x) (1 - R)^x R^(N-x)

where;
S = number of successful start tests
N = number of trials
R = reliability
C = Confidence level
where it is assumed that
• Trials or tests are independent
• Each trial results in success or failure
• The reliability (probability of success) of each system is the same on each trial
• The number of tests is fixed in advance of the demonstration test
Note that it would take 45 tests with no failures to demonstrate 0.95 reliability at 90% confidence (see
Table 4).
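For the zero-failure case the required number of tests follows directly from R^N = 1 - C; a minimal sketch:

```python
# Zero-failure case of the binomial demonstration: R**N = 1 - C, so
# N = ln(1 - C) / ln(R), rounded up to the next whole test.
import math

def tests_required(reliability, confidence):
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(tests_required(0.95, 0.90))   # 45, as noted above
print(tests_required(0.99, 0.90))   # far more tests for a higher requirement
```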

50

[Table 4. Number of failure-free tests required to demonstrate a given reliability at a given confidence
level using the binomial distribution.]

2.3 Baseline Reliability Enhancement Methodology Identification


2.3.1 Proposed Infrastructure Controls Affecting Reliability
Figure 7 illustrates how various activities related to the categories of design, manufacturing, test,
transportation, storage and operation can have an effect on reliability. Each category has listed underneath
it examples of reliability enhancing technique and tools. They represent a cross section of ideas accumulated
during the site visits of Task 1. Some of the techniques are well known and proven, such as reliability
predictions/trade offs. Others are not, such as operating characteristic curves vs. reliability.
The following is a discussion of proposed infrastructure controls intended to enhance reliability. The
discussion is divided into quantitative and qualitative approaches followed by a discussion of risk assessment
as a decision making tool.
Quantitative Approaches - Analysis of Historical Data (See Section 2.2), PRACA/FRACA Trending - In
order for a Problem/Failure Reporting and Corrective Action system to be suitable for mathematical
trending, basic changes must take place in the way information is recorded and tracked (see Section
1.1.3.2). These changes include as a minimum:
• Recording total operating times on failed as well as unfailed components
• Total number of cycles or trials (both successes and failures)
• Inclusion of reports of all component malfunctions, even those which were noncatastrophic and occurred on non-critical components.
Operating Characteristic Curves Correlated to Failure Modes/Rates - The example that follows
illustrates one method of connecting defect rates from Q.C. sampling plans to reliability calculations for
hardware. Although this example is for solar array calculations, there is every reason to believe that a
similar approach could be used for propulsion systems.
. Data
- If entire population had random defect rate of 0.65%, one would expect to reject 10% of lots due to
the randomness of sampling process. Figure 12 (page 73) illustrates the use of MIL-STD-414 for the
purpose of determining the 10% reject rate. The 0.65% defect rate corresponds to a 90% confidence for
the lots expected to be accepted or, conversely, 10% are expected to be rejected. Assume that the MIL-STD
414 plan has thus far rejected 58/434 = 13.4% of lots
- This result is indicative of non- homogeneous population wherein some lots are worse than 0.65% and
therefore have a higher probability of being rejected; clustering of bad lots is also indicative of nonhomogeneous population

- Thus, residual defect rate in the accepted lot subpopulation will be less than 0.65% per test; assume
observed rate in lots accepted to date is 0.65%
• For purposes of an example, consider estimating solar array reliability; a failure probability of
0.25% will be assumed for each interconnect over the course of the three-year mission (conservative)
• Each quarter string consists of an average quantity of 39 cells
• Power margin allows the subsystem to accept 22 quarter-string failures in each of two sets
of 992 quarter strings

52

DESIGN:          1. "Top Down" Analysis   2. Reliability Predictions/Trade Offs   3. Lessons Learned/PRACA/FRACA
MANUFACTURING:   1. O.C. Curves vs. Reliability   2. Trends/Variability   3. Q.C. Testing
TEST:            1. Qualification   2. Reliability   3. "Hybrid Tests"
TRANSPORTATION:  1. Shock   2. Vibration   3. Thermal Transients
STORAGE:         1. Shelf Life   2. Dormancy Effects
OPERATION:       1. Achieved Reliability   2. Operational Constraints

RISK ASSESSMENT: Assessing the chance and consequence of being unable to obtain higher reliability, when
it is needed, within the allocated financial resources.

RISK MANAGEMENT: The process which encompasses the identification, assessment, tracking, control, and
mitigation of risks related to reliability and results in overt actions to accept known risks or to make
adjustments which control their potential consequences.

Figure 7. Infrastructure controls proposed to enhance reliability.

53

- Two of four interconnects on a cell are pulled as part of sampling test; failure probability
per pull estimated as 0.25%; two tested interconnects are from either end of cell; data from immediately
adjacent interconnects is not available.
-Correlation between pull strength values from same cell analyzed and found to be .38 for all lots tested,
.54 for the ten "bad" lots tested, and .32 for "unknown" lots; value would be somewhat smaller yet if
attention were restricted only to lots accepted by sampling process
- Correlation of .32 means that knowing the strength of one interconnect helps one predict the strength
of a second interconnect on the same cell (.32 x .32 = .10, or 10%) more accurately than one could predict
it without knowing the first value; the square of the correlation is known in statistics as the coefficient of
determination.
. Probability of both interconnects failing is:
PR (first failing)

* PR (second failing/first fails);

PR (A/B) read as probability of A given that B is known to occur


If totally independent, PR (Second Failing/First Fails) = 0.0025
If totally dependent, PR(Second Failing/First Fails)=1.0
Since the 10% factor developed above measures the strength of the dependency which exists, it may
be used to interpolate between .0025 and 1.0 to estimate PR (Second Failing/First Fails):

PR (Second Failing/First Fails) = .0025 + (1.0 - .0025) x .10 = .10225

- Probability of two interconnect failures out of two on the same cell is thus estimated at
.0025 x .10225 = .00026

Since adjacent interconnects are probably somewhat more correlated than those at either
end of the cell, and since the degree of correlation is not known, it is assumed that if interconnects fail
at both ends of the cell, the cell fails totally. Using this assumption will, of course, produce somewhat of
an overestimate of the probabilities. This overestimate is, however, small compared to the effect being
observed.
-

This means we will estimate the mission failure probability for a cell to be .00026.

This equates to a cell failure rate of:

-LN(1 - .00026)/26,298 hr = 9.9E-9/hr

- A quarter string with 39 cells will thus have a failure rate due to interconnects conservatively
estimated at 39 x 9.9E-9, or 386E-9/hr

- The impact of this new cell failure mode on the array is to change the failure probability
from 6.25 x 10^-6 to 6.21 x 10^-4, an approximate two order of magnitude change.
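The arithmetic above can be reproduced in a few lines. The sketch below simply restates the example's numbers and conversions; the 0.10 dependency factor is the rounded coefficient of determination used in the text.

```python
# Reproduces the interconnect-correlation arithmetic of the example above.
import math

p_pull = 0.0025        # per-interconnect failure probability over the 3-year mission
dependency = 0.10      # coefficient of determination: .32 x .32, rounded to .10 as in the text

# Interpolate between full independence (p_pull) and full dependence (1.0)
p_second_given_first = p_pull + (1.0 - p_pull) * dependency   # .10225
p_cell = p_pull * p_second_given_first                        # about .00026

mission_hours = 3 * 8766                                      # 26,298 hr
cell_rate = -math.log(1.0 - p_cell) / mission_hours           # about 1E-8/hr (text rounds to 9.9E-9/hr)
quarter_string_rate = 39 * cell_rate                          # about 3.8E-7/hr (text: 386E-9/hr)
print(p_cell, cell_rate, quarter_string_rate)
```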
Qualitative Approaches - Product Design FMEAs - Although Product Design FMEAs are not unheard of
in the aerospace industry, very few companies perform them. In essence, product design FMEAs are
structured to identify sources of common cause failures (sometimes called "coverage factors" or
"correlation factors" by propulsion manufacturers).

54

Although the following product design description is directed towards electrical/electronics
components, a similar approach could be used for propulsion system components.

Product design Failure Modes and Effects Analyses (FMEAs) are performed to verify that hardware
reliability and integrity are maintained when electrical/mechanical designs are implemented as hardware
during the product design phase. This type of analysis is typically done between PDR and CDR, after drawings
become available but before they are released.
This analysis is particularly appropriate for examining areas where redundant or backup paths are in
proximity.
When redundancy is implemented by using separate units, there is generally no need to do a product
design FMEA inside each unit. However, this may not be true for high energy systems such as propulsion.
In either case, unit external interfaces, e.g., input/output cross-straps, should be examined. Example
product design criteria are listed below. Results are documented on Product Design FMEA Forms. Where
negative findings occur, remedial action is recommended. Adverse conditions are to be justified at design
audits.
The following Reliability Criteria for Product Design are applied in performing product design FMEAs
for printed circuit boards, connectors, and wiring interfaces:
Cabling, Harnesses, and Wire Bundles
a) Assure that fault isolation exists.
b) The routing of all wire bundles shall be such that all possible locations where wire pinching or
chafing could occur are eliminated, to prevent shorts to ground or shorts to a different voltage or signal
source.
c) Assure that the design prevents screw threads from coming into contact with wire/leads during
assembly.
d) Provide for special sleeving where wire routing is adjacent to sharp edges.
e) Prevent excessive pinching of wires by cable clamps by properly dressing bundle and sizing clamps.
f) Spot bond or tie wire adjacent to standoffs and with reasonable distance between supports such that
loads/joints are not degraded during exposure to vibration or shock.
g) No single wires or single solder joints shall be system single point failures.
Connectors
a) Similar connectors on a unit shall be keyed, color-coded, or have other mismating protection.
b) Physically separate power and ground pins.
c) Different polarity signals shall not have adjacent pin assignments (viz., +28 Vdc, -15 Vdc).
d) Sensitive low level signals should have pin assignments physically separated from high level power,
high level signals, or ungrounded returns. This should also apply to grounds.


e) Critical power or signal lines shall not have adjacent pin assignments.
f) Redundant power or signal lines shall not have adjacent pin assignments.
g) Review pin and slip ring assignments to assure that shorts between
adjacent pins will not result in single point failures.
h) No single connector pin shall be a system single point failure.
Printed Circuit Boards
a) Review that redundant paths are kept physically separated as much as possible.
b) Traces carrying heavy current loads shall be verified as having adequate load carrying capacity per
MIL-STD-275.
c) There shall be no open daisy chains for power or ground paths.
d) Sufficiency in the spacing between traces depends on trace voltages and conformal coating provisions.
These should be reviewed against Standard Engineering Design Systems to confirm that trace-to-trace
shorts will not occur.
e) A grounding circuit trace leading to board edge common ground should be filleted at the lead-in line
to prevent development of cracks in circuit conductors.
f) Check that redundant paths don't go through the same piece part, e.g., a dual transistor or quad IC.
g) If there are any single PC traces or plated-thru-holes where an open would result in a system single
point failure, a hardwire should be added.
h) Care shall be taken to assure that high heat generating parts are isolated from critical signal paths
(via distance/shielding) to preclude burnout of PC traces, etc.
i) Ensure that solder joints are inspectable. Avoid soldering flush-mounted parts near heat sinks or
other items which might make the presence of solder balls undetectable.
j) Ascertain that the block diagram or schematic-illustrated redundancy is reflected by the wiring
diagram.
k) Assure that solder reflow practices for boards (or within parts) will not reflow or degrade prior
connections.
l) Handling and installation loads for cards and assemblies must be reviewed to ensure that stresses
imposed on joints are within their load-carrying capability.
m) PC traces and wiring should be physically separated such that a fault is isolated and will not cascade
to redundant or adjacent elements.
n) Verify that PC boards which contain redundancy or cross-strapping elements are adequately
protected against shorts to ground (internal and external to the board) which could represent a system
single point failure.
o) Plated-thru-holes shall have an aspect ratio (board thickness to hole diameter) of no greater than
3 to 1.


(Figure 8: a matrix of functions expected or outputs required* versus deviation guide words, including
More of, Less Than (n+2), As Well As (n+3), Part of (n+4), Reverse (n+5), and Other Than (n+6); each
square of the matrix describes a system condition and is a potential top event.)

*Obtained from a clear, concise, unambiguous set of Engineering functional descriptions

Figure 8. Top down matrix.

Manufacturing Control FMECAs (See Appendix B: MCDAC Trip report) - In the case of the
manufacturing control FMECA, the FMECA should be conducted incrementally by reliability engineering
during the design phase to identify single point failure modes. The FMECA should be used in the Critical Item
Control process by identifying critical items and the causes of critical failure modes. Proper design
controls would then be implemented for each critical item and can be verified by a Manufacturing Control
Plan. FMECAs should be supplemented with failure history from prior FMECAs of related designs and, along
with the failure history, should be made available to designers.
A Manufacturing Control Plan should contain as a minimum the following tasks:
- Identify flight critical items (FCIs) using FMECAs
- Determine flight critical characteristics for each FCI
- Identify specific manufacturing methods for each FCI
- Prepare Manufacturing Flow Chart and annotate
- Identify Process Control for each selected manufacturing method
- Identify test and/or inspection methods for each selected manufacturing method
Top Down Analysis (L. Booth Method) - The most common criticism of FMEAs is the possibility that
not all conditions causing system anomalies, malfunctions, or failures are attributable to inherent
component failures.
One way to address this concern is by conducting a "Top Down" analysis. A Top Down analysis is
conducted by accomplishing the following tasks:
- Obtain a clear, concise, unambiguous set of engineering functional descriptions
- Form a matrix as shown on Figure 8 (a simple illustration of forming such a matrix follows this list)
- For each intersection (square) on the matrix, describe the system condition (i.e., 0 =
nominal thrust, n+6 (other than) = wrong direction). Therefore, (0, n+6) means correct thrust, wrong
direction. The square (0,0) indicates correct nominal thrust was required and correct nominal thrust was
delivered.
- Each square of the matrix is a potential "Top Event" (undesirable condition).
- Expand each top event (using fault trees, event trees, or similar techniques) until all
conditions leading to the top event have been exhausted.
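A brief Python sketch, assuming a hypothetical list of functions, of how the Figure 8 matrix can be enumerated so that every (function, deviation) square becomes a candidate top event for expansion; neither the function names nor the program structure are taken from this report.

# Hypothetical functions or required outputs (one axis of the Figure 8 matrix)
functions = ["nominal thrust", "gimbal response", "propellant shutoff"]

# Deviation guide words (the other axis), as listed for Figure 8
deviations = ["nominal", "more of", "less than", "as well as", "part of", "reverse", "other than"]

# Every square of the matrix is a potential top event to be expanded with
# fault trees or event trees until all contributing conditions are exhausted.
# (A square pairing a function with "nominal" represents the expected condition.)
top_events = [(func, dev) for func in functions for dev in deviations]

for func, dev in top_events:
    print(f"Candidate top event: '{dev}' condition applied to '{func}'")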
Risk Assessment (reference Figure 7) - Risk assessment can be characterized as follows:
- Risk assessment is the process for estimating the risk associated with a particular
alternative course of action
- Risk assessment considers probability of failure and consequence of failure as they relate
to technical performance, schedule, and cost


Where:
Risk is the probability and consequence of not achieving some defined program goal and is a function of:
- Probability of failure
- Consequence of failure:
  - Increased cost
  - Extended schedules
  - Reduced performance

Risk assessment involves the steps indicated in the following diagram:

(Risk assessment flow: Identify Potential Risk Items; Characterize Risk Item; Determine Probability of
Failure; Determine Consequence of Failure; Compute Risk Factor; then classify the item as Low Risk,
Medium Risk, or High Risk.)
Where risk levels are defined as:


High - The problem is obvious and there is a high probability of failure to meet reliability,
performance, schedule or cost objectives. Monitoring and control must be rigorous, with frequent update
of risk status. A fall-back or alternative system or plan is mandatory.
Medium - The problem is identifiable and would impact reliability, performance, schedule,
or costs. The probability of occurrence is high enough to require close control of all contributing factors,
establishment of risk management milestones, and an acceptable fall-back position.
Low - The problem is identifiable and would impact program objectives, but the probability of
occurrence is low enough to cause no concern other than normal monitoring and control.
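The sketch below illustrates the "compute risk factor" and classification steps of the diagram above; the combination rule and the numeric thresholds are assumptions chosen for illustration, not values prescribed by this report.

def risk_factor(p_failure: float, c_failure: float) -> float:
    """Combine probability of failure and consequence of failure (both scaled 0 to 1)."""
    # One commonly used combination rule, assumed here: RF = Pf + Cf - Pf*Cf
    return p_failure + c_failure - p_failure * c_failure

def risk_level(rf: float) -> str:
    """Map a risk factor to the low/medium/high categories (thresholds assumed)."""
    if rf < 0.3:
        return "LOW RISK"
    elif rf < 0.7:
        return "MEDIUM RISK"
    return "HIGH RISK"

rf = risk_factor(p_failure=0.3, c_failure=0.4)
print(rf, risk_level(rf))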


2.3.2 Risk Management


Risk management is the process which encompasses the identification, assessment, tracking, control
and mitigation of risks related to reliability and results in overt actions to accept known risks or to make
adjustments which control their potential consequences.
Risk assessment assesses the chance and consequence of being unable to obtain higher reliability, when
it is needed, within the allocated financial resources.
Establishing Factors - In order to assess and manage risk, factors must be established based on technical
risks. Factors can be characterized by using the two following matrices (Figures 9 and 10).
Assessing Economic Risk - Given that the information of Sections 2.3.1.3 and 2.3.2.1 is available, the most
efficient way to assess economic risk is to use an established model tailored to the rocket industry. The model
accounts for both production and operational processes that would be impacted by unreliability. Additional
economic modeling of the cost of unreliability to customer communities is essential to gain a meaningful
estimate of economic risk. Economic models must evaluate the actual cost of finite activities required to
reduce risk by finite amounts.
In the case of launch vehicles, most individuals recognize the direct costs of unreliability such as
residual hardware that is scrapped due to more rigorous inspections or redesign effort. The incorporation
of additional quality control that slows production rates and operational process timelines while increasing
the total amount of personnel and facilities that are required to support the vehicle is a more subtle cost
effect of unreliability. The largest cost is related to payload communities that suffer direct losses in the
form of lost hardware and higher insurance rates, as well as launch schedule backlog effects that result in
program slippage that has cost of money and cost of storage implications. Actual costs of unreliability are
difficult to estimate accurately, but the costs may be bounded from documented historical events that give
a real estimate of cost risk exposure.
Perhaps the greatest single "cost" of unreliability can be related to loss of strategic capability at critical
time windows. For the military, this may be the absence of reconnaissance capability during evolving international crises or a less capable navigation or communications environment for operations. For the private
sector, the strategic loss may be in the form of lost opportunity to penetrate specific markets at
advantageous time windows. Unreliability also results in loss of national stature and a hindrance in the
ability to successfully compete with the international community.
The economic risk of unreliability is but one element of the overall risk assessment. The overall risk
is a combination of economic risk, schedule risk, and mission capability risk. In essence, the approach
would be to assign relative figures of merit (ranging from 0 to 1) to each of the risk factors of Figures 9
and 10, then compare the summed risk factors against the cost of reducing the overall risk. The program
manager can then weigh the relative cost and benefit of risk reduction investment options to assure ultimate
program viability.
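As an illustration of the figure-of-merit approach just described, the short Python sketch below sums assumed 0-to-1 risk figures of merit for hypothetical risk-reduction options and lists them against an assumed cost of reduction; all option names and numbers are invented for the example.

options = {
    # option name: (economic risk, schedule risk, mission capability risk, cost of reduction $M)
    "baseline (no action)":     (0.6, 0.5, 0.7,  0.0),
    "added acceptance testing": (0.4, 0.5, 0.5, 12.0),
    "engine-out capability":    (0.3, 0.6, 0.3, 45.0),
}

for name, (econ, sched, mission, cost) in options.items():
    summed_risk = econ + sched + mission   # simple sum of the 0-1 figures of merit
    print(f"{name:26s} summed risk = {summed_risk:.2f}  cost of reduction = ${cost}M")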

Maturity Factor (Hardware / Software):
  Existing / Existing
  Minor redesign / Minor redesign
  Major change feasible / Major change feasible
  Technology available, complex design / New software, similar to existing
  State of art, some research complete / State of art, never done before

Complexity Factor (Hardware / Software):
  Simple design / Simple
  Minor increases in complexity / Minor increases in complexity
  Moderate increase / Moderate increase
  Significant increase / Significant increase, major increase in a number of modules
  Extremely complex / Extremely complex

Dependency Factor:
  Independent of existing system, facility, or associate contractor
  Schedule dependent on existing system, facility, or associate contractor
  Performance dependent on existing system performance, facility, or associate contractor
  Schedule dependent on new system schedule, facility, or associate contractor
  Performance dependent on new system schedule, facility, or associate contractor

Figure 9. Typical top-level factors contributing to probability of failure.

Typical Top-Level Factors Contributing to Consequence of Failure

Technical Factor:
  Minimal or no consequences, unimportant
  Small reduction in technical performance
  Some reduction in technical performance
  Significant degradation in technical performance
  Technical goals cannot be achieved

Cost Factor:
  Budget estimates not exceeded, some transfer of money
  Cost estimates exceed budget by 1 to 5 percent
  Cost estimates increased by 5 to 20 percent
  Cost estimates increased by 20 to 50 percent
  Cost estimates increased in excess of 50 percent

Schedule Factor:
  Negligible impact on program, slight development schedule change compensated by available schedule slack
  Minor slip in schedule (less than 1 percent), some adjustment in milestones required
  Small slip in schedule (1 to 10 percent)
  Development schedule slip (10 to 30 percent)
  Large schedule slip that affects segment milestones or has possible effect on system milestones (greater than 30 percent)

Figure 10. Typical top-level factors contributing to consequence of failure.

3.0 (TASK 3) QUANTIFICATION AND PRIORITIZATION OF METHODOLOGIES


In many cases, there is insufficient information to completely quantify and prioritize the methodologies
that have been identified. In other cases they are difficult to prioritize because of the qualitative nature of
the methods. In any case, thorough testing of the various methodologies should be the subject of future
studies (see recommendations).
3.1 Testing of Quantitative Methods
Three areas of study appear to be promising. They are:
- Comparison of the methods of Section 2.2
- PRACA/FRACA Trending
- Connecting Operating Characteristic Curves to Reliability
3.1.1 Comparison of the Methods of Section 2.2
Section 2.2 includes a description of a selected number of quantitative methods intended to indicate
reliability growth as well as demonstrating the achievement of a prescribed reliability goal.
Four of the methodologies - Binomial model, Beta-Binomial model (Bayesian Estimation), Lloyd's
model and Shen's model for estimating reliabilities of launch vehicles from attribute data are introduced
and compared in a preliminary manner.
Binomial Model - The simplest way to estimate the reliabilities of launch vehicles is to use the Binomial
model. It is easy to perform the calculations, but a large size sample is required to demonstrate high
reliability. The results obtained by applying this model do not account for the reliability growth effect
expected during the developmental history of the launch vehicles.
Beta-Binomial Model (Bayesian Estimation) - The Beta-Binomial model is based on the Bayesian
Estimation. In this model, several similar components are treated as a single class. The probability p of each
component in the class is assumed to be constant but will have different values from component to component
[i.e., g(p)]. If the Binomial distribution is used to obtain the probability of K failures in n tests for each
component, the conjugate distribution g (p) for the class is the Beta distribution. This model weights the
reliability growth effect and can be applied to forecast the reliabilities of launch vehicles. The detailed
theoretical analysis can be found in Ref. 19, "Bayesian Reliability Analysis" by Harry F. Martz and Ray
A. Waller, 1982. The disadvantage of this model is that it is very difficult to separate the total sample data
into several similar components, unless we have the detailed engineering analysis and each failure mode
at the different periods of the launch vehicle developmental history.
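The following Python sketch contrasts the simple Binomial point estimate with the conjugate Beta update that underlies the Beta-Binomial (Bayesian) approach of Ref. 19; the prior parameters and the launch record are assumed solely for illustration.

n_launches = 25          # number of trials (launches or tests), assumed
k_failures = 2           # observed failures, assumed

# Binomial (classical) point estimate: gives no credit for reliability growth
r_binomial = 1.0 - k_failures / n_launches

# Beta-Binomial: a Beta(a, b) prior on the unreliability p, updated by the data.
# With a Beta prior and Binomial data, the posterior is Beta(a + k, b + n - k).
a_prior, b_prior = 1.0, 9.0          # assumed prior with mean unreliability 0.10
a_post = a_prior + k_failures
b_post = b_prior + (n_launches - k_failures)
p_post_mean = a_post / (a_post + b_post)
r_bayes = 1.0 - p_post_mean

print(f"Binomial estimate R = {r_binomial:.3f}")
print(f"Beta-Binomial posterior estimate R = {r_bayes:.3f}")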
Lloyd's Model (Ref. 14) - In Lloyd's model, the rationale is that when engineering corrective action
for a failure is implemented, the probability of recurrence of that failure is reduced; therefore, such
failures should not be carried as full failures in subsequent reliability estimates. The failure value for each
failure mode is assumed to be

  f = 1 - (1 - y)^(1/n)     (1)

where y is the confidence level and n is the number of successful tests after corrective action.

Based on a detailed engineering analysis for each failure mode, the result of the failure number for each
failure mode can be obtained by solving eq. 1. The final result of the reliability estimation is
R = 1 - Σf/N, where Σf is the cumulative failure number of all failure modes and N is the test number.
This model weights the growth effect and can be extended to forecast the reliability. For this model one
needs to know not only at which launch number the failure occurred, but also at which launch number the
failure was corrected. The confidence level chosen in eq. 1 directly affects the final results and is difficult
to justify. The confidence level for the final result R = 1 - Σf/N is not clear.
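A minimal Python sketch of the failure discounting in Lloyd's model, using eq. 1 as quoted above; the failure history, the confidence level, and the treatment of a failure with no demonstrated fix are assumptions made for the example.

# Number of successful tests after corrective action for each observed failure mode;
# a zero means no fix has yet been demonstrated, so the failure is carried at full value.
successes_after_fix = [20, 15, 0]
N = 25          # total number of tests, assumed
gamma = 0.50    # assumed confidence level

def discounted_failure(n_success: int, gamma: float) -> float:
    # Eq. 1: f = 1 - (1 - gamma)**(1/n); carried as a full failure if n = 0
    if n_success <= 0:
        return 1.0
    return 1.0 - (1.0 - gamma) ** (1.0 / n_success)

total_f = sum(discounted_failure(n, gamma) for n in successes_after_fix)
R = 1.0 - total_f / N
print(f"Discounted failure count = {total_f:.2f}, estimated R = {R:.3f}")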
Shen's model (Ref. Appendix A.3) - In Shen's model, the reliability Rn of a launch vehicle at the nth
launch is obtained as

  Rn = 1 - Un = 1 - [ Fn/Ln - (2/Ln) * SUM(i=1 to N) (Fn/Ln - Fi/Li) / N ]     (2)

where Un is the unreliability at the nth launch,
Fn is the cumulative failure number at the nth launch,
Ln is the nth launch number,
Fi is the cumulative failure number at the ith launch, and
Li is the ith launch number.

The term Fn/Ln in eq. 2 is the estimated average unreliability at the nth launch. The term
(2/Ln) * SUM(i=1 to N) (Fn/Ln - Fi/Li) / N in eq. 2 is the corrective unreliability caused by the growth effect.

This model is simple and easy to apply. It weights the growth effect and can be extended to predict the
future reliabilities of the launch vehicles. The final results of the model are obtained directly from the
collected data, in which only the launch numbers at which the failures occurred need to be known.
However, since this model does not assume any knowledge of what changes were made subsequent to
failures, it does not directly incorporate the effects of engineering analysis and corrective action taken
after each failure. For this reason, its reliability growth forecast lags that of Lloyd's method.
From the above analysis of these four methodologies, the Lloyd's model and Shen's model are considered
to be the better models for estimating reliabilities of launch vehicles.
Fig. 11 illustrates the results of applying Lloyd's and Shen's models to an example from Ref. 1. As we
can see, the tendencies of the results for both models are similar; the estimated reliability values from
Lloyd's model are higher than those from Shen's model.
In the present study, based on the collected data, the Shen model is used to estimate the reliabilities
for twenty-four U.S. launch vehicles. The growth trends obtained from the model are shown in Figures 11a
and 11b for the Delta and Titan families of launch vehicles.

Figure 11. Lloyd vs. Shen comparison (estimated reliability versus test number).

Figure 11a. Reliability estimation of Thor and Delta (estimated reliability versus launch number).

Figure 11b. Reliability estimation of Titan.

3.1.2 PRACA/FRACA Trending


An additional dimension could be added to PRACA/FRACA systems to allow trending if a "cradle to grave"
concept were established. Under the current circumstances PRACA/FRACA systems frequently report only
through the testing phase (except for reusable systems) and do not always report on total time and cycles
on both failed and unfailed components.
In addition, PRACA/FRACA systems should include not only failure phenomena but precursors to
failure problems as well. Such precursor problems should include unexpectedly low margins or larger than
expected variability. The corrective actions should be accomplished interactively with system functional
descriptions and the FMECA to insure that those efforts are up to date while the search for root cause is
pursued.
In order for an evaluation of PRACA/FRACA trending capabilities to be effected, a pilot program needs
to be established using the trending techniques of reference 2.
3.1.3 Operating Characteristic Curves and Reliability
An example was given in Section 2.3.1.1 correlating operating characteristic curves to failure modes
and failure rates. Reference 17 illustrates some recent work in this area. In this work an effort was made
to tie safety factors developed in the traditional engineering approach to resulting structural reliability
using a probabilistic representation of these traditionally developed factors.
Figure 12 illustrates the relationship of defect rates (quality of submitted lots) to operating
characteristics (OC) curves. In this way, changes in sampling plans and procedures could be linked to
criticality ranking in FMECAs.
For example, suppose the amount of moisture in a bonding liner polymer used in solid rocket motor cases
is linked to poor quality of bonding, thus to separation. A change in the sampling procedure could reduce
the defect rate and reduce the potential for failure by a similar amount.
A study should be undertaken to test the validity of such a link.
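As a simple illustration of the OC-curve link discussed above, the sketch below computes the probability of accepting a lot as a function of its defect rate for an assumed single sampling plan (sample size n = 50, acceptance number c = 1); the plan parameters are not taken from any actual procedure.

from math import comb

def prob_accept(defect_rate: float, n: int = 50, c: int = 1) -> float:
    """P(accept lot) = P(at most c defectives in a random sample of n)."""
    return sum(comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
               for k in range(c + 1))

# Sweeping the submitted-lot defect rate traces out the operating characteristic curve
for p in (0.001, 0.005, 0.01, 0.02, 0.05, 0.10):
    print(f"defect rate {p:5.3f}  ->  P(accept) = {prob_accept(p):.3f}")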

3.2 Evaluation of Qualitative Methods


As was noted earlier, it is difficult to prioritize qualitative methodologies. However, the three methods
that do show promise based upon the information obtained from this study effort are:
. Top Down Analysis
. Product Design FMEAs
. Manufacturing Interfaces
Tests of these techniques could help to more firmly establish these capabilities. Suggested tests are
defined below:
3.2.1 Top Down Analysis
In order to rate the value of "Top Down Analysis" when conducted in accordance with the method
described in Section 2.3.1.2, the results of an FMEA should be compared to the results of a Top Down
analysis.

(Figure 12: operating characteristic curves, arranged by sample size code letter, relating the defect rate
of submitted lots to the probability of acceptance.)
3.2.2 Product Design FMEAs
Product Design FMEAs have proved to be valuable in identifying and eliminating sources of common
cause failures in electrical/electronics applications (see Section 2.3.1.2). A study should be undertaken
to see if a Product Design FMEA would be fruitful when applied to the non-electronic propulsion subsystems.
3.2.3 Manufacturing Interfaces
Flight critical item and manufacturing control plans have a great deal of potential for controlling
critical items as described in Section 2.3.1.2 "Manufacturing Control FMECAs". The effectiveness of such
an approach remains to be demonstrated, however. A study should be undertaken to demonstrate the
effectiveness of manufacturing control FMECAs.

3.3 Prioritization of Methodologies


The prioritization of methodologies cannot be completed until the studies described in this section are
completed.


KEY RECOMMENDATIONS
The following areas have been identified as having significant reliability impact. These areas each
warrant further in-depth study if the high reliability goals of the Air Force advanced launch vehicle
programs are to be achieved in an operational system.

1. Failure Correlation
The percentage of failures which are likely to impact more than one engine in a multi-engine design is
of critical design import. This percentage, or "failure correlation factor," must be well below 20% for
reliability oriented design approaches such as engine out capability to be effective. The lower this
percentage, the more effective is this heuristically pleasing design option. Not surprisingly, therefore,
contractor new engine design characteristics quote extremely low factors (as low as 1%). Correlations as
low as 1 out of 100 do not seem consistent with other design parameters specified (such as high chamber
pressures) and are considerably lower than factors achieved on recent engine designs (e.g. 17% for the
shuttle main engine test program). Finally, there did not appear to be any significant consideration given
to how these low factors would be achieved in practice.
Recommendation 1 - Failure correlation factors are key reliability parameters to Air Force launch
vehicle design decision makers. Specific studies such as parameter design studies which address what
factors have been achieved in the past and what design trades have been made to ensure the low factors quoted
will be evident in the resulting designs appear to be lacking. It is recommended that these investigations be
made prior to the selection of any design alternative.
2. Variability Control
The currently achieved launch vehicle reliability has been shown by this investigation to be below 0.95.
However, the investigation uncovered examples of reliabilities in other somewhat similar systems, such
as tactical missile systems, which routinely achieve 0.99 and some which approach 0.999. These systems
whose operational reliabilities currently meet or exceed the reliability requirements for the Air Force
advanced launch system have achieved these high reliability levels through the use of intensive variability
control programs. While it would be inappropriate to make any direct correlation between tactical missiles
and launch vehicles, it is also clear from a review of the failure data of mature launch systems that the
barrier to significantly higher reliabilities may be the residual variability inherent in the current launch
vehicle production process. A cursory review of other somewhat comparable products, such as commercial
jet engines and gas turbines and recent Air Force variability reduction studies performed as part of the R&M
2000 program, provide further support for this argument.
Recommendation 2 - Residual variability may be the key barrier to high launch vehicle reliability
achievement. For this reason, it is recommended that investigations be made into the effectiveness of
specific variability control programs such as Taguchi methods or alternatives. These investigations should
be directed at determining the applicability of the methods to the launch vehicle production process. It is
further recommended that some specific program for variability control be included throughout all phases
of the advanced launch system program.

* The definition cited here is broader than that used traditionally by propulsion system designers. See Appendix A.1 for
a discussion of the difference.

3. Reusability
Reusability is, on the surface, a design goal of significant program benefit. However, the benefits of
reusability are significantly compromised if the reliability of an engine is adversely affected by the
requirement. Besides the direct costs involved in developing a reusable design, there also appears to be
significant indirect costs which are required to maintain reliability in a reusable design. For example,
reusability by its very nature tends to decrease the production run. When production runs are decreased,
investments in automated production equipment become less economical and the production process
therefore tends to become more prototypical. Prototypical production, especially of complex equipment,
increases the problems associated with variability control and therefore substantial postproduction testing
may be required to ensure high reliabilities. A good example of such an indirect impact on reusability was
seen at the Rocketdyne SSME production facility in Canoga Park, California.
Recommendation 3 - Reusability has been shown to have indirect and potentially negative impacts on
the achievement of high reliabilities at reasonable cost. The indirect impacts of reusability on reliability
and cost through such mechanisms as variability control problems should be thoroughly investigated and
the results of this investigation included in the programmatic decision making related to reusability.
4. Risk Management
Achievement of high operational reliabilities in such areas as nuclear power plant safety systems has
been significantly supported by a continually active program that attempts to identify the risks to reliable
operation and to address them according to their importance. Such a risk management program has been
investigated and recommended by NASA SRM & QA for future projects, but it is not clear whether a risk
management program is planned for the acquisition of advanced launch systems.
Recommendation 4 - The Air Force should investigate the advisability of incorporating a risk management program as an integral part of any launch system program.
5. Reliability Performance Indicators and Trending
For high reliability programs it is important to identify, early on, symptoms of the process which presage deterioration in performance. This has been done in the financial community, in the commercial
aircraft community and in the nuclear power safety community by the development of a set of "leading"
performance indicators and developing performance trends based upon the indicator trajectories through
time. If such a set of indicators could be developed and trended for the Advanced Propulsion Systems
program. the indicator trajectories might provide early warning of problems arising during development
and operation. This early warning could provide the time required to institute corrective action before
actual program reliability performance is affected.
Recommendation 5 - The Air Force should develop, as part of advanced propulsion system development
programs, a set of potential indicators of programmatic reliability performance. This indicator set should
be based originally on historical information, but later updated and validated as advanced propulsion system
development program specific information becomes available.

6. Reliability Growth Analysis

In all developmental systems a certain degree of reliability growth is to be expected. However, program
managers need to know the pace of the expected growth so that they can determine if the program is likely
to meet the operational reliability goals within developmental time constraints. An understanding of the
growth process is therefore essential to the determination of the proper role to be played by history in the
forecasting of future system reliability. If an historical failure has been analyzed and its cause determined
and suitable corrective action is implemented to prevent its recurrence, it is recognized that it would have
its probability of occurring again diminished when it is utilized for predicting future performance. But by
how much? The determination of how much each failure should be counted is important in order to establish
the proper "calibration" for the reliability growth characteristic to be used to determine how well
reliability development is proceeding. Several approaches have been developed to address the issue of
growth. Among those developed are the early works of Duane at GE, that of David Lloyd of TRW, and that
developed by Dr. Yu Shen of SAIC as part of this study. In addition, Bayesian approaches may show promise
for improved growth forecasting.
Recommendation 6 - Reliability growth forecasting is important during the development of systems
with high reliability requirements such as ALS. Accurate growth forecasts allow program managers to
determine early on if reliability requirements are likely to be met. (This is especially important when
program economics prohibit extensive development test flights, as is the case with ALS.) Several methods
currently exist to allow for forecasts to be generated; however, further development is required to assure
that a reasonable growth forecast is developed for advanced propulsion system development programs. It
is therefore recommended that the concept of reliability growth be further developed as it applies to
advanced propulsion system development programs.
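To illustrate one of the growth approaches named above, the sketch below fits a Duane-type power-law trend to a hypothetical cumulative failure history; the data and the simple least-squares fit are illustrative only and do not represent any actual program.

import math

# Cumulative trials and cumulative failures observed at several points in a program (assumed)
cum_trials   = [5, 10, 20, 40, 80]
cum_failures = [3,  4,  6,  8, 10]

# Duane postulate: the cumulative failure rate C(t) = F(t)/t follows a power law,
# so log C(t) is linear in log t; fit the slope by least squares.
x = [math.log(t) for t in cum_trials]
y = [math.log(f / t) for f, t in zip(cum_failures, cum_trials)]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
print(f"Fitted growth parameter (alpha) = {-slope:.2f}")   # larger alpha indicates faster growth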

Further Recommended Studies - The recommended studies as discussed in Section 3.0 are judged to be
somewhat less in importance than the Key Recommendations above. Nonetheless, the following
recommended studies could have a significant impact on reliability.
1. Detailed Comparison of the Methods of Section 2.2
2. PRACA/FRACA Trending
3. OC Curves and Reliability
4. Top Down Analysis
5. Product Design FMEAs
6. Manufacturing Interfaces

REFERENCES

1. "A Historical Look at United States Launch Vehicles, 1967-Present" by R. Barrett, et al., STDN 883, prepared by ANSER, Suite 800, 1215 Jefferson Davis Highway, Arlington, VA 22202.

2. "Performance Trend Analysis System Technical Requirements Guidelines," prepared by Pan American World Services under Rockwell Test Integration Contract WBS 19.1.2, dated Sep 30, 1988, for NASA Johnson Space Transportation Systems.

3. "Low Cost Reliable Access to Space - A Solids Approach," prepared by the Solid Rocket Technical Committee of the AIAA, July 1988.

4. "Advanced Launch Systems Operations and Cost Analysis," Contract No. F04701-85-C-0055, prepared by L. Systems Inc., 2250 East Imperial Highway, Suite 600, El Segundo, CA 90245, for Air Force Systems Command Space Division.

5. "An Assessment of Liquid Propulsion in the United States," by the Liquid Propulsion Technical Committee of the AIAA.

6. "Launch Vehicle Reliability Study Data Base," compiled by C. T. Clague of Aerospace Corporation, as transmitted to G. Platt of Marshall Space Flight Center in a memo dated July 18, 1988.

7. "Space Shuttle Failure Mode and Effect Analysis (FMEA) and Critical Items List (CIL) Ground Rules," prepared by Marshall Space Flight Center Reliability and Quality Assurance Office, EG01 EG 5320.1.

8. "Problem Reporting and Corrective Action (PRACA) System Requirements for the Space Station Program," by Johnson Space Flight Center Product Assurance, November 26, 1986.

9. "Defences Against Common-Mode Failures in Redundancy Systems," by United Kingdom Atomic Energy Authority, Wigshaw Lane, Culcheth, Warrington WA3 4NE.

10. "Assessment of Methods for Implementing Availability Engineering in Electrical Power Plants," by Holmes and Narver, Inc., EPRI Report NP-493 TPS76-662, May 1977.

11. "Fault Tree Handbook," U.S. Nuclear Regulatory Commission, NUREG-0492, January 1981.

12. "Hazard Analysis of Commercial Space Transportation," Vols. I, II, III, by DOT Transportation Systems Center, Office of Commercial Space Transportation, Licensing Programs Division, May 1988.

13. "IEEE Guide for General Principles of Reliability Analysis of Nuclear Power Generating Station Protection Systems," IEEE Std 352-1987.

14. "Forecasting Reliability Growth," by D. K. Lloyd, TRW Inc., Electronics and Defense Sector, One Space Park, Redondo Beach, CA 90278, from Proceedings, Institute of Environmental Sciences, 1986, John Wiley & Sons, Ltd.

15. "Why Bayes is Better," by A. J. Bonis, from the 1975 Proceedings of the Annual Reliability and Maintainability Symposium (including notes taken from Dr. Bonis' lectures at the Eleventh Annual Reliability Engineering and Management Institute, November 11-20, 1973, University of Arizona).

16. "Techniques for Reliability Assessment and Growth in Large Solid Rocket Motor Development," by D. K. Lloyd et al., AIAA Paper Number 67-435, presented at the AIAA 3rd Propulsion Joint Specialist Conference, Washington, D.C., July 17-21, 1967.

17. "Safety Factors OC Screen and Failure Rates," by Sid Lishman, NASA MSFC, June 22, 1988.

18. "Statistical Analysis," by E. C. Bryant, McGraw Hill, 1960.

19. "Bayesian Reliability Analysis," by H. F. Martz and R. A. Waller, Wiley, 1982.

Appendix A.1

An Investigation of
Historical Failure Correlation
Using the Shuttle SSME Test and
Flight History as an Example

INTRODUCTION

Given the current state of the art, there exists a finite probability of a catastrophic engine failure
during a vehicle launch. A catastrophic engine failure is in general one in which the engine does not shut
down in a controlled manner and includes uncontrolled fire, explosion, breach of the pressure boundary,
shrapnel, or a combination of these. Given that an engine has failed catastrophically in flight, an
immediate concern is the effect on other critical hardware in the vicinity of the failed engine. For vehicles
configured with multiple engines in a cluster, the question becomes whether the catastrophic failure of one
engine will result in the catastrophic loss of the entire engine cluster.

This study addresses the correlation between a catastrophic failure of a Space Shuttle Main Engine
(SSME) and the propagation of that failure to include the entire SSME three engine cluster.

SSME FAILURE DATABASE

The SSME failure database considered in this study covers the engine test and flight history through
April 1988.
Significant SSME events which have resulted in damage or the loss of hardware are classified by
NASA as major incidents. Catastrophic engine events are a subset of the events classified as major
incidents. A catastrophic event is one in which its occurrence in flight results in significant uncontained
engine damage and, subsequently, in the loss of crew and vehicle. The consideration of major incidents is the
basis for developing the correlation of failure factor for the SSME.

Within the total SSME experience there have been 36 major incidents. Of these, 32 of the incidents
occurred during single engine ground tests, which must be judged as to whether they are applicable to this
study and whether the event would have resulted in damage to an engine cluster. Three of the major
incidents occurred during three engine cluster static firings; it should be noted that none of these events
resulted in damage to the other engines in the cluster. The remaining major incident occurred in flight
during the STS-11 mission and again did not result in damage to the cluster; however, this event occurred
late in the engine burn with no consequence to the engine involved, and the engine shut down at its
programmed time.
The failure events included in the study consider all SSME history and have not been filtered. Since the
main consideration is to determine the probability of cluster failure given that an engine failure has
occurred, all of the SSME experience is considered. Thus, engine configuration, test objectives, power
level, subsequent hardware redesigns, etc., are considered irrelevant. The object of this study is not to
determine whether the SSME will fail, but, given that one has failed, to determine the probability of an
entire cluster failure.
Not all of the major incidents are applicable to this study. Since the study involves failures which could
potentially affect or destroy other engines of the cluster, an appropriate screening criteria is required in
order to determine which of the major incidents in the database are applicable. The criteria used to develop
the correlation of failures must consider only those events which either directly damage the cluster, due to
shrapnel for example, or which indirectly result in cluster failure by disrupting flow to an adjacent engine.


The following criteria were used to determine which of the major incidents should be considered
applicable for this study:
Uncontrolled SSME Shutdown - The event occurred in such a way that the SSME controller was not in
control of the shutdown sequence. That is, the failure mode is one which can not be or is not redline
protected; or even though redline protection exists and may have been activated, the action of the controller
is insufficient or is not fast enough to maintain control of the event.
Uncontained Hardware Failure - The failure of an engine component results in uncontained damage or
damage propagation to other major components, such as in the case of an uncontrolled oxygen fire or in the
event of an explosion in which debris and shrapnel cause subsequent hardware failures. Of primary concern
to the surrounding engines of the cluster is breach of the engine pressure boundary and the release of hot
gas, fire or shrapnel.
Retirement of an Engine from Further Testing - Due to the limitations in some of the failure
descriptions additional data is required to make a judgement as to the applicability of an event. One readily
available piece of information is the subsequent disposition of an engine following an event. Retirement of
an engine from the test program is generally a good indication that the damage to the engine resulting from
the incident was severe enough to preclude use of the hardware in the future. It is recognized, however, that
this is not a definitive indicator of severe engine damage since engines are retired as a function of their firing
exposure as well as according to damage resulting from testing.
The above criteria are thus used to determine if a major incident should be considered an applicable
failure to consider in developing the correlation of failure factor. Once the event is judged applicable a final
criteria is used to determine if there is the potential for damage to the engine cluster.
Damage to Surrounding Hardware - Only in the flight configuration and in the three engine cluster
static firing is direct indication of damage to an adjacent engine available. Thus, for single engine test
firings an indirect indication of propagation of the failure to adjacent engines is damage to surrounding
hardware, particularly the test stand itself. The extent of damage to the test stand is generally available and
provides a good indication of the severity of the failure.
Due to the limited data available at the time of this study, for incidents in which the available failure
description is not sufficient to determine the extent of damage to the surrounding hardware one available
piece of data is the test stand down time following an event. Note that a long down time following an
event is not necessarily an indication of damage to the test stand, but may indicate a lack of available test
hardware, schedule considerations, ongoing failure investigation, or the installation of the next test engine.
However, a short down time following an event is a definite indication of little or no damage to the test
stand.
If essentially no damage to surrounding hardware resulted from the incident then propagation to the
cluster is not considered likely. If damage was done to the surrounding hardware or the test stand, the
severity of the event is considered and a judgement is made as to whether the event would propagate to the
cluster. Events in which the effect on adjacent engines is not clear are ranked as not propagating to the
cluster.

Application of this criteria thus provides a framework within which to judge the 36 major incidents as
to whether they are applicable to this study. Given that a failure is considered applicable for final
consideration, and based on the severity of the post event damage, it is ranked as to whether the event would
propagate to a cluster failure.


SSME FAILURE SUMMARY


There are a total of 36 major incidents in the SSME database which were evaluated for the purposes of
this study. Of these, 18 are considered to be applicable to this study in that they meet the criteria described
previously. They are indicated in the failure summaries by an asterisk (*) following the test number. Of
these, 3 major incidents are considered failures which would have propagated to adjacent hardware and would
result in failure of the entire cluster. These are indicated by an additional asterisk (**).
Table 1 summarizes all 36 of the major incidents considered in this study. In addition to providing
information about the event, such as test number, test date, engine number, and configuration, the table details
the results of implementing the criteria evaluation.
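The correlation of failure factor developed from these counts can be computed directly, as in the short Python sketch below; the counts are those given in the summary above, and the script itself is illustrative only.

applicable_incidents = 18    # major incidents meeting the applicability criteria
propagating_incidents = 3    # applicable incidents judged to propagate to the cluster

correlation_factor = propagating_incidents / applicable_incidents
print(f"Correlation of failure factor = {correlation_factor:.2f}")   # ~0.17, i.e., about 17%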
SSME MAJOR INCIDENT DESCRIPTIONS
The SSME major incidents are discussed chronologically in the following paragraphs. The event is
described and the rationale for its use in developing the correlation of failure factor is discussed.
Test 901-110" - During test 901-110 (11CR A005353) rubbing in the HPOTP of engine 0003 caused
fa&i!ure of the primary lox seal and an uncontained engine fire. The redline cut was set by a HPOTP
overspeed. This failure resulted in an increase of the intermediate seal purge pressure, revised redlines, and a
design change from a lift-off seal to a labyrinth seal design.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated to an engine cluster failure.
Test 901-133 - Test 901-133 (UCR A005072) experienced a burn-through of the FPB wall during

testing of engine 0004. The test was cut by an observer. This failure resulted in uncontrolled engine
shutdown and damage to the engine. The engine survived this event and was used for later testing. Since
the engine was not severely damaged and there is no indication of test stand damage (operational again in 6
days) this failure is not considered applicable to the study.
Test 901-136* - A failure of engine 0004 HPOTP turbine end bearings occurred during test 901-136
(UCR A005350) which resulted in an uncontained engine fire. The test was cut by an observer. The failure
resulted in design changes to heavy duty 209 series bearings, improved bearing mounts and modifications to
the coolant circuit orifice.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated to an engine cluster failure.
Test 902-095 - During test 902-095 of engine 0002 (UCR A008624) a leading edge airfoil crack
resulted in blade failure, however, the engine damage was contained. The redline for the test cut was from
the HPOTP radial accelerometer. Design and process changes have been implemented to increase blade life.
This failure resulted in uncontrolled engine shutdown, however, damage to the engine was contained.
The engine survived this event and was used for later testing. There is no indication of damage to the test
stand (operational within 11 days) in the available documentation. This failure is not considered applicable
to this study.
Test 901-147* - HPFTP turbine blade failure of engine 0103 during test 901-147 (UCR A005094)
resulted in a rapid power loss, reduced fuel flow and LOX rich operation of the engine. The test was cut by
the HPOTP radial accelerometer redline. As a result, HPFTP turbine blade and damper redesigns were
initiated.

This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand (operational within 11 days) in the available
documentation this failure would not have propagated to an engine cluster failure.
Test 901-173* - Main injector lox post failure, cut off by HPFTP turbine discharge temperature.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated to an engine cluster failure.
Test 901-183 - Main injector lox post failure occurred during test 901-183 (UCR A018710) of engine
0002. Cutoff was by the HPFTP turbine radial accelerometer. The failure resulted in the incorporation of
lox post flow shields.
This failure resulted in uncontrolled engine shutdown, however, damage to the engine was contained.
The engine survived this event and was used for later testing. There is no indication of damage to the test
stand in the available documentation. This failure is not considered applicable to this study.
Test 902-112 - During test 902-112 (UCR A019208) of engine 0101 on June 10, 1978 a blockage of
the fuel supply resulted in a HPFTP turbine overspeed. The redline cut for the test was the HPFTP turbine
speed.
This failure resulted in uncontrolled engine shutdown, however, damage to the engine was contained.
The engine survived this event and was used for later testing. There is no indication of damage to the test
stand in the available documentation. This failure is not considered applicable to this study.
Test 902-120* - During test 902-120 (UCR A005745) of engine 0101 structural failure and rubbing
of a capacitor position instrumentation sensor in the HPOTP resulted in engine fire and uncontained
engine damage. The test was cut by the PBP axial accelerometer redline. The capacitance device is no
longer used.
This failure was uncontrolled resulting in destruction of the engine and damage to the test stand.
Although the capacitance device is no longer used it does demonstrate the result of a HPOTP failure,
subsequent fire and shrapnel. This failure is considered applicable to the study and although some damage
was noted to the test stand it would not have propagated to a cluster failure.
Test 902-132 - During test 902-132 (UCR A005780) of engine 0006, a failure occurred as the result of
the MOV being clocked wrong. The test was cut by the low chamber pressure redline. The failure resulted
in a guideline for the first test of a new engine to be only 1.5 seconds.
This failure resulted in uncontrolled engine shutdown, however, damage to the engine was contained.
The engine survived this event and was used for later testing. There is no indication of damage to the test
stand in the available documentation. This failure is not considered applicable to this study.
Test 901-222 - During test 901-222 (UCR A017972) of engine 0007 a failure occurred as a result of
undetected internal HEX damage caused during arc welding which resulted in an engine fire. HEX coil
leakage resulted in an uncontained engine fire and severe damage. The test was cut by the HEX discharge
pressure redline. The leak was caused by wall thinning of the HEX coil which occurred during welding and
reaming operations. The failure resulted in increased HEX proof test requirements.
This failure resulted in uncontrolled engine shutdown and uncontained damage to the engine. However,
the engine survived this event and was used for later testing. There is no indication of significant damage to
the engine in the available documentation so that this failure is not considered applicable to this study.

Test 901-225* - During test 901-225 (UCR A01816) of engine 2001 flow induced fretting of the
MOV sleeve resulted in autoignition, fire and explosion. The test was cut by the HPFTP turbine discharge
temperature redline. The incident resulted in several design modifications (ECPs 248, 258, 271) including
a redesigned MOV inlet sleeve/seal and the incorporation of a vibration redline.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated "o an engine cluster failure.
Test 750-041* - During testing of engine 0201 on May 14, 1978, the steerhorn tube fractured due to
high structural loading (UCR A006466). The test was cut by the HPFTP turbine discharge temperature
redline. The failure resulted from structural fatigue associated with high strain accelerations attributed to
exhaust gas flow shock phenomena during start and cutoff transients causing failure of the flight nozzle
steerhorn fuel distribution manifold. The failure resulted in fuel starvation and loss of mixture ratio control.
Engine damage as a result of the high temperature was extensive and included the HPFTP, HPOTP, nozzle,
main injector and the high pressure fuel distribution manifold steerhorn damage. The failure resulted in
redesign of the feedline assembly and nickel plating of steerhorn tees.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated to an engine cluster failure.
Static Firing 6-01 - During Static Firing 6-01 (UCR A009437) high cycle fatigue resulted in the
failure of engine 2002 MFV housing, fuel leakage, and fire. The test was cut by the HPFTP turbine
discharge temperature redline. The MFV housing crack extended from the cap flange to the outlet flange.
The failure resulted in housing design modifications (ECR 09738). Rework housing cam bearing cutout to
reduce stress concentration.
This failure resulted in uncontrolled engine shutdown and uncontained damage to the engine during a
three engine cluster firing. The failure resulted in fuel leakage and fire during the ground test. In flight, the
chance of fire is a function of the available oxygen which is altitude dependent. There is no indication of
significant damage to the other engines or to the test stand. The engine survived this event and was used for
later testing. Damage to the engine was not significant and this event is not considered applicable to the
study.
Static Firing 6-03* - Testing of engine 0006 (engine position 3) during a cluster firing on November
4, 1979, resulted in a nozzle steerhorn rupture (UCR A010997). The test was cut by the HPOTP
intermediate seal purge pressure redline. The failure was traced to use of an incorrect weld filler wire during
fabrication. The failure resulted in the implementation of stringent weld wire audits, the addition of nickel
plating to tee weld joints, and a redesign to incorporate a steam loop.
This failure resulted in uncontrolled engine shutdown and uncontained damage to the engine during a
three engine cluster firing. The engine was retired following this event. Thus, this failure is considered
applicable to the study. Since there is no indication of significant damage to the adjacent engines or to the
test stand in the available documentation this failure did not propagate to an engine cluster failure.
Test 902-198* - Main injector lox post failure resulted during test 902-198 of engine 2004. Cutoff
was by the HPOTP turbine discharge temperature redline. The failure resulted in a change from the existing
injector to Haynes 188 lox post tips in rows 10 through 13. New injectors have all Haynes 188 lox
posts.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated to an engine cluster failure.

Test 901-284* - During test 901-284 (UCR A015786) of engine 0010 a malfunctioning MCC
chamber pressure lee jet caused the controller to lower the HPOTP output and resulted in HPOTP fire and
external damage. The test was cut by HPOTP accelerometer redlines. The failure resulted in installation of
a positive retainer in the Pc port flange to prevent the lee jet from backing out.
This failure was uncontrolled resulting in destruction of the engine and damage to the test stand.
Although redesigns have been implemented this failure does demonstrate the result of a HPOTP failure,
subsequent fire and shrapnel. This failure is considered applicable to the study and although some damage
to the test stand was noted, it would not have propagated to a cluster failure.
Static Firing 10-01 - During Static Firing 10-01 (UCR A015391) of engine 0006 a burn-through of
the FPB liner and housing occurred. The test was cut by an observer. The failure resulted in the addition of
a molybdenum insulator and new divergent ring liner.
This failure resulted in uncontrolled engine shutdown and uncontained damage to the engine during a
three engine cluster firing. The failure resulted in external leakage of hot gas from the FPB burn-through.
There is no indication of significant damage to the other engines or to the test stand. The engine survived
this event and was used for later testing. Damage to the engine was not significant and this event is not
considered applicable to the study.
Test 901-307* - During test 901-307 of engine 0009 a failure occurred in which the FPB injector
experienced a burn-through. This failure resulted in uncontrolled engine shutdown and uncontained engine
damage. The engine was retired following this event. Thus, this failure is considered applicable to the
study. Since there is no indication of significant damage to the test stand in the available documentation
this failure would not have propagated to an engine cluster failure.
Test 750-140 - Main injector lox post failure resulted during test 750-140 of engine 0110. This failure
resulted in a controlled engine shutdown and contained engine damage. The engine survived this event and
was used for later testing. Thus, this failure is not considered applicable to the study since it resulted in a
controlled engine shutdown and minor damage.
Test 901-331* - During testing of engine 2108 on July 15, 1981, injector post and engine damage
was caused by material failure of the lox posts (UCR A013786). The test was cut by the HPOTP turbine
discharge temperature redline. The failure resulted in the application of new materials for the lox posts and
the addition of flow shields.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated to an engine cluster failure.
Test 750-148 - Main injector lox post failure resulted during test 750-148 (UCR A016031) of engine
0110. Cutoff was by the HPOTP turbine discharge temperature redline. The failure resulted in the
implementation of all Haynes 188 lox posts and extended flow shields.
This failure resulted in uncontrolled engine shutdown and uncontained damage to the engine. However,
the engine survived this event and was used for later testing. There is no indication of significant damage to
the engine in the available documentation so that this failure is not considered applicable to this study.
Test 902-249* - The HPFTP inlet volute of engine 0204 failed during test 902-249 (UCR A018288)
as a result of non-standard fuel preburner injector modifications which produced a hot FPB core. A group of
plugged FPB LOX posts created a hot spot and delamination of the Ni/Rene first stage blade tip seal,
resulting in blade failure, shrapnel and inlet volute rupture. The test was cut by the HPFTP radial
accelerometer redline. The resulting fire destroyed both turbines, the powerhead, MCC and nozzle. This
failure resulted in a design change to all Rene blade tip seals and preburner modification restrictions to
preclude a "hot core."

This failure was uncontrolled resulting in destruction of the engine and damage to the test stand.
Although fixes have been implemented this failure does demonstrate the result of turbine blade failure and
subsequent fire and shrapnel. This failure is considered applicable to the study and although some damage
to the test stand was noted, it would not have propagated to a cluster failure.
Test 901-340 - A HPFTP turbine discharge sheet metal failure of weld 56 during test 901-340 of
engine 0107 caused turbine flow blockage and resulted in contained turbopump damage. The test was cut by
exceeding the HPFTP turbine discharge temperature redline. The failure resulted in weld prep redesign to
achieve 100% penetration and the inclusion of x-ray inspection where accessible.
This failure resulted in uncontrolled engine shutdown and uncontained damage to the engine. However,
the engine survived this event and was used for later testing. There is no indication of significant damage to
the engine in the available documentation so that this failure is not considered applicable to this study.

Test 750-160 - A blockage of the fuel supply as a result of ice formation occurred during test
750-160 (UCR A016145) of engine 0110 which burned both turbines, HGM, main injector, MCC and nozzle.
The test was cut by the HPFTP turbine discharge temperature redline. The failure resulted in revised engine
drying procedures to remove all water following EDM operations.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated to an engine cluster failure.

Test 901-304 - A newly redesigned Kaiser cap nut allowed hot gas leakage into the coolant circuit
during test 901-304 of engine 2013, which resulted in bearing failure, uncontained engine damage and
complete destruction of the engine. The failure produced significant shrapnel and test stand damage with
the engine ultimately separating from the test stand. A redline cut was set by the PBP radial
accelerometer. The redesigned nut was tested no further and all engines continue to use the original design.
This failure was uncontrolled resulting in destruction of the engine and damage to the test stand.
Although this hardware configuration is no longer in use it does demonstrate the result of a loss or
disruption of coolant flow to the turbomachinery. This failure is considered applicable to the study and
would have propagated to a cluster failure.
Test 750-165 - During test 750-165 of engine 0107 the OPOV experienced seal erosion. The test
continued for the programmed duration. This failure resulted in a controlled engine shutdown and contained
engine damage. The engine survived this event and was used for later testing. Thus, this failure is not
considered applicable to the study since it resulted in a controlled engine shutdown and minor damage.
During a subsequent test of engine 0107, ASI blowback caused a test cut-off; the OPOV ball seal was
cracked and eroded. The cut occurred through the programmed engine shutdown sequence. This failure
resulted in a controlled engine shutdown and contained engine damage. The engine survived this event and
was used for later testing. Thus, this failure is not considered applicable to the study since it resulted in a
controlled engine shutdown and minor damage.

Test 750-175 - The engine was modified for the installation of an ultrasonic flow meter. During test
750-175 a failure resulted in HPOTP overspeed to 44,000 rpm, causing disc rupture, pump fire, shrapnel and
extensive engine damage. The test was cut by the PBP accelerometer redline. The failure occurred at the
brazed joint between the prototype ultrasonic flowmeter and the high pressure oxidizer turbopump discharge
duct and resulted in destruction of the HPO duct, the HPOTP and the controller. Further use of the
ultrasonic flow meter on the HPO duct was eliminated.

This failure was uncontrolled resulting in destruction of the engine and damage to the test stand.
Although this hardware configuration is no longer in use it does demonstrate the result of a loss of oxidizer
flow and subsequent HPOTP turbine overspeed, lox fire and shrapnel. This failure is considered applicable
to the study and would have propagated to a cluster failure.
STS-11 - One major incident actually occurred in flight during STS-11 and was obviously not
catastrophic. During the flight the ASI chamber of engine 2015 experienced erosion due to a drill chip
lodged in an ASI orifice. Engine cut-off was by programmed duration. The failure resulted in the addition
of an ASI fuel filter to the supply line.
The engine burn continued for the programmed duration. This failure resulted in a controlled engine
shutdown and contained engine damage. Although there was damage to the engine itself, there was no
damage to the adjacent engines. The engine survived this event and was used for later testing. Thus, this
failure is not considered applicable to the study since it resulted in a controlled engine shutdown and minor
damage.
Test 901-436* - A hydrogen leak during test 901-436 (UCR A013338) of engine 0108
overpressurized the HPFTP coolant cavity and resulted in a coolant liner failure and major engine damage,
destroying both turbines, the powerhead, MCC and nozzle. A redline cut was issued due to high HPFTP
turbine discharge temperature. Design changes were incorporated to decrease hot gas leakage into the
coolant circuit, a coolant liner pressure redline was implemented and inspection requirements were increased
on the coolant liner close-out weld.
This failure resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was
retired following this event. Thus, this failure is considered applicable to the study. Since there is no
indication of significant damage to the test stand in the available documentation this failure would not have
propagated to an engine cluster failure.
Test 901-468 - During test 901-468 (UCR A014585) of engine 0207 a stress concentration at the

welded boss caused the FPB manifold to crack resulting in fire and major engine damage. This failure
resulted in uncontrolled engine shutdown and uncontained engine damage. The engine was retired following
this event. Thus, this failure is considered applicable to the study. Since there is no indication of
significant damage to the test stand in the available documentation this failure would not have propagated to
an engine cluster failure.
Test 750-259** - A failure of the MCC outlet manifold weld occurred during test 750-259 (UCR
A015713) of engine 2308 and resulted in complete engine destruction. The failure resulted in shrapnel and
test stand damage with the engine ultimately separating from the test stand. The test was cut by the
HPFTP accelerometer and turbine discharge temperature redlines. Failure investigation determined that the
MCC outlet assembly had ruptured due to fatigue or undetected flaws. The failure resulted in improved
inspection of the assembly, redesign of the outlet neck and splitter and implementation of life limitations
on other MCC's.
This failure resulted in uncontrolled engine shutdown, destruction of the engine and significant damage
to the test stand. This failure is considered applicable to the study and would have propagated to an engine
cluster failure.
Test 750-285 - A Class I leak was experienced during test 750-285 at the number 8 feedline. Engine
0210 (May 21, 1987) experienced a feedline crack at the saddle bracket stop weld. The test was cut by a
facility ambient air thermocouple. The failure resulted in improved feedline/saddle bracket and weld
interference inspections.

This failure resulted in uncontrolled engine shutdown and uncontained damage to the engine. However,
the engine survived this event and was used for later testing. There is no indication of significant damage to
the engine in the available documentation so that this failure is not considered applicable to this study.
Test 902-427 - During testing of engine 2106 on June 26, 1987 at the NSTL A-2 test stand the low
pressure fuel pump discharge duct experienced a corrosion induced leak and subsequent external hydrogen
fire. The test was cut by an ambient powerhead temperature redline. To preclude the possibility of
corrosion induced failures, flight engines will use low pressure fuel turbopump discharge ducts with low
calendar life and/or hotfire time (DAR 2074). Subsequent flight engines will use corrosion protected low
pressure fuel turbopump discharge ducts (ECP 977).
This failure resulted in uncontrolled engine shutdown; however, the damage to the engine was
contained. The engine survived this event and was used for later testing. There is no indication of
significant damage to the engine in the available documentation so that this failure is not considered
applicable to this study.
Test 902-428 - During test 902-428 of engine 2106 a crack in the OPB interpropellant plate resulted in
the formation and build up of ice, blocking the fuel supply which altered the OPB exhaust flow distribution
and burned through the liner causing faceplate erosion and HPOTP turbine end damage. The test was cut by
a facility redline. The failure was caused by cracks in the interpropellant plate-to-element braze joints. The
cracks allowed propellant mixing and caused ice contamination to form in the fuel manifold. The failure was
determined to be the result of poor braze joints made during fabrication. Flight engines are cleared by a
review of the manufacturing braze joint records.
This failure resulted in uncontrolled engine shutdown and uncontained damage to the engine. However,
the engine survived this event and was used for later testing. There is no indication of significant damage to
the engine in the available documentation so that this failure is not considered applicable to this study.
ANALYSIS
The results of applying the criteria to the SSME major incident database yield a total of 18
applicable failures, of which 3 are considered to propagate to a cluster failure.
The mean is then computed by
X = 3/18 = 0.167
Due to the small sample size the F distribution is assumed in order to develop the confidence interval
for this case. For a 95% confidence interval the results of applying the F distribution are
0.036 < X < 0.414
Thus, with a 95% confidence interval the probability that a failure will propagate to the adjacent
engines in the cluster is between 4% and 41%, given that an uncontrolled engine failure occurs.
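The interval quoted above can be reproduced with the exact (Clopper-Pearson) binomial bounds written in their F-distribution form. The short Python sketch below is illustrative only and assumes the SciPy library is available; the function and variable names are not part of the study:

from scipy import stats

def propagation_interval(propagated, applicable, alpha=0.05):
    # exact binomial (Clopper-Pearson) bounds written in the F-distribution form
    x, n = propagated, applicable
    f_low = stats.f.ppf(1 - alpha / 2, 2 * (n - x + 1), 2 * x)
    lower = x / (x + (n - x + 1) * f_low) if x > 0 else 0.0
    f_high = stats.f.ppf(1 - alpha / 2, 2 * (x + 1), 2 * (n - x))
    upper = (x + 1) * f_high / ((n - x) + (x + 1) * f_high) if x < n else 1.0
    return x / n, lower, upper

mean, low, high = propagation_interval(3, 18)
print(mean, low, high)   # approximately 0.167, 0.036, 0.414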
CONCLUSIONS
In the development of future launch vehicles the potential benefit of engine out capabilities must be
weighed against the risks that if an engine fails in an uncontrolled manner it will result in the loss of the
entire engine cluster. This study evaluated the SSME which is flown in a three engine cluster. No
uncontrolled SSME failures have occurred in flight. Only a limited amount of ground testing has actually
been done in a three engine cluster and although failures have occurred none have propagated to involve the
entire cluster.
However, the test data evaluated here indicates there is a reasonable probability, approximately 17%,
that an uncontrolled SSME failure will propagate to the adjacent engines given that an uncontrolled failure
occurs. The confidence interval is between 4% and 41% that a failure will propagate to the cluster at a
95% confidence level.


Appendix A.2

A Quick Calculation of
the Effect of
Failure Correlation Factor
VS.
Engine Out Capability


ENGINE OUT CAPABILITY

A preliminary trade off study of "clustering" liquid rocket engines vs a single large engine, with
reliability as the driver as well as engine out capability, follows. Weight and cost is also considered but
not calculated.

Let
    R1 = rocket engine reliability excluding plumbing to tanks
    R2 = reliability of plumbing.
Assume a single engine plumbing reliability of R2 = 0.999 and that increases in numbers of rockets
produce directly proportional increases in plumbing complexity.
Since reliability decreases with increasing complexity, then if n = the total number of rocket engines
(n = 1 to 16),

    R2 = exp(n ln(0.999)) = 0.999^n

Since smaller "state of the art" engines are more mature thus possibly
more reliable then R1 increases as n (the no. of engines) increases.
This is because the more engines there are, the smaller they are.
Tabulation: R1, R2 = 0.999^n, and total reliability RT for n = 1 to 16 engines.

Consider now engine out capability. Let the number of engines vary from 4 to 16 and let

    m  = total no. of engines
    k  = the maximum engine out capability
    RS = reliability with engine out capability
    R2 = exp(m ln(0.999)), the plumbing reliability for m engines,

with the minimum reliability of 8 engines and NO engine out capability as the point of comparison.

Tabulation: RS for R1 values from .985 to .997.

Now if m - k engines are required for success, the reliability with engine out capability k is

    RE(k) = sum (j = 0 to k) [ m! / ((m - j)! j!) ] R1^(m - j) (1 - R1)^j

and the total reliability with engine out capability is

    RS = RE(k) R2,   with R2 = exp(m ln(0.999)).

Let REC = reliability with correlated failures, where the correlation factor gives the fraction of engine
failures that are correlated across the cluster. For four engines the calculation gave REC = 0.9840997.

Thus 4 engines are no better than 1 if the correlation factor is between 20 and 30%, as shown by the
tabulated values of RS and RT for 3 and 4 engines (RT falling from about 0.991 to about 0.961 as the
correlation factor increases).

Not only are four engines no better than one under the above conditions,
three or two engines are also no better.
In fact the correlation factor
drives the results and begins to do so at about 15%.
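The exact correlation model used in the worksheet above is not fully legible, so the sketch below illustrates the same qualitative point with a simple beta-factor assumption: a fraction beta of each engine's failure probability is treated as common cause and defeats the engine-out capability, while the remainder is independent. The function names and the beta-factor formulation are assumptions for illustration, not the original calculation:

from math import comb, exp, log

def plumbing_reliability(n_engines, r2_single=0.999):
    # plumbing complexity assumed directly proportional to the number of engines
    return exp(n_engines * log(r2_single))

def at_most_k_failures(m, k, r_engine):
    # probability that no more than k of m independent engines fail
    q = 1 - r_engine
    return sum(comb(m, j) * r_engine ** (m - j) * q ** j for j in range(k + 1))

def cluster_reliability(m, k, r1, beta=0.0):
    # beta-factor sketch: the independent portion of the failure probability
    # benefits from engine-out capability, the common-cause portion does not
    q = 1 - r1
    independent = at_most_k_failures(m, k, 1 - (1 - beta) * q)
    common_cause = 1 - beta * q
    return independent * common_cause * plumbing_reliability(m)

for beta in (0.0, 0.15, 0.30):
    print(beta, cluster_reliability(m=4, k=1, r1=0.99, beta=beta))

Running the loop for several values of beta shows how the correlated fraction erodes the redundancy benefit of the cluster under these assumptions.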
Time did not allow a thorough study of the effects of cost or weight.
In fact the entire subject is complex enough to warrant a separate study.
One could easily envision that an increase in the number of engines,
plumbing and detection apparatus would increase weight, thus reduce payload,
and might quickly render a clustered system uneconomical.
The purpose of this brief set of calculations is not to draw conclusions
but to show that correlation factors of about 15% are definitely a "red flag" that
warrants further study. It appears that liquid engine manufacturers are
overly optimistic about correlation factors.


Appendix A.3
Reliability Analysis
of
Current US Launch Vehicles
Yu Shen
SAIC, Division 265, New York

December 21, 1988


TABLE OF CONTENTS

SUMMARY ....................................................................... 137
OBJECTIVE and BACKGROUND ...................................................... 137
1.0 EXISTING METHODOLOGIES .................................................... 145
    1.1 The Binomial Method ................................................... 145
    1.2 Polynomial Curve Fitting .............................................. 145
    1.3 Bayesian Analysis ..................................................... 146
    1.4 D. Lloyd's Method ..................................................... 146
2.0 A NEW STATISTICAL MODEL ................................................... 147
    2.1 Estimation of Launch Vehicle Reliability .............................. 147
    2.2 Estimation of Stage Reliability ....................................... 149
    2.3 Estimation of System Reliability ...................................... 149
    2.4 Estimation of Engine (or Motor) Reliability ........................... 150
3.0 DATA COLLECTION ........................................................... 151
4.0 ALGORITHM ................................................................. 151
5.0 EXAMPLE ................................................................... 152
6.0 RESULTS ................................................................... 157
7.0 CONCLUSIONS ............................................................... 159
BIBLIOGRAPHY .................................................................. 160


SUMMARY
This report contains reliability data for the following families of United States launch vehicles: Thor/
Delta, Titan, Atlas, the Saturn "Family", the Scout "Family", and the Space Shuttle.
The reliability data was obtained through the statistical models and procedures described in Section
2.0 as applied to the "Launch Vehicle Failure History Data Base" compiled by C.T. Clague of the Aerospace
Corporation and other data sources given in the bibliography. The results of the analysis are summarized
in the following table.
The statistical model and algorithm contained in this report is unique and is the only technique except
for D. Lloyd's model that has been developed expressly for launch vehicles. It provides conservative
reliability estimates during the "early launch" period of development. It also converges to the same value
obtained by D. Lloyd when a sufficiently large number of launches or tests have been attained. Unlike
D. Lloyd's method, it does not require judgement as to whether or not a failure has been corrected nor
does it require that the component failure mode be known.

OBJECTIVE AND BACKGROUND


The objective of this report is to produce a statistical model and algorithm which can estimate
launch vehicle reliability based solely on attribute data that presently exists. In addition, the methodology
is to serve as a means of estimating stage and system reliability. A secondary objective is to use the model
as a means of predicting the reliability of new systems.
Presently existing methodologies do not meet the objectives cited above.
By way of background, the first attempts to measure launch system reliability were made in order
to ascertain what level of reliability had been attained at a given point, especially prior to customer
"buy off" or acceptance.
In the 1960's the most widely accepted approach was to assume that each test or launch was
independent of all others. Using this assumption, one could easily calculate the reliability at any given
level of confidence using the Binomial distribution. It became obvious, however, that reliability and
confidence levels above 90% would require an inordinately large number of tests. In the early 70's
Bayesian analysis was introduced. However, due to the subjective nature of prior distributions which rely
on expert judgement rather than direct results from experiments and tests, the Bayesian approach did
not receive wide acceptance in the aerospace industry.
In recent years D. Lloyd of TRW began developing a methodology that does require judgement, but the
judgement is based solely on evidence that the propensity for certain failure modes to occur has been reduced
by redesign and retest.
Dr. D. Lloyd's method appears to be the most recent attempt made to estimate reliability in a
developmental environment which includes reliability growth.
The methodology developed for this study is discussed in Section 2.0 of this report and is an approach
which has some attractive features not found in other methods. This methodology was applied to the
historical data obtained during the course of the study to produce the tables of reliability data which follow.
A summary of all the results is given in Table A.2 and results for individual launch vehicle families are
indicated in Tables A.2a through A.2f.


TABLE A.2: SUMMARY OF RELIABILITY RESULTS FOR THE US LAUNCH VEHICLE FAMILIES
(see Tables A.2a through A.2f)

TABLE A.2a: RELIABILITY OF THE THOR/DELTA FAMILY

    Vehicle Name              Thor       Delta      Combine
    Data Collection Period    57-83      60-87      57-87
    Success Ratio: Mean       0.8982     0.9402     0.9192
                   5%         0.8750     0.9110     0.8789
                   95%        0.9181     0.9615     0.9551

    (The full table also gives stage reliabilities for Stages 0 through 4 and system reliabilities for
    propulsion, guidance, flight control, structure, electrical, separation, and other.)

TABLE A.2b: RELIABILITY OF THE TITAN FAMILY

    Vehicle Name              Titan I    Titan II   Titan III  Titan 34D  Combine
    Data Collection Period    59-65      62-76      64-87      82-87      59-87
    Success Ratio: Mean       0.6427     0.8864     0.9406     0.7355     0.8013
                   5%         0.5585     0.8323     0.9055     0.4978     0.6075
                   95%        0.7202     0.9272     0.9651     0.8990     0.9546

    (The full table also gives stage and system reliabilities.)

TABLE A.2c: RELIABILITY OF THE ATLAS FAMILY

    Vehicles and data collection periods: Atlas A (57-58), Atlas B (58-59), Atlas C (58-59),
    Atlas D (59-67), Atlas E (60-88), Atlas F (61-81), Atlas SLV (67-83), Atlas G (84-87, no failures),
    Atlas H (83-87, no failures), Atlas/Centaur (62-87), and Combine (57-88). For each, the success ratio
    mean with 5% and 95% bounds and the stage and system reliabilities are tabulated; for the Atlas/Centaur
    the mean is 0.9069 with bounds of 0.8450 and 0.9489 (see the example in Section 5.0).

TABLE A.2d: RELIABILITY OF THE SATURN FAMILY

    Vehicle Name              Jupiter    Juno       Saturn I     Saturn IB    Saturn V   Combine
    Data Collection Period    58-58      58-61      62-65        66-75        67-73      58-75
    Success Ratio: Mean       0.3611     0.4300     no failure   no failure   0.9822     0.7547
                   5%         0.1026     0.2135     0.7943       0.7743       0.8180     0.2652
                   95%        0.6879     0.6743     -            -            0.9997     0.9935

    (The full table also gives stage and system reliabilities.)

TABLE A.2e: RELIABILITY OF THE SCOUT FAMILY

    Vehicle Name              Vanguard   Scout      Combine
    Data Collection Period    57-59      60-88      57-88
    Success Ratio: Mean       0.3388     0.9420     0.6404
                   5%         0.1555     0.9023     0.1821
                   95%        0.5723     0.9683     0.9744

    (The full table also gives stage and system reliabilities.)

TABLE A.2f: RELIABILITY OF THE SPACE SHUTTLE

    Vehicle Name              Space Shuttle (STS)
    Data Collection Period    81-88
    Success Ratio: Mean       0.97
                   5%         0.8147
                   95%        0.9806
    Stage 1/2                 0.9275
    Propulsion                0.9275

1.0 EXISTING METHODOLOGIES


For the purposes of this report, the following existing methodologies will be briefly discussed:

Binomial
Polynomial Curve Fitting
Bayesian
D. Lloyd's Method

1.1 The Binomial Method


The "traditional," or classical, approach to reliability demonstration in a go/no-go type environment
is the Binomial distribution shown below. In addition to the obvious constraints of the assumptions listed
below, it is interesting to note, for example, that it would require 45 launches with no failures to
demonstrate 0.95 reliability at 90% confidence. Since trials are assumed to be independent, the growth
effect (a type of dependency) cannot be evaluated.
Stated mathematically the Binomial Distribution is as follows:

    sum (x = S to N) [ N! / (x! (N - x)!) ] R^x (1 - R)^(N - x) = 1 - C

where;
S = number of successful start tests
N = number of trials
R = reliability
C = confidence level
where it is assumed that
- Trials or tests are independent
- Each trial results in success or failure
- The reliability (probability of success) of each system is the same on each trial
- The number of tests is fixed in advance of the demonstration test
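As a check on the numbers quoted above, the zero-failure case reduces to R^N = 1 - C, which gives the 45-launch figure directly. The minimal Python sketch below is illustrative only; the function names are chosen here and are not part of the report:

from math import ceil, comb, log

def zero_failure_launches(reliability, confidence):
    # zero-failure binomial demonstration: R**N <= 1 - C
    return ceil(log(1 - confidence) / log(reliability))

def demonstrated_confidence(successes, trials, reliability):
    # C = 1 - sum over x = S..N of C(N, x) R**x (1 - R)**(N - x)
    tail = sum(comb(trials, x) * reliability ** x * (1 - reliability) ** (trials - x)
               for x in range(successes, trials + 1))
    return 1 - tail

print(zero_failure_launches(0.95, 0.90))      # 45, as stated in the text
print(demonstrated_confidence(45, 45, 0.95))  # about 0.90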
1.2 Polynomial Curve Fitting
Polynomial trends are of the form

    Y = A + BX + CX^2 + DX^3 + ... + JX^k

The straight line is a special case having only the first two terms on the right hand side of the equation.
Generally speaking, it is unwise to fit a high-degree polynomial to the data because of the possibility of
mixing trend and cycle. The polynomial can be forced to fit data quite closely by just adding enough terms.
This, however, does not contribute any information about trend. In fact, 1 degree of freedom for error
is lost for every parameter that is estimated from data. Thus, if there are n observations and n degrees
of freedom are lost in fitting a polynomial of degree n-1, there are 0 degrees of freedom left for
error!


1.3 Bayesian Analysis


For the purposes of this report, Bayesian analysis can be divided into two categories:
1. Reduction of the number of tests or flights to demonstrate that a given level of reliability has been
achieved.
2. The Beta-Binomial Model
If it is desired to reduce the numbers of tests or flights required to demonstrate a given level of
reliability, then Bayesian analysis can be useful. If the following equation, taken from reference 1, is
solved for n at R=0.95, r=0, P=0.50 and C=90% confidence is desired, then it can be concluded that only
14 launches would be required.
    C = [confidence expression of reference 1: an integral of the posterior density of the reliability p,
         from p = R to 1, weighted by the prior probability P, for n launches with r failures]
where;
n = number of launches
r = number of failures
R = reliability
C = confidence level
P = Bayesian prior
The Beta-Binomial Bayesian model is used for Bayesian estimation when information is available about
components of similar design and application. In this model, several similar components are treated as a
single class. The probability p of each component in the class is assumed to be constant, but will have
different values from component to component. If the Binomial distribution is used to obtain the probability
of K failures in n trials, then the conjugate distribution g(p) for the class is the Beta distribution. This
model weights the reliability growth effect and can be applied to forecast the reliabilities of launch
vehicles. The detailed theoretica' analysis can be found in reference 2. The disadvantage of this model is
that it is very difficult to separate the total sample data into several similar components unless there
is detailed engineering analysis concerning each failure mode during the different periods of launch vehicle
development history.
Bayesian approaches are highly sensitive to the prior distributions used. If no meaningful estimate
of the prior probability of success can be made, none of the above conclusions apply. Particularly, one
must be wary of consistent optimism or pessimism when records of success do not support the prior
probabilities.
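A minimal sketch of the Beta-Binomial update, assuming the SciPy library is available; the prior parameters below are purely illustrative and are exactly the kind of subjective input the text cautions about:

from scipy import stats

def beta_binomial_posterior(a_prior, b_prior, successes, failures):
    # conjugate update: Beta(a, b) prior combined with binomial launch outcomes
    a, b = a_prior + successes, b_prior + failures
    mean = a / (a + b)
    r_05, r_95 = stats.beta.ppf([0.05, 0.95], a, b)
    return mean, r_05, r_95

# prior equivalent to 9 successes and 1 failure on "similar" hardware (assumed)
print(beta_binomial_posterior(9, 1, successes=20, failures=1))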
1.4 D. Lloyd's Method
In Lloyd's model, the rationale is that when engineering corrective action for a failure is implemented,
the probability of recurrence of that failure is reduced; therefore, such failures should not be carried as
full failures in subsequent reliability estimates. The failure value for each failure mode is assumed to be

    f = 1 - (1 - y)^(1/n)
where y is the confidence level and n is the number of successful tests after corrective action.
Based on a detailed engineering analysis for each failure mode, the result of each failure for each
failure mode can be obtained by solving the above equation. The final result of the reliability estimation
is R = 1 - Σf/N where Σf is the cumulative failure number of all failure modes and N = the test number.
This model weights the growth effect and can be extended to forecast the reliability, the failure mode
and the launch number at which the failure mode occurred as well as the launch number at which it was
corrected. The confidence level y is directly related to the final results and requires subjective
judgement as to what value is to be used.
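A short sketch of the failure-discounting step as reconstructed above; the function names are illustrative and the y and n values used in the example call are arbitrary:

def discounted_failure_value(confidence, successes_after_fix):
    # f = 1 - (1 - y)**(1/n): residual failure value after corrective action
    # followed by n successful tests at confidence level y
    if successes_after_fix <= 0:
        return 1.0
    return 1 - (1 - confidence) ** (1.0 / successes_after_fix)

def lloyd_reliability(failure_values, n_tests):
    # R = 1 - (sum of discounted failure values) / N
    return 1 - sum(failure_values) / n_tests

values = [discounted_failure_value(0.9, n) for n in (2, 5, 20)]
print(values, lloyd_reliability(values, n_tests=50))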

2.0 A NEW STATISTICAL MODEL


The developmental history of any launch vehicle can be considered as two time periods - the early
testing period and the performance period. Generally, during the early testing period the unreliability of
a launch vehicle is high and unstable. After a "failure, analysis, and fix" process, in conjunction with
technical and design improvements, the unreliability of a launch vehicle decreases and stabilizes in
the performance period.
A statistical model which weights the reliabilities of these two periods has been developed. The detailed
descriptions of the materials for reliability analysis of vehicles, stages, systems, and engines (or motors)
are introduced in the following sections.
2.1 Estimation of Launch Vehicle Reliability
The easiest way to estimate the average unreliability of a launch vehicle is:

    Ua = F/L                                                               (1)

where Ua is the estimated average unreliability, and F and L are the cumulative failure and launch numbers.
As was mentioned before, the reliability growth effect must be considered to get a more realistic
estimation of the unreliability. In the present model, the average unreliability is defined as
    U = Ua - ΔU                                                            (2)

where ΔU is the change in unreliability caused by reliability growth and can be explained as

    ΔU = ΔF/L,   or   ΔF = ΔU L                                            (3)

where ΔF is the cumulative failure correction number.


Averaging both sides of equation (3) results in

    ΔF = ΔU L/2,   or   ΔU = 2 ΔF / L                                      (4)

Substitute equation (1) and equation (4) into equation (2):

    U = F/L - 2 ΔF / L                                                     (5)

The estimation of the unreliability of the launch vehicle at the nth launch can then be approximated as

    Un = Fn/Ln - (1/Ln) Σ (i = 1 to n) Fi/Li                               (6)

where Li is the ith launch number, and Fi is the cumulative failure number at the ith launch.
The reliability Rn at the nth launch is

    Rn = 1 - Un = 1 - Fn/Ln + (1/Ln) Σ (i = 1 to n) Fi/Li                  (7)

The concepts of confidence levels based on the value of average reliability from equation (7) are now
illustrated as the following.
Let N be the launch number; then x = N Rn is the success number.

    5th confidence:   R0.05 = x / [ x + (n - x + 1) F0.95(2n - 2x + 2, 2x) ]                                 (8)

    95th confidence:  R0.95 = (x + 1) F0.95(2x + 2, 2n - 2x) / [ (n - x) + (x + 1) F0.95(2x + 2, 2n - 2x) ]   (9)

where Fr(n1, n2) is the 100 r-th percentile of the F-distribution with n1 numerator and n2 denominator
degrees of freedom.
This completes the formulation of the launch vehicle reliability calculations. The example which
applies this model is given in section 5.
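The bounds of equations (8) and (9) can be evaluated directly from F-distribution percentiles; the sketch below assumes the SciPy library and takes the success number x = N Rn as defined above (the function name is illustrative):

from scipy import stats

def success_ratio_bounds(n_launches, mean_reliability):
    # equations (8) and (9): 5th and 95th percentile bounds on the success ratio
    n = n_launches
    x = n * mean_reliability
    f_low = stats.f.ppf(0.95, 2 * (n - x + 1), 2 * x)
    r_05 = x / (x + (n - x + 1) * f_low)
    f_high = stats.f.ppf(0.95, 2 * (x + 1), 2 * (n - x))
    r_95 = (x + 1) * f_high / ((n - x) + (x + 1) * f_high)
    return r_05, r_95

print(success_ratio_bounds(67, 0.9069))   # Atlas/Centaur case from Section 5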


2.2 Estimation of Stage Reliability

The basic method of estimating the stage reliability of a launch vehicle in the present study is based on
the following assumptions:
1. The failure of the launch vehicle must occur in one of its stages.
2. The starting operation time for each stage is followed by the order of stage number. In other words,
the first stage should begin operating before the second stage.
The following formulation has been developed to perform the reliability estimation for the ith stage:

    Rsi = 1 - Fsi Uv / [ Fv - Uv Σ (j = 1 to i-1) Fsj ]                    (10)

where Rsi is the reliability of the ith stage, Fsi is the cumulative failure number of the ith stage, Fv is the
cumulative failure number of the launch vehicle, and Uv is the unreliability of the launch vehicle from
equation (6).
For example, the reliability for the

    First stage:   Rs1 = 1 - Fs1 Uv / Fv
    Second stage:  Rs2 = 1 - Fs2 Uv / (Fv - Uv Fs1)
    Third stage:   Rs3 = 1 - Fs3 Uv / (Fv - Uv (Fs1 + Fs2))

Since the value of Uv in equation (10) has been weighted, the estimation of reliability for each stage
Ri is also a weighted average.
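A minimal Python sketch of equation (10), checked against the Atlas/Centaur numbers worked out in Section 5; the function and argument names are illustrative:

def stage_reliabilities(stage_failures, vehicle_failures, vehicle_unreliability):
    # equation (10): stages must be ordered by their starting operation time
    results = []
    corrected = 0.0
    for f_si in stage_failures:
        denominator = vehicle_failures - vehicle_unreliability * corrected
        results.append(1 - f_si * vehicle_unreliability / denominator)
        corrected += f_si
    return results

# Atlas/Centaur: stages 1/2, 1, 2 with 2, 2, 6 failures; Fv = 10, Uv = 0.0931
print(stage_reliabilities([2, 2, 6], 10, 0.0931))   # about 0.9814, 0.9810, 0.9420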

2.3 Estimation of System Reliability


The basic assumption for the method of estimating system reliability in the present study is that the
failure of the launch vehicle must occur in one of its systems.
The average reliability of each system of the launch vehicle can be formulated as

    Ryi = 1 - Uv (Fyi / Fv)                                                (11)

where Ryi is the reliability of the ith system, Fyi is the cumulative failure number of the ith system, Uv
is the unreliability of the launch vehicle, and Fv is the cumulative failure number of the launch vehicle.
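Equation (11) is a simple proration of the vehicle unreliability by each system's share of the failures; a brief Python sketch with illustrative names:

def system_reliabilities(system_failures, vehicle_failures, vehicle_unreliability):
    # equation (11): Ry_i = 1 - Uv * (Fy_i / Fv)
    return {name: 1 - vehicle_unreliability * f / vehicle_failures
            for name, f in system_failures.items()}

# Atlas/Centaur example from Section 5: Fv = 10, Uv = 0.0931
print(system_reliabilities(
    {"propulsion": 5, "structure": 2, "separation": 1,
     "flight control": 1, "electrical": 1},
    vehicle_failures=10, vehicle_unreliability=0.0931))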


2.4 Estimation of Engine (or Motor) Reliability

The basic assumption of the method for estimating engine (or motor) reliability is that if any of the
engines (or motors) in a stage fails, then the entire stage has failed. Since the failure of a stage can be
caused by either engine (or motor) failure or other failures, the cumulative failure number of engines
(or motors) in the stage needs to be known. The model for estimating engine (or motor) reliability is
described as

    Rei = (1 - Usi (Fei / Fsi))^(1/Nei)                                    (12)

where
    Rei is the reliability of the engine (or motor) in the ith stage,
    Usi is the unreliability of the ith stage which can be obtained by 1 - Rsi from equation (10),
    Fei is the engine (or motor) cumulative failure number in the ith stage,
    Fsi is the cumulative failure number of the ith stage, and
    Nei is the number of engines (or motors) in the ith stage.


3.0 DATA COLLECTION


Based on the analysis of section 2, the following table for data collection of each launch vehicle was
developed.

Vehicle Name                                Data Collection from Yr         to Yr
Total Launch Number                         Total Failure Number

Date | Failure Launch | Success Run | Failure Stage | Failure System | Failure Description | Engine or Motor Failure Y/N

                                      Table A.3

In this table,
Date:                          the date when the launch vehicle failed
Failure Launch:                the launch number at which the launch failed
Success Run:                   the number of successful launches between two failures
Failure Stage:                 failure stage number
Failure System:                one of the following systems failed: propulsion, separation, flight control,
                               structure, electrical, guidance, etc.
Failure Description:           failure mode
Engine or Motor Failure Y/N:   Y = engine or motor failure; N = no engine or motor failure
This table template was then applied to the history of all US Launch Vehicle Families according to given
cut-off dates. The cut-off dates and the resulting historical tabulations are given in the supplement to this
appendix.
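One convenient way to hold the Table A.3 records in machine-readable form is a flat file with one row per failure; the field names and CSV layout below are assumptions for illustration, not part of the original data base:

import csv
from dataclasses import dataclass

@dataclass
class FailureRecord:
    date: str
    failure_launch: int        # launch number at which the vehicle failed
    success_run: int           # successful launches since the previous failure
    failure_stage: str
    failure_system: str        # propulsion, separation, flight control, ...
    failure_description: str
    engine_or_motor_failure: bool

def load_records(path):
    with open(path, newline="") as handle:
        return [FailureRecord(row["date"], int(row["failure_launch"]),
                              int(row["success_run"]), row["failure_stage"],
                              row["failure_system"], row["failure_description"],
                              row["engine_or_motor_failure"].strip().upper() == "Y")
                for row in csv.DictReader(handle)]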
4.0 ALGORITHM
The general solution procedures of launch vehicle reliability analysis can be described by the following
steps.
1. Use Table A.3 to collect the data for each launch vehicle.
2. From the data of "Failure Launch" listed in Table A.3, the launch vehicle reliability can be
estimated by applying equation (7) in section 2.1. The corresponding 95th and 5th confidence levels can
be obtained by solving equations (8) and (9) in section 2.1.
3. From the data of "Failure Stage" listed in Table A.3 and the launch reliability obtained in step
2, the reliability of each stage of the launch vehicle can be calculated by using equation (10) in section
2.2.


4. The data of "Failure System" together with the results of step 2 provide the information to obtain
the reliability of each system in the launch vehicle by applying equation (11) in section 2.3.
5. From the data of "Engine (or Motor) Failure Y/N" listed in Table A.3 and the result of step 3, the
reliabilities of each engine (or motor) can be obtained by solving equation (12).
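
The steps can be strung together in a small driver routine. The sketch below (assumed names; it reuses the sketches from sections 2.2 and 2.3 and assumes the vehicle reliability from equation (7) has already been computed in step 2) illustrates steps 3 and 4:

    from collections import Counter

    def analyze_vehicle(records, r_v, stage_order):
        """Steps 3 and 4 of the solution procedure for one launch vehicle.

        records     -- list of FailureRecord entries (Table A.3 data)
        r_v         -- vehicle reliability from equation (7), obtained in step 2
        stage_order -- stage labels in flight order, e.g. ["1/2", "1", "2"]
        """
        u_v = 1.0 - r_v
        f_v = len(records)

        # Step 3: stage reliabilities from equation (10)
        stage_counts = Counter(r.failure_stage for r in records)
        stage_rel = dict(zip(stage_order,
                             stage_reliabilities([stage_counts[s] for s in stage_order],
                                                 f_v, u_v)))

        # Step 4: system reliabilities from equation (11)
        system_counts = Counter(r.failure_system for r in records)
        system_rel = {name: system_reliability(n, f_v, u_v)
                      for name, n in system_counts.items()}

        return stage_rel, system_rel

Step 5 would then apply equation (12) to the engine or motor failure counts flagged in the last column of Table A.3.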
5.0 EXAMPLE
Consider the "Atlas/Centaur" as an example. The general information about the "Atlas/Centaur" is
illustrated in the following figure which is taken from the report "Hazard Analysis of Commercial Space
Transportation", Volume I, May 1988, published by the U.S. Department of Transportation.
Following the solution procedures described in section 4:
1. Table A.4 lists all the failure data on the "Atlas/Centaur". The data collection period is from 1962
to 1987. The launch number of the "Atlas/Centaur" during this period is 67, and the corresponding failure
number is 11. In this example, the failure data was collected from the "Launch Vehicle Failure History Data
Base," which was compiled by Cindy Thatcher Clague of the Aerospace Corporation (reference 4).
The March 26, 1987 failure shown in Table A.4, which was caused by a lightning strike, is considered an externally caused failure. This failure is eliminated from the present reliability analysis; otherwise all failures are included.
2. Based on the data in Table A.4, we used equation (7) in Section 2.1 to estimate the launch vehicle
reliability. The estimation of the reliability for n=67 is
R_v = 0.9069
The corresponding 95th and 5th confidence levels, obtained by solving equations (8) and (9), are

    R_{0.05} = 0.8450
    R_{0.95} = 0.9489

3. From the "Stage Failure" data in Table A.4:
The first stage is stage 1/2 and has the failure number F_s1 = 2.
The second stage is stage 1 and has the failure number F_s2 = 2.
The third stage is stage 2 and has the failure number F_s3 = 6.
The reliability of each stage can be obtained by solving equation (10). In this example, the unreliability of the vehicle is U_v = 1 - R_v = 0.0931, and the cumulative failure number of the vehicle is F_v = 10. Substituting these values into equation (10), we get

    R_{s1} = 0.9814 for stage 1/2
    R_{s2} = 0.9810 for stage 1
    R_{s3} = 0.9420 for stage 2
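
These three values can be reproduced with the stage-reliability sketch given in section 2.2 (a check added here for illustration; it is not part of the original report):

    # Atlas/Centaur: Fs = [2, 2, 6] for stages 1/2, 1 and 2; Fv = 10; Uv = 0.0931
    r = stage_reliabilities([2, 2, 6], f_v=10, u_v=0.0931)
    print([round(x, 4) for x in r])   # -> [0.9814, 0.981, 0.942]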
4. From the "System Failure" data in Table A.4:
The failure number of the propulsion system is 5.
The failure number of the structure system is 2.
The failure number of the separation system is 1.
The failure number of the flight control system is 1.
The failure number of the electrical system is 1.

Figure A.1. Atlas/Centaur launch vehicle configuration and data (General Dynamics general stage data for the Atlas G booster, stage 1/2 with Rocketdyne YLR-89-NA-7 engines and stage 1 with the Rocketdyne YLR-105-NA-7 sustainer engine, and the Centaur D-1A upper stage, stage 2, with two Pratt and Whitney RL-10A-3-3A engines and Honeywell four-gimbal inertial guidance).

TABLE A.4: FAILURE HISTORY DATA OF ATLAS/CENTAUR

Vehicle Name: Atlas/Centaur        Data Collection from: 62 to 87
Total Launch Number: 67            Total Failure Number: 11

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
05/08/62 | | | | Structure | | Centaur upper stage structure failure
06/30/64 | | | | Propulsion | | Centaur hydraulic failure, loss of C2 hydraulic power
03/02/65 | | | 1/2 | Propulsion | | Loss of Atlas thrust during liftoff, due to fuel starvation of booster engines stemming from closure of fuel prevalve
04/07/66 | | | | Propulsion | | Centaur restart sequence failure, engine ignition occurred but not sustained due to fuel depletion
08/10/68 | 16 | | | Propulsion | | Failure of boost pump H2O2 supply system, Centaur did not achieve its second main engine start
11/30/70 | 21 | | | Separation | | Nose fairing failed to jettison properly
05/08/71 | 23 | | | Flight Control | | Centaur pitch control lost
02/20/75 | 34 | 10 | | Electrical | | Atlas booster section electrical disconnect failed during booster jettison
09/29/77 | 42 | | 1/2 | Propulsion | | Atlas booster engine hot gas leak failed mission
06/09/84 | 62 | 19 | | Propulsion | | Failure occurred at A/C separation, a liquid oxygen tank crack
03/26/87 | 67 | | | Other | | Lightning strike failed mission

By solving equation (11), the reliability of each system can be obtained:

    R_propulsion     = 0.9535
    R_structure      = 0.9814
    R_separation     = 0.9907
    R_flight control = 0.9907
    R_electrical     = 0.9907

5. There are two engines (YLR-89-NA-7) in stage 1/2, one engine (YLR-105-NA-7) in stage 1, and two engines (RL-10A-3-3A) in stage 2. From Table A.4, the failure number of engine YLR-89-NA-7 is 2, the failure number of engine YLR-105-NA-7 is 1, and the failure number of engine RL-10A-3-3A is 0. By solving equation (12) together with the results of the stage reliabilities, the reliabilities of each engine can be obtained:

    R_YLR-89-NA-7  = 0.9907
    R_YLR-105-NA-7 = 0.9905
    R_RL-10A-3-3A  = No Failure

The results of the reliability analysis for the "Atlas/Centaur" are summarized as:

ATLAS/CENTAUR RELIABILITY

Vehicle
    Mean    0.9069
    5%      0.8450
    95%     0.9489

Stages
    Stage 1/2        0.9814
    Stage 1          0.9810
    Stage 2          0.9420

Systems
    Propulsion       0.9535
    Structure        0.9814
    Separation       0.9907
    Flight Control   0.9907
    Electrical       0.9907

Engines
    YLR-89-NA-7      0.9907
    YLR-105-NA-7     0.9905
    RL-10A-3-3A      No Failure

The reliability estimation of "Atlas/Centaur" based on equation (7) at each launch is described in
the following figure, A.2.

Figure A.2. Reliability estimation of Atlas/Centaur (estimated reliability versus launch number, launches 1 through 70).

6.0 RESULTS
The statistical model (section 2) and the data collection method (section 3), following the solution procedures (section 4), have been applied to twenty-four U.S. launch vehicles. The results are listed in Table A.5.
In Table A.5, the launch vehicles are separated into six groups based on their developmental histories. The "Combined" results in Table A.5 are the reliability estimates for each group. The following formulations, based on Bayesian reliability analysis, have been applied to perform the calculation for each group.
    \mu = \frac{1}{N} \sum_{i=1}^{N} R_i

where N is the number of vehicles in the group, R_i is the reliability of the ith vehicle, and \mu is the mean reliability of the group.

    \sigma^2 = \frac{1}{N-1} \sum_{i=1}^{N} (R_i - \mu)^2

where \sigma^2 is the variance.

Let

    a = \mu \left( \frac{\mu (1 - \mu)}{\sigma^2} - 1 \right),        b = a \left( \frac{1}{\mu} - 1 \right)

Then the mean of the group is

    \bar{R} = \frac{a}{a + b}

The 5% confidence level is

    R_{0.05} = \frac{a}{a + b \, F_{0.95}(2b, 2a)}

The 95% confidence level is

    R_{0.95} = \frac{a \, F_{0.95}(2a, 2b)}{b + a \, F_{0.95}(2a, 2b)}
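
A Python sketch of this group combination is given below (the function and variable names are assumptions, and the F percentiles are taken from scipy, which is of course not what the report itself used):

    import numpy as np
    from scipy.stats import f as f_dist

    def combine_group(vehicle_reliabilities):
        """Fit a beta distribution to the group by moment matching and return
        the group mean and the 5% / 95% confidence levels (section 6)."""
        r = np.asarray(vehicle_reliabilities, dtype=float)
        mu = r.mean()
        var = r.var(ddof=1)                       # sample variance, 1/(N-1)
        a = mu * (mu * (1.0 - mu) / var - 1.0)
        b = a * (1.0 / mu - 1.0)
        mean = a / (a + b)                        # equals mu by construction
        f95_low = f_dist.ppf(0.95, 2.0 * b, 2.0 * a)
        f95_high = f_dist.ppf(0.95, 2.0 * a, 2.0 * b)
        r_05 = a / (a + b * f95_low)
        r_95 = a * f95_high / (b + a * f95_high)
        return mean, r_05, r_95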

The reliability estimations for each engine of the launch vehicles are not listed in Table A.5. They are
partially listed in the matrices which are for engine reliability analysis.

Table A.5. Reliability estimates for the twenty-four U.S. launch vehicles, by vehicle group.

7.0 CONCLUSIONS
A new model has been developed which has the following advantages:
1. This model weights the reliability growth effect. Since the reliability of a launch vehicle can be
estimated from each past launch, the extension of this model should be able to predict the future reliability
of the launch vehicle.
2. The formulations of the model are simple and easy to apply. A computer program is being developed
for future applications.
3. The results of the calculations are only dependent on the data collection.
4. The reliability estimations of vehicles, stages, systems, and engines are separated, which reduces
the restrictions on the data collection.


BIBLIOGRAPHY
1. "Why Bayes is Better" by A. J. Bonis from the 1975 Proceedings of the Annual Reliability and
Maintainability Symposium (Including notes taken from Dr. Bonis' lectures at the Eleventh Annual
Reliability Engineering and Management Institute, November 11-20, 1973, University of Arizona.)
2. "Bayesian Reliability Analysis", H. F. Martz and R. A. Waller, John Wiley and Sons 1982.
3. "Techniques for Reliability Asse.isment and Growth in Large Solid Rocket Motor Development" by D.
K. Lloyd et. al., AIAA Paper Number 67-435, Presented at AIAA 3rd Propulsion Joint Specialist Conference,
Washington, D.C., July 17-21, 1967.
4. "Launch Vehicle Reliability Study Data Base" compiled by C. T. Clague of Aerospace Corporation as
transmitted to G Platt of Marshall Space Flight Center in a memo dated July 18, 1988.
5. A Historical Look at United States Launch Vehicles 1967- Present, March 1988, Space Technology
Division, Note STDN 88-3, ANSER
6. Hazard Analysis of Commercial Space Transportation, Volume I, May 1988, U.S. Department of
Transportation.
7. Launch Vehicle Flight Failure Data Base, July 18, 1988. Cindy Thatcher Clague, The Aerospace
Corporation.

8. Final Report of the Aerospace Independent Atlas E/F Review Team, Volume III Historical Data,
January 29, 1982.


Appendix A.4
History of US Launch Vehicles

Cut-off dates for launch vehicle reliability data

Launch vehicle | First launch | Cut-off date | Failure No. | Launch No.

Thor/Delta "Family"
Thor          | 01/25/57 | 08/05/83 | 66 | 369
Delta         | 05/13/60 | 03/20/87 | 12 | 181

Titan "Family"
Titan I       | 02/06/59 | 03/05/65 | 24 | 68
Titan II      | 03/16/62 | 06/27/76 | 16 | 94
Titan III     | 09/01/64 | 02/11/87 | 11 | 137
Titan 34D     | 10/30/82 | 11/28/87 | 2  | 11

Atlas "Family"
Atlas A       | 06/11/57 | 06/03/58 | 5  | 8
Atlas B       | 07/19/58 | 02/04/59 | 4  | 9
Atlas C       | 12/23/58 | 08/24/59 | 3  | 6
Atlas D       | 04/14/59 | 11/07/67 | 42 | 197
Atlas E       | 10/11/60 | 02/03/88 | 18 | 49
Atlas F       | 08/08/61 | 06/23/81 | 17 | 96
Atlas SLV     | 02/02/67 | 05/19/83 | 4  | 73
Atlas G       | 06/09/84 | 03/26/87 | 0  | 5
Atlas H       | 02/09/83 | 05/15/87 | 0  | 5
Atlas/Centaur | 05/08/62 | 03/26/87 | 10 | 67

Saturn "Family"
Jupiter       | 07/26/58 | 10/23/58 | 3  | 6
Juno          | 12/06/58 | 05/24/61 | 5  | 10
Saturn I      | 10/27/62 | 07/30/65 | 0  | 10
Saturn IB     | 02/26/66 | 07/15/75 | 0  | 9
Saturn V      | 11/09/67 | 05/14/73 | 1  | 13

Scout "Family"
Vanguard      | 12/06/57 | 09/18/59 | 8  | 11
Scout         | 07/01/60 | 03/25/88 | 14 | 110

STS
Space Shuttle | 04/12/81 | 09/29/88 | 1  | 26

Vehicle Name: Thor          Data Collection from: 57 to 83
Total Launch Number: 369    Total Failure Number: 66

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
01/25/57 | | | | Propulsion | | Missile fell back on launcher, oxygen start tank fill and check valve malfunction
04/19/57 | | | | Human | | Erroneously destroyed by RSO
05/21/57 | | | | Structure | | Fuel tank ruptured
08/30/57 | | | | Propulsion | | Propellant valve pneumatic line failure
10/03/57 | | | | Electrical | | Microswitch failure in MFV delayed signal to gas generator valve opening
10/11/57 | | | | Propulsion | | Possible turbopump failure
12/07/57 | | | | Electrical | | Electrical systems malfunction, no main engine cutoff
01/28/58 | 11 | | | Guidance | | Excessive trajectory dispersion after 95 sec., terminated by RSO
02/28/58 | 12 | | | Propulsion | | Premature shutdown, failure of gas generator LRRP or liquid ox line
04/19/58 | 13 | | | Propulsion | | Fell back on launcher due to fuel system malfunction
04/23/58 | 14 | | | Propulsion | | Turbopump failure
07/13/58 | 18 | | | Electrical | | Main engine cutoff failed to get through, circuit problem
07/26/58 | 20 | | | Structure | | Pneumatic line failure caused MLV closure, missile broke up due to aerodynamic forces
08/17/58 | 22 | | | Propulsion | | First stage malfunction, turbopump failure
11/05/58 | 24 | | | Guidance | | Autopilot malfunction
11/08/58 | 25 | | | Propulsion | | 3rd stage failed to ignite
12/05/58 | 27 | | | Propulsion | | Liquid oxygen tank pressurization malfunction
12/30/58 | 30 | | | Guidance | | Guidance malfunction at liftoff
01/21/59 | 31 | | | Propulsion | | Exploded on pad, a malfunction during countdown
01/23/59 | 32 | | | Electrical | | Electrical malfunction prevented cutoff and 2nd stage ignition
01/30/59 | 33 | | | Propulsion | | Liquid oxygen tank pressurization problem
06/03/59 | 47 | 14 | | Propulsion | | Premature engine burnout due to fuel exhaustion, insufficient velocity was gained for orbital attainment
06/16/59 | 49 | | | Guidance | | Autopilot did not program, possibly liftoff switch did not extract
06/25/59 | 51 | | | Electrical | | A diode failure in the D-timer brake circuit caused the Agena engine to burn to fuel exhaustion
06/29/59 | 52 | | | Electrical | | Electrical malfunction, R/V did not separate, retro-rockets did not fire
07/21/59 | 53 | | | Flight Control | | Flight controller did not program; launcher arm did not extract liftoff pin
08/14/59 | 60 | | | Propulsion | | Fuel depletion, fuel underload, leak or engine miscalibration
09/17/59 | 65 | | | Separation | | 2nd stage retro device failed, 3rd stage did not ignite
12/01/59 | 77 | 11 | | Propulsion | | Main engine cutoff occurred 6 sec. early, possibly main liquid oxygen valve closed prematurely
12/14/59 | 79 | | | Flight Control | | Control failure, missile stability lost
02/04/60 | 83 | | | Electrical | | Failure of the fuel injector pressure switches or a short around them
02/19/60 | 86 | | | Guidance | | Autopilot component failure
06/29/60 | 94 | | | Guidance | | 2nd stage attitude instability
08/18/60 | 97 | | | Propulsion | | Failure of the first stage hydraulic system
10/26/60 | 101 | | | Separation | | 2nd stage failed to separate
11/30/60 | 103 | | | Electrical | | Main engine shutdown from a premature MECO signal
03/30/61 | 111 | | | Propulsion | | A hydraulic system failure resulted in loss of attitude control
06/08/61 | 113 | | | Propulsion | | Fuel line leak, engine failed to provide thrust
07/21/61 | 118 | | | Flight Control | | Control system instability
08/03/61 | 119 | | | Flight Control | | A failure occurred in the hydraulic system which provides the power for engine gimballing
10/23/61 | 125 | | | Propulsion | | Hydraulic failure and a failure in the engine actuating system
11/05/61 | 126 | | | Guidance | | Apogee was higher than predicted as a result of excess velocity
01/13/62 | 131 | | | Electrical | | Blew a fuse in the line to the gyro guidance packages
01/24/62 | 133 | | | Propulsion | | 2nd stage misfired, an actuator lug on the 2nd stage thrust chamber was broken
02/21/62 | 134 | | | Propulsion | | The fuel vent valve stuck open during first burn
03/19/62 | 136 | | | Guidance | | Pitch HIG gyro malfunction
05/10/62 | 140 | | | Electrical | | Failure of the 1st and 2nd stages to separate, which was caused by 1st stage electrical malfunction
06/20/62 | 147 | | | Propulsion | | High temps weakened the load-carrying capability of the Thor engine section
07/25/62 | 153 | | | Propulsion | | The main oxidizer valve only partially opened
10/15/62 | 162 | | | Propulsion | | The actuator potentiometer voltage showed a continuing loss of power
02/28/63 | 174 | 11 | | Propulsion | | Solid motor failure
03/18/63 | 175 | | | Electrical | | Electrical short circuit in the safe-arm junction box
04/26/63 | 177 | | | Guidance | | Failure in horizon sensors
06/12/63 | 179 | | | Propulsion | | During 1st engine operation a power short condition developed, igniters were set off by radiated heat from the nozzle
11/09/63 | 191 | 11 | | Propulsion | | Overheating of the boattail section
11/10/63 | 192 | | | Flight Control | | Unstable and premature termination of powered flight
03/24/64 | 203 | 10 | | Electrical | | Electrical short circuit, loss of guidance and control
04/21/64 | 204 | | UK | Flight Control | | Failure of flight control
04/27/64 | 206 | | UK | UK | UK | UK
05/28/64 | 207 | | UK | UK | UK | UK
09/02/65 | 250 | 42 | UK | Guidance | | Guidance failure, destroyed by RSO
01/06/66 | 260 | | UK | UK | UK | Failed to orbit
05/03/66 | 269 | | | Propulsion | | Fire in thrust section due to leakages
05/18/68 | 301 | 31 | | Guidance | | Gyro failure, booster guidance malfunction
02/17/71 | 335 | 33 | | Propulsion | UK | Exploded after 40 sec.
02/18/76 | 354 | 18 | UK | UK | UK | UK

Vehicle Name: Delta         Data Collection from: 60 to 87
Total Launch Number: 181    Total Failure Number: 12

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
05/13/60 | | | | Flight Control | | 2nd stage attitude control malfunction, no 3rd stage ignition
03/19/64 | 24 | 22 | | Propulsion | | Loss of 3rd stage halfway thru burn
08/25/65 | 33 | | | Propulsion | | 3rd stage ignition before separation, did not achieve orbit
09/18/68 | 59 | 25 | | Guidance | | 1st stage control system (rate gyro)
07/25/69 | 71 | 11 | | Propulsion | | 3rd stage (AKM) thrust dropped during burn, possibly nozzle blown off
08/27/69 | 73 | | | Propulsion | | 1st stage hydraulic system failure
10/21/71 | 86 | 12 | | Flight Control | | 2nd stage control gas oxidizer vent valve failure, leak
07/16/73 | 96 | | | Propulsion | | 2nd stage hydraulic system pump motor failure
01/19/74 | 100 | | | Flight Control | | 2nd stage electronics failure
04/20/77 | 130 | 29 | | Separation | | Clamp band released early
09/13/77 | 134 | | | Propulsion | | SRM (Castor IV) burn-through
05/03/86 | 178 | 43 | | Electrical | | 1st stage electrical short in relay box (main engine shutdown)

Vehicle Name: Titan I       Data Collection from: 59 to 65
Total Launch Number: 68     Total Failure Number: 24

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
08/14/59 | | | | Structure | | Vibration fired holddown bolts; 1B1E pulled causing shutdown
12/12/59 | | | | Propulsion | UK | Failure on pad: destruct system
02/05/60 | | | | Structure | | Failure at T+43 sec.
03/08/60 | 10 | | UK | UK | UK | UK
04/08/60 | 12 | | UK | UK | UK | UK
01/01/60 | 18 | | | Propulsion | | Failure at stage I hydraulics
07/28/60 | 19 | | | Propulsion | UK | Stage I premature shutdown
08/10/60 | 20 | | UK | UK | UK | UK
09/29/60 | 23 | | UK | UK | UK | UK
12/03/60 | 26 | | | UK | UK | Vehicle destroyed
12/20/60 | 27 | | | Propulsion | UK | No stage II ignition
01/20/61 | 28 | | | Propulsion | UK | No stage II ignition
03/02/61 | 31 | | | UK | UK | Premature stage II shutdown
03/31/61 | 33 | | | UK | UK | Premature stage I shutdown
06/23/61 | 36 | | | UK | UK | Premature stage II shutdown
12/15/61 | 49 | 12 | | Propulsion | UK | No stage II ignition
01/20/62 | 50 | | | Propulsion | UK | No stage II ignition
02/23/62 | 52 | | | Propulsion | UK | No stage II ignition
05/01/63 | 60 | | | Propulsion | UK | Failure at liftoff
07/16/63 | 61 | | | Propulsion | UK | No stage II ignition
08/30/63 | 63 | | | Propulsion | | Gas generator shutdown
12/08/64 | 66 | | | UK | UK | Stage II premature shutdown
01/14/65 | 67 | | | Propulsion | UK | No stage II ignition
03/05/65 | 68 | | | Propulsion | | Propellant depletion

Vehicle Name: Titan II      Data Collection from: 62 to 76
Total Launch Number: 94     Total Failure Number: 16

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
06/07/62 | | | | Propulsion | | Stage II gas generator oxidizer injection blocked
07/25/62 | | | | Propulsion | | Stage II fuel pump leak downstream of TCV, failure due to combustion instability
12/06/62 | | | | Propulsion | | Stage II oxidizer bootstrap line failure
01/10/63 | 10 | | | Propulsion | | Gas generator oxidizer injector blocked
02/16/63 | 13 | | | Separation | | Umbilicals failed to disconnect properly
04/19/63 | 15 | | | Propulsion | | Bootstrap premature shutdown
05/09/63 | 17 | | | Propulsion | | OX leak, premature shutdown of stage II, 10% loss of stage II oxidizer during stage II flight
05/29/63 | 20 | | | Propulsion | | Subassembly 1 thrust chamber fuel valve leak occurred at engine ignition
06/20/63 | 21 | | | Propulsion | | Gas generator oxidizer injector clogging
04/30/65 | 45 | 23 | | Propulsion | | Subassembly 1 shutdown abruptly and vehicle flight continued erratically, turbopump failure
06/14/65 | 48 | | | Flight Control | | Loss of vernier nozzle
09/21/65 | 54 | | | Electrical | | Premature shutdown of stage II, bad connector coupled with a surge in the AOS power
11/30/65 | 57 | | | Propulsion | | Fuel leak, possibly at cross-over manifold with resultant thrust vectoring
12/22/65 | 60 | | | Human (Guidance) | | Control of second stage lost following staging, probably due to technician reading wrong scale
05/24/66 | 67 | | | Separation | | No R/V separation
04/12/67 | 69 | | | Flight Control | N | Stage II yaw rate gyro

Vehicle Name: Titan III     Data Collection from: 64 to 87
Total Launch Number: 137    Total Failure Number: 11

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
09/01/64 | | | | Propulsion | | Premature transtage cutoff, pressure system failure
10/15/65 | | | | Propulsion | | Propellant freezing in stage III engine bi-prop valve, engine failed to shutdown
12/21/65 | | | | Flight Control | | ACS engines failed to shutdown after vernier burn, loss of attitude control
08/26/66 | 10 | | | Structure | | P/L fairing failure during SRM flight
04/26/67 | 16 | | | Propulsion | | Stage II engine thrust dropped to 1/2 nominal, gross contamination on Martin side of interface
11/06/70 | 48 | 31 | | Guidance | | IGS-IMU failure, the electronics of the IMU shorted out
02/11/74 | 75 | 26 | | Propulsion | | Centaur stage failed to start after separation, failure of LO2 boost pump
05/20/75 | 85 | | | Guidance | | IMU failed, internally shorted transistor
09/15/77 | 99 | 13 | | Propulsion | | Engine failed to shutdown on command, burned to completion, hard contaminant in fuel valve
09/05/77 | 106 | | | Propulsion | | Low velocity at stage II shutdown
03/25/78 | 110 | | | Propulsion | | Turbine drive hydraulic pump failure after ignition

Vehicle Name: Titan 34D     Data Collection from: 82 to 87
Total Launch Number: 11     Total Failure Number: 2

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
08/28/85 | | | | Propulsion | | Stage I engine shutdown prematurely, massive leak shortly after ignition
04/18/86 | | 0 | | Propulsion | | Insulation/case debond, vehicle disintegrated; at T+8.764 the first explosive flash was noted

Vehicle Name: Atlas A       Data Collection from: 57 to 58
Total Launch Number: 8      Total Failure Number: 5

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
06/11/57 | | | | Structure | |
09/25/57 | | | | Structure | |
02/07/58 | | | | Flight Control | |
02/20/58 | | | | Flight Control | |
04/05/58 | | | | Propulsion | |

Vehicle Name: Atlas B       Data Collection from: 58 to 59
Total Launch Number: 9      Total Failure Number: 4

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
07/19/58 | | | | Flight Control | |
09/18/58 | | | | Propulsion | |
11/17/58 | | 0 | | Propulsion | |
01/15/58 | | | | Propulsion | |

Vehicle Name: Atlas C       Data Collection from: 58 to 59
Total Launch Number: 6      Total Failure Number: 3

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
01/27/59 | | | | Guidance | |
02/20/59 | | | | Propulsion | |
03/18/59 | | | | Propulsion | |

Vehicle Name: Atlas D       Data Collection from: 59 to 67
Total Launch Number: 197    Total Failure Number: 42

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
04/14/59 | | | | Propulsion | |
05/18/59 | | | | Propulsion | |
06/06/59 | | | | Propulsion | |
09/09/59 | | | 1/2 | Electrical | | Electrical signal to initiate separation did not reach the pyrotechnic cartridges
09/16/59 | | | | Propulsion | | Hydraulic failure
01/26/60 | 19 | 10 | | Guidance | |
03/10/60 | 23 | | | Propulsion | |
04/07/60 | 24 | | | Propulsion | |
05/06/60 | 26 | | | Flight Control | |
06/22/60 | 30 | | | Electrical | |
07/02/60 | 32 | | | Electrical | |
07/22/60 | 33 | | | Flight Control | |
07/29/60 | 34 | | | Structure | | Static or dynamic loads, higher than could be predicted, rupture of LOX tank
09/12/60 | 37 | | | Propulsion | |
09/29/60 | 41 | | | Electrical | |
10/12/60 | 43 | | | Propulsion | |
12/15/60 | 47 | | | Structure | | Rupture in the missile LOX tank
04/25/61 | 52 | | 1/2 | Flight Control | | Unsatisfactory due to a failure in the flight control system
09/09/61 | 57 | | 1/2 | Electrical | | Failure of the ground power umbilical to eject normally at liftoff
10/21/61 | 59 | | 1/2 | Guidance | | Roll control was lost
11/22/61 | 61 | | 1/2 | Flight Control | | Booster pitch control lost
12/22/61 | 65 | | | Flight Control | | Sustainer engine failed to cutoff
01/26/62 | 68 | | 1/2 | Guidance | | Failure of Mod III G guidance system
02/21/62 | 71 | | | Propulsion | |
04/09/62 | 74 | | | Electrical | | Electrical failure, excess altitude and undervelocity condition
07/22/62 | 87 | 12 | | Guidance | | Failure of engine burning time
10/02/62 | 92 | | | Electrical | |
12/17/62 | 98 | | | Propulsion | | Thrust chamber oscillation
01/25/63 | 100 | | | Structure | |
03/09/63 | 104 | | | Flight Control | |
03/15/63 | 106 | | | Propulsion | | Hydraulic failure
03/16/63 | 107 | | | Flight Control | |
06/12/63 | 111 | | | Propulsion | | Booster hydraulic accumulator failure, exploded just after launch
09/06/63 | 117 | | | Propulsion | | Hydraulic failure
09/11/63 | 118 | | | Propulsion | |
10/07/63 | 119 | | | Propulsion | |
11/13/63 | 123 | | | Propulsion | | Hydraulic failure
01/21/65 | 149 | 25 | | Propulsion | | Injection failure, no separation
03/02/65 | 153 | | 1/2 | Propulsion | | Stage failed due to loss of thrust
05/27/65 | 159 | | 1/2 | Propulsion | | Booster exploded
03/04/66 | 175 | 15 | 1/2 | Flight Control | | Failure of sustainer low pressure hydraulic system at booster jettison
05/03/66 | 179 | | UK | UK | UK | UK

Vehicle Name: Atlas E       Data Collection from: 60 to 88
Total Launch Number: 49     Total Failure Number: 18

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
10/11/60 | | | 1/2 | Guidance | | Nitrogen control-gas was broken off, causing control-gas depletion
11/29/60 | | | | Propulsion | | Loss of sustainer engine hydraulic pressure
01/24/61 | | | | Flight Control | | Lost vehicle stability
03/13/61 | | | | Flight Control | | Premature shutdown of the sustainer engine due to fuel depletion
03/24/61 | | | 1/2 | Flight Control | | Control bottle helium was depleted during boost phase and the booster package was not jettisoned
06/07/61 | | | 1/2 | Propulsion | | Combustion instability in B1 thrust chamber
06/22/61 | 10 | | 1/2 | Flight Control | | Excessive pitchover rate during boost phase
09/08/61 | 13 | | | Propulsion | | Sustainer engine shutdown shortly after jettison of the booster section
11/10/61 | 16 | | | Propulsion | | Sustainer engine shutdown during main stage transition
02/28/62 | 20 | | UK | Structure | UK | UK
07/13/62 | 21 | | | Propulsion | | LOX leak during flight, failure of slow-closing propellant valve
12/18/62 | 22 | | 1/2 | Propulsion | | Booster engine shutdown due to loss of lube oil
07/26/63 | 26 | | | Electrical | | Spurious voltage transients on range safety cutoff circuitry
09/25/63 | 29 | | | Propulsion | | Sustainer hydraulic system failed at staging
02/12/64 | 30 | | | Guidance | | Guidance failure in premature engine cutoffs
08/27/64 | 32 | | UK | Guidance | | Radial impact error 88 NM short, GD/A did not perform an analysis
12/08/80 | 36 | | 1/2 | Propulsion | | Booster engine no. 2 shutdown prematurely, due to loss of oil
12/18/81 | 37 | | 1/2 | Propulsion | | Propulsion B1 GG burn-through

Vehicle Name: Atlas F       Data Collection from: 61 to 81
Total Launch Number: 96     Total Failure Number: 17

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
12/12/61 | | | | Guidance | | The ARMA guidance system computer malfunctioned, engine cutoff 4 sec early
12/20/61 | | | | Propulsion | | Loss of sustainer hydraulic pump inlet pressure
04/09/62 | | | | Propulsion | | The sustainer LOX turbopump was destroyed by an internal overpressure
08/10/62 | | | | Flight Control | | Missile failed to roll to the planned target azimuth
11/14/62 | 12 | | | Guidance | | Guidance computer malfunctioned
03/23/63 | 17 | | UK | UK | UK | Missile self-destructed at 91 sec.
10/03/63 | 19 | | 1/2 | Propulsion | | B1 main fuel valve failed to open
10/28/63 | 20 | | | Propulsion | | Sustainer hydraulic return system failed
04/03/64 | 23 | | 1/2 | Propulsion | | Thrust imbalance due to B1 main fuel valve sticking
08/08/66 | 29 | | 1/2 | Propulsion | | Abnormal operation of B2 engine caused high fuel and low LOX usage, partial blockage of the B2 LOX high pressure system
10/11/66 | 30 | | 1/2 | Propulsion | | Fuel starvation of B1 engine due to malfunction of B1 engine fuel prevalve
10/27/67 | 39 | | 1/2 | Propulsion | | Loss of vehicle stability caused by small leak in booster hydraulic high pressure system
05/03/68 | 45 | | | Flight Control | | Divergent oscillations of booster pitch control
11/06/68 | 52 | | | Propulsion | | Vernier engine hydraulic pressure lost after SECO
10/10/69 | 58 | | | Propulsion | Y | Sustainer and vernier engines shutdown prematurely
04/12/75 | 80 | 21 | | Propulsion | Y | Damaged thrust section allowed overheating and premature shutdown of the sustainer and vernier engines
05/29/80 | 95 | 14 | 1/2 | Propulsion | | B1 engine performance was 79% of nominal and injection time was late

Vehicle Name: Atlas SLV     Data Collection from: 67 to 83
Total Launch Number: 73     Total Failure Number: 4

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
11/30/70 | 26 | 25 | | Separation | | Nose fairing failure to jettison
12/04/71 | 30 | | | Flight Control | | Lost attitude control E pack
02/20/75 | 42 | 11 | | Electrical | | Electrical disconnect failure during Atlas boost separation
09/29/77 | 52 | | 1/2 | Propulsion | | Hot gas leak in the booster gas generator

Vehicle Name: Atlas G       Data Collection from: 84 to 87
Total Launch Number: 6      Total Failure Number: 1

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
03/26/87 | | | | Other | | Lightning struck vehicle

Vehicle Name: Atlas H       Data Collection from: 83 to 87
Total Launch Number: 5      Total Failure Number: 0

No failures.

Vehicle Name: Atlas/Centaur   Data Collection from: 62 to 87
Total Launch Number: 67       Total Failure Number: 11

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
05/08/62 | | | | Structure | | Centaur upper stage structure failure
06/30/64 | | | | Propulsion | | Centaur hydraulic failure, loss of C2 hydraulic power
03/02/65 | | | 1/2 | Propulsion | | Loss of Atlas thrust during liftoff, due to fuel starvation of booster engines stemming from closure of fuel prevalve
04/07/66 | | | | Propulsion | | Centaur restart sequence failure, engine ignition occurred but not sustained due to fuel depletion
08/10/68 | 16 | | | Propulsion | | Failure of boost pump H2O2 supply system, Centaur did not achieve its second main engine start
11/30/70 | 21 | | | Separation | | Nose fairing failed to jettison properly
05/08/71 | 23 | | | Flight Control | | Centaur pitch control lost
02/20/75 | 34 | 10 | | Electrical | | Atlas booster section electrical disconnect failed during booster jettison
09/29/77 | 42 | | 1/2 | Propulsion | | Atlas booster engine hot gas leak failed mission
06/09/84 | 62 | 19 | | Propulsion | | Failure occurred at A/C separation, a liquid oxygen tank crack
03/26/87 | 67 | | | Other | | Lightning strike failed mission

Vehicle Name: Jupiter       Data Collection from: 58 to 58
Total Launch Number: 6      Total Failure Number: 3

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
03/05/58 | | | | Propulsion | | 4th stage failed to ignite
08/28/58 | | | | Separation | | Booster burned into remaining stage, upper stage fired in wrong direction
10/23/58 | | | | Separation | | 2nd stage failed to fire, premature separation

Vehicle Name: Juno          Data Collection from: 58 to 61
Total Launch Number: 10     Total Failure Number: 5

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
07/16/59 | | | UK | Guidance | | Guidance failed, destroyed by RSO
08/14/59 | | | | Propulsion | | Booster fuel depletion
03/23/60 | | | | UK | UK | Ignition malfunction
02/24/61 | | | | UK | UK | 2nd stage malfunction
05/24/61 | 10 | | | UK | UK | 2nd stage failed to ignite

Vehicle Name: Saturn I      Data Collection from: 62 to 65
Total Launch Number: 10     Total Failure Number: 0

No failures.

Vehicle Name: Saturn IB     Data Collection from: 66 to 75
Total Launch Number: 9      Total Failure Number: 0

No failures.

Vehicle Name: Saturn V      Data Collection from: 67 to 73
Total Launch Number: 13     Total Failure Number: 1

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
04/04/68 | 2 | | 2, 3 | Propulsion | Y | Second stage engine malfunction; third stage failure to restart

Vehicle Name: Vanguard      Data Collection from: 57 to 59
Total Launch Number: 11     Total Failure Number: 8

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
12/06/57 | | | | Propulsion | | First stage lost thrust, exploded after 2 seconds
02/05/58 | | | | Flight Control | | First stage control system malfunction after 57 sec
04/28/58 | | | | Propulsion | | Bad 2nd stage shutdown preventing 3rd stage firing
05/27/58 | | | | Flight Control | | Improper 3rd stage trajectory, loss of attitude control
06/26/58 | | | | UK | UK | Early 2nd stage shutdown prevented 3rd stage firing
09/26/58 | | | | UK | UK | Below minimum 2nd stage performance prevented orbit
04/14/59 | | | | Guidance | | Loss of 2nd stage pitch control
06/22/59 | 10 | | | Propulsion | | Low tank pressures after 2nd stage ignition caused instability

Vehicle Name: Scout         Data Collection from: 60 to 88
Total Launch Number: 110    Total Failure Number: 14

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
12/04/60 | | | | Electrical | | Failed to ignite: caused by wire break or disconnected power input
06/30/61 | | | | Propulsion | | Improper venting causing ignition leads to be severed
08/25/61 | | | | Separation | | Diaphragm separation system failure
11/01/61 | | | | Guidance | | Guidance failure, destroyed by RSO after 30 sec
04/26/62 | 11 | | | Guidance | | Control was lost due to H2O2 not being available
05/23/62 | 12 | | | UK | UK | 2nd stage shock input all three axes 0.29 sec after ignition
04/05/63 | 18 | | | Flight Control | | 3rd stage reaction control system failure
04/26/63 | 19 | | | Electrical | | Short circuit in the destruct system, attitude control was lost
07/20/63 | 23 | | | Propulsion | | Stage I engine nozzle failure
09/27/63 | 24 | | | Flight Control | | Pitch motor failure, loss of vehicle control
06/25/64 | 28 | | | Electrical | | Linear shaped destruct charge was ignited by an unplanned electrical input
01/31/67 | 51 | 22 | | Propulsion | | Motor graphite nozzle insert resulted in rupture of the motor case
05/29/67 | 56 | | | Propulsion | | Failure of motor caused by unstable chamber pressure
12/05/75 | 94 | 37 | | Propulsion | | 3rd stage nozzle failure

Vehicle Name: Space Shuttle   Data Collection from: 81 to 88
Total Launch Number: 26       Total Failure Number: 1

Date | Failure Launch | Success Run | Failure Stage | Failure System | Engine/Motor Y/N | Failure Description
01/28/86 | 25 | 24 | 0 | Propulsion | Y | Vehicle exploded 73 sec. after launch; SRM O-ring failure
