2 PROBABILISTIC DESIGN
2.1 INTRODUCTION
In the design of a product for mass production we are faced with the challenge that every item produced will be different. These differences will be slight to the casual observer, but may combine in the individual items to give vastly different performance characteristics, and thus impact the perceived quality of the product. These differences are caused by, among many other things, drift in machine settings, batch variability in material properties, and operator input.

The value of each design parameter embodied in any item is therefore likely to be different from the value in any other item. If we measure the values of a design parameter (a length, say) in all the items in a production run we will get data on the frequency of occurrence of the values of the parameter. If there are sufficient data values we can rescale the frequency to give a probability. Design parameters may thus be viewed as random variables.

Most physical variables used in engineering design are in fact random variables. Standard calculations are really calculations with their mean values. If we are interested in the possible range of values our result might have, then we must use more information in our calculation algorithm than the mean values alone.

The classical approach to design is to apply safety factors to each design parameter to allow for uncertainties. If the design is complex, these safety factors can compound to cause overdesign with an uncertain reliability. And in some important cases, where there is an upper
©JMBrowne
July 12, 2000 11:53 am
4ProbabilisticDesign
Page 1 of 52
and lower specification or functional limit, the safety factor method cannot be used at all.

Probabilistic design studies how to make calculations with the probability distributions of the design parameters, instead of the nominal or mean values only. This allows the designer to design for a specific reliability or specification conformance, and hence maximize safety, quality and economy.

Design parameters are usually independent random variables. Each type of parameter will have a distribution. Common distributions for design parameters are the normal, lognormal, Poisson, uniform, triangular, exponential and Weibull distributions.
2.2 TYPES OF PROBABILITY DISTRIBUTIONS
Described briefly below are the types of probability distributions most commonly found in engineering.
2.2.1 Normal
• The distribution is symmetric and bell-shaped.
• The variable may itself be the sum of a large number of individual effects.
Example: Heights of the adult male population.
Example: Dimension of a fabricated part.
2.2.2 Lognormal
• The variable can increase without bound, but is limited to a finite value at the lower limit.
• The distribution is positively skewed (most of the values being closer to the lower limit).
• The logarithm of the variable yields a normal distribution.
Example: Real estate values, river flow rates, strengths of materials, fracture toughness.
2.2.3 Weibull
• A distribution possessing three parameters enabling it to be adjusted to cover all stages of the “bathtub” reliability curve.
• A shape parameter of 1 gives an exponential distribution. A shape parameter of 3.25 gives an approximation to the normal.
• Finds principal application in situations involving wear, fatigue and fracture.
Example: Failure rates, lifetime expectancies.
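As a quick numerical sketch of the second bullet (not from the text; the scale value and sample size are arbitrary choices): a Weibull variable with shape parameter 1 should behave like an exponential variable, whose standard deviation equals its mean.

```python
import random
import statistics

# Numerical sketch (scale and sample size are arbitrary): with shape
# parameter 1 the Weibull reduces to the exponential distribution,
# whose standard deviation equals its mean.
random.seed(1)
scale = 2.0
samples = [random.weibullvariate(scale, 1.0) for _ in range(200_000)]

mean = statistics.fmean(samples)
stdev = statistics.pstdev(samples)
ratio = stdev / mean   # near 1 for an exponential distribution
```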
2.2.4 Exponential
• Describes the amount of time between occurrences.
• Complements the Poisson distribution (which describes the number of occurrences per unit time).
Example: Time between telephone calls.
Example: Mean time between failures.
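The complement relation can be simulated. A sketch (the rate and the time horizon are arbitrary choices): build a stream of events with exponential inter-arrival gaps, then count events per unit interval; the counts should look Poisson, with mean and variance near the rate.

```python
import random

# Sketch of the exponential/Poisson complement (rate and horizon arbitrary).
random.seed(2)
rate, horizon = 3.0, 50_000
t, times = 0.0, []
while t < horizon:
    t += random.expovariate(rate)   # exponential inter-arrival gap
    times.append(t)

counts = [0] * horizon
for t in times:
    if t < horizon:
        counts[int(t)] += 1          # events per unit interval

mean_count = sum(counts) / len(counts)                          # near rate
var_count = sum((c - mean_count) ** 2 for c in counts) / len(counts)
```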
2.2.5 Triangular
• Used when the only information known is the minimum, the most likely, and the maximum values of a variable.
Example: Item costs from different suppliers, or estimates of future costs.
2.2.6 Uniform
• All values between the minimum and maximum are equally likely.
Example: A number from a random number generator.
2.2.7 Poisson (discrete)
• Describes the number of times an event occurs in a given interval.
• The number of possible occurrences in the interval is not limited.
• The occurrences are independent.
• The average number of occurrences is ﬁxed.
Example: Number of telephone calls per minute.
Example: Number of errors per page in a document
Example: Number of defects per square metre in sheets of steel.
2.2.8 Binomial (discrete)
Describes the number of successes in a ﬁxed number of trials.
• For each trial only two outcomes are possible: success or failure.
• The trials are independent.
• The probability of success remains the same from trial to trial.
Example: Number of heads in ten tosses of a coin
Example: Number of defective items in a given batch, given that the average rate of producing defectives is known.
2.2.9 Geometric (discrete)
Describes the number of trials until the ﬁrst successful occurrence.
• The number of trials is not fixed; the trials continue until the first success.
• The probability of success is the same from trial to trial
Example: Number of times to spin a roulette wheel before you win.
Example: Number of wells you would dig before the next gusher.


2.2.10 Custom
Used to describe a unique situation that cannot be described by any of the standard distributions.
• The area under the curve should equal 1.
2.2.11 Comparison of distributions
• Poisson: Number of times an event occurs in a given interval.
• Exponential: Interval until next occurrence of event.
• Binomial: Number of successes in a ﬁxed number of trials.
• Geometric: Number of trials until the next success.
• Large number of trials: Binomial approaches normal.
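The last point can be checked numerically. A sketch (n, p and the cut-off k are arbitrary choices) comparing the exact binomial cumulative probability with the normal approximation:

```python
import math

# Sketch: for large n the binomial cdf approaches that of a normal with
# mean n*p and variance n*p*(1-p). n, p and k are arbitrary choices.
n, p, k = 1000, 0.5, 520

exact = sum(math.comb(n, i) * p ** i * (1.0 - p) ** (n - i)
            for i in range(k + 1))

mu = n * p
sigma = math.sqrt(n * p * (1.0 - p))
# Normal approximation with continuity correction
approx = 0.5 * (1.0 + math.erf((k + 0.5 - mu) / (sigma * math.sqrt(2.0))))

gap = abs(exact - approx)   # small for large n
```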
©JMBrowne
July 12, 2000 11:53 am
4ProbabilisticDesign
Page 4 of 52
DESCRIBING PROBABILITY DISTRIBUTIONS
2  5
2.3 DESCRIBING PROBABILITY DISTRIBUTIONS
2.3.1 Types of description
The descriptions we will use for probability distributions include a distribution's parameters, its probability density function (pdf), its cumulative distribution function (cdf), and its set of moments.
• The parameters of a given type of distribution are the mathematical parameters in the formula for the distribution (not to be confused with design parameters).
• The probability density function describes the basic shape and location of the distribution. The graphs shown in the previous section are probability density functions.
• The cumulative distribution function allows us to read off the area under the probability density function in a given range. This area represents the probability that the random variable will lie in this range.
It is the probability density function, the parameters which describe it, and the first few of its moments that will be of most use to us in probabilistic design. For brevity we may use the terms "pdf", "distribution", or "density function" instead of "probability density function". Since the distribution is the main description of a random variable we will sometimes use the terms interchangeably.
2.3.2 Properties of probability density functions
In order to be called a probability density function, a function must have the following properties (any function that looks like a blob of goo on the axis is probably a good candidate):
• It is indeed a function (no undercuts)
• The area between it and the axis is unity
The support of the probability density function is the domain of the random variable over which the function is deﬁned. The full deﬁnition of a probability density function comprises the speciﬁcation of its formula and its support.
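A quick numerical check of the unit-area property, using the triangular density on [1, 3] with mode 2 as a candidate (a sketch; the density and grid size are our choices):

```python
# Numerical check that a candidate density integrates to one over its support.
def tri_pdf(x, a=1.0, m=2.0, b=3.0):
    if a <= x <= m:
        return 2.0 * (x - a) / ((b - a) * (m - a))
    if m < x <= b:
        return 2.0 * (b - x) / ((b - a) * (b - m))
    return 0.0

n = 10_000
h = 2.0 / n   # the support [1, 3] has width 2
# Trapezoidal rule over the support
area = sum(0.5 * (tri_pdf(1.0 + i * h) + tri_pdf(1.0 + (i + 1) * h)) * h
           for i in range(n))
```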
©JMBrowne
July 12, 2000 11:53 am
4ProbabilisticDesign
Page 5 of 52
2  6
PROBABILISTIC DESIGN
2.3.3 Functions of random variables
Functions of random variables are central to the design of products
for quality and reliability. Since the performance of a product is generally a function of its design parameters, and the design parameters are random variables, the performance is a function of random variables, and is thus itself a random variable.
A central tool in the design of quality products, therefore, is the
ability to calculate functions of random variables.
2.3.4 Notation
Generally we will denote a probability density function of a random variable x by f(x). However, when we are considering a function z = g(x) we will distinguish the two probability density functions by denoting them f_x(x) and f_z(z).
2.3.5 In sum
Design parameters and the quality variables which depend on them are most often random variables which we describe by probability density functions.
Problem 2.1 Uniformly distributed design parameter
A design parameter is a random variable uniformly distributed between 1 and 3. Sketch its probability density function and its cumulative distribution function, showing pertinent values on the axes.
2.4 GRAPHICAL FUNCTIONS OF A RANDOM VARIABLE
In this section we shall describe the concept of a function of a random variable in graphical terms.

The most important attribute of a function when applied to a random variable is whether it has an inverse over the support of the density function of the random variable. If it does, then it is straightforward to compute the function of the random variable. If not, then the function must be broken up into pieces so that each piece has an inverse, the
transformation associated with each piece applied, and the results summed.
2.4.1 The concept
Suppose x is a random variable with probability density function f_x(x), and that z = g(x) is an invertible function of x over the support of f_x(x). The central concept is as follows:

The probability of x being in the interval [x_1, x_2] is equal to the probability that z is in the interval [z_1, z_2] = [g(x_1), g(x_2)]. Geometrically, this is equivalent to saying that the area under the probability density function of x in the interval [x_1, x_2] is equal to the area under the probability density function of z in the interval [g(x_1), g(x_2)].
Equating the two probabilities (areas), we obtain

    A = | f_z(z) dz | = | f_x(x) dx |

and hence

    f_z(z) = f_x(x) / | dz/dx |

Note that because the probability (area) is always positive, the same relationship will exist whether the gradient of the function g(x) is
positive or negative. Hence we always take the absolute value of the derivative dz/dx.
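The relation can be checked by simulation. A sketch (the example z = x^2 with x uniform on [1, 3] is our choice): with f_x(x) = 1/2 and dz/dx = 2x, the formula predicts f_z(z) = 1/(4 sqrt(z)) on [1, 9], which a histogram bin of the simulated z values should reproduce.

```python
import random

# Simulation sketch of f_z(z) = f_x(x) / |dz/dx| for z = x**2,
# x uniform on [1, 3]; the formula predicts f_z(z) = 1/(4*sqrt(z)).
random.seed(3)
N = 400_000
zs = [random.uniform(1.0, 3.0) ** 2 for _ in range(N)]

# Empirical density near z0 = 4, from a narrow bin of width 0.1
z0, half = 4.0, 0.05
in_bin = sum(1 for z in zs if z0 - half <= z < z0 + half)
empirical = in_bin / (N * 2 * half)

predicted = 1.0 / (4.0 * 4.0 ** 0.5)   # 0.125
```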
2.4.3 Dimensional considerations
Probability is dimensionless. However, x and z may have (different) dimensions (units), [x] and [z] say. The probability density functions f_x(x) and f_z(z) must then have dimensions 1/[x] and 1/[z] respectively. This fact is consistent with the formula above and may be used as a check on the correctness of any functional transformation.
2.4.4 Examples
This simple relationship between area elements of the two density functions may be used to perform a graphical determination of a function of a random variable. It is of course generally more accurate to determine the result analytically; however, it is useful to be able to visualize the process graphically.
A linear function through zero
A general linear function
A convex function
Note that the lower gradient of the transformation function leads to a concentration of the probability in the corresponding region of the transformed probability density function. Much of our success in the design of quality products will depend on our being able to tune the design to make use of these regions of low gradient, hence minimizing the variability of the design output distributions.
Problem 2.2 Sketching distributions
Sketch yourself a distribution and a transformation function. Sketch the shape of the resulting transformed distribution.
Problem 2.3 Sound output
The sound output of a product has been determined to follow a triangular distribution with mode 2 units, lower limit 1 unit and upper limit 3 units. Graphically determine the probability density function for the (natural) logarithm of the sound output.
2.5 ANALYTICAL FUNCTIONS OF A RANDOM VARIABLE
2.5.1 Deﬁnition of an invertible function
The process above is straightforward if, over the support of x, there is only one value of x for each value of z. Functions with this property are called invertible.
2.5.2 Noninvertible functions of a random variable
If the function is not invertible, the following process may be applied:
1. Break the function up into piecewise invertible pieces over intervals [x_i, x_j].
2. For each piece, follow the procedure below for an invertible function. The result will be valid over the interval [g(x_i), g(x_j)] (or [g(x_j), g(x_i)], whichever is in the correct order), and zero outside of it.
3. Add the results.
Functions which are constant (ﬂat) over an interval give rise to a discrete jump in the probability density function of z.
2.5.3 Examples of invertible and noninvertible functions
Invertible functions
Noninvertible functions
2.5.4 Invertible functions of a random variable
The procedure for calculating an invertible function of a random variable is as follows:
Given:
A. A probability density function: f_x(x), x_1 ≤ x ≤ x_2
B. A transformation function: g(x)
1. Calculate dz/dx from z = g(x)
2. Solve for x in terms of z to get x = g^-1(z) = h(z). (There should
be only one solution since the function is invertible)
3. Substitute h(z) for x in f_x(x) / |dz/dx| to get f_z(z)
4. Determine the new support: g(x _{1} ) ≤ z ≤ g(x _{2} )
We apply this procedure to some simple cases below. We assume a general (undefined) pdf f_x(x) transformed by an invertible function g(x) which we can differentiate. The original probability density function is shown in light grey and the result of the function (or transformation) in darker grey.
Addition of a constant: [z = x + a]
1. dz/dx = 1
2. h(z) = z - a
3. f_z(z) = f_x(z - a)
4. x _{1} +a ≤ z ≤ x _{2} +a
Geometrically, the addition of a constant to a random variable simply gives another random variable, all values of which are increased (displaced to the right) by that constant. Example: The conversion of a random temperature expressed in Celsius to one expressed in Kelvin.
Multiplication by a constant: [z = a x]
1. dz/dx = a
2. h(z) = z/a
3. f_z(z) = f_x(z/a) / |a|
4. a x _{1} ≤ z ≤ a x _{2}
Geometrically, the multiplication of a random variable by a constant simply gives another random variable, all values of which are multiplied by (stretched by) that constant. Example: The conversion of a dimension expressed in metres to one expressed in millimetres.
The general linear transformation: [z = a x + b]
1. dz/dx = a
2. h(z) = (z - b)/a
3. f_z(z) = f_x((z - b)/a) / |a|
4. a x _{1} +b ≤ z ≤ a x _{2} +b
Geometrically, a general linear function of a random variable produces both a shift and a change in scale. The form of the function remains the same. Example: The conversion of a random temperature expressed in Celsius to one expressed in Fahrenheit.
The exponential transformation: [z = e^x]
1. dz/dx = e^x
2. h(z) = ln z
3. f_z(z) = f_x(ln z) / z
4. exp(x_1) ≤ z ≤ exp(x_2)
Example: The conversion of a variable expressed on a logarithmic scale back to one expressed on a linear scale.
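The four-step procedure can be sketched as a small helper; the names transform_pdf, h and dzdx are ours, not the text's. Here it reproduces the exponential transformation just shown, with x uniform on [0, 1], for which steps 1 to 4 predict f_z(z) = 1/z on [1, e].

```python
import math

# A minimal sketch of the four-step recipe as a reusable helper. h is the
# inverse of g (step 2) and dzdx its derivative (step 1), both supplied
# by the caller.
def transform_pdf(f_x, h, dzdx):
    def f_z(z):
        x = h(z)                       # step 2: x = g^-1(z)
        return f_x(x) / abs(dzdx(x))   # step 3: divide by |dz/dx|
    return f_z

# The exponential transformation z = e^x with x uniform on [0, 1]:
# the procedure predicts f_z(z) = 1/z on the new support [1, e].
f_x = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0
f_z = transform_pdf(f_x, math.log, math.exp)   # dz/dx = e^x

value = f_z(2.0)   # the formula predicts 1/2
```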
2.5.5 Example: Exponential transformation of a Uniform distribution
A. Probability density function: f_x(x) = 1/(b - a), a ≤ x ≤ b, a > 0
B. Transformation function: z = c exp(k x), where c and k are constants
1. Calculate dz/dx from z = c exp(k x):
   dz/dx = k c exp(k x)
2. Solve for x in terms of z to get x = g^-1(z) = h(z):
   x = h(z) = ln(z/c) / k
3. Substitute h(z) for x in f_x(x) / |dz/dx| to get f_z(z):
   f_z(z) = (1/(b - a)) (1/(k c exp(k x)))
          = (1/(b - a)) (1/(k c exp(k ln(z/c)/k)))
          = (1/(b - a)) (1/(k z))
4. Determine the new support for f_z(z):
   c exp(k a) ≤ z ≤ c exp(k b)
Problem 2.4 Sphere volume
A manufacturer makes spheres to meet a specification on the volume. The process is known to deliver spheres with their diameters normally distributed with mean 10 mm and standard deviation 1 mm.
1. Determine the formula for the probability density function of the volume.
2. Compare the true mean volume with the approximate mean volume calculated from V = π 10^3 / 6. (Advanced exercise)
Problem 2.5 Alloy steel
The percentage x of an alloy in a steel is exponentially distributed with probability density function f_x(x) = a exp(-a x), 0 ≤ x ≤ ∞, a constant.
The ultimate tensile strength of the steel, z, is logarithmically related to the percentage of alloy by z = log_e(x/b).

Derive the formula for the probability density function f_z(z) of the ultimate tensile strength, and state its support.
2.6 MOMENTS
Moments of a distribution are a way of summarizing the important characteristics of a distribution as single numbers, without having to cope with too much detail. The first few (lower order) moments are generally of most interest to us. An analogy might be to the reduction of a vibration trace to its first few harmonics. More precise mechanical analogies are:
1. The mean is the centre of area of the distribution, summarizing the location properties of the distribution.
2. The variance is the second moment of area of the distribution about the mean, summarizing the way in which the area is spread over the object.
Because we generally lack detailed information about the probability density functions of our design parameters, we will usually be making our calculations with the first few moments, often just the mean and variance. Following are some definitions of moments and coefficients based on them.
2.6.1 (Noncentral) moments
The nth (noncentral) moment µ'_(n)x of a distribution f(x) about the origin is

    µ'_(n)x = ∫ x^n f(x) dx,   integrated from -∞ to ∞
The first noncentral moment is called the mean. The mean of a random variable x will be denoted µ_x, or simply µ where the context is clear. The mean is also the expectation of x, denoted E[x].
2.6.2 Central moments
The nth central moment µ_(n)x of a distribution f(x) about its mean µ is

    µ_(n)x = ∫ (x - µ)^n f(x) dx,   integrated from -∞ to ∞

The first central moment of any distribution is zero. The second central moment is called the variance, denoted ν_x. The third central moment is called the skew, denoted s_x. The fourth central moment is called the kurtosis, denoted k_x. Since we will be dealing mostly with central moments, we will often refer to them simply as moments.
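The definitions can be exercised numerically. A sketch (the exponential density with a = 2 is an arbitrary choice; for this distribution the mean, variance, skew and kurtosis are the standard results 1/a, 1/a^2, 2/a^3 and 9/a^4):

```python
import math

# Sketch: recovering the central moments of f(x) = a*exp(-a*x) by
# numerical integration (a = 2 is arbitrary).
a = 2.0
lo, hi, n = 0.0, 12.0, 120_000    # the tail beyond x = 12 is negligible here
h = (hi - lo) / n
xs = [lo + i * h for i in range(n + 1)]
fs = [a * math.exp(-a * x) for x in xs]

def integrate(g):
    # Trapezoidal rule for the integral of g(x) * f(x) over [lo, hi]
    vals = [g(x) * f for x, f in zip(xs, fs)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

mu = integrate(lambda x: x)                  # mean, exactly 1/a
var = integrate(lambda x: (x - mu) ** 2)     # variance, exactly 1/a**2
skew = integrate(lambda x: (x - mu) ** 3)    # skew, exactly 2/a**3
kurt = integrate(lambda x: (x - mu) ** 4)    # kurtosis, exactly 9/a**4

coeff_skew = skew / var ** 1.5   # coefficient of skewness, 2 for exponential
coeff_kurt = kurt / var ** 2     # coefficient of kurtosis, 9 for exponential
```

The last two lines anticipate the dimensionless coefficients defined later in this section.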
2.6.3 Variance
The variance is, after the mean, the most important moment of a distribution. Its unit is the square of the unit of the random variable and hence is always positive. It measures the spread of the distribution. A zero variance thus implies a deterministic variable.
2.6.4 Skew
The skew is the next most important moment. Its unit is the cube of the unit of the random variable and hence may be positive or negative. A positively skewed distribution has its longer tail to the right. A negatively skewed distribution has its longer tail to the left. We will sometimes use the skew to test how valid it is to assume a given distribution is symmetric (and hence perhaps approximable by a normal distribution).
2.6.5 Kurtosis
We include here the kurtosis mainly for completeness. Since the kurtosis measures the "squatness" of the distribution, it is useful for differentiating different types of symmetric distributions (for example the normal and the uniform). However, since most of the distributions we will be using are bell-shaped, we will not use the kurtosis much. It is always positive.
2.6.6 Standard deviation
The standard deviation of a distribution is the (positive) square root of the variance. The standard deviation has the same dimensions as the mean, but it is the variance that is the more fundamental quantity. The standard deviation of a random variable x is denoted σ_x.
2.6.7 Coefficient of variation
The coefficient of variation is the ratio of the standard deviation to the mean, and is thus a measure of the relative spread of the distribution. This ratio is dimensionless and so may often be used to cast formulae in a dimensionless form. The coefficient of variation of a random variable x will be denoted by x̂.
2.6.8 Variance ratio
The variance ratio is the (dimensionless) ratio of the variance to the square of the mean. We will find this measure of relative spread to occur more commonly in our applications than the coefficient of variation. The variance ratio will be denoted by u_x (= x̂^2).
2.6.9 Coefﬁcient of skewness
The coefficient of skewness is the (dimensionless) ratio of the skew to the cube of the standard deviation. The normal distribution has a coefficient of skewness of 0. The exponential distribution has a coefficient of skewness of 2.
2.6.10 Coefﬁcient of kurtosis
The coefficient of kurtosis is the (dimensionless) ratio of the kurtosis to the fourth power of the standard deviation (the square of the variance). The coefficient of kurtosis measures the peakedness of the type of distribution. Uniform distributions have a kurtosis coefficient of 1.8, triangular of 2.4, normal of 3, and exponential of 9.
2.6.11 Terminology
There are varying definitions in the literature for skew and kurtosis and their dimensionless ratios. It is wise to check the definition the author is using.
2.6.12 A note on notation
In situations where there are several random variables, for example x, y, …, we will use µ_x, µ_y, … for the means of x, y, …, and ν_x, ν_y, … for their variances. If we are dealing with a single random variable, we will often drop the subscripts.
2.7 THE NORMAL DISTRIBUTION
The normal distribution is the most important distribution in the application of probability theory to science and engineering. The Central Limit Theorem (to be discussed later) tells us that the normal distribution has an interesting involvement in the description of complex probabilistic systems. It will be worth getting a good intuitive feel for its properties.
• It is symmetric
• Its support is from -∞ to +∞
• 99.7% of the distribution lies within ±3 standard deviations of the mean
• 95% of the distribution lies within ±2 standard deviations of the mean
• 68% of the distribution lies within ±1 standard deviation of the mean. The inflection points of the curve are at these values.
• Because of its symmetry, its odd central moments are zero.
• Its even central moments are given by (where ν is the variance):
  {ν, 3ν^2, 15ν^3, 105ν^4, 945ν^5, …} = {ν, 3ν^2, 3×5 ν^3, 3×5×7 ν^4, 3×5×7×9 ν^5, …}
• Its probability density function is

    f(x) = (1/(σ √(2π))) exp(-(x - µ)^2 / (2σ^2))

• The graph of its probability density function for µ = 0 and σ = 1 is [figure]
• Its cumulative distribution function is

    F(x) = (1/2) [1 + erf((x - µ)/(σ √2))]

• The graph of its cumulative distribution function for µ = 0 and σ = 1 is [figure]
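The 68-95-99.7 percentages quoted above follow directly from the cumulative distribution function; a sketch using the error function, with µ = 0 and σ = 1:

```python
import math

# Sketch: normal coverage probabilities from the cdf
# F(x) = (1/2)(1 + erf((x - mu)/(sigma*sqrt(2)))), with mu = 0, sigma = 1.
def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def within(k):
    # Probability of lying within +/- k standard deviations of the mean
    return std_normal_cdf(k) - std_normal_cdf(-k)

p1 = within(1.0)   # approximately 0.683
p2 = within(2.0)   # approximately 0.954
p3 = within(3.0)   # approximately 0.997
```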
Problem 2.6 Probability of a continuous random variable
1. What is the probability that a normally distributed random variable has its mean value?
2. What is the probability that a normally distributed random variable lies between µ - σ and µ + 2σ?
3. What is the probability that a normally distributed random variable is greater than µ + 6σ?
Problem 2.7 Sketching a Normal distribution
Sketch carefully a normal distribution with mean 9 and variance 9. A random variable is distributed as above. What is the probability that it is less than zero?
2.8 MEANS FROM NOMINAL VALUES
The usual design specification on a parameter is given by a nominal value and a tolerance. The nominal value is usually the value given as n in the specification [n - t_1, n + t_2]. The question arises: given only a specification on a parameter in this form, what should we assume the
mean value of the parameter to be? Until more research is done in this area, we propose that the mean be estimated as the midpoint of the specification range, µ = n + (t_2 - t_1)/2.
2.9 STANDARD DEVIATIONS FROM TOLERANCES
While mean values are often easy to ﬁnd from data sources, it is usually more difﬁcult to obtain an estimate of the variance (or standard deviation) of a design parameter. This section discusses some rules of thumb for estimating standard deviations from tolerances.
2.9.1 Estimation from tolerance range
If we know that the parameter is approximately normally distributed, and the proportion of product that is expected to lie within a certain tolerance range, then a rule of thumb for estimating the random variable's standard deviation from the properties of the normal distribution is:
• If 68% is expected to lie within ±∆, then set σ_x = ∆
• If 95% is expected to lie within ±∆, then set σ_x = ∆/2
• If 99.7% is expected to lie within ±∆, then set σ_x = ∆/3
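The rule of thumb can be captured in a small helper (a sketch; the function name and the exact coverage keys are our choices, not the text's):

```python
# A sketch capturing the rule of thumb above.
def sigma_from_tolerance(delta, coverage):
    """Estimate sigma from a +/-delta tolerance and the expected coverage."""
    divisors = {0.68: 1.0, 0.95: 2.0, 0.997: 3.0}
    if coverage not in divisors:
        raise ValueError("coverage must be 0.68, 0.95 or 0.997")
    return delta / divisors[coverage]

s = sigma_from_tolerance(0.3, 0.997)   # 0.3/3 = 0.1
```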
2.9.2 Estimation from limited data
A rule of thumb which enables standard deviations to be estimated from limited data is given by Haugen. If the estimate of the tolerance ∆ that is required is obtained:
• From about 4 samples, then set σ_x = ∆
• From about 25 samples, then set σ_x = ∆/2
• From about 500 samples, then set σ_x = ∆/3
2.9.3 Estimation from knowledge of manufacturer
A further rule of thumb given by Shooman is:
If the product is being made by:
• Commercial, early development, little known or inexperienced manufacturers, then set σ _{x} = ∆
• Military, mature, reputable, or experienced manufacturers, then set σ _{x} = ∆/3
2.10 THE EXPECTATION OPERATOR
The expectation of a function g(x, y, …) of random variables x, y, … with joint probability density function f(x, y, …) is denoted E[g(x, y, …)] and is defined as the integral

    E[g(x, y, …)] = ∫ … ∫ g(x, y, …) f(x, y, …) dx dy …

with each integral taken from -∞ to ∞.
The following properties may be proven from the deﬁnition:
2.10.1 The expectation of sums and products
Constant The expectation of a constant c is the constant itself. That is
E[c] = c
Linear sum If a, b, … are constants, then
E[a g _{1} (x, y,…) + b g _{2} (x, y,…) +…] = a E[g _{1} (x, y,…)] + b E[g _{2} (x, y,…)]
Product of independent random variables If x, y, … are independent random variables, then
E[g _{1} (x) g _{2} (y) …] = E[g _{1} (x)] E[g _{2} (y)] …
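A Monte Carlo sketch of the product property, with g_1 and g_2 taken as the identity and the distributions (uniform and exponential) chosen arbitrarily:

```python
import random

# Monte Carlo sketch of E[x*y] = E[x]*E[y] for independent x and y.
random.seed(4)
N = 200_000
xs = [random.uniform(0.0, 1.0) for _ in range(N)]
ys = [random.expovariate(1.0) for _ in range(N)]

lhs = sum(x * y for x, y in zip(xs, ys)) / N   # estimates E[x*y]
rhs = (sum(xs) / N) * (sum(ys) / N)            # estimates E[x]*E[y]
```

Both estimates should be near E[x] E[y] = 0.5 × 1 = 0.5.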
2.10.2 Relation of moments to the Expectation
Noncentral moments as Expectations:
E[1] = 1
E[x] = µ_x
E[x^n] = µ'_(n)x
Central moments as Expectations:
E[x - µ_x] = 0
E[(x - µ_x)^2] = ν_x
E[(x - µ_x)^3] = s_x
E[(x - µ_x)^4] = k_x
E[(x - µ_x)^n] = µ_(n)x
Problem 2.8 Moments as Expectations
Prove the formulae relating moments and Expectations above from the definitions of moment and Expectation.
2.10.3 Relation of central and noncentral moments
Central moments in terms of noncentral moments:

    E[(x - µ_x)^n] = E[ Σ_{i=0..n} C(n,i) x^i (-µ_x)^(n-i) ]
                   = Σ_{i=0..n} C(n,i) (-µ_x)^(n-i) E[x^i]

Noncentral moments in terms of central moments:

    E[x^n] = E[ ((x - µ_x) + µ_x)^n ]
           = Σ_{i=0..n} C(n,i) E[(x - µ_x)^i] µ_x^(n-i)

where C(n,i) is the binomial coefficient.
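The binomial-sum relation can be checked with exact integer arithmetic. A sketch using the exponential distribution with rate 1 (our choice), whose nth noncentral moment is n! and whose variance, skew and kurtosis are the standard values 1, 2 and 9:

```python
import math

# Sketch: central moments from noncentral moments via the binomial sum,
# for the exponential distribution with rate 1 (E[x**n] = n!).
noncentral = [math.factorial(n) for n in range(5)]   # E[x**n], n = 0..4
mu = noncentral[1]                                    # mean = 1

def central(n):
    # E[(x - mu)**n] = sum over i of C(n, i) * (-mu)**(n - i) * E[x**i]
    return sum(math.comb(n, i) * (-mu) ** (n - i) * noncentral[i]
               for i in range(n + 1))

variance = central(2)   # expect 1
skew = central(3)       # expect 2
kurtosis = central(4)   # expect 9
```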
Problem 2.9 Expectation of a linear sum
From the definition of Expectation, prove the formula for the Expectation of a linear sum of functions of random variables.
Problem 2.10 First central moment
Show that E[x - µ_x] = 0.
Problem 2.11 Central and noncentral moments
Fill out the detail in the above derivations relating central and noncentral moments.
Problem 2.12 Variance and skew
1. Derive expressions from first principles for the variance and skew in terms of noncentral moments. Use the binomial expansion and the properties of the Expectation operator.
2. Verify your results using the general formulae above.
Problem 2.13 Mean values of a power
1. Derive expressions from first principles for the mean values of the second, third, and fourth powers of a random variable in terms of its central moments. Use the binomial expansion and the properties of the Expectation operator.
2. Verify your results using the general formulae above.
Problem 2.14 Mean second moment of area
A beam of circular cross-section has a normally distributed diameter D with mean 100 mm and standard deviation 2 mm.
1. Calculate the mean second moment of area (I = π D^4 / 64) about a diameter.
2. Compare this with the nominal second moment of area based on a nominal diameter of 100 mm. (Hint: the kurtosis of a normal distribution is 3σ^4.)
Problem 2.15 Volume of sphere
The performance of a product is dependent on the volume V of a contained steel sphere of diameter D remaining within tight specifications. The machine manufacturing the spheres is controlled by the specification on the nominal diameter.
By using the expectation operator and the identity a^n = ((a - b) +
b)^n, or otherwise, derive an exact formula for the mean µ_V in terms of µ_D and higher order central moments.
2.11 LINEAR FUNCTIONS
2.11.1 Introduction
In this section we begin to explore an approximate method for computing with random variables by considering only the first few moments of a distribution (typically only the mean and variance, but sometimes the skew and kurtosis). Exact methods in computation with random variables are often exceedingly complex and insufficiently general. However, even the approximate probabilistic approaches developed here are an order of magnitude more powerful in engineering design for quality and reliability than the traditional "factor of safety" approach. In this section we look only at linear functions. In a later section we will look at more general function types.
2.11.2 General formulae
For the special case of a linear function of several variables, the moments may be derived exactly and take particularly simple forms. Note that the relations below are true independent of the types of underlying distributions possessed by the x_i. However, in the general case z will not have a distribution of any known standard type. If x_1, x_2, x_3, … are independent random variables, a_1, a_2, a_3, … are constants, and z = a_1 x_1 + a_2 x_2 + a_3 x_3 + …, then the first four moments of z are given by:
µ_z = a_1 µ_x1 + a_2 µ_x2 + a_3 µ_x3 + … = Σ a_i µ_xi

ν_z = a_1^2 ν_x1 + a_2^2 ν_x2 + a_3^2 ν_x3 + … = Σ a_i^2 ν_xi

s_z = a_1^3 s_x1 + a_2^3 s_x2 + a_3^3 s_x3 + … = Σ a_i^3 s_xi

k_z = a_1^4 k_x1 + a_2^4 k_x2 + a_3^4 k_x3 + …
      + 6 {a_1^2 ν_x1 a_2^2 ν_x2 + a_2^2 ν_x2 a_3^2 ν_x3 + a_1^2 ν_x1 a_3^2 ν_x3 + …}
    = Σ a_i^4 k_xi + 6 Σ_{i<j} a_i^2 ν_xi a_j^2 ν_xj
Note that for moments of order higher than 3, the relations are no longer simple sums of the same order moments.
2.11.3 Sums, differences and multiples
In the special case of sums and differences of two random variables, and fixed scalar multiples of a single random variable, the above relations reduce to
Table 1: Moments of sums, differences and multiples

             SUM                          DIFFERENCE                   MULTIPLE
FUNCTION     z = x + y                    z = x − y                    z = a x
MEAN         µ_z = µ_x + µ_y              µ_z = µ_x − µ_y              µ_z = a µ_x
VARIANCE     ν_z = ν_x + ν_y              ν_z = ν_x + ν_y              ν_z = a^2 ν_x
SKEW         s_z = s_x + s_y              s_z = s_x − s_y              s_z = a^3 s_x
KURTOSIS     k_z = k_x + k_y + 6 ν_x ν_y  k_z = k_x + k_y + 6 ν_x ν_y  k_z = a^4 k_x
2.11.4 Linear functions of normal distributions
Linear combinations of normally distributed random variables are a special case: they are themselves normal. This means that we can find the actual normal distribution resulting from a linear sum by simply calculating the mean and variance from the formulae above. Below, we graphically depict the addition of two normal random variables.
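Because the sum of normals is again normal, its distribution is fully determined by the mean and variance from the formulae above. A small Python check (the example values are our own) compares the closed-form normal with a simulation:

```python
import math
import random

# z = x + y with x ~ N(10, 0.25), y ~ N(20, 0.75): z is again normal,
# with mean 30 and variance 1.0 from the linear-sum formulae.
mu_z, var_z = 10 + 20, 0.25 + 0.75
sd_z = math.sqrt(var_z)

def normal_cdf(t, mu, sd):
    """Pr(z <= t) for a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((t - mu) / (sd * math.sqrt(2))))

# Fraction of z below 31.0: closed-form normal versus direct simulation.
p_exact = normal_cdf(31.0, mu_z, sd_z)
rng = random.Random(0)
n = 200_000
hits = sum(rng.gauss(10, 0.5) + rng.gauss(20, math.sqrt(0.75)) < 31.0
           for _ in range(n))
assert abs(hits / n - p_exact) < 0.01
```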
ADDITION OF TWO NORMAL RANDOM VARIABLES
2.11.5 The Central Limit Theorem
The sum of a number of independent but not necessarily identically distributed random variables tends to become normally distributed as the number increases, provided that no one random variable contributes appreciably more than the others to the sum; that is, no single distribution dominates. This is an important result for designers. It means, for example, that the overall dimension of an assembly of component parts, independent of the distribution types of each component dimension, will tend to be normally distributed. Knowing this, the designer can work back from the individual component tolerances to get an estimate of the proportion of assemblies which will lie outside any given specification. The six greyed graphs below are, sequentially, the distributions of the average of 1, 2, 3, 4, 5, and 6 independent identically Uniformly distributed random variables on [−1, 1]. The full line is the Normal distribution which has the same variance as the average. It can be seen that even the average of only three Uniform distributions gives a result surprisingly close to Normal.
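The convergence described above is easy to observe numerically. The following Python sketch (sample sizes and thresholds are our choices) simulates the average of three Uniform[−1, 1] variables and compares a tail probability with that of the matching normal:

```python
import math
import random

# Average of k independent Uniform[-1, 1] variables: the variance of one
# uniform is (1 - (-1))**2 / 12 = 1/3, so the average has variance 1/(3k).
rng = random.Random(1)
k, n = 3, 200_000
avgs = [sum(rng.uniform(-1, 1) for _ in range(k)) / k for _ in range(n)]

sd = math.sqrt(1 / (3 * k))           # matching normal, as in the figure
p_normal = 0.5 * (1 + math.erf(0.2 / (sd * math.sqrt(2))))
p_sim = sum(a <= 0.2 for a in avgs) / n
assert abs(p_sim - p_normal) < 0.02   # already close to normal at k = 3
```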
Problem 2.16 Inconsistency?
If x = y in the formula for the variance of a difference, that is, z = x − y = 0, does this imply ν_z = 2ν_x = 2ν_y?
Problem 2.17 Tolerance buildup
An assembly is made up of several components whose nominal lengths are L_1, L_2, L_3, and L_4. It is important that the distance L = L_1 + L_2 − (L_3 + L_4) be kept within specification limits.
1. Write down a formula for the standard deviation σ_L as a function of σ_1, σ_2, σ_3, and σ_4.
2. What can be said about the type of distribution that L has?
Problem 2.18 Springs in parallel
An assembly contains two springs of stiffness K_1 and K_2 connected in parallel. Their stiffness distributions have moments
K_1: {µ_1 = 500 N/m, ν_1 = 144 (N/m)^2, s_1 = 10 (N/m)^3}
K_2: {µ_2 = 300 N/m, ν_2 = 25 (N/m)^2, s_2 = 10 (N/m)^3}.
1. Calculate the mean, variance and skew of the distribution of the overall stiffness K of the system.
2. Is the resulting distribution symmetric?
Problem 2.19 Counterweights
Two designs are proposed for a sensitive counterweight. Design A utilizes 4 spheres each of mass m. Design B utilizes two spheres each of mass 2m.
Assuming that the coefﬁcient of variation of the mass of each of the spheres is the same, determine the ratio of the standard deviation of the total mass of Design A to that of Design B.
Problem 2.20 Algenon and Biggles
Bricks are manufactured with heights of a given mean and variance. Algenon Ant climbs straight up a vertical stack of N bricks (no mortar). His brother Biggles (a little disoriented) goes straight up and down the first brick for the same number of brick traverses (hence covering the same mean distance).
Determine the ratio of the standard deviation of Biggles' journey to the standard deviation of Algenon's journey.
Problem 2.21 Machine support
Suppose that a machine is to be supported with a number of springs of the same nominal stiffness, and that the overall stiffness of the spring assembly is to be within a given tolerance of a fixed target value K. Suppose also that all the springs in the assembly have the same percentage tolerance on their stiffness no matter what size they are. That is, the stiffnesses of the springs have the same coefficient of variation.
Discuss the inﬂuence of the number of springs in the assembly on the variability of its stiffness.
Problem 2.22 Moon lander
In a design analysis of a suspension system for a moon lander, it has been determined that the overall damping coefficient of the system is a critical quality variable, and should be held within tight specifications. Suppose that the shock absorbers are linear over their range of application, and that the standard deviation of the damping coefficient of each shock absorber is a fixed fraction α of its mean value for any size shock absorber. Suppose also that there are two systems proposed: System F with 4 parallel shock absorbers, and System G with 16 parallel shock absorbers, where both systems have the same total mean damping coefficient. (You may assume that the damping coefficients are additive.)
Determine the ratio of the standard deviation of the overall damping coefficient of assembly G to that of assembly F.
2.12 RELIABILITY
The reliability R of a system is the probability that the system will perform as expected.
The unreliability Q of a system is the probability that the system will fail to perform as expected.
R + Q = 1

Remark on terminology:
The term reliability is commonly used to refer to the probability of failure of one item due to degradation over time. It is not generally used for the general conformance to specification of the product coming off the end of a production line. For simplicity, however, we will often use the term “reliability” to mean “probability of conforming to specification”. Thus the reliability of a mass-produced product is equivalent to the proportion of the product within specification.
2.12.1 Operating windows
Many systems (products, designs) depend for their correct performance on the values of their quality variables (design parameters or functions of design parameters) remaining within given bounds, limits, or tolerances. This leads to viewing these bounds as the frame of an operating window.
A design speciﬁcation might read something like:
“The parameter x must lie in the range x _{L} to x _{U} ”.
Since x will usually have a distribution of values it is most likely that not all values will lie in this range.
If a product’s function depends only on the single parameter x, then
its reliability is the probability that x lies in the range x _{L} to x _{U} . That is
R = Pr (x _{L} ≤ x ≤ x _{U} )
and this is represented by the area under the probability density function which can be seen through the operating window. Conversely the unreliability is the area under the curve outside the window.
OPERATING WINDOW
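For a normally distributed parameter, the area visible through the window follows directly from the normal CDF. A minimal Python sketch (the helper name `reliability` and the numbers are ours):

```python
import math

def reliability(mu, sd, x_lower, x_upper):
    """R = Pr(x_L <= x <= x_U) for a normally distributed parameter x.

    This is the area of the density visible through the operating window.
    """
    phi = lambda t: 0.5 * (1 + math.erf((t - mu) / (sd * math.sqrt(2))))
    return phi(x_upper) - phi(x_lower)

# A window at +/- 3 standard deviations passes the familiar 99.73 %.
R = reliability(mu=50.0, sd=0.01, x_lower=49.97, x_upper=50.03)
assert abs(R - 0.9973) < 1e-4
```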
2.12.2 Example: Paper feeder operating window
Many photocopiers have paper feeders which use the frictional driving force of an elastomer-covered roll which sits on top of the paper stack. It is clear that if the normal force exerted by the roll on the paper is too low, the paper will not move. This failure mode is called a misfeed. Conversely, if the normal force is too high, more than one sheet will be driven forward. This failure mode is called a multifeed. The normal force thus becomes a quality variable which must be kept within defined upper and lower specification limits. Considering the normal force as a random variable, the proportion of its probability density function that we can see through the window frame formed by the upper and lower specification limits is the reliability of the feeder for the normal force failure modes.
2.12.3 Supply and demand
Another type of system depends for its correct performance on the demand x being less than the supply x _{s} .
R = Pr (x < x _{s} )
where both x and x _{s} are independent random variables.
MARGIN OF SAFETY
“Supply and demand” here should be taken in the most general sense of any imposed physical variable: force, stress, deflection, temperature, time, flowrate, …
The margin of safety is defined by y = x_s − x. Hence the reliability may be written
R = Pr (y > 0)
The operating window for y is then 0 ≤ y for a margin of safety problem.
2.12.4 Estimation of the reliability
1. Calculate the mean and variance of y.
2. If the type of distribution for y is known, use a formula or table for its cumulative distribution function to calculate the area 0 ≤ y. Otherwise use a table for the normal distribution as follows:
3. Calculate the distance z of the mean of y from zero in units of the standard deviation of y.
4. Read off the required probability (reliability or unreliability) in the table.
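The four steps can be sketched for the common case where both supply and demand are normal, so that y is normal as well. A Python illustration (the function name and example values are ours):

```python
import math

def margin_reliability(mu_s, var_s, mu_d, var_d):
    """Steps 1-4 for normal supply x_s and demand x, with y = x_s - x."""
    mu_y = mu_s - mu_d                           # step 1: mean of y
    var_y = var_s + var_d                        # step 1: variance (difference formula)
    z = mu_y / math.sqrt(var_y)                  # step 3: reliability index
    R = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # step 4: normal table lookup
    return z, R

# Strength N(105, 9) against load N(80, 16): mu_y = 25, sd_y = 5, z = 5.
z, R = margin_reliability(105, 9, 80, 16)
assert z == 5.0
assert abs((1 - R) - 2.87e-7) < 1e-8   # about 0.3 failures per million
```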
The parameter z (measured in standard deviations of y) is often called the reliability index or safety index. It can be seen from the table below that the reliability is quite sensitive to small changes in z for z greater than about 2. A doubling of z from 2.4 to 4.8 decreases the probability of failure by a factor of approximately 10 000! You can visualize this geometrically by imagining what happens to the area of the distribution for y ≤ 0 as you shift the distribution to the right.
Table 2: Probability of failure versus reliability index for a Normal Distribution

RELIABILITY INDEX Z    UNRELIABILITY Q PER MILLION
0.00                   500 000
0.67                   250 000
1.00                   160 000
1.28                   100 000
1.65                    50 000
2.33                    10 000
3.10                     1 000
3.72                       100
4.25                        10
4.75                         1
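The table rows, and the sensitivity claim above, can be reproduced from the normal tail probability. A Python sketch (assuming the normal model throughout; rounding is ours):

```python
import math

# Unreliability per million for a given reliability index z (normal case):
# Q = 1 - Phi(z), expressed per 10^6, computed via the complementary
# error function to avoid cancellation in the far tail.
q_ppm = lambda z: 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

# Two rows of the table (to the table's rounding) and the sensitivity claim.
assert round(q_ppm(3.10)) == 968                  # table: ~1 000 per million
assert round(q_ppm(4.75)) == 1                    # table: 1 per million
assert 9_000 < q_ppm(2.4) / q_ppm(4.8) < 11_000   # doubling z: ~10^4 gain
```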
Problem 2.23 Bolt strength reliability
A production run of bolts has a normally distributed ultimate tensile strength with mean 100 MN and standard deviation 2 MN.
The applied load is expected to be normally distributed with mean 90 MN and standard deviation 4 MN.
What proportion may be expected to fail?
Problem 2.24 Buoyancy force reliability
A design calculation predicts that the buoyancy force B acting on a sonar device is normally distributed with mean 800 N and standard deviation 24 N, and that the weight force W is normally distributed with mean 800 N and standard deviation 8 N. The sonar device fails to operate as intended if: a) it sinks, or b) its buoyancy force exceeds its weight force by more than 32 N.
Calculate the probability of failure to operate as intended (to 3 decimal places).
Problem 2.25 Bearing ﬁt
A mass-produced bearing of a journal bearing has a normally distributed diameter with mean 50 mm and tolerance ±0.03 mm. The journal has a normally distributed diameter with mean 49.9 mm and tolerance ±0.03 mm. The assembly fails if (a) the journal will not fit in the bearing, or (b) the diametral clearance is greater than 0.1 mm.
1. Estimate the proportion of assemblies that might be expected to fail if the manufacturer is very inexperienced in this area of manufacture.
2. Estimate the proportion of assemblies that might be expected to fail if the manufacturer is highly experienced in this area of manufacture.
Problem 2.26 Shaft failure
A production run of shafts has a normally distributed failure torque
with mean 100 kNm and variance 9 (kNm) ^{2} . The applied torque is expected to be normally distributed with mean 80 kNm and variance 16 (kNm) ^{2} .
What proportion of product may be expected to fail? (Express your answer as number of failures per million)
Problem 2.27 Fitting of car doors and windshields
Discuss the potential application of the margin of safety concept to the fitting together of components in the automobile industry, for example, doors and windshields.
2.13 PRODUCTS OF RANDOM VARIABLES
To develop formulae for the moments of products of independent random variables we use the fact that if x, y, … are any independent random variables, then

E[x y …] = E[x] E[y] …

The formulae developed below will be exact independent of the type of distribution to which the random variables belong. The formulae are used by calculating the mean, variance and skew in succession. Suppose z is a product of any number of independent random variables x_i: z = x_1 x_2 x_3 …
2.13.1 The mean of a product
The mean of a product is a direct application of the formula above.

z = x_1 x_2 x_3 …
E[z] = E[x_1] E[x_2] E[x_3] …
µ_z = µ_x1 µ_x2 µ_x3 …

The mean of a product of independent random variables is simply the product of their means.
2.13.2 The variance of a product
The variance of a product is obtained by taking the expectation of the square of z:

z^2 = x_1^2 x_2^2 x_3^2 …
E[z^2] = E[x_1^2] E[x_2^2] E[x_3^2] …
(µ_z^2 + ν_z) = (µ_x1^2 + ν_x1) (µ_x2^2 + ν_x2) (µ_x3^2 + ν_x3) …

To calculate the variance ν_z of a product of independent random variables, first compute the product on the right-hand side of the equation above and then subtract the square of the mean µ_z^2 calculated previously.
2.13.3 The skew of a product
z^3 = x_1^3 x_2^3 x_3^3 …
E[z^3] = E[x_1^3] E[x_2^3] E[x_3^3] …
(µ_z^3 + 3 µ_z ν_z + s_z) = (µ_x1^3 + 3 µ_x1 ν_x1 + s_x1) (µ_x2^3 + 3 µ_x2 ν_x2 + s_x2) (µ_x3^3 + 3 µ_x3 ν_x3 + s_x3) …

To calculate the skew s_z of a product of independent random variables, first compute the product on the right-hand side of the equation above and then subtract the term µ_z^3 + 3 µ_z ν_z calculated from the previous steps. Higher moments are calculated in a similar fashion.
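The three procedures above chain together naturally. A Python sketch (the helper name `product_moments` is ours) computes the mean, variance and skew of a product from each factor's moments; note that a product of symmetric factors need not itself be symmetric:

```python
def product_moments(factors):
    """Mean, variance, skew of z = x1 * x2 * ... for independent x_i.

    `factors` is a list of (mu, nu, s) triples; we accumulate E[z],
    E[z^2], E[z^3] and then subtract the lower-order terms as in the text.
    """
    mu = var = sk = 1.0
    for m, v, s in factors:
        mu *= m
        var *= m**2 + v               # E[x^2] = mu^2 + nu
        sk *= m**3 + 3 * m * v + s    # E[x^3] = mu^3 + 3 mu nu + s
    var -= mu**2                      # nu_z = E[z^2] - mu_z^2
    sk -= mu**3 + 3 * mu * var        # s_z = E[z^3] - mu_z^3 - 3 mu_z nu_z
    return mu, var, sk

# Two symmetric (zero-skew) factors: (mu=10, nu=1) and (mu=5, nu=4).
mu, var, sk = product_moments([(10, 1, 0), (5, 4, 0)])
assert mu == 50
assert var == (100 + 1) * (25 + 4) - 2500   # = 429
assert sk == 1200                           # the product is skewed!
```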
Problem 2.28 Volume of a cube
Calculate the mean and variance of the volume of a cube of side L where the sides are machined independently by the same machining process and are therefore considered to be identically distributed independent random variables each with mean µ and variance ν.
Comment on how this calculation differs from one based simply on the formula V = L^3 (see below).
2.14 POSITIVE INTEGER POWERS
2.14.1 The mean of a positive integer power
We have already derived the formula for the expectation of a positive integer power of a random variable in terms of central moments. Since the expectation gives the mean value, we have immediately that for z = x^n:
µ_z = Σ_{i=0}^{n} C(n, i) E[(x − µ_x)^i] µ_x^(n−i)
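The formula can be evaluated directly once the central moments E[(x − µ_x)^i] are known. A Python sketch (the helper name is ours), checked against the normal-distribution tables that follow:

```python
from math import comb

def power_mean(n, mu, central):
    """mu_z for z = x^n via the binomial expansion about mu_x.

    `central[i]` = E[(x - mu)^i], so central[0] = 1 and central[1] = 0.
    """
    return sum(comb(n, i) * central[i] * mu**(n - i) for i in range(n + 1))

# Normal x with mean mu, variance v: central moments are 1, 0, v, 0, 3v^2, 0.
mu, v = 2.0, 0.25
central = [1, 0, v, 0, 3 * v**2, 0]
assert power_mean(2, mu, central) == mu**2 + v            # Table 3: mu^2 (1 + u)
assert power_mean(3, mu, central) == mu**3 + 3 * mu * v   # Table 4: mu^3 (1 + 3u)
u = v / mu**2
assert abs(power_mean(4, mu, central) - mu**4 * (1 + 6*u + 3*u**2)) < 1e-12
```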
2.14.2 Tables for a Normally distributed random variable
Since the central moments of a normal distribution can all be expressed in terms of its mean and variance (see the listing in the section on the Normal distribution), its powers can therefore be expressed via the above formula in terms of them also. The tables below thus give exact formulae for calculating the mean, variance and skew of positive integer powers of a normally distributed random variable x with mean µ and variance ν.
The entries in the table are expressed in the form (first order approximation) × (1 + terms in the variance ratio u), where the variance ratio u has been defined as the square of the coefficient of variation, u = ν/µ^2.
Table 3: Moments of a square, z = x^2

µ_z = µ^2 (1 + u)
ν_z = 4 µ^2 ν (1 + u/2)
s_z = 24 µ^2 ν^2 (1 + u/3)
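Table 3 can be cross-checked by direct expansion: writing x = µ + e with e zero-mean normal, the odd moments of e vanish and E[e^4] = 3ν², E[e^6] = 15ν³ give ν_z = 4µ²ν + 2ν² and s_z = 24µ²ν² + 8ν³, which are the table entries rewritten with u = ν/µ². A small numerical confirmation (example values ours):

```python
# Exact moments of z = x^2 for normal x (mean mu, variance v):
#   nu_z = 4 mu^2 v + 2 v^2,   s_z = 24 mu^2 v^2 + 8 v^3.
# These match Table 3 once rewritten with the variance ratio u = v / mu^2.
mu, v = 3.0, 0.5
u = v / mu**2
nu_exact = 4 * mu**2 * v + 2 * v**2
s_exact = 24 * mu**2 * v**2 + 8 * v**3
assert abs((mu**2 + v) - mu**2 * (1 + u)) < 1e-12            # mean row
assert abs(nu_exact - 4 * mu**2 * v * (1 + u / 2)) < 1e-12   # variance row
assert abs(s_exact - 24 * mu**2 * v**2 * (1 + u / 3)) < 1e-12  # skew row
```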
Table 4: Moments of a cube, z = x^3

µ_z = µ^3 (1 + 3u)
ν_z = 9 µ^4 ν (1 + 4u + (5/3)u^2)
s_z = 162 µ^5 ν^2 (1 + (16/3)u + 5u^2)
Table 5: Moments of a fourth power, z = x^4

µ_z = µ^4 (1 + 6u + 3u^2)
ν_z = 16 µ^6 ν (1 + (21/2)u + 24u^2 + 6u^3)
s_z = 576 µ^8 ν^2 (1 + 16u + (149/2)u^2 + 99u^3 + 33u^4)
Table 6: Moments of a fifth power, z = x^5

µ_z = µ^5 (1 + 10u + 15u^2)
ν_z = 25 µ^8 ν (1 + 20u + 114u^2 + 180u^3 + (189/5)u^4)
s_z = 1500 µ^11 ν^2 (1 + (97/3)u + 366u^2 + 1710u^3 + 2997u^4 + 1323u^5)
Problem 2.29 Inconsistency?
If µ = 0, does this imply that µ _{z} , ν _{z} , s _{z} are all zero?
2.15 GENERAL FUNCTIONS
2.15.1 Introduction
We complete our introductory discussion of probabilistic design by describing a method (called the Moment Analysis Method) by which you can calculate the moments of any differentiable function of independent random variables, and hence get an estimate of the variability inherent in a given design. Indeed, all the formulae we have introduced so far may be derived by this method.
The basic principle of the Moment Analysis Method is the specification of each probability distribution by its set of moments in the form {mean, variance, skew, kurtosis, …}. Then, if we wish to calculate a function of several random variables, the moments of that function will be functions of the moments of those several random variables.
The two techniques that we will use are
1. Expansion of the function in a Taylor series
2. Application of the Expectation operator to the series
2.15.2 The basic algorithm
Calculation of the mean
1. Expand the function z = g(x, y, …) as a Taylor series about the mean values (µ_x, µ_y, …) of the independent random variables.
2. Calculate the mean of the function (µ_z) by calculating the expectation of the terms in the expansion.
Calculation of the nth central moment
1. Expand the function [z − µ_z]^n as a Taylor series about the mean values (µ_x, µ_y, …) of the independent random variables.
2. Calculate the nth central moment of the function (ν_z, s_z, k_z, …) by calculating the expectation of the terms in the expansion.
3. Calculate µ_z and substitute for it in the expression.
2.15.3 The theoretical foundation
Assumptions
The fundamental assumptions upon which the method is based are:
1. The random variables x, y, … are independent. (Very important!)
2. The pertinent information content of each of the distributions is sufficiently well represented by a finite (small) number of moments.
3. The function is sufficiently well represented by a finite (small) number of terms of its Taylor series.
Formulae
The fundamental formulae upon which the method is based are:
1. The mean µ_z of a function z = g(x, y, …) is the expectation of the function.
2. The nth central moment µ_(n)z of a function z = g(x, y, …) is the expectation of (z − µ_z)^n.
3. The expectation of a linear sum is the sum of the expectations of the terms.
4. The expectation of a product of independent random variables is the product of their expectations.
2.15.4 How to write down a Taylor series
In this section we discuss a mnemonic method for easily writing down a Taylor series expansion of a function of several variables.
Suppose you have a function z = g(x, y, …) and you wish to write down the Taylor series for z expanded about the point x = µ_x, y = µ_y, …. A simple mnemonic way of doing this is as follows:
1. Write down the power series for exp(X+Y+…):
1 + (X+Y+…) + (1/2!)(X+Y+…)^2 + (1/3!)(X+Y+…)^3 + …
2. Expand the terms:
1 + (X+Y+…) + (1/2!)(X^2+2XY+Y^2+…) + (1/3!)(X^3+3X^2Y+3XY^2+Y^3+…) + …
3. Make the following replacements:
1 → [z]_µ
X^n → [∂^n z/∂x^n]_µ (x − µ_x)^n
X^n Y^m → [∂^(n+m) z/∂x^n ∂y^m]_µ (x − µ_x)^n (y − µ_y)^m
and so on for products of more than two variables. Remember that the notation […]_µ means that the bracketed function is evaluated at the point x = µ_x, y = µ_y, ….
2.15.5 How to write down the expectation of a function
Again suppose you have a function z = g(x, y, …) and you wish to write down an expression for the expectation E[z] of z. The normal procedure for doing this is:
1. Write down the Taylor series with x _{0} = µ _{x} , y _{0} = µ _{y} , …
2. Apply the expectation operator to the series, remembering its properties when it acts on a constant, a linear sum, and a product of independent random variables.
3. Make the following replacements:
E[x − µ_x] → 0
E[(x − µ_x)^n] → µ_(n)x
The resulting expression is a series expressing E[z] in terms of the moments of x, y, …. If the series terminates the expression will be exact. A polynomial function, for example, will terminate.
2.15.6 The shortest way to write down the expectation
It may be somewhat shorter to first simplify the terms in our original mnemonic expansion 1 + (X+Y+…) + (1/2!)(X^2+2XY+Y^2+…) + … before replacing them with their corresponding terms in the Taylor series expansion. We list possible simplification rules below, and illustrate them with the example of a function of two variables for which we know only their means and variances:
1 + (X+Y) + (1/2!)(X^2+2XY+Y^2) + (1/3!)(X^3+3X^2Y+3XY^2+Y^3)
  + (1/4!)(X^4+4X^3Y+6X^2Y^2+4XY^3+Y^4)
  + (1/5!)(X^5+5X^4Y+10X^3Y^2+10X^2Y^3+5XY^4+Y^5) + …

1. Any term involving a variable to the first power is zero, since the expectation E[x − µ_x] is zero.
1 + (1/2!)(X^2+Y^2) + (1/3!)(X^3+Y^3) + (1/4!)(X^4+6X^2Y^2+Y^4) + (1/5!)(X^5+10X^3Y^2+10X^2Y^3+Y^5) + …

2. Any term involving a higher power leading to a moment for which you have no information must be omitted. In this example we only know means and variances, hence the expression reduces to
1 + (1/2!)(X^2+Y^2) + (1/4!)(6X^2Y^2)

3. If the coefficients resulting from the higher derivatives in the expansion are small enough compared to those resulting from the lower ones, the corresponding terms may be neglected. This is often the case for functions which are not too far off linear in the region near the point µ = (µ_x, µ_y). In this example we would look at the comparative size of the X^2Y^2 term. Assuming the term can be neglected, the expression reduces to

1 + (1/2!)(X^2+Y^2)
leading ﬁnally to a general second order approximation for µ _{z} :
µ_z = g(µ_x, µ_y) + (1/2)([∂^2 z/∂x^2]_µ ν_x + [∂^2 z/∂y^2]_µ ν_y)
It is evident from this process that the same form is valid for any number of variables.
µ_z = g(µ_x, µ_y, …) + (1/2)([∂^2 z/∂x^2]_µ ν_x + [∂^2 z/∂y^2]_µ ν_y + …)
Note carefully that the mean of a general function is only equal to the function of the means as a ﬁrst order approximation.
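The second-order mean formula can be sketched numerically by replacing the analytic second derivatives with finite differences. This is an illustrative Python sketch (the function name and step size are our choices), not the text's method verbatim:

```python
def approx_mean(g, mus, nus, h=1e-4):
    """Second-order approximation to the mean of z = g(x, y, ...).

    Inputs are the means and variances of the independent arguments;
    each [d^2 g / dx_i^2]_mu is taken by a central finite difference.
    """
    mu_z = g(*mus)
    for i, (m, v) in enumerate(zip(mus, nus)):
        lo = list(mus)
        hi = list(mus)
        lo[i] -= h
        hi[i] += h
        d2 = (g(*hi) - 2 * g(*mus) + g(*lo)) / h**2
        mu_z += 0.5 * d2 * v
    return mu_z

# For g(x) = x^2 the Taylor series terminates, so the result is exact:
# mu_z = mu^2 + nu, in agreement with Table 3.
assert abs(approx_mean(lambda x: x**2, [10.0], [0.04]) - 100.04) < 1e-4
```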
2.15.7 Calculation of the variance of a function
As an example of the method described for calculating higher order moments of a differentiable function of random variables, we will calculate an expression for the second order approximation to the variance of z = g(x, y, …), that is, E[(z − µ_z)^2].
1. Since (z − µ_z)^2 is still a function of x, y, …, we can let Z = (z − µ_z)^2. E[(z − µ_z)^2] then becomes µ_Z, which we can write down directly from the result derived in the section above: