
2 PROBABILISTIC DESIGN

2.1 INTRODUCTION

In the design of a product for mass-production we are faced with the challenge that every item produced will be different. These differences will be slight to the casual observer, but may combine in the individual items to give vastly different performance characteristics, and thus impact the perceived quality of the product. These differences are caused by, among many other things, drift in machine settings, batch variability in material properties, and operator input.

The value of each design parameter embodied in any item is therefore likely to be different from the value in any other item. If we measure the values of a design parameter (a length, say) in all the items in a production run we will get data on the frequency of occurrence of the values of the parameter. If there are sufficient data values we can rescale the frequency to give a probability. Design parameters may thus be viewed as random variables.

Most physical variables used in engineering design are in fact random variables. Standard calculations are really calculations with their mean values. If we are interested in the possible range of values our result might have, then we must use more information in our calculation algorithm than the mean values alone.

The classical approach to design is to apply safety factors to each design parameter to allow for uncertainties. If the design is complex, these safety factors can compound to cause overdesign with an uncertain reliability. And in some important cases, where there is an upper and lower specification or functional limit, the safety factor method cannot be used at all.

Probabilistic design studies how to make calculations with the probability distributions of the design parameters, instead of the nominal or mean values only. This allows the designer to design for a specific reliability or specification conformance, and hence maximize safety, quality and economy.

Design parameters are usually independent random variables. Each type of parameter will have a distribution. Common distributions for design parameters are the normal, lognormal, Poisson, uniform, triangular, exponential and Weibull distributions.


2.2 TYPES OF PROBABILITY DISTRIBUTIONS

Described briefly below are the types of probability distributions more commonly found in engineering.

2.2.1 Normal

• The distribution is symmetric and bell-shaped

• The variable may itself be the sum of a large number of individual effects.

Example: Heights of the adult male population.

Example: Dimension of a fabricated part.

2.2.2 Lognormal

• The variable can increase without bound, but is limited to a finite value at the lower limit.

• The distribution is positively skewed (most of the values being closer to the lower limit).

• The logarithm of the variable yields a normal distribution.

Example: Real estate values, river flow rates, strengths of materials, fracture toughness.


2.2.3 Weibull

• A distribution possessing three parameters enabling it to be adjusted to cover all stages of the “bathtub” reliability curve.

• A shape parameter of 1 gives an exponential distribution. A shape parameter of 3.25 gives an approximation to the normal.

• Finds principal application in situations involving wear, fatigue and fracture.

Example: Failure rates, life-time expectancies.

2.2.4 Exponential

• Describes the amount of time between occurrences.

• Complements the Poisson distribution (which describes the number of occurrences per unit time).

Example: Time between telephone calls.

Example: Mean time between failures.

2.2.5 Triangular

• Used when the only information known is the minimum, the most likely, and the maximum values of a variable.

Example: Item costs from different suppliers or future estimation.

2.2.6 Uniform

• All values between the minimum and maximum are equally likely.

Example: A number from a random number generator.

2.2.7 Poisson (discrete)

• Describes the number of times an event occurs in a given interval.

• The number of possible occurrences in the interval is not limited.

• The occurrences are independent.

• The average number of occurrences is ﬁxed.

Example: Number of telephone calls per minute.


Example: Number of errors per page in a document.

Example: Number of defects per square metre in sheets of steel.

2.2.8 Binomial (discrete)

Describes the number of successes in a ﬁxed number of trials.

• For each trial only two outcomes are possible - success or failure.

• The trials are independent.

• The probability of success remains the same from trial to trial.

Example: Number of heads in ten tosses of a coin.

Example: Number of defective items in a given batch, given that the average rate of producing defectives is known.

2.2.9 Geometric (discrete)

Describes the number of trials until the ﬁrst successful occurrence.

• The number of trials is not fixed; trials continue until the first success.

• The probability of success is the same from trial to trial.

Example: Number of times to spin a roulette wheel before you win.

Example: Number of wells you would dig before the next gusher.

2.2.10 Custom

Used to describe a unique situation that cannot be described by any of the standard distributions.

• The area under the curve should equal 1.

2.2.11 Comparison of distributions

• Poisson: Number of times an event occurs in a given interval.

• Exponential: Interval until next occurrence of event.

• Binomial: Number of successes in a ﬁxed number of trials.

• Geometric: Number of trials until the next success.

• Large number of trials: Binomial approaches normal.
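The last comparison can be checked numerically. The sketch below (the parameter choices n = 200 and p = 0.5 are illustrative, as are the helper names) compares the exact binomial probabilities with the normal density of matching mean and standard deviation:

```python
import math

def binom_pmf(n, p, k):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    """Normal density with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

n, p = 200, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1.0 - p))

# Largest gap between the exact pmf and the approximating normal density.
max_gap = max(abs(binom_pmf(n, p, k) - normal_pdf(k, mu, sigma)) for k in range(n + 1))
print(max_gap)  # tiny compared with the peak probability ≈ 0.056
```

For smaller n the gap grows, which is the sense in which the binomial only "approaches" normal.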


2.3 DESCRIBING PROBABILITY DISTRIBUTIONS

2.3.1 Types of description

The descriptions we will use for a probability distribution include its parameters, its probability density function (pdf), its cumulative distribution function (cdf), and its set of moments.

• The parameters of a given type of distribution are the mathematical parameters in the formula for the distribution (not to be confused with design parameters).

• The probability density function describes the basic shape and location of the distribution. The graphs shown in the previous section are probability density functions.

• The cumulative distribution function allows us to read off the area under the probability density function in a given range. This area represents the probability that the random variable will lie in this range.

It is the probability density function, the parameters which describe it, and the first few of its moments that will be of most use to us in probabilistic design. For brevity we may use the terms "pdf", "distribution", or "density function" instead of "probability density function". Since the distribution is the main description of a random variable we will sometimes use the terms interchangeably.

2.3.2 Properties of probability density functions

In order to be called a probability density function, a function must have the following properties: (Any function that looks like a blob of goo on the axis is probably a good candidate)

• It is indeed a function (no undercuts)

• The area between it and the axis is unity

The support of the probability density function is the domain of the random variable over which the function is deﬁned. The full deﬁnition of a probability density function comprises the speciﬁcation of its formula and its support.
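As a quick numeric illustration of these properties, the sketch below checks a candidate density for non-negativity and unit area over its support (the triangular function, its limits, and the grid size are illustrative choices of ours):

```python
def tri_pdf(x, a=1.0, c=2.0, b=3.0):
    """Triangular density: lower limit a, mode c, upper limit b (illustrative values)."""
    if x < a or x > b:
        return 0.0
    if x <= c:
        return 2.0 * (x - a) / ((b - a) * (c - a))
    return 2.0 * (b - x) / ((b - a) * (b - c))

def trapezoid_area(f, lo, hi, n=10_000):
    """Area under f on [lo, hi] by the trapezoidal rule."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return h * total

area = trapezoid_area(tri_pdf, 1.0, 3.0)
print(area)  # ≈ 1
```

Any "blob of goo" that fails the unit-area test can be rescued by dividing it by its own area, which is how custom distributions are normalized.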

July 12, 2000 11:53 am

4ProbabilisticDesign

Page 5 of 52

2 - 6

PROBABILISTIC DESIGN

2.3.3 Functions of random variables

Functions of random variables are central to the design of products for quality and reliability. Since the performance of a product is generally a function of its design parameters, and the design parameters are random variables, the performance is a function of random variables, and is thus itself a random variable.

A central tool in the design of quality products, therefore, is the ability to calculate functions of random variables.

2.3.4 Notation

Generally we will denote the probability density function of a random variable x by f(x). However, when we are considering a function z = g(x) we will distinguish the two probability density functions by denoting them f_x(x) and f_z(z).

2.3.5 In sum

Design parameters and the quality variables which depend on them are most often random variables, which we describe by probability density functions.

Problem 2.1 Uniformly distributed design parameter

A design parameter is a random variable uniformly distributed between 1 and 3. Sketch its probability density function and its cumulative distribution function, showing pertinent values on the axes.

2.4 GRAPHICAL FUNCTIONS OF A RANDOM VARIABLE

In this section we shall describe the concept of a function of a random variable in graphical terms.

The most important attribute of a function when applied to a random variable is whether it has an inverse over the support of the density function of the random variable. If it does, then it is straightforward to compute the function of the random variable. If not, then the function must be broken up into pieces so that each piece has an inverse, the transformation associated with each piece applied, and the results summed.

2.4.1 The concept

Suppose x is a random variable with probability density function f_x(x), and that z = g(x) is an invertible function of x over the support of f_x(x). The central concept is as follows:

The probability of x being in the interval [x_1, x_2] is equal to the probability that z is in the interval [z_1, z_2] = [g(x_1), g(x_2)]. Geometrically, this is equivalent to saying that the area under the probability density function of x in the interval [x_1, x_2] is equal to the area under the probability density function of z in the interval [g(x_1), g(x_2)].

2.4.2 The fundamental formula

Equating the two probabilities (areas) we obtain

A = |f_z(z) dz| = |f_x(x) dx|

=> f_z(z) = f_x(x) / |dz/dx|

Note that because the probability (area) is always positive, the same relationship will exist whether the gradient of the function g(x) is positive or negative. Hence we always take the absolute value of the derivative dz/dx.
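The formula is easy to see at work by simulation. The sketch below (the choice z = exp(x) with x Uniform on [0, 1] is an illustrative example of ours) checks that a probability computed from the transformed density f_z(z) = f_x(ln z)/z = 1/z matches the simulated frequency:

```python
import math, random

random.seed(1)

# x is Uniform(0, 1) and z = g(x) = exp(x); the fundamental formula gives
# f_z(z) = f_x(ln z) / |dz/dx| = 1/z on [1, e].
# So P(z <= t) should equal the integral of 1/z from 1 to t, i.e. ln(t).
N = 200_000
t = 2.0
mc_prob = sum(math.exp(random.random()) <= t for _ in range(N)) / N
analytic_prob = math.log(t)
print(mc_prob, analytic_prob)  # both ≈ 0.693
```

The same comparison can be repeated for any invertible g by swapping in its inverse and derivative.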

2.4.3 Dimensional considerations

Probability is dimensionless. However, x and z may have (different) dimensions (units) [x] and [z], say. The probability density functions f_x(x) and f_z(z) must then have dimensions 1/[x] and 1/[z] respectively. This fact is consistent with the formula above and may be used as a check on the correctness of any functional transformation.

2.4.4 Examples

This simple relationship between area elements of the two density functions may be used to perform a graphical determination of a function of a random variable. It is of course generally more accurate to determine the result analytically; however, it is useful to be able to visualize the process graphically.

A linear function through zero


A general linear function

A concave function


A convex function

Note that the lower gradient of the transformation function leads to a concentration of the probability in the corresponding region of the transformed probability density function.

Much of our success in the design of quality products will depend on our being able to tune the design to make use of these regions of low gradient, hence minimizing the variability of the design output distributions.

Problem 2.2 Sketching distributions

Sketch yourself a distribution and a transformation function. Sketch the shape of the resulting transformed distribution.

Problem 2.3 Sound output

The sound output of a product has been determined to follow a triangular distribution with mode 2 units, lower limit 1 unit and upper limit 3 units. Graphically determine the probability density function for the (natural) logarithm of the sound output.


2.5 ANALYTICAL FUNCTIONS OF A RANDOM VARIABLE

2.5.1 Deﬁnition of an invertible function

The process above is straightforward if, over the support of x, there is only one value of x for each value of z. Functions with this property are called invertible.

2.5.2 Non-invertible functions of a random variable

If the function is not invertible, the following process may be applied:

1. Break the function up into piecewise invertible pieces over intervals [x_i, x_j].

2. For each piece, follow the procedure below for an invertible function. The result will be valid over the interval [g(x_i), g(x_j)] (or [g(x_j), g(x_i)], whichever is in the correct order), and zero outside of it.

Functions which are constant (flat) over an interval give rise to a discrete spike (a point mass) in the probability density function of z.

2.5.3 Examples of invertible and non-invertible functions

Invertible functions

Non-invertible functions

2.5.4 Invertible functions of a random variable

The procedure for calculating an invertible function of a random variable is as follows:

Given:

A. A probability density function: f_x(x), x_1 ≤ x ≤ x_2
B. A transformation function: g(x)

1. Calculate dz/dx from z = g(x).

2. Solve for x in terms of z to get x = g^-1(z) = h(z). (There should be only one solution since the function is invertible.)

3. Substitute h(z) for x in f_x(x) / |dz/dx| to get f_z(z).

4. Determine the new support: g(x_1) ≤ z ≤ g(x_2).
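The four steps can be wrapped up as a small numeric helper; the sketch below is our own (the names transform_pdf, h and dgdx are assumptions, not from the text), checked against the general linear transformation derived later in this section:

```python
def transform_pdf(f_x, g, h, dgdx, x1, x2):
    """Steps 1-4 for an invertible transformation z = g(x):
    return the density f_z and its support [g(x1), g(x2)]."""
    def f_z(z):
        x = h(z)                      # step 2: x = g^-1(z)
        return f_x(x) / abs(dgdx(x))  # step 3: f_x(x) / |dz/dx|
    lo, hi = sorted((g(x1), g(x2)))   # step 4: new support
    return f_z, lo, hi

# Check against the known linear case z = a x + b on a Uniform(0, 1) input,
# for which f_z(z) = f_x((z - b)/a) / |a|.
a, b = 2.0, 5.0
f_z, lo, hi = transform_pdf(lambda x: 1.0, lambda x: a * x + b,
                            lambda z: (z - b) / a, lambda x: a, 0.0, 1.0)
print(lo, hi, f_z(6.0))  # support [5, 7], density 1/|a| = 0.5
```

Any invertible g can be dropped in, provided its inverse h and derivative dgdx are supplied consistently.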

We apply this procedure to some simple cases below. We assume a general (undefined) pdf f_x(x) transformed by an invertible function g(x) which we can differentiate. The original probability density function is shown in light grey and the result of the function (or transformation) in darker grey.

Addition of a constant: [z = x + a]

1. dz/dx = 1

2. h(z) = z - a


3. f_z(z) = f_x(z - a)

4. x_1 + a ≤ z ≤ x_2 + a

Geometrically, the addition of a constant to a random variable simply gives another random variable all values of which are increased (displaced to the right) by that constant. Example: The conversion of a random temperature expressed in Celsius to one expressed in Kelvin.

Multiplication by a constant: [z = a x]

1. dz/dx = a

2. h(z) = z/a

3. f_z(z) = f_x(z/a) / |a|

4. a x_1 ≤ z ≤ a x_2

Geometrically, the multiplication of a random variable by a constant simply gives another random variable all values of which are multiplied (stretched to the right) by that constant. Example: The conversion of a dimension expressed in metres to one expressed in millimetres.

The general linear transformation: [z = a x + b]

1. dz/dx = a

2. h(z) = (z-b)/a

3. f_z(z) = f_x((z - b)/a) / |a|

4. a x_1 + b ≤ z ≤ a x_2 + b

Geometrically, a general linear function of a random variable produces both a shift and a change in scale. The form of the function remains the same. Example: The conversion of a random temperature expressed in Celsius to one expressed in Fahrenheit.

The exponential transformation: [z = e^x]

1. dz/dx = e^x

2. h(z) = ln z

3. f_z(z) = f_x(ln z) / |z|

4. exp(x_1) ≤ z ≤ exp(x_2)


Example: The conversion of a variable expressed on a logarithmic scale back to one expressed on a linear scale.

2.5.5 Example

Exponential transformation of a Uniform distribution

A. Probability density function: f_x(x) = 1/(b - a), a ≤ x ≤ b, a > 0
B. Transformation function: z = c exp(k x), where c and k are constants

1. Calculate dz/dx from z = c exp(k x):

dz/dx = k c exp(k x)

2. Solve for x in terms of z to get x = g^-1(z) = h(z):

x = h(z) = ln(z/c)/k

3. Substitute h(z) for x in f_x(x) / |dz/dx| to get f_z(z):

f_z(z) = (1/(b - a)) (1/|k c exp(k x)|)
       = (1/(b - a)) (1/|k c exp(k (ln(z/c)/k))|)
       = (1/(b - a)) (1/|k z|)

4. Determine the new support for f_z(z):

c exp(k a) ≤ z ≤ c exp(k b)
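A Monte Carlo check of this result (the particular values of a, b, c and k below are illustrative choices of ours) compares a simulated probability with the same probability obtained by integrating the derived density:

```python
import math, random

random.seed(2)

a, b, c, k = 1.0, 3.0, 0.5, 0.8   # illustrative parameter values
N = 200_000

# Simulate x ~ Uniform(a, b) and transform to z = c exp(k x).
z = [c * math.exp(k * (a + (b - a) * random.random())) for _ in range(N)]

# P(z <= z_mid) by simulation ...
z_lo, z_hi = c * math.exp(k * a), c * math.exp(k * b)
z_mid = 0.5 * (z_lo + z_hi)
mc = sum(v <= z_mid for v in z) / N

# ... and by integrating f_z(z) = 1/((b - a) k z) from z_lo to z_mid,
# which gives (ln(z_mid) - ln(z_lo)) / ((b - a) k).
analytic = (math.log(z_mid) - math.log(z_lo)) / ((b - a) * k)
print(mc, analytic)  # both ≈ 0.68
```

Note how more than half the probability sits below the midpoint of the support: the exponential transformation has skewed the flat input to the right.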

Problem 2.4 Sphere volume

A manufacturer makes spheres to meet a specification on the volume. The process is known to deliver spheres with their diameters normally distributed with mean 10 mm and standard deviation 1 mm.

1. Determine the formula for the probability density function of the volume.

2. Compare the true mean volume with the approximate mean volume calculated from (π/6) · 10³.

Problem 2.5 Alloy steel

The percentage x of an alloy in a steel is exponentially distributed with probability density function f_x(x) = a exp(-a x), 0 ≤ x < ∞, a constant.


The ultimate tensile strength of the steel, z, is logarithmically related to the percentage of alloy by z = log_e(x/b).

Derive the formula for the probability density function f_z(z) of the ultimate tensile strength, and state its support.

2.6 MOMENTS

Moments of a distribution are a way of summarizing the important characteristics of a distribution as single numbers, without having to cope with too much detail. The first few (lower order) moments are generally of most interest to us. An analogy might be the reduction of a vibration trace to its first few harmonics. More precise mechanical analogies are:

1. The mean is the centre of area of the distribution, summarizing the location properties of the distribution.

2. The variance is the second moment of area of the distribution about the mean, summarizing the way in which the area is spread over the object.

Because we generally lack detailed information about the probability density functions of our design parameters, we will usually be making our calculations with the first few moments, often just the mean and variance. Following are some definitions of moments and coefficients based on them.

2.6.1 (Non-central) moments

The nth (non-central) moment µ'_(n)x of a distribution f(x) about the origin is

µ'_(n)x = ∫ x^n f(x) dx

The first non-central moment is called the mean. The mean of a random variable x will be denoted µ_x, or simply µ where the context is clear. The mean is also the expectation of x, denoted E[x].


2.6.2 Central moments

The nth central moment µ_(n)x of a distribution f(x) about its mean µ is

µ_(n)x = ∫ (x - µ)^n f(x) dx

The first central moment of any distribution is zero. The second central moment is called the variance, denoted ν_x. The third central moment is called the skew, denoted s_x. The fourth central moment is called the kurtosis, denoted k_x. Since we will be dealing mostly with central moments, we will often refer to them simply as moments.
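As a numeric illustration of these definitions, the sketch below integrates (x - µ)^n f(x) for an exponential density with rate 1 (an illustrative choice of ours), whose variance, skew and kurtosis are known to be 1, 2 and 9:

```python
import math

def central_moment(f, mu, n, lo, hi, steps=20_000):
    """n-th central moment of density f by the trapezoidal rule on [lo, hi]."""
    h = (hi - lo) / steps
    g = lambda x: (x - mu) ** n * f(x)
    total = 0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, steps))
    return h * total

lam = 1.0
f = lambda x: lam * math.exp(-lam * x)   # exponential density (illustrative choice)
mu = 1.0 / lam                           # its known mean

var  = central_moment(f, mu, 2, 0.0, 50.0)   # second central moment: expect 1
skew = central_moment(f, mu, 3, 0.0, 50.0)   # third central moment: expect 2
kurt = central_moment(f, mu, 4, 0.0, 50.0)   # fourth central moment: expect 9
print(var, skew, kurt)
```

The positive skew value confirms numerically that the exponential's longer tail is to the right.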

2.6.3 Variance

The variance is, after the mean, the most important moment of a distribution. Its unit is the square of the unit of the random variable and hence is always positive. It measures the spread of the distribution. A zero variance thus implies a deterministic variable.

2.6.4 Skew

The skew is the next most important moment. Its unit is the cube of the unit of the random variable and hence may be positive or negative. A positively skewed distribution has its longer tail to the right. A negatively skewed distribution has its longer tail to the left. We will sometimes use the skew to test how valid it is to assume a given distribution is symmetric (and hence perhaps approximable by a Normal distribution).

2.6.5 Kurtosis

We include the kurtosis here mainly for completeness. Since the kurtosis measures the "squatness" of the distribution, it is useful for differentiating different types of symmetric distributions (for example the Normal and the Uniform). However, since most of the distributions we will be using are bell-shaped, we will not use the kurtosis much. It is always positive.


2.6.6 Standard deviation

The standard deviation of a distribution is the (positive) square root of the variance. The standard deviation has the same dimensions as the mean but it is the variance that is the more fundamental quantity. The standard deviation of a random variable x is denoted σ x .

2.6.7 Coefﬁcient of variation

The coefficient of variation is the ratio of the standard deviation to the mean, and is thus a measure of the relative spread of the distribution. This ratio is dimensionless and so may often be used to cast formulae in a dimensionless form. The coefficient of variation of a random variable x will be denoted by x̂.

2.6.8 Variance ratio

The variance ratio is the (dimensionless) ratio of the variance to the square of the mean. We will find this measure of relative spread to occur more commonly in our applications than the coefficient of variation. The variance ratio will be denoted by u_x (= x̂²).

2.6.9 Coefﬁcient of skewness

The coefficient of skewness is the (dimensionless) ratio of the skew to the cube of the standard deviation. The normal distribution has a coefficient of skewness of 0. The exponential distribution has a coefficient of skewness of 2.

2.6.10 Coefﬁcient of kurtosis

The coefficient of kurtosis is the (dimensionless) ratio of the kurtosis to the fourth power of the standard deviation (the square of the variance). The coefficient of kurtosis measures the peakedness of the type of distribution. Uniform distributions have a kurtosis coefficient of 1.8, triangular of 2.4, normal of 3, and exponential of 9.


2.6.11 Terminology

There are varying definitions in the literature for skew and kurtosis and their dimensionless ratios. It is wise to check the definition the author is using.

2.6.12 A note on notation

In situations where there are several random variables, for example x, y, …, we will use µ_x, µ_y, … for the mean of x, y, …, and ν_x, ν_y, … for their variance. If we are dealing with a single random variable, we will often drop the subscripts.

2.7 THE NORMAL DISTRIBUTION

The normal distribution is the most important distribution in the application of probability theory to science and engineering. The Central Limit Theorem (to be discussed later) tells us that the Normal distribution has an interesting involvement in the description of complex probabilistic systems. It will be worth getting a good intuitive feel for its properties.

• It is symmetric

• Its support is from -∞ to +∞

• 99.7% of the distribution lies within ±3 standard deviations of the mean

• 95% of the distribution lies within ±2 standard deviations of the mean

• 68% of the distribution lies within ±1 standard deviation of the mean. The inflection points of the curve occur at these values.

• Because of its symmetry, its odd central moments are zero.

• Its even central moments are given by (where ν is the variance):

{ν, 3ν², 3·5 ν³, 3·5·7 ν⁴, 3·5·7·9 ν⁵, …} = {ν, 3ν², 15ν³, 105ν⁴, 945ν⁵, …}


• Its probability density function is

f(x) = (1/(σ √(2π))) exp( -(1/2) ((x - µ)/σ)² )

• The graph of its probability density function for µ = 0 and σ = 1 is shown below.

• Its cumulative distribution function is

F(x) = (1/2) (1 + Erf( (x - µ)/(σ √2) ))

• The graph of its cumulative distribution function for µ = 0 and σ = 1 is shown below.
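The coverage figures quoted above, and the even-moment formula, can be checked numerically from this cumulative distribution function (the sample size and the value σ = 2 below are illustrative choices of ours):

```python
import math, random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """The cumulative distribution function quoted above, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Coverage within ±1, ±2, ±3 standard deviations of the mean.
cover = [normal_cdf(float(k)) - normal_cdf(float(-k)) for k in (1, 2, 3)]
print(cover)  # ≈ [0.683, 0.954, 0.997]

# Even central moment check: E[(x - µ)^4] should approach 3 ν².
random.seed(3)
sigma = 2.0
xs = [random.gauss(0.0, sigma) for _ in range(200_000)]
m4 = sum(x ** 4 for x in xs) / len(xs)
print(m4, 3 * sigma ** 4)  # both ≈ 48
```

Note that the "95%" figure is really 95.45%; the rounded values are the usual engineering shorthand.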


Problem 2.6 Probability of a continuous random variable

1. What is the probability that a normally distributed random variable has its mean value?

2. What is the probability that a normally distributed random variable lies between µ - σ and µ + 2σ?

3. What is the probability that a normally distributed random variable is greater than µ + 6σ?

Problem 2.7 Sketching a Normal distribution

Sketch carefully a normal distribution with mean 9 and variance 9. A random variable is distributed as above. What is the probability that it is less than zero?

2.8 MEANS FROM NOMINAL VALUES

The usual design specification on a parameter is given by a nominal value and a tolerance. The nominal value is usually the value given as n in the specification [n - t_1, n + t_2]. The question arises: given only a specification on a parameter in this form, what should we assume the mean value of the parameter to be? Until more research is done in this area, we propose that the mean be estimated as the midpoint of the specification, µ = n + (t_2 - t_1)/2.

2.9 STANDARD DEVIATIONS FROM TOLERANCES

While mean values are often easy to find from data sources, it is usually more difficult to obtain an estimate of the variance (or standard deviation) of a design parameter. This section discusses some rules of thumb for estimating standard deviations from tolerances.

2.9.1 Estimation from tolerance range

If we know that the parameter is approximately normally distributed, and the proportion of product that is expected to lie within a certain tolerance range ±∆, then a rule of thumb for estimating the random variable's standard deviation from the properties of the normal distribution is:

If we expect 68% to lie within ±∆, then set σ_x = ∆.
If we expect 95% to lie within ±∆, then set σ_x = ∆/2.
If we expect 99.7% to lie within ±∆, then set σ_x = ∆/3.
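This rule can be collected into a single lookup; the sketch below is a trivial encoding of ours (the function name and dictionary are assumptions, not from the text):

```python
# Expected coverage inside ±delta fixes how many standard deviations
# the half-width delta represents (1, 2 or 3).
COVERAGE_TO_K = {0.68: 1, 0.95: 2, 0.997: 3}

def sigma_from_tolerance(delta, coverage):
    """Estimate a standard deviation from a tolerance half-width delta."""
    return delta / COVERAGE_TO_K[coverage]

print(sigma_from_tolerance(0.3, 0.997))  # ≈ 0.1
```

A tighter claimed coverage for the same tolerance implies a smaller standard deviation, which matches the intuition that a well-controlled process packs more of its output inside the limits.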

2.9.2 Estimation from limited data

A rule of thumb which enables standard deviations to be estimated from limited data is given by Haugen. If the estimate of the tolerance ∆ that is required is obtained:

From … samples, then set σ_x = ∆.
From … samples, then set σ_x = ∆/2.
From about 500 samples, then set σ_x = ∆/3.

2.9.3 Estimation from knowledge of manufacturer

A further rule of thumb given by Shooman is:

If the product is being made by:


• Commercial, early development, little known or inexperienced manufacturers, then set σ_x = ∆

• Military, mature, reputable, or experienced manufacturers, then set σ_x = ∆/3

2.10 THE EXPECTATION OPERATOR

The expectation of a function g(x, y, …) of random variables x, y, … with joint probability density function f(x, y, …) is denoted E[g(x, y, …)] and is defined as the integral

E[g(x, y, …)] = ∫∫… g(x, y, …) f(x, y, …) dx dy …

The following properties may be proven from the deﬁnition:

2.10.1 The expectation of sums and products

Constant
The expectation of a constant c is the constant itself. That is

E[c] = c

Linear sum
If a, b, … are constants, then

E[a g_1(x, y, …) + b g_2(x, y, …) + …] = a E[g_1(x, y, …)] + b E[g_2(x, y, …)] + …

Product of independent random variables
If x, y, … are independent random variables, then

E[g_1(x) g_2(y) …] = E[g_1(x)] E[g_2(y)] …
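Both properties can be illustrated by simulation; the sketch below (the particular distributions, constants, and choices g_1(x) = x², g_2(y) = y are illustrative examples of ours) estimates each side of the two identities from the same samples:

```python
import random

random.seed(4)

N = 200_000
xs = [random.uniform(0.0, 1.0) for _ in range(N)]   # x ~ Uniform(0, 1)
ys = [random.expovariate(2.0) for _ in range(N)]    # y ~ Exponential, independent of x

def E(values):
    """Sample mean as a stand-in for the expectation integral."""
    return sum(values) / len(values)

a, b = 3.0, -1.5
g1 = [x * x for x in xs]   # g1(x) = x^2
g2 = ys                    # g2(y) = y

# Linear sum: E[a g1 + b g2] = a E[g1] + b E[g2] (holds exactly for sample means).
lhs_sum = E([a * u + b * v for u, v in zip(g1, g2)])
rhs_sum = a * E(g1) + b * E(g2)

# Product of independents: E[g1 g2] = E[g1] E[g2] (holds approximately in samples).
lhs_prod = E([u * v for u, v in zip(g1, g2)])
rhs_prod = E(g1) * E(g2)
print(lhs_sum, rhs_sum, lhs_prod, rhs_prod)
```

The product identity would fail if x and y were dependent; that is exactly the case the linear-function formulae of section 2.11 exclude.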

2.10.2 Relation of moments to the Expectation

Non-central moments as Expectations

E[1] = 1
E[x] = µ_x
E[x^n] = µ'_(n)x


Central moments as Expectations

E[x - µ_x] = 0
E[(x - µ_x)²] = ν_x
E[(x - µ_x)³] = s_x
E[(x - µ_x)⁴] = k_x
E[(x - µ_x)^n] = µ_(n)x

Problem 2.8 Moments as Expectations

Prove the formulae relating moments and Expectations above from the definitions of moment and Expectation.

2.10.3 Relation of central and non-central moments

Central moments in terms of non-central moments

E[(x - µ_x)^n] = E[ Σ_{i=0}^{n} C(n,i) x^i (-µ_x)^(n-i) ]
             = Σ_{i=0}^{n} C(n,i) (-µ_x)^(n-i) E[x^i]

Non-central moments in terms of central moments

E[x^n] = E[ ((x - µ_x) + µ_x)^n ]
       = Σ_{i=0}^{n} C(n,i) µ_x^(n-i) E[(x - µ_x)^i]

where C(n,i) denotes the binomial coefficient.

Problem 2.9 Expectation of a linear sum

From the definition of Expectation, prove the formula for the Expectation of a linear sum of functions of random variables.

Problem 2.10 First central moment

Show that E[x - µ_x] = 0.


Problem 2.11 Central and non-central moments

Fill out the detail in the above derivations relating central and non-central moments.

Problem 2.12 Variance and skew

1. Derive expressions from first principles for the variance and skew in terms of non-central moments. Use the binomial expansion and the properties of the Expectation operator.

2. Verify your results using the general formulae above.

Problem 2.13 Mean values of a power

1. Derive expressions from first principles for the mean values of the second, third, and fourth powers of a random variable in terms of its central moments. Use the binomial expansion and the properties of the Expectation operator.

2. Verify your results using the general formulae above.

Problem 2.14 Mean second moment of area

A beam of circular cross-section has a normally distributed diameter D with mean 100 mm and standard deviation 2 mm.

1. Calculate the mean second moment of area (I = π D⁴ / 64) for this diameter.

2. Compare this with the nominal second moment of area based on a nominal diameter of 100 mm. (Hint: the kurtosis of a normal distribution is 3σ⁴.)

Problem 2.15 Volume of sphere

The performance of a product is dependent on the volume V of a contained steel sphere of diameter D remaining within tight specifications. The machine manufacturing the spheres is controlled by the specification on the nominal diameter.

By using the expectation operator and the identity a^n = ((a - b) + b)^n, or otherwise, derive an exact formula for the mean µ_V in terms of µ_D and higher order central moments.

2.11 LINEAR FUNCTIONS

2.11.1 Introduction

In this section we begin to explore an approximate method for computing with random variables by considering only the first few moments of a distribution (typically only the mean and variance, but sometimes the skew and kurtosis). Exact methods in computation with random variables are often exceedingly complex and insufficiently general. However, even the approximate probabilistic approaches developed here are an order of magnitude more powerful in engineering design for quality and reliability than the traditional "factor of safety" approach. In this section we look only at linear functions. In a later section we will look at more general function types.

2.11.2 General formulae

For the special case of a linear function of several variables, the moments may be derived exactly and take particularly simple forms. Note that the relations below are true, independent of the types of underlying distributions possessed by the x i . However in the general case, z will not have a distribution of any known standard type. If x 1 , x 2 , x 3 , … are independent random variables, a 1 , a 2 , a 3 , … are constants, and z = a 1 x 1 + a 2 x 2 + a 3 x 3 + …, then the ﬁrst four moments of z are given by:

µ z = a 1 µ x1 + a 2 µ x2 + a 3 µ x3 + … = Σ a i µ xi

ν z = a 1 2 ν x1 + a 2 2 ν x2 + a 3 2 ν x3 + … = Σ a i 2 ν xi

s z = a 1 3 s x1 + a 2 3 s x2 + a 3 3 s x3 + … = Σ a i 3 s xi

k z = a 1 4 k x1 + a 2 4 k x2 + a 3 4 k x3 + …

+ 6{a 1 2 ν x1 a 2 2 ν x2 + a 2 2 ν x2 a 3 2 ν x3 + a 1 2 ν x1 a 3 2 ν x3 + …}


= Σ a i 4 k xi + 6 Σ a i 2 ν xi a j 2 ν xj [i < j]

Note that for moments of order higher than 3, the relations are no longer simple sums of the same order moments.
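The linear-combination formulae above can be collected into a short routine. The sketch below (Python; the function name `linear_moments` and the moment-tuple layout are our own conventions, not from the text) takes each x i as its mean plus its second, third and fourth central moments:

```python
from itertools import combinations

def linear_moments(coeffs, moments):
    """First four moments of z = a1*x1 + a2*x2 + ... for independent x_i.
    moments[i] is (mean, variance, skew, kurtosis) of x_i."""
    mu = sum(a * m[0] for a, m in zip(coeffs, moments))
    nu = sum(a**2 * m[1] for a, m in zip(coeffs, moments))
    s = sum(a**3 * m[2] for a, m in zip(coeffs, moments))
    k = sum(a**4 * m[3] for a, m in zip(coeffs, moments))
    # kurtosis cross term: 6 * sum over i < j of (a_i^2 nu_i)(a_j^2 nu_j)
    k += 6 * sum((ai**2 * mi[1]) * (aj**2 * mj[1])
                 for (ai, mi), (aj, mj) in combinations(list(zip(coeffs, moments)), 2))
    return mu, nu, s, k

# Two standard normals (mean 0, variance 1, skew 0, kurtosis 3):
print(linear_moments([1, 1], [(0, 1, 0, 3), (0, 1, 0, 3)]))
# -> (0, 2, 0, 12), i.e. N(0, 2), whose kurtosis is 3 * 2^2 = 12
```

Note how the kurtosis line carries the extra cross term, exactly as in the formula above; the first three moments really are plain weighted sums.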

2.11.3 Sums, differences and multiples

In the special case of sums and differences of two random vari- ables; and ﬁxed scalar multiples of a simple random variable the above relations reduce to

Table 1: Moments of sums, differences and multiples

                SUM                            DIFFERENCE                     MULTIPLE
FUNCTION        z = x + y                      z = x - y                      z = a x
MEAN            µ z = µ x + µ y                µ z = µ x - µ y                µ z = a µ x
VARIANCE        ν z = ν x + ν y                ν z = ν x + ν y                ν z = a 2 ν x
SKEW            s z = s x + s y                s z = s x - s y                s z = a 3 s x
KURTOSIS        k z = k x + k y + 6 ν x ν y    k z = k x + k y + 6 ν x ν y    k z = a 4 k x

2.11.4 Linear functions of normal distributions

Linear combinations of normally distributed random variables are a special case. They are themselves normal. This means that we can ﬁnd the actual normal distribution resulting from a linear sum by simply calculating the mean and variance from the formulae above. Below, we graphically depict the addition of two normal random variables.

[Figure: ADDITION OF TWO NORMAL RANDOM VARIABLES]
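As a quick numerical illustration of this special case (a sketch of our own, not part of the text's derivation; the distribution parameters are merely illustrative), we can sample a sum of two normals and check that its mean and standard deviation match the analytic values from the formulae above:

```python
import random
import statistics

random.seed(0)
# x ~ N(10, sd 2) and y ~ N(5, sd 1.5); z = x + y should be N(15, sd sqrt(4 + 2.25))
samples = [random.gauss(10, 2) + random.gauss(5, 1.5) for _ in range(200_000)]

mu_z = statistics.fmean(samples)
sd_z = statistics.pstdev(samples)
print(mu_z, sd_z)   # close to 15 and sqrt(6.25) = 2.5
```

The variances add (4 + 2.25 = 6.25) while the standard deviations do not, which is exactly what the variance row of the table above predicts.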

2.11.5 The Central Limit Theorem

The sum of a number of independent but not necessarily identically distributed random variables tends to become normally distributed as the number increases, provided that no one random variable contributes appreciably more than the others to the sum; that is, no type of distribution dominates.

This is an important result for designers. It means, for example, that the overall dimension of an assembly of component parts, independent of the distribution types of each component dimension, will tend to be normally distributed. Knowing this, the designer can work back from the individual component tolerances to get an estimate of the proportion of assemblies which will lie outside any given specification.

The six greyed graphs below are, sequentially, the distributions of the average of 1, 2, 3, 4, 5, and 6 independent identically Uniformly distributed random variables on [-1, 1]. The full line is the Normal distribution which has the same variance as the average. It can be seen that even the average of only three Uniform distributions gives a result surprisingly close to Normal.
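The convergence described above is easy to demonstrate numerically. The sketch below (sample size and seed are our choices) compares the empirical two-sided two-sigma tail of the average of three Uniform[-1, 1] variables with the corresponding Normal tail:

```python
import math
import random

random.seed(1)
n = 200_000
# average of three independent Uniform[-1, 1] draws
avgs = [sum(random.uniform(-1, 1) for _ in range(3)) / 3 for _ in range(n)]

# Var(Uniform[-1,1]) = 1/3, so the average of three has sd sqrt((1/3)/3) = 1/3
sd = 1 / 3

# empirical probability of landing more than two standard deviations out
p_emp = sum(abs(a) > 2 * sd for a in avgs) / n
# the same tail for an exact Normal: 2 * (1 - Phi(2))
p_normal = 2 * (1 - 0.5 * (1 + math.erf(2 / math.sqrt(2))))
print(p_emp, p_normal)   # both in the neighbourhood of 0.04-0.05
```

Even with only three terms in the average, the tail probabilities agree to within a few tenths of a percent, which is why the Normal approximation is so useful for tolerance build-up.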

Problem 2.16 Inconsistency?

If x = y in the formula for the variance of a difference, that is, z = x - y = 0, does this imply ν z = 2ν x = 2ν y ?

Problem 2.17 Tolerance build-up

An assembly is made up of several components whose nominal lengths are L 1 , L 2 , L 3 , and L 4 . It is important that the distance L = L 1 + L 2 - (L 3 + L 4 ) be kept within specification limits.

1. Write down a formula for the standard deviation σ L as a function of σ 1 , σ 2 , σ 3 , and σ 4 .

2. What can be said about the type of distribution that L has?


Problem 2.18 Springs in parallel

An assembly contains two springs of stiffness K 1 and K 2 connected in parallel. Their stiffness distributions have moments K 1 : {µ 1 = 500 N/m, ν 1 = 144 (N/m) 2 , s 1 = 10 (N/m) 3 } and K 2 : {µ 2 = 300 N/m, ν 2 = 25 (N/m) 2 , s 2 = -10 (N/m) 3 }.

1. Calculate the mean, variance and skew of the distribution of the overall stiffness K of the system.

2. Is the resulting distribution symmetric?

Problem 2.19 Counterweights

Two designs are proposed for a sensitive counterweight. Design A utilizes 4 spheres each of mass m. Design B utilizes two spheres each of mass 2m.

Assuming that the coefﬁcient of variation of the mass of each of the spheres is the same, determine the ratio of the standard deviation of the total mass of Design A to that of Design B.

Problem 2.20 Algenon and Biggles

Bricks are manufactured with heights of a given mean and variance. Algenon Ant climbs straight up a vertical stack of N bricks (no mortar). His brother Biggles (a little disoriented) goes straight up and down the first brick for the same number of brick traverses (hence covering the same mean distance).

Determine the ratio of the standard deviation of Biggles' journey to the standard deviation of Algenon's journey.

Problem 2.21 Machine support

Suppose that a machine is to be supported with a number of springs of the same nominal stiffness, and that the overall stiffness of the spring assembly is to be within a given tolerance of a fixed target value K. Suppose also that all the springs in the assembly have the same percentage tolerance on their stiffness no matter what size they are. That is, the stiffnesses of the springs have the same coefficient of variation.


Discuss the inﬂuence of the number of springs in the assembly on the variability of its stiffness.

Problem 2.22 Moon lander

In a design analysis of a suspension system for a moon lander, it has been determined that the overall damping coefficient of the system is a critical quality variable, and should be held within tight specifications. Suppose that the shock absorbers are linear over their range of application, and that the standard deviation of the damping coefficient of each shock absorber is a fixed fraction α of its mean value for any size shock absorber. Suppose also that there are two systems proposed: System F with 4 parallel shock absorbers, and System G with 16 parallel shock absorbers, where both systems have the same total mean damping coefficient. (You may assume that the damping coefficients are additive.)

Determine the ratio of the standard deviation of the overall damping coefficient of assembly G to that of assembly F.

2.12 RELIABILITY

The reliability R of a system is the probability that the system will perform as expected.

The unreliability Q of a system is the probability that the system will fail to perform as expected.

R + Q = 1

Remark on terminology:

The term reliability is commonly used to refer to the probability of failure of one item due to degradation over time. It is not generally used for the general conformance to specification of the product coming off the end of a production line. For simplicity, however, we will often use the term “reliability” to mean “probability of conforming to specification”. Thus the reliability of a mass-produced product is equivalent to the proportion of the product within specification.


2.12.1 Operating windows

Many systems (products, designs) depend for their correct performance on the values of their quality variables (design parameters or functions of design parameters) remaining within given bounds, limits, or tolerances. This leads to viewing these bounds as the frame of an operating window.

A design speciﬁcation might read something like:

“The parameter x must lie in the range x L to x U ”.

Since x will usually have a distribution of values it is most likely that not all values will lie in this range.

If a product’s function depends only on the single parameter x, then its reliability is the probability that x lies in the range x L to x U . That is

R = Pr (x L ≤ x ≤ x U )

and this is represented by the area under the probability density function which can be seen through the operating window. Conversely, the unreliability is the area under the curve outside the window.

[Figure: OPERATING WINDOW]
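When x is normally distributed, the window reliability reduces to a difference of two values of the standard normal cumulative distribution function. A minimal sketch (the helper names `phi` and `window_reliability` are our own, built only on the standard library error function):

```python
import math

def phi(t):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def window_reliability(mu, sigma, x_lo, x_hi):
    """R = Pr(x_lo <= x <= x_hi) for x normally distributed (mu, sigma)."""
    return phi((x_hi - mu) / sigma) - phi((x_lo - mu) / sigma)

# A window at +/- 1.96 standard deviations captures about 95% of the product
print(window_reliability(0.0, 1.0, -1.96, 1.96))
```

The unreliability Q is simply one minus this area, split between the two sides of the window.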

2.12.2 Example: Paper feeder operating window

Many photocopiers have paper feeders which use the frictional driving force of an elastomeric covered roll which sits on top of the paper stack. It is clear that if the normal force exerted by the roll on the paper is too low, the paper will not move. This failure mode is called a misfeed. Conversely, if the normal force is too high, more than one sheet will be driven forward. This failure mode is called a multifeed.

The normal force thus becomes a quality variable which must be kept within defined upper and lower specification limits. Considering the normal force as a random variable, the proportion of its probability density function that we can see through the window frame formed by the upper and lower specification limits is the reliability of the feeder for the normal force failure modes.

2.12.3 Supply and demand

Another type of system depends for its correct performance on the demand x being less than the supply x s .

R = Pr (x < x s )

where both x and x s are independent random variables.

[Figure: MARGIN OF SAFETY]

“Supply and demand” here should be taken in the most general sense of any imposed physical variable: force, stress, deﬂection, tem- perature, time, ﬂow-rate, …


The margin of safety is deﬁned by y = x s - x. Hence the reliability may be written

R = Pr (y > 0)

The operating window for y is then 0 ≤ y for a margin of safety problem.

2.12.4 Estimation of the reliability

1. Calculate the mean and variance of y.

2. If the type of distribution for y is known, use a formula or table for its cumulative distribution function to calculate the area over 0 ≤ y. Otherwise use a table for the normal distribution as follows:

3. Calculate the distance z of the mean of y from zero in units of the standard deviation of y.

4. Read off the required probability (reliability or unreliability) in the table.

The parameter z (measured in standard deviations of y) is often called the reliability index or safety index. It can be seen from the table below that the reliability is quite sensitive to small changes in z for z greater than about 2. A doubling of z from 2.4 to 4.8 decreases the probability of failure by a factor of approximately 10 000! You can visualize this geometrically by imagining what happens to the area of the distribution over y ≤ 0 as you shift the distribution to the right.

Table 2: Probability of failure versus reliability index for a Normal Distribution

RELIABILITY INDEX z    UNRELIABILITY Q PER MILLION
0.00                   500 000
0.67                   250 000
1.00                   160 000
1.28                   100 000
1.65                    50 000
2.33                    10 000
3.10                     1 000
3.72                       100
4.25                        10
4.75                         1
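The four estimation steps above can be sketched in a few lines for a normally distributed margin of safety. The function names and the supply/demand numbers below are our own illustrative choices:

```python
import math

def phi(t):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def reliability_index(mu_s, var_s, mu_d, var_d):
    """Steps 1 and 3: moments of the margin y = supply - demand for
    independent supply and demand, then z = mean of y in units of its sd."""
    mu_y = mu_s - mu_d
    var_y = var_s + var_d          # variances add for a difference
    return mu_y / math.sqrt(var_y)

def unreliability(z):
    """Step 4 for a normal margin: Q = Pr(y <= 0)."""
    return 1 - phi(z)

# Illustrative numbers: strength mean 100, sd 2; load mean 90, sd 4
z = reliability_index(100, 2**2, 90, 4**2)
print(z)                           # about 2.24
print(unreliability(z) * 1e6)      # about 12 700 failures per million
```

Evaluating `unreliability` at the z values in the table above reproduces the tabulated failure rates (to the rounding used in the table).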

Problem 2.23 Bolt strength reliability

A production run of bolts has a normally distributed ultimate tensile strength with mean 100 MN and standard deviation 2 MN.

The applied load is expected to be normally distributed with mean 90 MN and standard deviation 4 MN.

What proportion may be expected to fail?

Problem 2.24 Buoyancy force reliability

A design calculation predicts that the buoyancy force B acting on a sonar device is normally distributed with mean 800 N and standard deviation 24 N, and that the weight force W is normally distributed with mean 800 N and standard deviation 8 N. The sonar device fails to operate as intended if: a) it sinks, or b) its buoyancy force exceeds its weight force by more than 32 N.

Calculate the probability of failure to operate as intended (to 3 decimal places).

Problem 2.25 Bearing ﬁt

The bearing of a mass-produced journal bearing assembly has a normally distributed diameter with mean 50 mm and tolerance ±0.03 mm. The journal has a normally distributed diameter with mean 49.9 mm and tolerance ±0.03 mm. The assembly fails if (a) the journal will not fit in the bearing, or (b) the diametral clearance is greater than 0.1 mm.

1. Estimate the proportion of assemblies that might be expected to fail if the manufacturer is very inexperienced in this area of manufacture.

2. Estimate the proportion of assemblies that might be expected to fail if the manufacturer is highly experienced in this area of manufacture.

Problem 2.26 Shaft failure

A production run of shafts has a normally distributed failure torque with mean 100 kNm and variance 9 (kNm) 2 . The applied torque is expected to be normally distributed with mean 80 kNm and variance 16 (kNm) 2 .

What proportion of product may be expected to fail? (Express your answer as number of failures per million)

Problem 2.27 Fitting of car doors and windshields

Discuss the potential application of the margin of safety concept to the fitting together of components in the automobile industry, for example, doors and windshields.

2.13 PRODUCTS OF RANDOM VARIABLES

To develop formulae for the moments of products of independent random variables we use the fact that if x, y, … are any independent random variables, then

E[x y …] = E[x] E[y] …

The formulae developed below will be exact, independent of the type of distribution to which the random variables belong. The formulae are used by calculating the mean, variance and skew in succession. Suppose z is a product of any number of independent random variables x i : z = x 1 x 2 x 3 …

2.13.1 The mean of a product

The mean of a product is a direct application of the formula above.

z = x 1 x 2 x 3

E[z] = E[x 1 ] E[x 2 ] E[x 3 ] …

µ z = µ x1 µ x2 µ x3

The mean of a product of independent random variables is simply the product of their means.


2.13.2 The variance of a product

The variance of a product is obtained by taking the expectation of the square of z

z 2 = x 1 2 x 2 2 x 3 2

E[z 2 ] = E[x 1 2 ] E[x 2 2 ] E[x 3 2 ] …

(µ z 2 + ν z ) = (µ x1 2 + ν x1 ) (µ x2 2 + ν x2 ) (µ x3 2 + ν x3 ) …

To calculate the variance ν z of a product of independent random variables, first compute the product on the right hand side of the equation above and then subtract the square of the mean µ z 2 calculated previously.

2.13.3 The skew of a product

z 3 = x 1 3 x 2 3 x 3 3

E[z 3 ] = E[x 1 3 ] E[x 2 3 ] E[x 3 3 ] …

(µ z 3 + 3 µ z ν z + s z ) = (µ x1 3 + 3 µ x1 ν x1 + s x1 ) (µ x2 3 + 3 µ x2 ν x2 + s x2 ) (µ x3 3 + 3 µ x3 ν x3 + s x3 ) …

To calculate the skew s z of a product of independent random variables, first compute the product on the right hand side of the equation above and then subtract the term µ z 3 + 3 µ z ν z calculated from the previous steps. Higher moments are calculated in a similar fashion.
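The three product-moment recipes above translate directly into code. A sketch (the function name and the (mean, variance, skew) tuple convention are our own):

```python
def product_moments(moments):
    """Mean, variance and skew of z = x1 * x2 * ... for independent x_i,
    where moments[i] is (mean, variance, skew) of x_i."""
    mu = 1.0
    for m, _, _ in moments:
        mu *= m                     # mean: product of the means
    e2 = 1.0                        # E[z^2] = product of (mu_i^2 + nu_i)
    for m, v, _ in moments:
        e2 *= m**2 + v
    nu = e2 - mu**2                 # subtract the square of the mean
    e3 = 1.0                        # E[z^3] = product of (mu_i^3 + 3 mu_i nu_i + s_i)
    for m, v, s in moments:
        e3 *= m**3 + 3 * m * v + s
    skew = e3 - mu**3 - 3 * mu * nu # subtract the term from the previous steps
    return mu, nu, skew

# Two symmetric factors: x with (2, 1, 0) and y with (3, 4, 0)
print(product_moments([(2, 1, 0), (3, 4, 0)]))
# -> (6.0, 29.0, 144.0): the product is skewed even though each factor is not
```

Note that the example already illustrates a useful fact: a product of symmetric (zero-skew) variables is, in general, not symmetric.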

Problem 2.28 Volume of a cube

Calculate the mean and variance of the volume of a cube of side L where the sides are machined independently by the same machining process and are therefore considered to be identically distributed independent random variables, each with mean µ and variance ν.

Comment on how this calculation differs from one based simply on the formula V = L 3 (see below).


2.14 POSITIVE INTEGER POWERS

2.14.1 The mean of a positive integer power

We have already derived the formula for the expectation of a positive integer power of a random variable in terms of central moments. Since the expectation gives the mean value, we have immediately that for z = x n :

µ z = Σ (i = 0 … n) C(n, i) E[(x - µ x )^i] (µ x )^(n - i)
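The binomial-expansion formula for the mean of x^n can be sketched as follows, assuming the central moments of x are supplied as a list (central[0] = 1 and central[1] = 0 by definition; the function name is our own):

```python
from math import comb

def power_mean(n, mu, central):
    """Mean of z = x**n via E[x^n] = sum_i C(n, i) E[(x - mu)^i] mu^(n - i),
    where central[i] is the ith central moment of x."""
    return sum(comb(n, i) * central[i] * mu**(n - i) for i in range(n + 1))

# Normal x with mu = 2, nu = 1: central moments are 1, 0, nu, 0, 3*nu^2
print(power_mean(2, 2.0, [1, 0, 1]))        # mu^2 + nu = 5.0
print(power_mean(4, 2.0, [1, 0, 1, 0, 3]))  # mu^4 + 6 mu^2 nu + 3 nu^2 = 43.0
```

Because the sum terminates at i = n, the result is exact for any distribution whose first n central moments are known.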

2.14.2 Tables for a Normally distributed random variable

Since the central moments of a normal distribution can all be expressed in terms of its mean and variance (see the listing in the section on the Normal distribution), its powers can therefore be expressed via the above formula in terms of them also. The tables below thus give exact formulae for calculating the mean, variance and skew of positive integer powers of a normally distributed random variable x with mean µ and variance ν.

The entries in the tables are expressed in the form (first order approximation) × (1 + terms in the variance ratio u), where the variance ratio u has been defined as the square of the coefficient of variation: u = ν/µ 2 .

Table 3: Moments of a square

z = x 2 :
  µ z = µ 2 (1 + u)
  ν z = 4 µ 2 ν (1 + u/2)
  s z = 24 µ 2 ν 2 (1 + u/3)


Table 4: Moments of a cube

z = x 3 :
  µ z = µ 3 (1 + 3 u)
  ν z = 9 µ 4 ν (1 + 4 u + (5/3) u 2 )
  s z = 162 µ 5 ν 2 (1 + (16/3) u + 5 u 2 )

Table 5: Moments of a fourth power

z = x 4 :
  µ z = µ 4 (1 + 6 u + 3 u 2 )
  ν z = 16 µ 6 ν (1 + (21/2) u + 24 u 2 + 6 u 3 )
  s z = 576 µ 8 ν 2 (1 + 16 u + (149/2) u 2 + 99 u 3 + 33 u 4 )

Table 6: Moments of a ﬁfth power

z = x 5 :
  µ z = µ 5 (1 + 10 u + 15 u 2 )
  ν z = 25 µ 8 ν (1 + 20 u + 114 u 2 + 180 u 3 + (189/5) u 4 )
  s z = 1500 µ 11 ν 2 (1 + (97/3) u + 366 u 2 + 1710 u 3 + 2997 u 4 + 1323 u 5 )
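Table 3 can be cross-checked against the exact raw moments of a normal distribution, E[x 2 ] = µ 2 + ν and E[x 4 ] = µ 4 + 6 µ 2 ν + 3 ν 2 . A sketch with illustrative numbers µ = 10, ν = 4 (our choice):

```python
# Cross-check of Table 3 (z = x^2) for an illustrative normal x
mu, nu = 10.0, 4.0
u = nu / mu**2                      # variance ratio u = nu / mu^2

# Table 3 formulae
mu_z_table = mu**2 * (1 + u)
nu_z_table = 4 * mu**2 * nu * (1 + u / 2)

# Exact raw moments of a normal: E[x^2] and E[x^4]
e2 = mu**2 + nu
e4 = mu**4 + 6 * mu**2 * nu + 3 * nu**2
mu_z_exact = e2                     # mean of z = x^2
nu_z_exact = e4 - e2**2             # variance of z = x^2

print(mu_z_table, mu_z_exact)       # both 104, up to floating-point rounding
print(nu_z_table, nu_z_exact)       # both 1632, up to floating-point rounding
```

The agreement is exact algebraically; the table entries are simply the expanded moments regrouped as powers of u.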

Problem 2.29 Inconsistency?

If µ = 0, does this imply that µ z , ν z , s z are all zero?

2.15 GENERAL FUNCTIONS

2.15.1 Introduction

We complete our introductory discussion of probabilistic design by describing a method (called the Moment Analysis Method) by which you can calculate the moments of any differentiable function of independent random variables, and hence get an estimate of the variability inherent in a given design. Indeed, all the formulae we have introduced so far may be derived by this method.

The basic principle of the Moment Analysis Method is the specification of each probability distribution by its set of moments in the form {mean, variance, skew, kurtosis, …}. Then, if we wish to calculate a function of several random variables, the moments of that function will be functions of the moments of those several random variables.

The two techniques that we will use are

1. Expansion of the function in a Taylor series

2. Application of the Expectation operator to the series

2.15.2 The basic algorithm

Calculation of the mean

1. Expand the function z = g(x, y, …) as a Taylor series about the mean values (µ x , µ y , …) of the independent random variables.

2. Calculate the mean of the function (µ z ) by calculating the expectation of the terms in the expansion.

Calculation of the nth central moment

1. Expand the function [z - µ z ] n as a Taylor series about the mean values (µ x , µ y , …) of the independent random variables.

2. Calculate the nth central moment of the function (ν z , s z , k z , …) by calculating the expectation of the terms in the expansion.

3. Calculate µ z and substitute for it in the expression.

2.15.3 The theoretical foundation

Assumptions

The fundamental assumptions upon which the method is based are:

1. The random variables x, y, … are independent. (Very important!)

2. The pertinent information content of each of the distributions is sufficiently well represented by a finite (small) number of moments.


3. The function is sufficiently well represented by a finite (small) number of terms of its Taylor series.

Formulae

The fundamental formulae upon which the method is based are:

1. The mean µ z of a function z = g(x, y, …) is the expectation of the function.

2. The nth central moment µ (n)z of a function z = g(x, y, …) is the expectation of (z - µ z ) n .

3. The expectation of a linear sum is the sum of the expectations of the terms.

4. The expectation of a product of independent random variables is the product of their expectations.

2.15.4 How to write down a Taylor series

In this section we discuss a mnemonic method for easily writing down a Taylor series expansion of a function of several variables.

Suppose you have a function z = g(x, y, …) and you wish to write down the Taylor series for z expanded about the point x = µ x , y = µ y , …. A simple mnemonic way of doing this is as follows:

1. Write down the power series for exp(X+Y+…):

1 + (X+Y+…) + (1/2!)(X+Y+…) 2 + (1/3!)(X+Y+…) 3 + …

2. Expand the terms:

1 + (X+Y+…) + (1/2!)(X 2 +2XY+Y 2 +…) + (1/3!)(X 3 +3X 2 Y+3XY 2 +Y 3 +…) + …

3. Make the following replacements:

1 → [z] µ ( = g(µ x , µ y , …))

X n → [∂ n z/∂x n ] µ (x - µ x ) n


X n Y m → [∂ n+m z/∂x n ∂y m ] µ (x - µ x ) n (y - µ y ) m

and so on for products of more than two variables. Remember that the notation […] µ means that the bracketed function is evaluated at the point x = µ x , y = µ y , ….

2.15.5 How to write down the expectation of a function

Again suppose you have a function z = g(x, y, …) and you wish to write down an expression for the expectation E[z] of z. The normal procedure for doing this is:

1. Write down the Taylor series with x 0 = µ x , y 0 = µ y , …

2. Apply the expectation operator to the series, remembering its properties when it acts on a constant, a linear sum, and a product of independent random variables.

3. Make the following replacements:

E[x - µ x ] → 0

E[(x - µ x ) n ] → µ (n)x

The resulting expression is a series expressing E[z] in terms of the moments of x, y, …. If the series terminates the expression will be exact. A polynomial function, for example, will terminate.

2.15.6 The shortest way to write down the expectation

It may be somewhat shorter to first simplify the terms in our original mnemonic expansion 1 + (X+Y+…) + (1/2!)(X 2 +2XY+Y 2 +…) + … before replacing them with their corresponding terms in the Taylor series expansion. We list possible simplification rules below, and illustrate them with the example of a function of two variables for which we know only their means and variances:


1 + (X+Y) + (1/2!)(X 2 +2XY+Y 2 ) + (1/3!)(X 3 +3X 2 Y+3XY 2 +Y 3 ) + (1/4!)(X 4 +4X 3 Y+6X 2 Y 2 +4XY 3 +Y 4 ) + (1/5!)(X 5 +5X 4 Y+10X 3 Y 2 +10X 2 Y 3 +5XY 4 +Y 5 ) + …

1. Any term involving a variable to the first power is zero, since the expectation E[x - µ x ] is zero.

1 + (1/2!)(X 2 +Y 2 ) + (1/3!)(X 3 +Y 3 ) + (1/4!)(X 4 +6X 2 Y 2 +Y 4 ) + (1/5!)(X 5 +10X 3 Y 2 +10X 2 Y 3 +Y 5 ) + …

2. Any term involving a higher power leading to a moment for which you have no information must be omitted. In this example we only know means and variances, hence the expression reduces to

1 + (1/2!)(X 2 +Y 2 ) + (1/4!)(6X 2 Y 2 )

3. If the coefficients resulting from the higher derivatives in the expansion are small enough compared to those resulting from the lower ones, the corresponding terms may be neglected. This is often the case for functions which are not too far off linear in the region near the point µ = (µ x , µ y ). In this example we would look at the comparative size of [∂ 4 z/∂x 2 ∂y 2 ] µ . Assuming the term can be neglected, the expression reduces to

1 + (1/2!)(X 2 +Y 2 )

leading finally to a general second order approximation for µ z :

µ z = g(µ x , µ y ) + (1/2){ [∂ 2 z/∂x 2 ] µ ν x + [∂ 2 z/∂y 2 ] µ ν y }

It is evident from this process that the same form is valid for any number of variables:

µ z = g(µ x , µ y , …) + (1/2){ [∂ 2 z/∂x 2 ] µ ν x + [∂ 2 z/∂y 2 ] µ ν y + … }


Note carefully that the mean of a general function is only equal to the function of the means as a ﬁrst order approximation.
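The second order approximation above can be sketched numerically by estimating the pure second derivatives with central finite differences (the helper name `second_order_mean` and the step size h are our own choices):

```python
def second_order_mean(g, mus, nus, h=1e-4):
    """mu_z ~= g(mu) + (1/2) * sum_i [d2g/dxi2 at mu] * nu_i,
    with each second derivative estimated by a central finite difference.
    g takes a list of values; mus and nus are the means and variances."""
    g0 = g(mus)
    total = g0
    for i, (m, v) in enumerate(zip(mus, nus)):
        up = list(mus); up[i] = m + h
        dn = list(mus); dn[i] = m - h
        d2 = (g(up) - 2 * g0 + g(dn)) / h**2   # d2g/dxi2 at the mean point
        total += 0.5 * d2 * v
    return total

# z = x * y: the pure second derivatives vanish, so mu_z = mu_x * mu_y
print(second_order_mean(lambda p: p[0] * p[1], [3.0, 4.0], [1.0, 2.0]))  # ~ 12
# z = x^2: the approximation recovers mu^2 + nu
print(second_order_mean(lambda p: p[0] ** 2, [3.0], [4.0]))              # ~ 13
```

For quadratic functions the truncated series is exact; for more nonlinear functions the result is, as the text warns, only an approximation whose quality depends on how far from linear g is near the mean point.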

2.15.7 Calculation of the variance of a function

As an example of the method described for calculating higher order moments of a differentiable function of random variables, we will calculate an expression for the second order approximation to the variance of z = g(x, y, …), that is, E[(z - µ z ) 2 ].

1. Since (z - µ z ) 2 is still a function of x, y, …, we can let Z = (z - µ z ) 2 . E[(z - µ z ) 2 ] then becomes µ Z , which we can write down directly from the result derived in the section above: