David M. Himmelblau
University of Texas
To Betty
Preface
1969
Contents
PART I

1  Introduction

PART II

3.1  Introduction
     Paired Observations
     Problems
6 Nonlinear Models
6.1 Introduction .
6.2 Nonlinear Estimation by Least Squares
6.2-1 Direct Search Methods .
6.2-2 Flexible Geometric Simplex Method .
6.2-3 Linearization of the Model .
6.2-4 Linearization of the Objective Function
6.3 Resolution of Certain Practical Difficulties in
Nonlinear Estimation
6.4 Hypothesis Tests and the Confidence Region.
6.4-1 Linearization of the Model in the Region About the Minimum Sum of the
Squares
6.4-2 The Williams Method
6.4-3 The Method of Hartley and Booker .
6.5 Transformations to Linear Form .
PART III
9.2 Least Squares Estimation
    9.2-1 Discrete Observations
    9.2-2 Computational Problems
    9.2-3 Continuous Observations
    9.2-4 Repetitive Integration of Experimental Data
9.3 Maximum Likelihood Estimation
Supplementary References
Problems
APPENDIX

A. Concepts of Probability
   Supplementary References
B. Mathematical Tools
   B.1 Properties and Characteristics of Linear Equations
   B.2 Linear and Nonlinear Operators
   B.3 Linear Systems
   B.4 Matrix Algebra
       B.4-1 Solution of Algebraic Equations
       B.4-2 Eigenvalues and Eigenvectors
       B.4-3 Normalization
       B.4-4 Transformation to Canonical Form
       B.4-5 Solution of Nonlinear Sets of Equations
   B.5 Solutions of Single Ordinary Differential Equations
       B.5-1 Single Second- and Higher Order Linear Equations
C. Tables
D. Notation

Author Index
Subject Index
PART I
INTRODUCTION
TABLE 1.1-1  Strata of physicochemical description, typical topical designations, and extent of use by engineers:

Stratum of Physico-          Topical Designations                        Extent of Use
chemical Description                                                     by Engineers
Molecular and atomic         Statistical theories of turbulence          Fundamental background
Microscopic                  Phenomenological coefficients:              Applicable only to
                             coefficients of viscosity, diffusion,       special cases
                             thermal conduction; Soret coefficient
Multiple gradient            "Effective" transport coefficients          Applicable only to
                                                                         special cases
Maximum gradient
Macroscopic

FIGURE 1.1-2  Mathematical structure of process models:
    Algebraic equations (steady state, lumped parameter)
    Integral equations (continuous changes)
    Difference equations (finite changes, steady state)
    Differential equations (continuous changes): steady state (distributed parameter);
        nonsteady state (distributed parameter); steady state (one distributed parameter);
        nonsteady state (lumped parameter)
    One-dimensional difference equation (one-dimensional connection of lumped-parameter subsystems)
    Multidimensional difference equation (more than one dimensional connection of
        lumped-parameter subsystems)
    Difference-differential equations (connections of lumped- or distributed-parameter
        steady- or nonsteady-state subsystems)
CHAPTER 1
Introduction
FIGURE 1.2-1  Field data taken from a differential pressure transmitter on three different time scales (6 seconds, 6 minutes, and longer records). (From B. D. Stanton, ISA J., p. 77, Nov. 1964.)
FIGURE 1.2-2  Sample random functions X₁(t), X₂(t), … from an ensemble, showing part of the ensemble X(t).

FIGURE 1.2-3  Representations of a process by a deterministic model: (a) deterministic input x(t) gives deterministic output y(t); (b) the same model with an error ε accounting for the discrepancy between model and observation; (c) random input X(t) gives random output Y(t).
For laminar flow in a circular tube, the microscopic momentum balance reduces to

    (1/r) d/dr (r dv_z/dr) = ΔP/(μL)

where:
    r = radial direction
    v_z = velocity in the axial direction
    ΔP = pressure drop over the tube length L

The analogous energy balance, written in compact form, is

    ∂T/∂t = (α/r) ∂/∂r (r ∂T/∂r)

where:
    T = temperature
    r = radial direction
    α = thermal diffusivity

A macroscopic (lumped-parameter) model of heat exchange is simply

    q = UA ΔT

where:
    q = rate of heat transfer
    U = constant (overall heat-transfer coefficient)
    A = area for heat transfer
    ΔT = temperature difference

With axial convection the distributed-parameter model becomes

    ∂T/∂t + u ∂T/∂z = α ∂²T/∂z²

while a typical lumped-parameter model is an ordinary differential equation such as

    d²z/dx² + a dz/dx + bz = w(x)

Supplementary References
Problems
1.1 Indicate the appropriate classification of models in
terms of Table 1.1-1 for each of the following cases.
(a) Laminar flow through a circular tube
TABLE P1.5

Author   Number of Data     Temperature Range     Average Deviation of All Points
         Points Used        (Min to Max)          for Heat of Vaporization
A        14                 100-103               1.12
         16                 100-135               1.43
                            167-190               0.98
                            Average value         1.11
CHAPTER 2
Experiment                                        Mathematical Representation
Random outcome                                    Random variable
List of experimental outcomes                     Sample space
All possible outcomes                             Population
Asymptotic relative frequency of an outcome
  ("in the long run")                             Probability of an event,
                                                    P{X(t) ≤ x} = P(x; t)    (2.1-1)
List of asymptotic relative frequencies of
  each outcome                                    Probability (density) function
Cumulative sum of relative frequencies            Probability distribution

The probability is interpreted as the asymptotic relative frequency

    P(x; t) = lim_{N→∞} n_t/N

where n_t is the number of occurrences of the outcome in N trials.
The probability density and the probability distribution are related by

    p(x; t) = ∂P(x; t)/∂x    (2.1-3a)

    P(x_k; t) = P{X(t) ≤ x_k} = Σ_{i=1}^{k} P(x_i; t)    (2.1-3b)

    P(x; t) = ∫_{-∞}^{x} p(x′; t) dx′

so that

    P(x₂; t) − P(x₁; t) = ∫_{x₁}^{x₂} p(x; t) dx = P{x₁ ≤ X(t) ≤ x₂}
Typical records include: (a) the sine wave X(t) = a sin(2πω₀t + Θ); (b) the sine wave plus noise, X(t) = a sin(2πω₀t + Θ) + ε; (c) narrow-band random noise; and (d) wide-band random noise with the density p(x) = [1/(σ_x√(2π))] e^{−x²/(2σ_x²)}.

FIGURE 2.1-3  Typical process records (left) and their corresponding (time-independent) probability densities (right): (a) sine wave (with random initial phase angle Θ), (b) sine wave plus random noise, (c) narrow-band random noise, and (d) wide-band random noise. (From J. S. Bendat and A. G. Piersol, Measurement and Analysis of Random Data, John Wiley, New York, 1966, pp. 17-18.)
The conditional probability distribution of Y given X is

    P(y | X = x) = P{Y ≤ y | X = x}    (2.1-5)

and the conditional density is

    p(y | X = x) = lim_{Δx→0} [∂P(x, y)/∂x]/[∂P(x)/∂x] = p(x, y)/p(x)    (2.1-6)

If X and Y are independent random variables,

    p(y | X = x) = p(y)    (2.1-7a)

An equivalent relation among the density functions is

    p(x₁, …, x_n; t₁, …, t_n) = p(x₁; t₁) ⋯ p(x_n; t_n)    (2.1-7b)
Because

    p(y) = ∫_{-∞}^{∞} p(x, y) dx

we can write

    p(y) = ∫_{-∞}^{∞} p(y | x) p(x) dx
FIGURE E2.1-1A  Region of integration in the first quadrant.

Example 2.1-1

Suppose that

    p(x, y; t₁, t₂) = e^{−x} e^{−y}    for x ≥ 0, y ≥ 0
    p(x, y; t₁, t₂) = 0                elsewhere

Then

    P{½ < X < 2; 0 < Y < 4} = ∫₀⁴ ∫_{½}² e^{−x} e^{−y} dx dy = (e^{−½} − e^{−2})(1 − e^{−4}) = 0.462
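The factored probability above can be checked numerically; a minimal sketch, assuming the joint density p(x, y) = e^{-x}e^{-y} on the first quadrant as reconstructed in the example:

```python
import math

# Because X and Y are independent under p(x, y) = exp(-x)*exp(-y), the
# probability of the rectangle 1/2 < X < 2, 0 < Y < 4 is the product of two
# one-dimensional integrals of the exponential density.
p = (math.exp(-0.5) - math.exp(-2.0)) * (1.0 - math.exp(-4.0))
print(p)  # approximately 0.462 (the text rounds the value)
```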
For a stationary process the probability functions are invariant to a shift of the time origin:

    p(x; t) = p(x; t + a)    (2.1-8)

    p(x₁, x₂; t₁, t₂) = p(x₁, x₂; t₁ + a, t₂ + a)    (2.1-9)

so that the second-order density depends only on the time difference τ = t₂ − t₁, written p(x₁, x₂; τ).
ENSEMBLE AVERAGES
The expected-value operator commutes with linear operations. If ℒ is a linear operator,

    E{ℒ[X(t)]} = ℒ[E{X(t)}]    (2.2-1a)

    E{dY/dt} = dE{Y}/dt    (2.2-1b)

and, if ψ(t) is a deterministic function,

    E{∫ₐᵇ X(t)ψ(t) dt} = ∫ₐᵇ E{X(t)}ψ(t) dt = ∫ₐᵇ μ_X(t)ψ(t) dt    (2.2-1c)

If Y = Σ_{i=1}^{n} a_i X_i, then

    E{Y} = Σ_{i=1}^{n} a_i E{X_i} = Σ_{i=1}^{n} a_i μ_{X_i}    (2.2-1d)
2.2-1 Mean

The ensemble mean of a stochastic variable X(t) is the expected value of the variable:

    μ_X(t) = E{X(t)} = ∫_{-∞}^{∞} x p(x; t) dx    (2.2-2)

and the ensemble mean square is

    E{X²(t)} = ∫_{-∞}^{∞} x² p(x; t) dx    (2.2-3)

If X(t) and Y(t) are independent random variables, the expected value of their product Z(t) = X(t)Y(t) factors:

    E{Z} = E{X(t)}E{Y(t)} = μ_X(t)μ_Y(t)    (2.2-4)

Example 2.2-1  Ensemble Mean

Let the input X(t) to the process modeled by

    dY(t)/dt + aY(t) = X(t),    Y(0) = 0    (a)

have the density

    p(x; t) = [1/√(2παt)] e^{−x²/(2αt)}

so that μ_X(t) = ∫ x p(x; t) dx = 0. Taking the expected value of each term of Equation (a), using Equation 2.2-1b, gives a deterministic model for the ensemble mean:

    dμ_Y(t)/dt + a μ_Y(t) = μ_X(t),    μ_Y(0) = 0

For a constant input mean μ_X the solution is

    μ_Y(t) = (μ_X/a)(1 − e^{−at})    (b)
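The deterministic mean equation can be checked by direct integration; a minimal sketch, with assumed illustrative constants a = 2.0 and μ_X = 1.0 (not from the text):

```python
import math

# Euler integration of the mean equation d(mu_y)/dt + a*mu_y = mu_x should
# track the analytic solution mu_y(t) = (mu_x/a)*(1 - exp(-a*t)).
a, mu_x = 2.0, 1.0            # assumed constants for illustration
dt, t_end = 1e-4, 2.0
mu_y, t = 0.0, 0.0            # initial condition mu_y(0) = 0
while t < t_end - 1e-12:
    mu_y += dt * (mu_x - a * mu_y)
    t += dt
exact = (mu_x / a) * (1.0 - math.exp(-a * t_end))
print(abs(mu_y - exact) < 1e-3)  # True
```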
2.2-2 Autocorrelation Function

The autocorrelation function of X(t) is

    r_XX(t₁, t₂) = E{X(t₁)X(t₂)} = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x₁x₂ p(x₁, x₂; t₁, t₂) dx₁ dx₂    (2.2-5)

For a stationary process r_XX depends only on τ = t₂ − t₁:

    r_XX(τ) = E{X(t + τ)X(t)},    r_XX(−τ) = r_XX(τ)    (2.2-6)

and

    E{X(t)} = μ_X    (2.2-7a)
    E{X(t + τ)X(t)} = r_XX(τ)    (2.2-7b)

Example 2.2-2  Autocorrelation Function

For the Brownian particle the second-order density is

    p(x₁, x₂; t₁, t₂) = [1/(2πα√(t₁(t₂ − t₁)))] exp[−x₁²/(2αt₁) − (x₂ − x₁)²/(2α(t₂ − t₁))]    (a)

Note that because X(t₁) and X(t₂) are not independent, the product of the first-order probability densities is not equal to Equation (a).

However, rather than directly integrate to obtain r_XX(t₁, t₂) as indicated by Equation 2.2-5, it is alternately possible to use the property of the Brownian particle that, although X(t₁) and X(t₂) are not independent variables, changes in position over two nonoverlapping intervals are independent. Specifically, X(t₁) and [X(t₂) − X(t₁)] are independent variables. Thus, by Equation 2.2-4,

    E{[X(t₁)][X(t₂) − X(t₁)]} = 0

so that, with E{X²(t₁)} = αt₁ (for t₁ ≤ t₂),

    r_XX(t₁, t₂) = E{X(t₁)X(t₂)} = E{X²(t₁)} = αt₁

2.2-3 Variance

Just as the mean characterizes the central value of a random variable, a single parameter can be used to characterize its dispersion, the variance:

    σ_X²(t) = E{X²(t)} − μ_X²(t)

For a linear combination a₁X + a₂Y + ⋯,

    Var{a₁X + a₂Y + ⋯} = a₁² Var{X} + a₂² Var{Y} + 2a₁a₂ E{(X − μ_X)(Y − μ_Y)} + ⋯    (2.2-9)

and, for independent variables,

    Var{a₁X + a₂Y + ⋯} = a₁² Var{X} + a₂² Var{Y} + ⋯    (2.2-9a)
FIGURE  Correlogram: the crosscorrelation r_XY(τ) plotted against τ = t₂ − t₁ for the records X(t) and Y(t), indicating strong correlation near τ = 0.

For a stationary process the autocovariance is

    σ_XX(τ) = r_XX(τ) − μ_X²    (2.2-10a)

The crosscorrelation function of two records X(t) and Y(t) is

    r_XY(t₁, t₂) = E{X(t₁)Y(t₂)} = ∫_{-∞}^{∞} ∫_{-∞}^{∞} xy p(x, y; t₁, t₂) dx dy = r_YX(t₂, t₁)    (2.2-11)
If Y⁽ⁿ⁾(t) = dⁿY(t)/dtⁿ denotes the nth derivative of Y(t), write the model

    a_n Y⁽ⁿ⁾(t) + ⋯ + a₀Y(t) = X(t)

and take the expected value of both sides, term by term, using Equation 2.2-1b to obtain

    a_n μ_{Y⁽ⁿ⁾}(t) + ⋯ + a₀ μ_Y(t) = μ_X(t)    (c)

with the initial conditions

    μ_Y(0) = μ_{Y′}(0) = ⋯ = μ_{Y⁽ⁿ⁻¹⁾}(0) = 0    (d)

Thus, the deterministic model for μ_Y, Equations (c) and (d), will give the solution we seek for μ_Y.

Correlation functions involving derivatives follow from differentiation under the expectation:

    E{X′(t₁)X(t₂)} = ∂r_XX(t₁, t₂)/∂t₁

    r_{X′X′}(t₁, t₂) = E{X′(t₁)X′(t₂)} = ∂²r_XX(t₁, t₂)/(∂t₁ ∂t₂)

and, in general,

    r_{X⁽ⁿ⁾Y⁽ᵐ⁾}(t₁, t₂) = ∂^{n+m} r_XY(t₁, t₂)/(∂t₁ⁿ ∂t₂ᵐ)

For a stationary process,

    r_{XX′}(τ) = dr_XX(τ)/dτ

Example 2.2-4 applies these relations to a flow process with input concentration C₀ (a random variable) and initial condition C(0) = 0; the correlation function r_{C₀C₀}(t₁, t₂) then satisfies a first-order differential equation of the same form as the process model, with r_{C₀C₀}(0, t₂) = 0.
FIGURE  Scatter diagrams corresponding to ρ_XY = 0, ρ_XY = +1, and ρ_XY = −1.

2.2-6 Covariance and Correlation Coefficient

The crosscovariance of X(t) and Y(t) is

    σ_XY(t₁, t₂) = E{[X(t₁) − μ_X(t₁)][Y(t₂) − μ_Y(t₂)]}
                 = ∫_{-∞}^{∞} ∫_{-∞}^{∞} (x − μ_X)(y − μ_Y) p(x, y; t₁, t₂) dx dy    (2.2-13)

    σ_XY(τ) = r_XY(τ) − μ_X μ_Y    (2.2-13a)

and the correlation coefficient is

    ρ_XY = E{[(X − μ_X)/σ_X(0)][(Y − μ_Y)/σ_Y(0)]}    (2.2-14)

Example  Correlation Coefficient

Given the density

    p(x, y) = x + y    for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    p(x, y) = 0        otherwise

the means are

    μ_X = ∫₀¹ ∫₀¹ x(x + y) dx dy = 7/12
    μ_Y = ∫₀¹ ∫₀¹ y(x + y) dx dy = 7/12

so that (7/12)² = 49/144.
Since

    E{XY} = ∫₀¹ ∫₀¹ (xy)(x + y) dx dy = 1/3

the covariance is σ_XY = E{XY} − μ_X μ_Y = 1/3 − 49/144 = −1/144, and with Var{X} = Var{Y} = 11/144,

    ρ_XY = σ_XY/(σ_X σ_Y) = (−1/144)/(11/144) = −1/11
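The moments in this example are ratios of small integers, so they can be verified exactly; a short check using rational arithmetic (the closed-form monomial integral below is standard calculus, not taken from the text):

```python
from fractions import Fraction as F

# For the density p(x, y) = x + y on the unit square, the integral of
# x**i * y**j * (x + y) over [0,1]^2 is 1/((i+2)(j+1)) + 1/((i+1)(j+2)).
def moment(i, j):
    return F(1, (i + 2) * (j + 1)) + F(1, (i + 1) * (j + 2))

mu_x = moment(1, 0)               # 7/12
mu_y = moment(0, 1)               # 7/12
cov = moment(1, 1) - mu_x * mu_y  # -1/144
var_x = moment(2, 0) - mu_x**2    # 11/144
var_y = moment(0, 2) - mu_y**2    # 11/144
rho = cov / var_x                 # valid since Var{X} = Var{Y}
print(rho)  # -1/11
```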
TABLE 2.2-1

Name of Function            Expected Value
Mean                        μ_X(t) = E{X(t)}
Mean square                 E{X²(t)}
Variance                    σ_X²(t) = E{[X(t) − μ_X(t)]²}
Standard deviation          σ_X(t) = +√(σ_X²(t))
Coefficient of variation    σ_X(t)/μ_X(t)
Autocorrelation*            E{X(t₁)X(t₂)}
Crosscorrelation*           E{X(t₁)Y(t₂)}
Autocovariance*             E{[X(t₁) − μ_X(t₁)][X(t₂) − μ_X(t₂)]}
Crosscovariance*            E{[X(t₁) − μ_X(t₁)][Y(t₂) − μ_Y(t₂)]}
Correlation coefficient*    σ_XY(τ)/[σ_X(0)σ_Y(0)]
TABLE 2.2-2  Moments

Moment                       Discrete                                 Continuous

Raw moments
Zeroth moment (μ₀ = 1)       Σ_{i=1}^{∞} x_i⁰ P(x_i) = 1              ∫_{-∞}^{∞} x⁰ p(x) dx = 1
First moment (μ₁ = μ_X,      Σ_{i=1}^{∞} x_i P(x_i) = μ_X             ∫_{-∞}^{∞} x p(x) dx = μ_X
  ensemble mean of X)
Second moment (μ₂)           Σ_{i=1}^{∞} x_i² P(x_i)                  ∫_{-∞}^{∞} x² p(x) dx
nth moment (μ_n)             Σ_{i=1}^{∞} x_iⁿ P(x_i)                  ∫_{-∞}^{∞} xⁿ p(x) dx

Central moments
Zeroth moment (= 1)          Σ_{i=1}^{∞} (x_i − μ_X)⁰ P(x_i) = 1      ∫_{-∞}^{∞} (x − μ_X)⁰ p(x) dx = 1
First moment (= 0)           Σ_{i=1}^{∞} (x_i − μ_X) P(x_i) = 0       ∫_{-∞}^{∞} (x − μ_X) p(x) dx = 0
Second moment (= σ_X²,       Σ_{i=1}^{∞} (x_i − μ_X)² P(x_i)          ∫_{-∞}^{∞} (x − μ_X)² p(x) dx = σ_X²
  ensemble variance of X)
nth moment                   Σ_{i=1}^{∞} (x_i − μ_X)ⁿ P(x_i)          ∫_{-∞}^{∞} (x − μ_X)ⁿ p(x) dx
2.3  NORMAL AND χ² PROBABILITY DISTRIBUTIONS

Joint moments are defined analogously. For two variables,

    μ₁₀ = E{X₁} = μ_{X₁},   μ₀₁ = E{X₂} = μ_{X₂},   μ₁₁ = E{X₁X₂} = ∫∫ x₁x₂ p(x₁, x₂) dx₂ dx₁

and the covariance is

    σ_XY = μ₁₁ − μ₁₀ μ₀₁

Example 2.2-6  Moments

If c is a constant, note that

    E{(X − μ_X)(μ_X − c)} = (μ_X − c) E{X − μ_X} = (μ_X − c)·0 = 0

since the first central moment is zero.

FIGURE 2.3-1  Binomial distribution (n = 10, θ = 0.20): (a) binomial distribution, and (b) binomial probability function.

The normal probability density is

    p(x) = [1/(σ_X√(2π))] exp[−(x − μ_X)²/(2σ_X²)],    −∞ < x < ∞    (2.3-1)
TABLE 2.3-1

Name            Probability Function, P(x) = P{X = x}                  Mean, E{X}     Variance, Var{X}
Binomial        C(n, x) θˣ(1 − θ)ⁿ⁻ˣ,  x = 0, 1, 2, …, n               nθ             nθ(1 − θ)
Poisson         (nθ)ˣ e^{−nθ}/x!,  x = 0, 1, 2, …                      nθ             nθ
Multinomial     [n!/(x₁! x₂! ⋯ x_k!)] θ₁^{x₁} ⋯ θ_k^{x_k},             each nθ_i      each nθ_i(1 − θ_i)
                n = Σ_{i=1}^{k} x_i
Hypergeometric  C(D, x)C(N − D, n − x)/C(N, n), where D is the         nD/N           nD(N − D)(N − n)/
                total number of defectives in the N total items                       [N²(N − 1)]

where C(n, x) = n!/[x!(n − x)!].
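The binomial mean and variance entries can be reproduced directly from the probability function; a minimal check for n = 10 and θ = 0.2 (the values used in Figure 2.3-1):

```python
from math import comb

# Sum the binomial probability function P(x) = C(n, x) * theta**x * (1-theta)**(n-x)
# to recover the tabulated mean n*theta and variance n*theta*(1-theta).
n, theta = 10, 0.2
P = [comb(n, x) * theta**x * (1 - theta)**(n - x) for x in range(n + 1)]
mean = sum(x * p for x, p in zip(range(n + 1), P))
var = sum((x - mean)**2 * p for x, p in zip(range(n + 1), P))
print(round(mean, 6), round(var, 6))  # 2.0 1.6
```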
By direct integration it can be shown that the two parameters in Equation 2.3-1 are the mean and variance of X:

    ∫_{-∞}^{∞} p(x) dx = 1    (zeroth moment)

    ∫_{-∞}^{∞} x p(x) dx = E{X} = μ_X    (first moment)

    E{(X − μ_X)²} = ∫_{-∞}^{∞} (x − μ_X)² p(x) dx = σ_X²    (second moment)

Show that ∫_{-∞}^{∞} p(x) dx = 1.

Solution: Square the integral and transform to polar coordinates:

    [∫_{-∞}^{∞} exp(−(x − μ_X)²/(2σ_X²)) dx]² = ∫_{-∞}^{∞} ∫_{-∞}^{∞} exp[−(y² + z²)/(2σ_X²)] dy dz
    = ∫₀^{2π} ∫₀^{∞} r exp(−r²/(2σ_X²)) dr dθ = 2πσ_X²    (2.3-2)

so that, dividing by (σ_X√(2π))² = 2πσ_X², the integral of p(x) is unity.

TABLE 2.3-2

Name         Density Function, p(x)                         Mean                  Variance                              Applications or Remarks
Log-normal   [1/(xβ√(2π))] exp[−(ln x − α)²/(2β²)]          e^{α+β²/2}            e^{2α+β²}(e^{β²} − 1)                 α = E{ln X}, β² = Var{ln X}
Exponential  θ e^{−θx}                                      1/θ                   1/θ²
Weibull      αβ x^{α−1} e^{−βx^α}                           β^{−1/α} Γ(1/α + 1)   β^{−2/α}[Γ(2/α + 1) − Γ²(1/α + 1)]
Gamma        [β^{α+1}/Γ(α + 1)] e^{−βx} x^α,                (α + 1)/β             (α + 1)/β²
             0 ≤ x < ∞, α > −1

The standardized normal variable is

    U = (X − μ_X)/σ_X

with density

    p(u) = [1/√(2π)] e^{−u²/2}    (2.3-3)

probability distribution

    P{U ≤ u} = [1/√(2π)] ∫_{-∞}^{u} e^{−t²/2} dt    (2.3-4)

and symmetric-interval probability

    P{−u ≤ U ≤ u} = [1/√(2π)] ∫_{−u}^{u} e^{−t²/2} dt    (2.3-5a)

For example, from the tables, P{0.3 < U ≤ 3} = P{U ≤ 3} − P{U ≤ 0.3} = 0.999 − 0.618 = 0.381.
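The tabulated normal probabilities can be reproduced without tables through the error function, since P{U ≤ u} = ½[1 + erf(u/√2)]; a minimal sketch reproducing the worked interval:

```python
import math

# Standard normal distribution function via the error function.
def phi(u):
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

# P{0.3 < U <= 3} = P{U <= 3} - P{U <= 0.3} ~= 0.999 - 0.618 = 0.381
p = phi(3.0) - phi(0.3)
print(round(p, 3))  # 0.381
```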
26
1.4 --.....--r---r--r---,--,----r-,.---r-,-----,
p(x)
0'1=0,,10
1.2
ex 7.80
Gamma
13=8.80
=- 0.05
2
1.0
Log-normal ex
,13 =0.11
Weibull
-. 0.8
ex=3.26
13=0.70
~ 0.6
0.4
0.2
o0
0.20 0.40 0.60 0.80 1.00 1.20 1.40 1.60 1.80 2.00
x
Qi=0.50
1.2
=
a =1.44
Gamma
a 1.00
13 = 2.00
Log-normal ex= -0.20 {32=0.41
1.0
Weibull
Solution:
13 =0.87
6"{ U}
-. 0.8
~
Q.
0.6
r",
up(u) du
1 foo
V27T -
----=:'
e- U 2/2du
(a)
00
0.4
0.2
Vi1T U: e-
6"{U} =
o0
0.20 0.40 0.60 0.80 1.00 1.20 1.40 1.60 1.80' 2.00
x
Gamma
a=O.OO
log-normal a= -0.35
1.0
Weibull
a=I.00
fJ=1.00
13 2 =0.69
dt
t =
-00,
+ 10'" e :' dt J =
00.
(b)
P=1.00
p(u) 0.4
0.4
0.2
0.20 0.40 0.60 0.80 1.00 1.20 1.40 1.60 1.80 2.00
X
-3
-2
I
I
(2.3-5b)
JU . ;, :" dt
+ fU . }
e:" dt
(2.3-6)
I
I
I
I
I
I
I
I
I
00
2.3-5.,
Example 2.3-2 Mean and Variance of the Standardized
Normal Variable
ance of U is 1.
I
I
I
I
2
I
\
\
I
I
I
I
/
'v 27T
Jo v 27T
as can be intuitively seen from the symmetry of Figure
-
I
1
I
0
u
1
-2
-1
I
p(x)
\
\
I
I
I
I
\
\
\
\'
\
\
\
\
O"x
\
\
\
\
\
\
\
I
f
\
\
\
\
\
\
NORMAL AND
:l 0.8
ii:;
III
~0.6
Q.,
~0.4
:.c
III
.<:l
0.2
Ol.oo:::=--'-_ _.1.-_-'-_ _.1.-_----:-_-----'.
-1
0
1
3
-3
Standard normal variable, U
(c)
(u - O)2p(U) du
(d)
V 1T Jo
f.'" t, -1 e -
dt
(n - I)!
and
r(~) = ~;
the integral in Equation (d) is
and
2
Var {U} = --=
V1T
27
:::l
r""
PROBABILITY DISTRIBUTIONS
1.0 ...------,----,-----,.-----r---r---::::,....,
Var { U } =
X2
(V;)
=1
2
† The use of special graph paper which linearizes the normal and many other distributions is described in the booklet by J. R. King, "Graphical Data Analysis with Probability Papers," available from Team, 104 Belrose Ave., Lowell, Mass., 1965, and in the article by E. B. Ferrell, Ind. Qual. Control, p. 12, July 1958.

To plot on probability paper, each class of observations is assigned an average rank

    m = (sum of the ranks in the class)/(observed frequency of the class)

and, finally, the plotting position is computed from the relation

    P = m/(n + 1)
28
Number of
Particles
Cumulated
Frequency
Ranks
Under 0.30
0.31-0.40
0.41-0.50
0.51-1.00
1.01-2.00
2.01-4.00
4.01-6.00
6.01-8.00
8.01-10.00
10.01-20.00
2
33
67
5
63
5
11
1
11
1
2
35
102
107
170
175
186
187
198
199
1-2
3-35
36-102
103-107
108-170
171-175
176-186
187
188-198
199
Average Rank
m
n+1
Percent
1t/200
19/200
69/200
105/200
139/200
173/200
181/200
187/200
193/200
199/200
0.75
9.50
34.50
52.50
69.50
86.50
90.50
93.50
96.50
99.50
(m)
It
19
69
105
139
173
181
187
193
199
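The rank and plotting-position columns of the table follow mechanically from the class frequencies; a short sketch that regenerates them (n = 199 particles, plotting position m/(n + 1)):

```python
# Rebuild the average-rank and percent columns from the class frequencies.
freq = [2, 33, 67, 5, 63, 5, 11, 1, 11, 1]
n = sum(freq)                                  # 199 particles
percents, start = [], 1
for f in freq:
    m = (start + (start + f - 1)) / 2          # average rank within the class
    percents.append(100 * m / (n + 1))
    start += f
print(percents)  # [0.75, 9.5, 34.5, 52.5, 69.5, 86.5, 90.5, 93.5, 96.5, 99.5]
```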
FIGURE E2.3-3  Cumulative particle-size distribution of Example 2.3-3 plotted on log-probability coordinates (percent equal to or less than the stated size versus particle diameter in microns).

The multivariate normal density can be written compactly as

    p(x₁, …, x_n) = k e^{−q/2}    (2.3-7)
29
where
I
..
(det f-1)%
k = positive constant = (2'Il)n/2 = (27T)n/2Ifl %
p
ff.J
k exp ( -~)
a~a~(l
=1
. a~
Tii =
and
- p2)
aW -
.p2)
Then
!LT
=
T
x =
q=
an n x nmatrix
where
Ul
IfI =
determinant of f
The /L'S and a's are constants; the /Lt's represent the
ensemble means of the respective X's; the aj/s represent
the variance and covariances of (XjXj ) . (Note that
all == af.)
Example 2.3-4
Xl - /Ll
= --
aj
i = 1,2, . . .
2+U2)
.
(U
P(Xh X2) = -2-1- exp
_..2...:..2
7Ta1a2
det
f = IfI =
0'110'12 - a~2
= ~a~ - ~2
with
all
0'22
== O'~
== a~
q=
Now let
+ (X2 -
/L2)2a i]
i959.
t A good
The χ² variable with ν degrees of freedom is the sum of squares of ν independent standardized normal variables:

    χ² = U₁² + U₂² + ⋯ + U_ν² = Σ_{i=1}^{ν} (X_i − μ_i)²/σ_i²    (2.3-9)

Its probability density is

    p(χ²) = [1/(2^{ν/2} Γ(ν/2))] (χ²)^{ν/2 − 1} e^{−χ²/2},    0 < χ² < ∞    (2.3-10)

and is illustrated in Figure 2.3-6. Some special cases of the χ² density of interest are: (1) the square root of χ² for ν = 2, called the Rayleigh density function, (2) χ² for ν = 4, the Maxwell function for molecular speeds, and (3) √(2χ²) for ν > 30, which is distributed approximately as a normal variable with μ = √(2ν − 1) and σ² = 1. The ensemble mean of χ² is equal to the number of degrees of freedom:

    E{χ²} = ν

The probability distribution is

    P(χ_ν²) = P{χ² ≤ χ_ν²}    (2.3-11)

FIGURE 2.3-6  χ² probability density for ν = 1, 10, and 25 degrees of freedom.

Values of χ_ν² for which P{χ² ≤ χ_ν²} equals stated probabilities:

P:        0.05       0.50     0.90      0.95      0.99      0.999
ν = 1:    0.00393    0.455    2.706     3.841     6.635     10.827
ν = 10:   3.940      9.342    15.987    18.307    23.209    29.588
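For one degree of freedom the tabulated χ² percentiles can be checked in closed form, since χ² = U² for a standard normal U gives P{χ² ≤ c} = erf(√(c/2)); a minimal sketch:

```python
import math

# Chi-square distribution function for nu = 1 degree of freedom:
# P{chi2 <= c} = P{-sqrt(c) <= U <= sqrt(c)} = erf(sqrt(c/2)).
def chi2_cdf_nu1(c):
    return math.erf(math.sqrt(c / 2.0))

for c, p in [(0.00393, 0.05), (0.455, 0.50), (2.706, 0.90),
             (3.841, 0.95), (6.635, 0.99), (10.827, 0.999)]:
    assert abs(chi2_cdf_nu1(c) - p) < 1e-3
print("nu = 1 percentiles reproduced")
```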
The sample mean of n observations is

    X̄ = (1/n) Σ_{i=1}^{n} X_i    (2.4-1)

or, for grouped data with frequencies n_i, X̄ = (1/n) Σ n_i X_i. The sample variance is

    s_X² = [1/(n − 1)] Σ_{i=1}^{n} (X_i − X̄)²    (2.4-2)

with the equivalent computational form

    s_X² = [1/(n − 1)] [Σ X_i² − (Σ X_i)²/n]    (2.4-3a)

The sample standard deviation is s_X = +√(s_X²)    (2.4-3b), and the sample coefficient of variation is

    s_X/X̄    (2.4-4)
32
TABLE E2.4-1
Values of Random Variable X
Sum
_.1_
1.
crcr-
= n!(nn!-x)! 2
_L
.s,
32
Experimental data
(30 tries) n, =
ui
L: P(Xi)(Xt -
[0 2(3.12)
f.Lx)2 =
12(-1T)
32(t~)
+
2
Sx
2: nt(Xi
32
10
L: Pi(Xt)x't - x2L: Pi
22(t~)
10
32
42(3~)
u= X-
30
f'x
ax/Viz
X)2
32
n -1
= /9"[(0 -
10
32
2.4)2(1)
(1 - 2.4)2(6)
(3 - 2.4)2(7)
(2 - 2.4)2(10)
(4 - 2.4)2(5)
(5 - 2.4)2(1)]
= 1.42
(2.4-7)
The expected value of the sample mean equals the ensemble mean,

    E{X̄} = (1/n) Σ_{i=1}^{n} E{X_i} = μ_X    (2.4-5)

and by using Equation 2.2-9a for independent variables with Var{X_i} = σ_X² it follows that

    Var{X̄} = Var{(1/n) Σ_{i=1}^{n} X_i} = (1/n²) Σ_{i=1}^{n} Var{X_i} = σ_X²/n    (2.4-6)

The positive square root of Var{X̄} is termed the standard error. Thus the sample means themselves are random variables with an expected value the same as that of X and with an ensemble standard deviation of σ_X/√n. Figure 2.4-1 indicates how the dispersion is reduced for increasing sample sizes as called for by Equation 2.4-6.

One important theorem in statistics, the central limit theorem, states that under fairly general conditions the sum of n independent random variables tends to the normal distribution as n → ∞. Thus, the probability density for sample means computed from nonnormal random variables will be more symmetric than the underlying distribution and have less dispersion, as illustrated in Figure 2.4-2.
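Both statements can be illustrated by simulation; a minimal sketch, with an assumed skewed population (exponential, μ = 1, σ² = 1, not from the text):

```python
import random
import statistics

# Draw many samples of size n from a skewed population and verify that the
# sample means cluster around mu with variance close to sigma**2 / n
# (Equation 2.4-6).
random.seed(1)
n, reps = 25, 4000
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]
print(abs(statistics.fmean(means) - 1.0) < 0.02)         # True
print(abs(statistics.variance(means) - 1.0 / n) < 0.01)  # True
```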
FIGURE 2.4-2  Probability densities of sample means showing the reduced dispersion for the X̄ distribution: (a) distribution of X is normal, and (b) distribution of X is not normal.
The expected value of (n − 1)s_X² is

    E{(n − 1)s_X²} = nσ_X² − n Var{X̄} = nσ_X² − n(σ_X²/n) = σ_X²(n − 1)    (2.4-9)

so that s_X² is an unbiased estimate of σ_X². The sample variance is related to the χ² variable by

    s_X² = σ_X² χ²/ν,    ν = n − 1    (2.4-10)

(Figure 2.4-3 shows the probability density of s_X².) A pooled estimate from k samples with ν_i degrees of freedom each is

    s_p² = [Σ_{i=1}^{k} ν_i s_i²]/[Σ_{i=1}^{k} ν_i]    (2.4-12)
34
Dr
1 (XiI - Xi2 )2
2 _ 1
2
s~
2_
Si
(b)
s; =
2:
I=
VtSr
= K
2:
Vi
11
(Xi - 70.89)2 =
9 i=l
11~~07 =
5.89
then
22: (XiI
X1 )( X i 2
n - 1
or
(c)
Vx 2 / v :
U
3.03 _
(2)(10) - 0.152
.':
(X - I-tx)
%2)
20
"2
2:f=l Diln,
Vi
where 15 =
i=l
sl =
X- /Lx
SJl
(2.4-13)
E2.4-2
D = Difference
73.2
68.2
70.9
74.3
70.7
66.6
69.5
70.8
68.8
73.3
74.0
68.8
71.2
74.2
71.8
66.4
69.8
71.3
69.3
73.6
0.8
0.6
0.3
-0.1
1.1
-0.2
0.3
0.5
0.5
0.3
D2
Total
0.64
0.36
0.09
0.01
1.21
0.04
0.09
0.25
0.25
0.09
-3.03
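The paired-difference arithmetic of Example 2.4-2 is easy to verify; a minimal sketch using the tabulated differences:

```python
# Variance estimate from duplicate measurements: s^2 = sum(D_i^2) / (2n).
D = [0.8, 0.6, 0.3, -0.1, 1.1, -0.2, 0.3, 0.5, 0.5, 0.3]
sum_sq = sum(d * d for d in D)
s2 = sum_sq / (2 * len(D))       # 0.1515, i.e. ~0.152 as in the text
print(round(sum_sq, 2))          # 3.03
```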
The t probability density with ν degrees of freedom is

    p(t) = [Γ((ν + 1)/2)/(√(πν) Γ(ν/2))] (1 + t²/ν)^{−(ν+1)/2},    −∞ < t < ∞    (2.4-14)

FIGURE 2.4-6  t probability density for ν = 5; the values ±2.571 cut off 2.5 percent of the area in each tail.
FIGURE 2.4-4  Probability distribution P(t) of the t variable.

FIGURE 2.4-5  Probability density of t, indicating the tail area P = 0.04 above t = 2.

Example 2.4-3

If P{−2 < t ≤ t*} = 0.25 for ν = 10 degrees of freedom, what is t*?

Solution:
From Table C.3 in Appendix C for the t distribution, P{t ≤ 2} ≈ 0.96; hence P{t > 2} ≈ 1 − 0.96 = 0.04. By symmetry, P{t ≤ −2} = 0.04. The total area from −∞ up to t* is P = 0.04 + 0.25 = 0.29, which corresponds to P{t ≤ t*} = 0.29. By use of symmetry again, P{t ≤ −t*} ≈ 1 − 0.29 = 0.71, and from Table C.3, t* = −0.56.
The ratio of two independent sample variances, each scaled by its ensemble variance,

    F = (s₁²/σ₁²)/(s₂²/σ₂²)    (2.4-15)

follows the F (variance-ratio) distribution with ν₁ and ν₂ degrees of freedom, illustrated in Figure 2.4-7. The ensemble mean of F is

    E{F} = ν₂/(ν₂ − 2)    (2.4-17)

A useful relation is

    F_{1−P}(ν₁, ν₂) = 1/F_P(ν₂, ν₁)    (2.4-18)

Example 2.4-4

Let ν₁ = 10 and ν₂ = 4. If P{0 ≤ F ≤ F*} = 0.95, what is F*?

Solution:
If P{0 ≤ F ≤ F*} = 0.95, then P{F > F*} = 0.05. From Table C.4 in Appendix C, F*(10, 4) = 5.96. It is also true that 1/5.96 = F* for P{F ≤ F*} = 0.05 with ν₁ = 4 and ν₂ = 10 degrees of freedom.

FIGURE 2.4-7  F probability density.

2.4-4  "Propagation of Error"

For the linear relation

    Y = aX + b    (2.4-19)

the mean and variance of Y follow directly:

    E{Y} = a E{X} + b    (2.4-20)

    Var{Y} = a² Var{X}    (2.4-21)
FIGURE E2.4-5  Two signals X₁ and X₂ combined through amplifier constants a₁ and a₂ with a controller bias.

Example E2.4-5

A signal X₁ (constant a₁ = 5) and a signal X₂ (constant a₂ = 2) plus a controller bias of 100 mv are combined as Y = a₁X₁ + a₂X₂ + 100. The means are E{X₁} = 100 and E{X₂} = 150, and the "error" (3σ) limits are 5 percent of the mean value for X₁, 4 percent for X₂, and 2 percent of the mean output for the controller.

Solution:

    E{Y} = a₁E{X₁} + a₂E{X₂} + 100 = 5(100) + 2(150) + 100 = 900 mv

    3σ₁ = 0.05(100), or σ₁ = 5/3;    3σ₂ = 0.04(150), or σ₂ = 2;    3σ_contr = 0.02(900), or σ_contr = 6

    Var{Y} = a₁² Var{X₁} + a₂² Var{X₂} + Var{controller} = 25(5/3)² + 4(2)² + (6)²  (mv)²

For a nonlinear function, expand in a Taylor series about a reference point a:

    f(x) = f(a) + [df(a)/dx](x − a) + [d²f(a)/dx²](x − a)²/2! + ⋯

For example,

    e^{−x} = 1 − x + x²/2! − ⋯ ≈ 1 − x
In general, for f(X₁, …, X_n) expanded about the point (x₁⁰, …, x_n⁰), where the superscript zeros refer to the reference state for the expansion (x_i⁰ might be the value of the sample mean X̄_i, for example),

    f(x, y) ≈ f(x₀, y₀) + [∂f(x₀, y₀)/∂x](x − x₀) + [∂f(x₀, y₀)/∂y](y − y₀)

so that

    E{f(X₁, …, X_n)} ≈ f(x₁⁰, …, x_n⁰) + Σ_{i=1}^{n} [∂f(x₁⁰, …, x_n⁰)/∂x_i][E{X_i} − x_i⁰]    (2.4-24)

and, assuming that the variables are independent,

    Var{f(X₁, …, X_n)} ≈ (∂f/∂x₁)² Var{X₁} + ⋯ + (∂f/∂x_n)² Var{X_n}    (2.4-26)

Example 2.4-6  Mean and Variance of a Nonlinear Function of a Random Variable

Van der Waals' equation can be solved explicitly for P as follows:

    P = nRT/(V − nb) − n²a/V²

where:
    P = pressure (a random variable)
    V = volume (a random variable)
    T = temperature
    n = number of moles
    a, b = constants

Given that for air a = 1.347 × 10⁶, b = 38.6 (cm³/g-mole), T = 300 °K, and that the ensemble mean and variance for V are, respectively, 100 cm³ and 1 cm⁶, find the mean and variance of P in atm. The ideal gas constant R = 82.06 (cm³)(atm)/(°K)(g-mole).

Solution:
Since Van der Waals' equation is nonlinear in V, it must first be linearized. Expand the function in a Taylor series about V₀ = E{V} = 100 cm³, dropping terms of higher order than the first:

    P ≈ [nRT/(V₀ − nb) − n²a/V₀²] + [−nRT/(V₀ − nb)² + 2n²a/V₀³](V − V₀) = α + βV

Assuming n = 1 g-mole,

    β = −(82.06)(300)/(100 − 38.6)² + (2)(1.347 × 10⁶)/(100)³ = −3.84 atm/cm³
    α = 648 atm

so that

    E{P} = α + βE{V} = 648 − (3.84)(100) = 264 atm
    Var{P} = β² Var{V} = (−3.84)²(1) = 14.75 atm²

Example 2.4-7

The overall heat-transfer coefficient of a batch water heater is computed from

    q = W C_p (dT/dt) = U A ΔT_a

where:
    W = weight of water, lb
    C_p = heat capacity
    U = overall heat-transfer coefficient
    A = area for heat transfer
    ΔT_a = average temperature difference, °F

Solution:
From Equation 2.4-26, assuming that the variables are independent,

    s_U² = U² [s_W²/W² + s_A²/A² + s_{ΔTa}²/ΔT_a² + s_{dT/dt}²/(dT/dt)²]

Estimated variances of the measured quantities include

    σ_A ≈ 0.45/3 = 0.15,  so  σ_A² ≈ s_A² = 0.0225 ft²
    s_{ΔTa}² = 0.25 + 0.01 = 0.26 (°F)²

and substitution of the measured values (U ≈ 68.6) gives s_U² ≈ 26.8.
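The linearization and propagation of error in Example 2.4-6 can be checked numerically; a minimal sketch with the constants given there (the small differences from the printed 648, 264, and 14.75 atm values come from intermediate rounding in the text):

```python
# Propagation of error through the linearized Van der Waals equation:
# P = nRT/(V - nb) - n^2*a/V^2, linearized about V0 = E{V}.
n, R, T = 1.0, 82.06, 300.0
a, b = 1.347e6, 38.6
V0, var_V = 100.0, 1.0

beta = -n * R * T / (V0 - n * b) ** 2 + 2 * n**2 * a / V0**3   # dP/dV at V0
mean_P = n * R * T / (V0 - n * b) - n**2 * a / V0**2           # P(V0), ~E{P}
var_P = beta**2 * var_V                                        # Equation 2.4-26

print(round(beta, 2))  # -3.84 atm/cm^3
```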
The sample crosscovariance based on n pairs of observations is

    s_XY = [1/(n − 1)] Σ_{i=1}^{n} (X_i − X̄)(Y_i − Ȳ)n_i    (2.4-27)
         = [1/(n − 1)] [Σ n_i X_i Y_i − nX̄Ȳ]    (2.4-27a)

and the sample correlation coefficient is

    ρ̂_XY = s_XY/(s_X s_Y) = [Σ_{i=1}^{n} (X_i − X̄)(Y_i − Ȳ)n_i]/[(n − 1)s_X s_Y]    (2.4-28)

ρ̂_XY lies between −1 and +1 (Figure 2.4-9 illustrates the interpretation of the correlation coefficient ρ_XY). The transformed variable

    z* = tanh⁻¹ ρ̂_XY = ½ ln[(1 + ρ̂_XY)/(1 − ρ̂_XY)]

is useful in making tests concerning ρ_XY.
FIGURE 2.4-11  Typical scatter diagrams of correlated and uncorrelated data.

† For various tests which can be made for ρ_XY together with tables and charts, see E. S. Pearson and H. O. Hartley, Biometrika Tables for Statisticians, Vol. I (2nd ed.), Cambridge Univ. Press, 1958; R. A. Fisher and F. Yates, Statistical Tables for Biological, Agricultural, and Medical Research (3rd ed.), Oliver and Boyd, Edinburgh, 1948; and F. N. David, Tables of the Correlation Coefficients, Biometrika Office, University College, London, 1938.
TABLE E2.4-8

Crystallinity,   Sedimentation
X_i              Rate, Y_i      (X_i − X̄)   (Y_i − Ȳ)   (X_i − X̄)²   (Y_i − Ȳ)²   (X_i − X̄)(Y_i − Ȳ)
1                1              −4           −6           16            36            24
2                3              −3           −4            9            16            12
3                4              −2           −3            4             9             6
4                6              −1           −1            1             1             1
5                8               0            1            0             1             0
7                8               2            1            4             1             2
8                11              3            4            9            16            12
8                15              3            8            9            64            24
                                X̄ = 5       Ȳ = 7        52           144            81

Solution:
The sample correlation coefficient can be calculated from Equation 2.4-28 as shown in Table E2.4-8:

    ρ̂_XY = 81/√((52)(144)) = 0.937

In general, a value of ρ̂_XY = 0.937 is "high"; consult the aforementioned references for appropriate tests.
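The tabular computation of Example 2.4-8 can be reproduced directly; a minimal sketch using the data as reconstructed above, with the rounded means X̄ = 5 and Ȳ = 7 used in the worked table:

```python
import math

# Sample correlation coefficient (Equation 2.4-28) via deviation sums.
x = [1, 2, 3, 4, 5, 7, 8, 8]        # crystallinity (reconstructed data)
y = [1, 3, 4, 6, 8, 8, 11, 15]      # sedimentation rate
x_bar, y_bar = 5, 7                 # rounded means from the table
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))   # 81
sxx = sum((xi - x_bar) ** 2 for xi in x)                         # 52
syy = sum((yi - y_bar) ** 2 for yi in y)                         # 144
rho = sxy / math.sqrt(sxx * syy)    # ~0.937 as in the text
print(sxy, sxx, syy)  # 81 52 144
```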
Supplementary References

Hoel, P. G., Introduction to Mathematical Statistics (3rd ed.), John Wiley, New York, 1962.
Huff, D., How to Lie with Statistics, Norton, New York, 1954.
Kenney, J. F. and Keeping, E. S., Mathematics of Statistics, D. Van Nostrand, New York, 1951.
Mandel, J., The Statistical Analysis of Experimental Data, John Wiley, New York, 1964.
Mosteller, F., Rourke, R. E. K., and Thomas, G. B., Probability and Statistics, Addison-Wesley, Reading, Mass., 1961.
Wallis, W. A. and Roberts, H. V., Statistics: A New Approach, Free Press, Glencoe, Ill., 1956.
Wilks, S. S., Mathematical Statistics, John Wiley, New York, 1962.
Wine, R. L., Statistics for Scientists and Engineers, Prentice-Hall, Englewood Cliffs, N.J., 1964.

Problems

2.3  For a random variable taking values between 0 and n, with the distribution P(x) = P{X ≤ x} specified for 0 ≤ x ≤ n (P = 0 for x < 0 and P = 1 for x > n), plot the probability distribution (versus x) and determine and plot the relation for the probability density.

2.4  The Rayleigh probability density function is

     p(r) = (r/θ²) e^{−r²/(2θ²)},    r > 0

where θ² is a constant. Determine the Rayleigh probability distribution function, P(r), and plot both p(r) and P(r) for several values of θ². (P(r) corresponds to the probability that a point on a plane, whose coordinates are independent normally distributed random variables, lies within a circle of radius r.)

2.5  By analogy with thermodynamics, the "entropy" for a discrete probability function can be defined as a sum over k = 1, 2, … of terms in P(x_k).
2.6  The density function

     p(x) = [√2/(σ³√π)] x² e^{−x²/(2σ²)} U(x)

where U(x) is the unit step function (zero for x < 0), arises for molecular speeds (the Maxwell distribution).

2.7  X and Y have the joint probability function P(x, y) tabulated in the text for x = 0, 1, 2 and y = 0, 1, 2 (entries given in twelfths).
2.8  The joint density of two independent normal variables is

     p(x, y) = [1/(√(2π)σ_X)] exp[−(x − μ_X)²/(2σ_X²)] · [1/(√(2π)σ_Y)] exp[−(y − μ_Y)²/(2σ_Y²)]

Representative models with stochastic inputs include:

     (a) d²Y/dt² + a dY/dt = X(t)
     (b) ∂Y/∂t + a ∂Y/∂x = b(Ȳ − Y)
     (c) ∂C/∂t = α ∂²C/∂x², with C(0, x) = 0, C(t, 0) = C₀, and lim_{x→∞} C(t, x) = 0
     (d) boundary temperatures T(x₁) = T₁₀ and T(x₂) = T₂₀

For X uniformly distributed,

     p(x) = 1/a    for −a/2 < x ≤ a/2
     p(x) = 0      elsewhere

determine the probability densities of (a) 5X, (b) (X + 7), and (c) (X − 3).

For Y(t) = A e^{iωt} with random amplitude A, consider the densities:

     (a) p(a) = 1/b for |A| ≤ b/2, and p(a) = 0 for |A| > b/2
     (b) p(a) = c e^{−a²/b}, where c is a constant to be determined
2.29  X and Y have the joint probability function tabulated in the text (entries given in sixteenths).

2.30  The joint density is

      p(x, y) = ax² + bxy + cy² + dx + ey    for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
      p(x, y) = 0                            elsewhere

where ax² + bxy + cy² + dx + ey ≥ 0.

2.31  dY(t)/dt + 2Y(t) = X(t), with Y(0) = 0.

2.32  Observations are grouped into the classes 6.00-6.19, 6.20-6.29, 6.30-6.39, 6.40-6.49, and 6.50-6.59.
(d) P{X² = 3}, for 0 ≤ X.

Lifetime data:

      Lifetime (hr)     Number
      2000-2999         12
      3000-3999         64
      4000-4999         35
      5000-5999         14

Paired percentage data for two trees:

      Tree A (%)     Tree P (%)     Difference (%)
      37             37              0
      35             38              3
      43             36             −7
      34             47             13
      36             48             12
      48             57              9
      33             28             −5
      33             42              9

TABLE P2.48

      Code      Relative       Number of     Composition, X₂ (grams)   X₃ (grams),   Zinc
      Number    Sensitivity    Samples       Terphenyl                 TPBD          Stearate
      53        29.4           2             207                       25            8.3
      54        26.9           3             212                       25            7.9
      55        26.3           5             220                       25            7.2
      57        21.2           2             210                       25            8.0
      59        26.3           2             205                       25            7.7
      60        23.1           3             213                       25            8.2
      61        26.8           3             200                       25            7.8
      63        25.4           2             217                       25            7.8
2.51  For ν = 5 and the stated value of μ, estimate the precision of the indicated group of variables.

2.52  Given that P{F > F*} = 0.05, compute F* for the variance-ratio distribution with ν₁ = ν₂ = 5 and for ν₁ = 3 and ν₂ = 10.

2.57  Estimate the precision of computed quantities such as (d) q = UA ΔT from the precision of the measured variables.

2.58  Gas streams are blended as follows:

      Stream                          μ, SCF/min     Variance, σ²
      Fresh feed                      100            250
      Recycle                         350            500
      Inert gas                       170            150
      Instrumental analysis stream    30             10
An orifice meter reads a differential pressure Δp = 12 in. Hg. For a constant-pressure filtration,

      dV/dt = A Δp/[μ α (W/A)]

where:
      V = ft³ of filtrate
      t = time, minutes
      Δp = pressure drop
      μ = viscosity of fluid
      α = average specific cake resistance (0 ≤ a ≤ 1)
      W = weight of dry cake solids
      A = filtration area

and L, D, ρ, and g_c denote the pipe length, diameter, fluid density, and conversion factor, respectively.

A catalytic unit receives a feed F and produces gas G, liquid L, and coke C streams.

For heat transfer in tubes the dimensionless groups are related by

      (hD/k) = function of (DG/μ) and (Cμ/k)

where:
      μ = viscosity
      h = heat transfer coefficient
      G = mass velocity per unit area
      D = (4)(mean hydraulic radius)
      C = heat capacity
      k = thermal conductivity
      D₁, D₂ = diameter of tubes 1 and 2, respectively

with representative values 100,000 and 0.77, and precisions of 0.5% and 1%.
2.67  Energy spectrum data for two runs:

      Energy Interval     Run E.A.     Run J.E.
      0.3-0.4             12           11
      0.4-0.5             32           30
      0.5-0.6             26           59
      0.6-0.7             21           22
      0.7-0.8             3            9
      0.8-0.9             8            9
      0.9-1.0             5            6
      1.0-1.1             4            1
      1.1-1.2             5            4

2.68  In a fluidized bed oxidation process, seven runs were carried out at 375°C. The conversion of different naphthalenic feeds to phthalic anhydride (PA) is shown below. What are the sample correlation coefficients between the percent conversion and: (a) the contact time and (b) the air-feed ratio? What is the sample correlation coefficient between the contact time and the air-feed ratio? What interpretation can you give your results?

      Run     Contact Time (sec)     Air-Feed Ratio (air/g feed)     Mole Percent Conversion to PA
      1       0.69                   29                              50.5
      2       0.66                   91                              30.9
      3       0.45                   82                              37.4
      4       0.49                   99                              37.8
      5       0.48                   148                             19.7
      6       0.48                   165                             15.5
      7       0.41                   133                             49.0
CHAPTER 3

3.1  INTRODUCTION

An estimate such as the sample variance

    s_X² = [1/(n − 1)] Σ (X_i − X̄)² n_i

uses the divisor (n − 1) instead of n because s_X² is then an unbiased estimate of σ_X². An estimate b̂ of a parameter b is termed consistent if

    P{|b̂ − b| < δ} → 1    as n → ∞
3.2-1  Maximum Likelihood Estimation

The likelihood function for a sample of n independent observations is

    L(θ₁, θ₂, … | x₁, x₂, …, x_n) = ∏_{i=1}^{n} p(x_i; θ₁, θ₂, …)    (3.2-1)

    ln L = Σ_{i=1}^{n} ln p(x_i; θ₁, θ₂, …)    (3.2-2)

Example 3.2-1

Solution:
First, form the likelihood function for the sample {x₁, x₂, …, x_n} of measurements of the random variable X:

    L = ∏_{i=1}^{n} p(x_i; θ₁, θ₂) = [1/(θ₂√(2π))]ⁿ exp[−(1/(2θ₂²)) Σ_{i=1}^{n} (x_i − θ₁)²]    (a)

The maximum-likelihood estimates are found by setting

    ∂ln L/∂θ_i = 0    (3.2-3)
Setting ∂ln L/∂θ₁ = 0 gives

    θ̂₁ = (1/n) Σ_{i=1}^{n} x_i = X̄    (b)

and setting ∂ln L/∂θ₂ = 0,

    −(n/θ₂) + (1/θ₂³) Σ_{i=1}^{n} (x_i − θ₁)² = 0

so that

    θ̂₂² = (1/n) Σ_{i=1}^{n} (x_i − X̄)² = [(n − 1)/n] s_X²

a biased estimate whose bias vanishes in the limit:

    lim_{n→∞} E{θ̂₂²} = θ₂²
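The relation between the maximum-likelihood variance estimate and the unbiased sample variance is easy to verify numerically; a minimal sketch, reusing the titration data of Example 3.3-1 as an assumed sample:

```python
import statistics

# For a normal sample, the MLE of the mean is x_bar and the MLE of the
# variance uses divisor n, so it equals ((n-1)/n) * s_x^2.
x = [76.48, 76.43, 77.20, 76.45, 76.25, 76.48, 76.48, 76.60]
n = len(x)
theta1 = statistics.fmean(x)                           # MLE of theta_1
theta2_sq = sum((xi - theta1) ** 2 for xi in x) / n    # MLE of theta_2^2
s_sq = statistics.variance(x)                          # unbiased, divisor n-1
print(abs(theta2_sq - (n - 1) / n * s_sq) < 1e-12)     # True
```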
Method of Moments

The method of moments equates the ensemble moments, which are functions of the parameters,

    E{X^k} = Σ_i x_i^k P(x_i; θ₁, …, θ_n)    (discrete)
    E{X^k} = ∫_{-∞}^{∞} x^k p(x; θ₁, …, θ_n) dx    (continuous)

to the corresponding sample moments.

Example 3.2-2

For a random variable Y (−∞ < y < ∞) drawn with probability w from a normal population with mean α and with probability (1 − w) from a normal population with mean β (each with unit variance),

    E{Y} = wα + (1 − w)β
    E{Y²} = w(1 + α²) + (1 − w)(1 + β²)
    E{Y³} = w(3α + α³) + (1 − w)(3β + β³)

Solution:
The method of maximum likelihood would require maximization of a product of mixture densities; the method of moments instead equates the three ensemble moments above to the first three sample moments, giving three algebraic equations in the three unknowns w, α, and β.
(3.2-4)
p(O I x) = p(O, x)
p(x)
(3.2-5)
53
3.3 INTERVAL ESTIMATION

[FIGURE 3.3-1 shows the areas under the t-distribution corresponding to the quantiles t_β and t_γ.] For the t-distribution,

    P{t ≤ t_γ} = γ                                           (3.3-1)

    P{t > t_γ} = 1 − γ

    P{t_β < t ≤ t_γ} = P(t_γ) − P(t_β)                       (3.3-2)

With t = (X̄ − μ_x)/s_X̄ and a symmetric interval,

    P{−t_{1−α/2} ≤ (X̄ − μ_x)/s_X̄ ≤ t_{1−α/2}} = 1 − α       (3.3-3)

which rearranges to the confidence interval for the ensemble mean

    X̄ − t_{1−α/2} s_X̄ ≤ μ_x < X̄ + t_{1−α/2} s_X̄            (3.3-4)

If σ_x is known, the standard normal variable u replaces t:

    X̄ − u_{1−α/2} σ_X̄ ≤ μ_x < X̄ + u_{1−α/2} σ_X̄            (3.3-5)

For the χ² distribution,

    P{χ²_β < χ² ≤ χ²_γ} = P(χ²_γ) − P(χ²_β)                  (3.3-6)

FIGURE 3.3-2 Graphical representation of the probability statement (Equation 3.3-6) for the χ² distribution.
INTERVAL ESTIMATION

Because s_x²ν/σ_x² has a χ² distribution with ν degrees of freedom, the probability statement can be inverted to give a confidence interval for the ensemble variance:

    1/χ²_{1−α/2} ≤ σ_x²/(s_x²ν) < 1/χ²_{α/2}                 (3.3-11)

or

    s_x²ν/χ²_{1−α/2} ≤ σ_x² < s_x²ν/χ²_{α/2}                 (3.3-7)

A distribution-free bound can also be obtained. If f(X) is a nonnegative function of the random variable X and c > 0, then

    E{f(X)} ≥ c P{f(X) ≥ c}                                  (3.3-8)

because, by definition, f(X) ≥ 0 and

    E{f(X)} = ∫_{−∞}^{∞} f(x)p(x) dx
            = ∫_{f<c} f(x)p(x) dx + ∫_{f≥c} f(x)p(x) dx      (3.3-9)
            ≥ ∫_{f≥c} f(x)p(x) dx ≥ c ∫_{f≥c} p(x) dx
            = c P{f(X) ≥ c}                                  (3.3-10)

Taking f(X) = (X − μ_x)² and c = h²σ_x² with h > 1 gives the Chebyshev inequality P{|X − μ_x| ≥ hσ_x} ≤ 1/h².

Example 3.3-1
Eight replicate measurements of X were made (values in cc):

    76.48    76.25
    76.43    76.48
    77.20    76.48
    76.45    76.60

Estimate the two-sided 95-percent confidence interval for the ensemble mean.

Solution:
    x̄ = (1/n) Σ x_i = 76.546 cc

    s_x² = Σ (x_i − x̄)²/(n − 1) = 0.5543/7 = 0.0790 (cc)²

    s_X̄ = √(s_x²/n) = √(0.0790/8) = 0.099 cc

    ν = n − 1 = 7

For a 95-percent interval t_{0.975} = 2.365, and Equation 3.3-4 gives

    76.546 − (2.365)(0.099) ≤ μ_x < 76.546 + (2.365)(0.099)

    76.31 ≤ μ_x < 76.79

Example 3.3-2
For the same data, estimate the 95-percent confidence interval for the ensemble variance. With χ²_{0.975} = 16.013 and χ²_{0.025} = 1.690 for ν = 7, Equation 3.3-7 gives

    (0.0790)(7)/16.013 ≤ σ_x² < (0.0790)(7)/1.690

    0.0345 ≤ σ_x² < 0.326
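Both confidence intervals of Examples 3.3-1 and 3.3-2 can be reproduced with a short computation. The t and χ² quantiles below (2.365, 16.013, 1.690 for ν = 7 at the 95-percent level) are table values, not computed; everything else follows from the data.

```python
# Confidence intervals for the mean (Eq. 3.3-4) and variance (Eq. 3.3-7).
import math

x = [76.48, 76.43, 77.20, 76.45, 76.25, 76.48, 76.48, 76.60]   # cc
n = len(x)
mean = sum(x) / n
s2 = sum((xi - mean) ** 2 for xi in x) / (n - 1)
se = math.sqrt(s2 / n)

t975, chi2_hi, chi2_lo = 2.365, 16.013, 1.690   # table values, nu = 7

mean_ci = (mean - t975 * se, mean + t975 * se)                 # Eq. 3.3-4
var_ci = (s2 * (n - 1) / chi2_hi, s2 * (n - 1) / chi2_lo)      # Eq. 3.3-7
```

The results match the worked values in the text to within rounding: roughly (76.31, 76.78) for the mean and (0.035, 0.33) for the variance.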
Example 3.3-3
Streams A and B are mixed to form stream C (Figure E3.3-2: a lb A/hr plus b lb B/hr yield c lb C/hr). Samples of each stream gave:

                Sample Mean    Sample Standard     Number
    Material      (lb/hr)      Deviation (lb/hr)  in Sample
       A            10              0.20              5
       B             5              0.10              5

Because μ_C = μ_A + μ_B, the sample means add, X̄_C = X̄_A + X̄_B, and the variances of the means combine as

    s²_X̄C = s²_X̄A + s²_X̄B

[The effective degrees of freedom ν_C and the resulting confidence interval for μ_C are garbled in the source; the fragments indicate a relation of the form s⁴_X̄C/ν_C = s⁴_X̄A/ν_A + s⁴_X̄B/ν_B with ν_C ≈ 8.]

For a two-sided test at significance level α, the critical values of the standard normal variable u are defined by

    ∫_{u_{1−α/2}}^{∞} p(u) du = α/2                       (3.4-1)

    ∫_{−∞}^{u_{α/2}} p(u) du = α/2                        (3.4-2)
I
3.4
HYPOTHESIS TESTING
I
I
HYPOTHESIS TESTING
S1
Dispersion of X about
the true ensemble
mean J.lA + 0
Dispersion of X
about the assumed
(I - {J): Probability of
rejection when the
hypothesis J.l = J.lA is false
a : Probability of
rejection of the
hypothesis J.l = J.lA
when true
'-"-"-''"---x
Xo
i .':
"2
J.lA
FIGURE
(J.lA
+ 0)
1.0..-------,------,--------r----.-------,----...,
0.8
fl
c:
~
:c
tV
U
w
0.6
-a;
(5
"0
c:
a,.,
;E
:c
tV
0.4
.D
c.
/I
ctl.
0.2
00
FIGURE 3.4-3a
= II-' -
a =
1-'0 =
I-'B =
::'00.70
c:
'"
0.60
"0
(50.50
c:
;0.40
~ 0.30
.D
c. 0.20
/I
ctl.
O.10
0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4 2.6
nAnS
]~
- - or d..- II'A
- --I'sl
~
d= 11'-1'01
t1
t1
(nA +ns+ l)(nA + ns)
FIGURE 3.4-3b
HYPOTHESIS TESTING

Example 3.4-1
Nine observations with s_x = 0.50 give s_X̄ = s_x/√n = 0.50/3 = 0.167. For a two-sided test at α = 0.05 with ν = 8, t_{1−α/2} = 2.306. To find the probability (1 − β) that a true displacement of the ensemble mean of Δ = 2.25 (in units of s_X̄) will be detected, write

    u = (X̄ − μ₁)/(σ_x/√n)

and approximate the noncentral t-distribution by the standard normal variable U:

    u₁ = (−t_{1−α/2} − Δ)/√(1 + t²_{1−α/2}/2ν) = (−2.306 − 2.25)/1.15 = −3.96

    u₂ = (−t_{1−α/2} + Δ)/√(1 + t²_{1−α/2}/2ν) = (−2.306 + 2.25)/1.15 = −0.0488

where

    √(1 + t²_{1−α/2}/2ν) = √(1 + (2.306)²/16) = 1.15

Then

    β ≈ P{u₁ < U ≤ −u₂} = 0.519 − 0.000 = 0.519

    (1 − β) = 1.000 − 0.519 = 0.481

[FIGURE E3.4-1, showing the regions of acceptance and rejection about μ₀, and a companion calculation for (1 − β) = 0.80 at α = 0.05, are garbled in the source.]
For the second hypothesis the likelihood is

    L₂ = (√(2π) σ_x)⁻ⁿ exp[ −(1/(2σ_x²)) Σ_{i=1}^{n} (x_i − μ₂)² ]

and sampling continues as long as

    β/(1 − α) < L₁/L₂ < (1 − β)/α

Taking logarithms, the inequality reduces to bounds on the sum of the observations:

    [σ_x²/(μ₂ − μ₁)] ln[β/(1 − α)] + nμ̄ < Σ_{i=1}^{n} x_i
        < [σ_x²/(μ₂ − μ₁)] ln[(1 − β)/α] + nμ̄               (3.4-4)

FIGURE 3.4-4 Sequential test chart to detect a difference in ensemble means for gasoline knock rating, α = β = 0.05, μ₁ = 55, μ₂ = 65, and σ_x² ≈ 9.5. [The chart plots Σ x_i against observation number n between the two bounds.]
where μ̄ = (μ₁ + μ₂)/2. Thus, in a test for one of two ensemble means, the sum of the observations up through the nth observation can be bounded if σ_x² is known and values of μ₂ and μ₁ and α and β are chosen. Figure 3.4-4 shows how the bounds on Σ_{i=1}^{n} x_i increase as n increases, as indicated by the terms nμ̄ in Equation 3.4-4. The data in the figure are for knockmeter readings of gasoline with μ₁ = 55 and μ₂ = 65 for two different octane numbers; α = β = 0.05; and σ_x² estimated from earlier tests by s_x² = 9.5 with 20 degrees of freedom. Inequality 3.4-4 is then approximately

    60n − 2.80 < Σ_{i=1}^{n} x_i < 60n + 2.80
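The bounds of Equation 3.4-4 are easy to tabulate for any n. The sketch below plugs in the gasoline knock-rating values quoted in the text (μ₁ = 55, μ₂ = 65, α = β = 0.05, σ² estimated by 9.5); nothing else is assumed.

```python
# Sequential test bounds on the running sum (Equation 3.4-4).
import math

def sprt_bounds(n, mu1, mu2, var, alpha, beta):
    mu_bar = (mu1 + mu2) / 2
    scale = var / (mu2 - mu1)
    lower = scale * math.log(beta / (1 - alpha)) + n * mu_bar
    upper = scale * math.log((1 - beta) / alpha) + n * mu_bar
    return lower, upper

lo, hi = sprt_bounds(1, 55, 65, 9.5, 0.05, 0.05)
# Continue sampling while lower < sum(x_i) < upper; conclude mu1 below the
# lower bound and mu2 above the upper bound.
```

At n = 1 the bounds are 60 ± 2.80, matching the approximate inequality quoted above, and they rise by μ̄ = 60 for each further observation.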
An analogous sequential test of the hypotheses H₁: σ_x = σ₁ versus H₂: σ_x = σ₂ bounds the running sum of squares:

    [2 ln(β/(1 − α)) + n ln(σ₂²/σ₁²)] / (1/σ₁² − 1/σ₂²)
        < Σ_{i=1}^{n} (x_i − μ_x)²
        < [2 ln((1 − β)/α) + n ln(σ₂²/σ₁²)] / (1/σ₁² − 1/σ₂²)      (3.4-5)

If μ_x is unknown, substitute Σ (x_i − x̄)² for Σ (x_i − μ_x)².
TABLE 3.5-1  TESTS FOR COMPARING THE MEAN OF A NEW PRODUCT OR A VARIABLE WITH A STANDARD*

    Hypothesis   Knowledge of σ    Test to Be Made Is†                   Remarks
                                   (If the inequality is satisfied,
                                   the hypothesis is accepted.)

    μ ≠ μ₀       σ unknown         |X̄ − μ₀| > t_{1−α/2}(s_x/√n)          Two-sided t-test
    μ ≠ μ₀       σ known           |X̄ − μ₀| > u_{1−α/2}(σ_x/√n)          Two-sided u-test
    μ > μ₀       σ unknown         (X̄ − μ₀) > t_{1−α}(s_x/√n)            One-sided t-test
    μ > μ₀       σ known           (X̄ − μ₀) > u_{1−α}(σ_x/√n)            One-sided u-test
    μ < μ₀       σ unknown         (μ₀ − X̄) > t_{1−α}(s_x/√n)            One-sided t-test
    μ < μ₀       σ known           (μ₀ − X̄) > u_{1−α}(σ_x/√n)            One-sided u-test

* Adapted from M. G. Natrella, Experimental Statistics, Nat. Bur. of Standards, Handbook 91, U.S. Dept. of Commerce.
† In each case look up t or u for the selected significance level α; t is for the n − 1 degrees of freedom.
Example 3.5-1
Readings from ten thermometers gave x̄ = 994.0 with s_x/√n = 2.55. [Five of the listed readings survive in the scan: 1002, 996, 998, 1002, 983.] Has the ensemble mean changed from 1000?

Solution:
We shall test the hypothesis that the ensemble mean of the readings of the ten thermometers, μ, has changed from μ₀ = 1000 by selecting as H₁ the first hypothesis in Table 3.5-1, namely μ ≠ μ₀. The test to be made, since σ_x is unknown, is

    |X̄ − μ₀| > t_{1−α/2}(s_x/√n)

With ν = n − 1 = 9 and t_{0.975} = 2.26,

    |X̄ − μ₀| = |994.0 − 1000| = 6.0

    t_{1−α/2}(s_x/√n) = 2.26(2.55) = 5.76

Since 6.0 > 5.76, the hypothesis μ ≠ μ₀ is accepted.

To compare two means, take samples X_A1, X_A2, ..., X_AnA and X_B1, X_B2, ..., X_BnB from ensembles with means μ_A and μ_B and variances σ_A² and σ_B², and compute

    X̄_A = (1/n_A) Σ_{i=1}^{n_A} X_Ai

    s_B² = [1/(n_B − 1)] Σ_{i=1}^{n_B} (X_Bi − X̄_B)²

(and the analogous quantities for the other sample), with ν_A = n_A − 1 and ν_B = n_B − 1 degrees of freedom.
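The two-sided test from Table 3.5-1 reduces to a single comparison. A sketch using the summary statistics quoted in the thermometer example (x̄ = 994.0, s_x/√n = 2.55, and the table value t_{0.975,9} = 2.26):

```python
# Two-sided t-test of the hypothesis mu != mu0 (first row of Table 3.5-1).
def two_sided_t_test(xbar, mu0, se, t_crit):
    # Accept the hypothesis mu != mu0 when |xbar - mu0| exceeds t_crit * se.
    return abs(xbar - mu0) > t_crit * se

changed = two_sided_t_test(994.0, 1000.0, 2.55, 2.26)
# changed is True here, since 6.0 > 5.76, agreeing with the example.
```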
TABLE 3.5-2  TESTS FOR COMPARING TWO MEANS*

    Hypothesis   Knowledge of the Standard     Test to Be Made Is†
                 Deviations of A and B         (If the inequality is satisfied,
                                               the hypothesis is accepted.)

    μ_A ≠ μ_B    σ_A ≅ σ_B, both unknown       |X̄_A − X̄_B| > t_{1−α/2} s_p [(n_A + n_B)/(n_A n_B)]^½
    μ_A ≠ μ_B    σ_A ≠ σ_B, both unknown       |X̄_A − X̄_B| > t_{1−α/2} (s_A²/n_A + s_B²/n_B)^½
    μ_A > μ_B    σ_A ≅ σ_B, both unknown       (X̄_A − X̄_B) > t_{1−α} s_p [(n_A + n_B)/(n_A n_B)]^½
    μ_A > μ_B    σ_A and σ_B known             (X̄_A − X̄_B) > u_{1−α} (σ_A²/n_A + σ_B²/n_B)^½

where the pooled standard deviation is

    s_p = { [(n_A − 1)s_A² + (n_B − 1)s_B²]/(n_A + n_B − 2) }^½

with ν = ν_A + ν_B = n_A + n_B − 2 degrees of freedom; for the case σ_A ≠ σ_B the effective degrees of freedom are

    ν = (s_A²/n_A + s_B²/n_B)² / [ (s_A²/n_A)²/(n_A + 1) + (s_B²/n_B)²/(n_B + 1) ] − 2

Also, the statistic

    t = (X̄_A − X̄_B) / { s_p [(n_A + n_B)/(n_A n_B)]^½ }          (3.5-1)

has a t-distribution with ν = n_A + n_B − 2 degrees of freedom. A significant value of t is interpreted to mean that μ_A ≠ μ_B. If s_A² does not differ significantly from s_B², we set up the test hypotheses accordingly. [Equation 3.5-2, the analogous statistic for the unequal-variance case, is garbled in the source.]
Example 3.5-2
Octane ratings of two gasolines gave:

                      94 Octane    90 Octane
    Sample mean          22.7         21.3
    Sample s              0.45         0.55
    Sample size           5            5

Test the hypothesis μ₉₄ ≠ μ₉₀, assuming σ₉₄ ≅ σ₉₀.

Solution:
    s_p = { [4(0.45)² + 4(0.55)²]/(5 + 5 − 2) }^½ = 0.50

    ν = 5 + 5 − 2 = 8,    t_{0.975} = 2.306

    |X̄₉₄ − X̄₉₀| = |22.7 − 21.3| = 1.4

    t_{1−α/2} s_p [(5 + 5)/((5)(5))]^½ = (2.306)(0.50)(0.632) = 0.73

Since 1.4 > 0.73, the hypothesis is accepted and μ₉₄ ≠ μ₉₀. Next, we test the hypothesis that μ₉₄ > μ₉₀, assuming still that σ₉₄ ≅ σ₉₀.
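The pooled two-sample comparison from Table 3.5-2 is a few lines of arithmetic. A sketch reproducing Example 3.5-2, with t_{0.975,8} = 2.306 taken from tables:

```python
# Pooled two-sample t comparison (first row of Table 3.5-2).
import math

def pooled_two_sample(xa, sa, na, xb, sb, nb, t_crit):
    sp = math.sqrt(((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2))
    half_width = t_crit * sp * math.sqrt((na + nb) / (na * nb))
    return abs(xa - xb), half_width

diff, crit = pooled_two_sample(22.7, 0.45, 5, 21.3, 0.55, 5, 2.306)
# diff = 1.4 exceeds crit (about 0.73), so mu_94 != mu_90 is accepted.
```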
To compare two variances, the ratio s₁²/s₂² is referred to the F-distribution. Because F_{α/2}(ν₁, ν₂) = 1/F_{1−α/2}(ν₂, ν₁) < 1 always, the left-hand inequality in the probability statement is always satisfied, and we need only test to determine if s₁²/s₂² ≥ F_{1−α/2}.

Example 3.6-1
Twin pilot plant units have been designed and put into operation on a given process. The output for the first ten material balances obtained on each of the two units is listed below (basis is 100 lb):

    [The twenty individual outputs appear in the scan, 97.8, 98.9, 101.2, 98.8, 102.0, 99.0, 99.1, 100.8, 100.9, 100.5, 97.2, 100.5, 98.2, 98.3, 97.5, 99.9, 97.9, 96.8, 97.4, and 97.2, but their assignment to Unit A (lb) and Unit B (lb) is scrambled. The summary statistics are sample means 99.9 and 98.1 and sample variances s_A² = 1.69 and s_B² = 1.44.]

Solution:
    s_A²/s_B² = 1.69/1.44 = 1.17

which is well below F_{0.975}(9, 9), so the hypothesis of equal ensemble variances is accepted.
Example 3.6-2
Two catalysts were compared through the yields obtained in repeated runs (Table E3.6-2):

                        Catalyst A      Catalyst B
    Sample mean        X̄_A = 1.219     X̄_B = 1.179
    Sample variance    s_A² = 0.208    s_B² = 0.193
    Sample size        n_A = 16        n_B = 15

Solution:
The variance ratio s_A²/s_B² is small enough that the hypothesis σ_A² = σ_B² is accepted, so the variances may be pooled:

    s_p² = [(n_A − 1)s_A² + (n_B − 1)s_B²]/[(n_A − 1) + (n_B − 1)]
         = [15(0.208) + 14(0.193)]/(15 + 14) = 0.201

    s_p²(1/n_A + 1/n_B) = 0.201(1/16 + 1/15) = 0.026

[The remaining arithmetic is garbled in the source; the printed comparison is |X̄_A − X̄_B| = 0.040 against (2.045)(0.201)[31/((16)(15))]^½ = 0.145, with t_{0.975} = 2.045 for ν = 29.] Since 0.040 < 0.145, the hypothesis μ_A ≠ μ_B is rejected, and we conclude that μ_A = μ_B.
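The full two-sample workflow, an F-ratio check followed by a pooled t comparison, can be sketched with the catalyst summary statistics. Two caveats: the scan is ambiguous about whether 0.208 and 0.193 are standard deviations or variances (they are treated as variances here), and the F and t critical values below are approximate table values, so the intermediate numbers differ from the garbled print even though the conclusion is the same.

```python
# F-ratio check, then pooled t comparison of two means (Table 3.5-2).
import math

def compare_means(xa, va, na, xb, vb, nb, f_crit, t_crit):
    f = max(va, vb) / min(va, vb)
    if f > f_crit:                 # variances differ: do not pool
        return None
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    half_width = t_crit * math.sqrt(sp2 * (1 / na + 1 / nb))
    return abs(xa - xb), half_width

diff, crit = compare_means(1.219, 0.208, 16, 1.179, 0.193, 15,
                           f_crit=2.95, t_crit=2.045)
# diff = 0.040 is well inside the acceptance region, so the difference in
# means is not significant, matching the example's conclusion.
```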
TABLE E3.6-3

    x_j:      1.28     1.30     1.40     1.48
    n_j:       10       10       10       10

              6.34     5.95     5.23     4.55
              6.36     6.04     5.27     4.65
              6.41     6.11     5.32     4.68
              6.42     6.31     5.39     4.68
              6.80     6.36     5.40     4.72
              6.85     6.52     5.52     4.73
              6.91     6.60     5.52     4.78
              6.91     6.62     5.53     4.78
              7.02     6.64     5.60     4.84
              7.12     6.71     5.78     4.86

    Mean:     6.71     6.39     5.46     4.72
    s_j²:     0.091    0.076    0.020    0.009
To test whether several sample variances all estimate the same ensemble variance (Bartlett's test), compute the pooled variance

    s_p² = Σ_i ν_i s_i² / Σ_i ν_i                                  (3.6-1)

and the statistic

    χ² = (1/c) [ (Σ_i ν_i) ln s_p² − Σ_i ν_i ln s_i² ]             (3.6-2)

which is approximately χ²-distributed with (n − 1) degrees of freedom for n samples, where

    c = 1 + [1/(3(n − 1))] ( Σ_i 1/ν_i − 1/Σ_i ν_i )               (3.6-3)

Example 3.6-3
For the four samples of Table E3.6-3, each with ν_i = 9,

    s_p² = (9/36)(0.091 + 0.076 + 0.020 + 0.009) = 0.049

and the statistic of Equation 3.6-2 is computed to be 15.3. For α = 0.05, with (n − 1) = 3 degrees of freedom, χ² from the appendix tables is 7.81; thus the hypothesis of equal variances for the X_i's is rejected. Figure E3.6-3 illustrates how the dispersion varies as a function of x.

[Example 3.6-4 is garbled in the source.]
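Bartlett's test is straightforward to compute from Equations 3.6-1 through 3.6-3. Using the four printed (rounded) variances of Example 3.6-3 the statistic comes out near 13 rather than the 15.3 quoted in the text, presumably because the printed variances are rounded, but the conclusion, rejection at the 0.05 level against 7.81, is the same.

```python
# Bartlett's test for homogeneity of variances (Eqs. 3.6-1 to 3.6-3).
import math

def bartlett(variances, dof):
    nu_sum = sum(dof)
    sp2 = sum(v * s2 for v, s2 in zip(dof, variances)) / nu_sum    # Eq. 3.6-1
    c = 1 + (1 / (3 * (len(variances) - 1))) * (
        sum(1 / v for v in dof) - 1 / nu_sum)                      # Eq. 3.6-3
    chi2 = (nu_sum * math.log(sp2)
            - sum(v * math.log(s2)
                  for v, s2 in zip(dof, variances))) / c           # Eq. 3.6-2
    return chi2

chi2 = bartlett([0.091, 0.076, 0.020, 0.009], [9, 9, 9, 9])
# Compare with the tabled chi-square value for 3 degrees of freedom
# (7.81 at the 0.05 level); a larger statistic rejects equal variances.
```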
FIGURE E3.6-3 [Two panels: (a) the data plotted against N = drop number (0 to 160), showing the change in dispersion; (b) histograms of y = ln[x/(1 − x)] for x between 0.6 and 2.4. Axis values are not reproduced.]
NONPARAMETRIC TESTS

Sign Test
[The development is garbled in the source. The test statistic is the number r of the rarer sign among the paired differences; the cumulative binomial probability

    P = (1/2)ⁿ Σ_{i=0}^{r} (n over i)                              (3.7-1)

is evaluated, in the worked example, as P{r = 0} + P{r = 1} + P{r = 2} + P{r = 3} ≈ 0.001.]

TABLE E3.7-1 [Paired measurements for pulverization methods A and B with the sign of the difference A − B; ten pairs with values between 1.8 and 2.7. The individual pairings are garbled in the scan.]

Mann-Whitney Test‡
If a sample of m x's and a sample of n y's are pooled and listed in order of increasing value, there are

    (m + n)!/(m! n!)

distinguishable patterns. If two samples are drawn from the same ensemble, each of the patterns is equally likely; but if they come from different ensembles, one would expect to find patterns in which the x's cluster at one end of the list and the y's at the other. Let the test statistic U* be the number of times a y precedes an x. U* is the number of y's preceding the smallest x plus the number of y's preceding the next larger x, including all the y's counted in the first batch, and so on until the number of y's preceding the last x in the list is counted and included in the sum. The probability of a given U* is the number of patterns yielding that U* divided by the total number of patterns.

U* is related to the sum T of the ranks of one sample in the pooled list by

    T = Σ_{i=1}^{m} r_i = U* + m(m + 1)/2                          (3.7-2)

or

    U* = T − m(m + 1)/2                                            (3.7-3)

and under the null hypothesis

    E{U*} = mn/2,    Var{U*} = mn(m + n + 1)/12                    (3.7-4)

so that for large samples (U* − mn/2)/√Var{U*} is approximately a standard normal variable.

‡ H. B. Mann and D. R. Whitney, Ann. Math. Stat. 18, 50, 1947.
Example 3.7-2  Mann-Whitney Test
The percent gains for two treatments were:

          A                         B
    Gain, %    Rank          Gain, %    Rank
     −1.4        1            −0.3        5
     −1.2        2½            0.5        8
     −1.2        2½            0.7        9
     −1.0        4             0.8       10
     −0.2        6             0.9       11
      0.2        7             1.5       12
                               2.4       13
    Rank sum    23           Rank sum    68

In the first list the rank of each gain, as determined from the second list, is placed in the second column of the table, the ranks going from 1 to 13.

One way to compute U* is to replace the observations in the second list by A or B, depending upon which sample the observations came from:

    A A A A B A A B B B B B B

Counting the number of times a B precedes an A gives U* = 2. [Several intermediate orderings printed in the source are garbled.] Alternatively, from Equation 3.7-3,

    U* = 23 − 6(6 + 1)/2 = 2

or, working from sample B,

    U* = 7(7 + 1)/2 + (7)(6) − 68 = 2

For larger samples (n₁ > 10, n₂ > 10) the normal approximation

    z = ( |T₁ − n₁(n₁ + n₂ + 1)/2| − ½ ) / √( n₁(n₁ + n₂ + 1)n₂/12 )      (3.7-5)

can be used.
The combined list, with sample labels:

    Gain:   −1.4  −1.2  −1.2  −1.0  −0.3  −0.2   0.2   0.5   0.7   0.8   0.9   1.5   2.4
    Sample:   A     A     A     A     B     A     A     B     B     B     B     B     B

with n₁ = 6 and n₂ = 7. [The ranking printed with this list is garbled in the scan.] Equation 3.7-5 then gives

    z = ( |33 − 6(6 + 7 + 1)/2| − ½ ) / √( 6(6 + 7 + 1)(7)/12 ) = 0.496
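The direct count of U*, "the number of times a value from one sample precedes a value from the other," can be written as a double loop; ties contribute one half each. The sketch below uses the two gain samples from the worked example and reproduces U* = 2.

```python
# Mann-Whitney U statistic by direct counting (Eqs. 3.7-2 to 3.7-4).
import math

def mann_whitney_u(first, second):
    # Counts, over all pairs, how often a value of `second` is smaller
    # than a value of `first`; equal values count one half.
    u = 0.0
    for x in first:
        for y in second:
            if y < x:
                u += 1.0
            elif y == x:
                u += 0.5
    return u

a = [-1.4, -1.2, -1.2, -1.0, -0.2, 0.2]          # sample A
b = [-0.3, 0.5, 0.7, 0.8, 0.9, 1.5, 2.4]         # sample B
u = mann_whitney_u(a, b)                          # B-values preceding A-values
m, n = len(a), len(b)
mean_u = m * n / 2                                # Eq. 3.7-4
var_u = m * n * (m + n + 1) / 12                  # Eq. 3.7-4
z = (u - mean_u) / math.sqrt(var_u)
```

Here u comes out 2, matching the example; z is strongly negative, reflecting how completely the A gains sit below the B gains.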
Runs Test
[The development is garbled in the source. For a sequence of n₁ symbols of one kind and n₂ of another, the expected number of runs is]

    μ_r = 2n₁n₂/(n₁ + n₂) + 1                                    (3.7-6)

[A worked sequence of signs and run counts accompanies the discussion but is not recoverable.]

Inversions. In the sequence

    3 5 1 4 2 6

there are six inversions: 3 is followed by two smaller numbers, 1 and 2; 5 is followed by three smaller numbers, 1, 4, and 2; and 4 is followed by one smaller number, 2. If the order of the numbers in the sequence is random, then each of the n! permutations of the n numbers is equally probable; the a priori probability of obtaining a random sequence with exactly I* inversions is simply the number of permutations containing exactly I* inversions divided by n!, the total number of possible permutations. The number of times a number is followed by a larger number in the sequence is the complement of I* and is designated as T*. A third measure which can be used is S* = T* − I*. Mann tabulated exact T* probabilities for 3 ≤ n ≤ 10, and Kendall listed probabilities for S*. [Equations 3.7-7 through 3.7-9, relating the statistic U⁺ to its mean and variance, are garbled in the source.] I* has a mean and variance of

    μ_I* = n(n − 1)/4                                            (3.7-10)

    σ²_I* = (2n³ + 3n² − 5n)/72                                  (3.7-11)

so that

    z = (I* − μ_I*)/σ_I*                                         (3.7-12)

is approximately a standard normal variable for moderate n.
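Counting inversions is a direct pairwise scan. The sketch reproduces the text's example sequence 3 5 1 4 2 6, which contains six inversions, and evaluates the mean and variance formulas for n = 6.

```python
# Inversion count I* and its null mean and variance (Eqs. 3.7-10, 3.7-11).
def inversions(seq):
    return sum(1
               for i in range(len(seq))
               for j in range(i + 1, len(seq))
               if seq[j] < seq[i])

n = 6
i_star = inversions([3, 5, 1, 4, 2, 6])          # 6 inversions
mean_i = n * (n - 1) / 4                          # Eq. 3.7-10
var_i = (2 * n**3 + 3 * n**2 - 5 * n) / 72        # Eq. 3.7-11
```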
NONPARAMETRIC TESTS

Example 3.7-3  Tests for Stationarity
[The criterion expressions introducing this example are garbled in the source.] The time averages of a process record for ten successive periods were:

    Period:         1     2     3     4     5     6     7     8     9    10
    Time average: 36.5  43.0  44.5  38.9  38.1  32.6  38.7  41.7  41.1  36.8

Solution:
WALD-WOLFOWITZ TEST. By inspection of the sequence, the median value of the ten values is (38.7 + 38.9)/2 = 38.8. A plus is assigned to a value above 38.8 and a minus to a value below, yielding the following sequence:

    − + + + − − − + + −

[The resulting run count is compared with its expected value at α = 0.05, and the null hypothesis is accepted.]

INVERSIONS TEST. Count, for each value, the number of later values that are smaller:

    Value:      36.5  43.0  44.5  38.9  38.1  32.6  38.7  41.7  41.1
    Inversions:   1     7     7     4     2     0     1     2     1

                                                   Total I* = 25

With n = 10, μ_I* = 10(9)/4 = 22.5 and σ²_I* = 2250/72 = 31.25, so z = (25 − 22.5)/√31.25 = 0.45, well below the critical value, and the null hypothesis is accepted again.

To determine stationarity, the sequence of mean square values would also have to be formed and tested. For the given time record the null hypothesis was accepted by both tests; hence the mean square values are not tabulated.
3.7-5 Tests for Randomness
The nonparametric tests described above in connection
with stationarity also in effect test for randomness,
except for possible periodic components. If the segments
of the time record pass the stationarity test, then periodic
components which are not detected by visual inspection
of the time record or by the test for stationarity are best
detected by visual inspection of the time average power
spectral density or autocorrelation function (defined in
Section 12.3-3). Because a sine wave will have an autocorrelation function which will persist over all values of τ, as opposed to random data for which r_XX(τ) → 0 as τ → ∞ (for μ_x = 0), the time average autocorrelation
function can be plotted and examined. In this connection,
refer back to the autocorrelograms in Figure 2.2-1. A
periodic component in the data will show up as a peak
in the power spectral density function, especially when
the amplitude of the periodic component becomes
larger than the associated noise.
3.7-6 χ² Tests
For a binomial variable with parameter θ, the standardized count is

    z = (x − nθ)/√(nθ(1 − θ))                                          (3.7-13)

which will be approximately normally distributed for large values of nθ(1 − θ) with a mean of zero and a variance of 1. Furthermore, the variable

    Σ_{i=1}^{k} z_i² = Σ_{i=1}^{k} (x_i − nθ_i)²/(nθ_i(1 − θ_i))       (3.7-14)

is approximately χ²-distributed with ν = k − 1 degrees of freedom, as is

    χ² = Σ_{i=1}^{k} (x_i − nθ_i)²/(nθ_i),    ν = k − 1 − g            (3.7-15)

where g is the number of parameters estimated from the data. In terms of observed frequencies n_i and theoretical frequencies n_i*,

    χ² = Σ_{i=1}^{k} (n_i − n_i*)²/n_i*                                (3.7-16)

Example 3.7-6  χ² Test
Batches of product were classified into four grades (among them grades A and B; the remaining grade labels are garbled in the scan), with the expected split known from past experience:

    Observed n_i:       340    130    100    30      Sum = 600
    Theoretical n_i*:   320    160     80    40      Sum = 600

Solution:
The procedure is to tabulate the observed frequencies n_i and compute the theoretical frequencies n_i* based on a total equal to the sum of the observed frequencies. From Equation 3.7-16,

    χ² = (340 − 320)²/320 + (130 − 160)²/160 + (100 − 80)²/80 + (30 − 40)²/40
       = 400/320 + 900/160 + 400/80 + 100/40 = 14.4
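Equation 3.7-16 is a one-line computation; the sketch below reproduces the grade-classification example.

```python
# Chi-square goodness-of-fit statistic (Equation 3.7-16).
def chi_square(observed, theoretical):
    return sum((o - t) ** 2 / t for o, t in zip(observed, theoretical))

chi2 = chi_square([340, 130, 100, 30], [320, 160, 80, 40])
# chi2 is about 14.4, to be compared with the tabled chi-square value for
# k - 1 = 3 degrees of freedom at the chosen significance level.
```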
As another illustration, the number of occurrences of each of the values 0 through 9 in a record was:

    Value:          0   1   2   3   4   5   6   7   8   9
    Occurrences:   27  18  23  31  21  23  28  25  22  32

With 250 occurrences in all, the theoretical frequency of each value is 25, and Equation 3.7-16 gives χ² = 180/25 = 7.2 with ν = 9 degrees of freedom, so the hypothesis of equal frequencies is not rejected.

Example 3.7-7
The number of failures per interval was observed for 473 intervals and fitted both by a Poisson and by a negative binomial density:

    Number of     Observed    Poisson    Negative
    Failures                             Binomial
        0            331        317        333
        1            104        127        100
        2             27         25         29
        3              8          3          8
        4              1          1          2
        5              2          0          1
      Total          473        473        473

                          Poisson    Negative Binomial
    Calculated χ²:          17.2          0.31
    χ²_{0.99} from
    Table C.2:              9.21          6.63

We compare 17.2 with the value of χ²_{1−α} = 9.21 from Table C.2 in Appendix C (for α = 0.01 and ν = k − 1 − g = 4 − 1 − 1 = 2 degrees of freedom) and do not find a good fit. Note that the classes of size less than 5 must be combined so that k = 4. However, the same test for the negative binomial density

    p(x) = (r + x − 1 over x) θ^r (1 − θ)^x,    x = 0, 1, 2, ...
           r = positive integer,  0 < θ < 1

gives χ² = 0.31, a good fit.

‡ W. S. Connor, Ind. Eng. Chem. 52 (2), 74A, 1960; 52 (4), 71A, 1960.
§ J. L. Hodges and E. L. Lehmann, J. Stat. Soc. B16, 261, 1954.
TABLE 3.7-1  TWO-WAY CLASSIFICATION

                         Classifications of Variable Y
                           y₁     y₂    ...    y_p      Row Sums
    Classifi-      x₁     f₁₁    f₁₂   ...    f₁p        f₁.
    cations of     x₂     f₂₁    f₂₂   ...    f₂p        f₂.
    Variable X     ...
                   x_m    f_m1   f_m2  ...    f_mp       f_m.

    Column Sums:          f.₁    f.₂   ...    f._p       f.. = n

The test of independence of the two classifications uses

    χ² = Σ_{i=1}^{m} Σ_{j=1}^{p} (f_ij − f*_ij)²/f*_ij              (3.7-17)

with the expected cell frequencies estimated from the margins:

    f*_ij = f_i. f._j / n                                           (3.7-18)

If Equation 3.7-18 is introduced into Equation 3.7-17, we obtain

    χ² = n [ Σ_{i=1}^{m} Σ_{j=1}^{p} f_ij²/(f_i. f._j) − 1 ]        (3.7-19)

with degrees of freedom

    ν = (mp − 1) − (m − 1) − (p − 1) = mp − m − p + 1 = (m − 1)(p − 1)

Example 3.7-8  Test of Independence
Observations were classified two ways, into three levels of each variable. [The cell arrangement below is reconstructed from the scan and may be scrambled; the marginal totals are consistent.]

                  −250 to −50    −50 to +50    50 to 200    Total
    0-1200             5              7             8         20
    1200-1800          9              5            21         35
    1800-2700          7              9            16         32
    Total             21             21            45         87

Solution:
The minimum predicted frequency is greater than 5. The degrees of freedom are (3 − 1)(3 − 1) = 4. The statistic of Equation 3.7-19 is computed as

    χ² = 87(0.232) = 20.2

We find that χ²_{0.95} = 9.488 from Table C.2 in Appendix C for α = 0.05; since 20.2 > 9.488, the hypothesis of independence is rejected.
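Equation 3.7-19 is convenient to program because it needs only the cell frequencies and the margins. The sketch below applies it to the 3x3 table as read from the scan; with these cells the statistic comes out near 3.7 rather than the 20.2 printed in the text, which suggests the scanned cell arrangement is scrambled, but the formula itself is the point.

```python
# Chi-square test of independence via Equation 3.7-19:
#   chi2 = n * ( sum_ij f_ij^2 / (f_i. * f_.j) - 1 )
def chi2_independence(table):
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    s = sum(table[i][j] ** 2 / (row[i] * col[j])
            for i in range(len(table))
            for j in range(len(table[0])))
    return n * (s - 1)

chi2 = chi2_independence([[5, 7, 8], [9, 5, 21], [7, 9, 16]])
# Compare with the tabled value for (m - 1)(p - 1) = 4 degrees of freedom.
```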
3.8 TEST OF AN OUTLIER

In the series x₁, x₂, ..., x_n, suppose x_s is suspected of being an outlier. Compute the mean of the remaining observations,

    X̄ = (Y₁ + Y₂ + ... + Y_{n−r})/(n − r)                       (3.8-2)

and the deviation Y_i = X_i − X̄, with ν = n − 1 degrees of freedom. The suspect observation is rejected if its deviation exceeds a critical value c computed by trial and error from tabled F values. [Equations 3.8-1, 3.8-3, and 3.8-4, which define c, are only partially legible in the source.]

Example 3.8-1
In the series 23.2, 23.4, 23.5, 24.1, 25.5, should the last observation be rejected?

Solution:
Compute X̄ = 23.9 and then Y₅ = x₅ − X̄ = 25.5 − 23.9 = 1.6; s_x = 0.77. For α = 0.05, ν = 4, and n = 5, from Equation 3.8-3 we compute by trial and error the critical value c [the arithmetic, which involves the tabled value 2.776, is garbled in the source], and observation x₅ is retained.
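Since the book's Equation 3.8-3 is only partially legible here, the sketch below uses the common "extreme studentized deviate" form of the outlier test instead: flag x_k if |x_k − x̄|/s_x exceeds a critical value c. The constant c ≈ 1.71 for n = 5 at the 5-percent level is taken from Grubbs-type tables and should be treated as an assumption, not the book's value.

```python
# Outlier screening by the extreme studentized deviate (not Eq. 3.8-3).
import math

def max_studentized_deviate(x):
    n = len(x)
    mean = sum(x) / n
    s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (n - 1))
    return max(abs(xi - mean) for xi in x) / s

g = max_studentized_deviate([23.2, 23.4, 23.5, 24.1, 25.5])
# g is about 1.67, below c = 1.71 (assumed table value for n = 5), so the
# extreme value 25.5 is retained, agreeing with the example's conclusion.
```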
[Equations 3.8-3 and 3.8-4, the general critical-value expressions for the outlier test, are garbled in the source.]

3.9 CONTROL CHARTS

[A control chart plots a sample statistic in time sequence between an upper and a lower control limit placed about the central line x̄.]

TABLE 3.9-1  ABILITY OF CONTROL CHARTS TO DETECT A CHANGE IN A PROCESS*

[The table rates the mean (X̄), range (R), standard deviation (s), and cumulative sum (CS) charts on their ability to detect: a gross error (blunder), a shift in average, a shift in variability, a slow fluctuation (trend), and a rapid fluctuation (cycle). The individual ratings (values 1 through 3; * = not appropriate) are scrambled in the scan.]

FIGURE 3.9-2 [Control chart for gas consumption (97.0 to 100.0) with a second statistic (0.0 to 1.0) plotted below, recorded every two hours over about three days (a.m./p.m. time axis); an example of a leading indication of a process change is marked.]
TABLE 3.9-2  FACTORS FOR COMPUTING CONTROL CHART LINES (α = 0.0027)*

[For sample sizes n = 2 through 25 (and a large-n limit), the table lists the standard control chart factors for the chart for means (A, A₁, A₂), the chart for standard deviations (c₂, B₃, B₄, and related factors), and the chart for ranges (d₂, D₃, D₄, and the factors for the central line). The several hundred individual entries are not reproduced here; for example, A₂ = 0.577 for n = 5.]

* Use explained in Table 3.9-5. The relation A₀ = 3√n̄/d₂ holds. Adapted with permission of the American Society for Testing Materials from ASTM Manual on Quality Control of Materials, Philadelphia, Jan. 1951, p. 115.
† 1 − 3/√(2n) and 1 + 3/√(2n).

TABLE 3.9-5  CONTROL CHART LINES*

[For each sample statistic (mean X̄, range R, standard deviation s_x, and sum S) and each state of knowledge of μ_x and σ_x (specified or unknown), the table gives the central line and the lower and upper control limits. For example, with μ_x and σ_x unknown, the X̄ chart uses central line X̄ and limits X̄ ± 3s_x/√n; the s chart uses limits s_x ∓ 3s_x/√(2n); and the sum chart uses S̄ ± A₀R̄. Sample sizes: small, preferably 10 or fewer (for sums, 25 or fewer of constant size). For pooled data,

    s̄_x = (n₁s₁ + n₂s₂ + ... + n_k s_k)/(n₁ + n₂ + ... + n_k) ]
TABLE E3.9-1

    Sample   Mean X̄   Range R        Sample   Mean X̄   Range R
       1      64.97     9.8             14      66.60     0.6
       2      64.60     9.8             15      66.12     6.3
       3      64.12     8.4             16      63.22     7.5
       4      68.52     3.9             17      62.85     6.7
       5      68.35     7.6             18      62.37     4.9
       6      67.87     8.7             19      61.97     6.7
       7      64.97     0.1             20      61.60     9.9
       8      64.60     9.7             21      61.12     6.9
       9      64.12     7.7             22      65.72     0.1
      10      63.22     7.5             23      65.35     8.3
      11      62.85     1.2             24      64.87     5.2
      12      62.37     9.8             25      61.97     3.2
      13      66.97     6.4

    Σ X̄ = 1611.29        X̿ = 1611.29/25 = 64.452
TABLE E3.9-2 [Twelve samples with means 68.2, 66.2, 72.4, 67.8, 67.0, 66.8, 67.0, 65.8, 62.6, 69.0, 67.6, and 66.0 and single-digit ranges, with Σ R = 156.9 and R̄ = 6.28. The X̄-chart computations give, for sample sizes n = 4, 5, and 10, values of A₂* of 0.760, 0.720, and 0.647 and limits LCL = X̄ − A₂R̄ and UCL = X̄ + A₂R̄ near 63.5 and 71.1 (revised to 63.8 and 70.8); the detailed layout is garbled in the scan.]

* Values taken from Table 3.9-4; the value from Table 3.9-2 would be 0.577.

FIGURE 3.9-3 [Six panels, (a) through (f), of X̄ and R control charts for samples 1-15, showing a process before and after prepackaging and before and after "preisomerization" was begun, with central lines and revised control limits as marked in the original.]
[Operating characteristic sketch: the probability of acceptance plotted against fraction defective, with the AQL and RQL marked.]

If Z̄*_k is a weighted ("geometric") moving average,

    Z̄*_1 = Z₁

    Z̄*_k = wZ_k + (1 − w)Z̄*_{k−1}

then as k becomes large,

    Var{Z̄*} → σ_Z² w/(2 − w)

† S. W. Roberts [reference garbled in the source].
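The geometric moving average recursion is two lines of code. A minimal sketch, with made-up observations; the comment on the variance factor w/(2 − w) follows from the limit just stated.

```python
# Geometric (exponentially weighted) moving average:
#   z*_k = w * z_k + (1 - w) * z*_{k-1}
def ewma(values, w, z0=None):
    z = values[0] if z0 is None else z0
    out = []
    for v in values:
        z = w * v + (1 - w) * z
        out.append(z)
    return out

# For large k, Var{z*} approaches Var{z} * w / (2 - w); with w = 0.25 the
# smoothed series has about 1/7 the variance of the raw observations.
smoothed = ewma([10.2, 9.8, 10.5, 10.1, 9.9, 10.4], w=0.25)
```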
TABLE 3.9-6  CUMULATIVE SUM CHARTS

    Type of Chart                                    Quantity Plotted

    Deviation from reference (target) value h        Σ_{i=1}^{n} (X_i − h)
    Successive differences                           Σ_{i=1}^{n} D_i
    Absolute value of successive differences
      from the expected absolute value               Σ_{i=1}^{n} [|D_i| − E{|D_i|}]
    Range of two successive pairs of
      observations from the expected value           Σ_{i=1}^{n} [R_i − E{R_i}]

FIGURE 3.9-4 Template ("V-mask") for a cumulative sum control chart: Σ_{i=1}^{m} (X_i − h)/σ_x is plotted versus m, and the mask is characterized by a half-angle θ = arctan(δ/2) and a lead distance d = −(2 ln α)/δ², where δ is the shift to be detected in units of σ_x. [A small table gives the factor c for sample sizes n = 3 through 10: 1.378, 1.302, 1.268, 1.237, 1.207, 1.184, 1.164, 1.146.]

Application Rules
1. Place point P on the most recently plotted point on the control chart.
2. A change has occurred if any plotted point is covered by the mask.

(Rules taken from N. L. Johnson and F. C. Leone, Ind. Qual. Control 18 (12), 15; 19 (1), 29; 19 (2), 22, 1962.)

The performance of a chart can be expressed as an average run length (ARL): before a shift, ARL_I = 1/α; after a shift, ARL_II = 1/(1 − β).

FIGURE 3.9-5 Average run lengths (ARL_II) after a process change was introduced of k units of the standard deviation until the process change was detected, for four different significance levels (ARL_I = 1/α), based on a normal independent variable with a variance of σ². [The curves compare the X̄ chart with the cumulative sum chart for ARL_I = 18, 150, 320, and 1500.]
FIGURE 3.9-6 Average run length versus the displacement of the current mean by k standard deviations, for cumulative sum masks with various lead distances and angles: curves V through IX for d = 2 with tan θ = 0.40 to 0.70, curves X through XIII for d = 5 with tan θ = 0.30 to 0.45, and curves XIV through XVI for d = 8 with tan θ = 0.25 to 0.35.

From Figure 3.9-6 we find, for ARL_II = 8 and k = 1:

      d     tan θ
      1     0.61     0.751
      2     0.47     0.807
      5     0.30     0.813
      8     0.24     0.872

[The third column, related to the factor 1.1687 tan θ, is garbled in the source.]

FIGURE E3.9-7 (a) Standard X̄ control chart, (b) moving average chart, (c) geometric moving average chart, and (d) cumulative sum chart, for about 120 samples, with bands marked at μ₀ ± σ_x̄, μ₀ ± 2σ_x̄, and μ₀ ± 3σ_x̄. (From S. W. Roberts, Technometrics 8, 412, 1966.)
TABLE 3.9-7  CALCULATION OF PLOTTED POINT

    Type of Chart                Calculation of Plotted Point

    Shewhart X̄                   X̄_i = (1/n) Σ_{j=1}^{n} X_ij

    Shewhart X̄ plus runs test    As above; action is called for if one or more
                                 of the following occurs: (1) X̄_i > μ₀ + 3σ_X̄;
                                 (2) X̄_i and either X̄_{i−1} or X̄_{i−2} fall
                                 between the 2σ_X̄ and related 3σ_X̄ control
                                 levels; (3) X̄_{i−7}, X̄_{i−6}, ..., X̄_i all
                                 fall on the same side of μ₀.

    Moving average               X̄(i) = (X_{i−k+1} + ... + X_i)/k,
                                 with k = min{i, 9}

    Geometric moving average     Z_i = wX_i + (1 − w)Z_{i−1}, Z₀ = μ₀
                                 [control-limit factors garbled in the source]

    Cumulative sum               S_i = (X_i − μ₀) + S_{i−1}, S₀ = 0,
                                 i = 0, ..., n

Table 3.9-8 lists average run lengths after a change in the process variable value has occurred from μ₀ to μ₀ + kσ_x (where k is a constant) until the shift in μ would be detected. The entries in the table are based on essentially the same sensitivity for each test in calling for corrective action in the absence of a shift in the process mean. Except for small values of k the tests roughly prove to be equally effective.
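The moving-average and cumulative-sum plotted points of Table 3.9-7 can be sketched in a few lines; the data and window size below are illustrative, not from the book.

```python
# Plotted points for the moving-average and cumulative-sum charts.
def moving_average_points(x, k_max=9):
    out = []
    for i in range(len(x)):
        window = x[max(0, i - k_max + 1):i + 1]   # at most k_max recent values
        out.append(sum(window) / len(window))
    return out

def cusum_points(x, mu0):
    s, out = 0.0, []
    for xi in x:
        s += xi - mu0                             # S_i = (X_i - mu0) + S_{i-1}
        out.append(s)
    return out

x = [10.1, 9.9, 10.0, 10.6, 10.7, 10.8]           # illustrative observations
ma = moving_average_points(x, k_max=3)
cs = cusum_points(x, mu0=10.0)
# A sustained shift appears as a steady trend in cs well before any single
# point violates the Shewhart limits.
```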
TABLE 3.9-8  AVERAGE RUN LENGTHS FOR SEVERAL TESTS

                                              k (displacement in units of σ)
    Test or Chart                             0     0.5    1.0    2.0    3.0
    Shewhart X̄ supplemented by the
      two-sided runs test                    740     79     19     4.4    1.9
    Runs sum test                            740     50     12     3.8    2.4
    Moving average for k = 8                 740     40     10     4.6    3.3
    Shewhart X̄ supplemented by
      moving average for k = 8               740     50     11     3.7    1.9
    Geometric moving average
      with w = 0.25                          740     40     10     3.5    2.2
    Shewhart X̄ supplemented by geometric
      moving average with w = 0.25           740     50     12     3.3    1.7
    Cumulative sum                           740     34     10     4.3    2.9

[The column assignment is reconstructed from the scan.]

For control of two variables jointly, Hotelling's statistic‡

    T² = n [ s_y²(X̄ − μ_x)² − 2s_xy(X̄ − μ_x)(Ȳ − μ_y) + s_x²(Ȳ − μ_y)² ]
             / (s_x² s_y² − s_xy²)

is used in place of t², and the control test is

    T² ≤ [2(n − 1)/(n − 2)] F_α                                      (3.9-3)
Bayes Estimation
Gupta, S. S. and Sobel, M., Biometrika 49, 495, 1962.
Guttman, I., Ann. Inst. Stat. Math. 13, 9, 1961.
Jeffreys, H., Theory of Probability (3rd ed.), Clarendon Press,
Oxford, 1961.
Raiffa, H. and Schlaifer, R., Applied Decision Theory, Harvard
Univ. Press, 1961.
Savage, L. J., Foundations of Statistics, John Wiley, New York,
1954.
Tiao, G. C. and A. Zellner, Biometrika 51,219, 1964.
For p variables X₁, X₂, ..., X_p, with the vector of mean deviations [X̄₁ − μ₁, X̄₂ − μ₂, ..., X̄_p − μ_p] and the sample covariance matrix

        | s₁²      s_x₁x₂   ...          |
    S = | s_x₁x₂   s₂²      ...          |
        | ...               s_p²         |

T²_p is distributed as pνF_α/(ν − p + 1), where F_α has p and (ν − p + 1) degrees of freedom, with ν being the number of degrees of freedom in estimating the sample variances and usually equal to n − 1.
Supplementary References
General
Anscombe, F. J., "Rejection of Outliers," Technometrics 2, 123,
1960.
Birnbaum, A., "Another View on the Foundation of Statistics," Amer. Stat., p. 17, Feb. 1962.
Brownlee, K. A., Statistical Theory and Methodology in Science
and Engineering, John Wiley, New York, 1960.
Cochran, W. G., Sampling Techniques, John Wiley, New York,
1950.
David, H. A. and Paulson, A. S., "The Performance of Several
Tests for Outliers," Biometrika 52, 2129, 1965.
Dixon, W. J., "Rejection of Observations" in Contribution to
Order Statistics, ed. by A. E. Sarhan and B. G. Greenberg,
John Wiley, New York, 1962, pp. 299-342.
Fraser, D. A. S., Nonparametric Methods in Statistics, John
Wiley, New York, 1957.
Gumbel, E. J., Statistics of Extremes, Columbia Univ. Press,
New York, 1958.
Hald, A., Statistical Theory with Engineering Applications, John
Wiley, New York, 1952.
Johnson, N. L. and Leone, F. C., Statistics and Experimental
Design in Engineering and the Physical Sciences, John Wiley,
New York, 1964.
Kendall, M. G., Rank Correlation Methods (2nd ed.), Griffin,
London, 1955.
Lehmann, E. L., Testing Statistical Hypotheses, John Wiley,
New York, 1959.
Mendenhall, W., "A Bibliography of Life Testing and Related Topics," Biometrika 45, 521, 1958.
Thrall, R. M., Coombs, C. H., and Davis, R. L., Decision Processes, John Wiley, New York, 1954.
Wald A., Statistical Decision Functions, John Wiley, New York,
1950.
Wilks, S. S., Mathematical Statistics, John Wiley, New York,
1963.
PROBLEMS

3.1 The logarithmic series probability function for a random variable X is [expression garbled in the source]. Related problems make use of the independent bivariate normal density

    p(x, y) = [1/(2πσ_x σ_y)] exp{ −(1/2)[ ((x − μ)/σ_x)² + ((y − μ)/σ_y)² ] }

and the Poisson distribution

    P(x, λ) = e^{−λ} λ^x / x!
Student Number:  1   2   3   4   5   6   7   8   9   10
Grade:           95  92  90  86  86  80  75  72  64  60
where:
V_g = exhaust velocity of the gases, a random variable
m_r = rocket weight after burning, a random variable
m_p = propellant weight, a random variable
Removed
1
2
3
4
5
6
7
8
9
10
93
60
77
92
100
90
91
82
75
50
96
TABLE
P3.12
Value of
Sample
Mean
Symbol
Physical Quantity
k
G
J-t
One Sample
Standard
Deviation*
3.16
3.17
0.20
"2
0.0441
20,000
0.109
5
1
x̄ = 248.5°C
s_x² = 70 (°C)²
3.20
x = 29.1
Sx
= 1.2
3.19
0.05,
= 30.0
= 2.4
= 64.0
n
Plot β and (1 − β) versus selected values of possible
μ's above and below 30.0 for α = 0.01.
Can the power of a test ever be bigger than the
fraction α, the significance level?
Classify the following results of hypothesis testing as
to: (1) error of the "first kind," (2) error of the
"second kind," (3) neither, and (4) both. The hypothesis
being tested is designated as H₀.
(a) H₀ is true, and the test indicates H₀ should be accepted.
(b) H₀ is true, and the test indicates H₀ should be rejected.
(c) H₀ is false, and the test indicates H₀ should be accepted.
(d) H₀ is false, and the test indicates H₀ should be rejected.
A manufacturer claimed his mixer could mix more
viscous materials than any rival's mixer. In a test of
stalling speed on nine viscous materials, the sample
mean viscosity for stall was 1600 poise with a sample
standard deviation of 400 poise. Determine whether
or not to reject the following hypotheses based on
α = 0.05.
(a) The true ensemble stalling speed of the mixer is
at 1700 poise (H_1A: μ = 1700 poise versus the
alternate hypothesis H_1B: μ ≠ 1700 poise).
(b) The true stalling speed is 1900 poise (H_2A:
μ = 1900 poise versus the alternate hypothesis
H_2B: μ ≠ 1900 poise).
(c) The true stalling speed is greater than 1400 poise
(H_3A: μ > 1400 poise versus the alternate
hypothesis H_3B: μ ≤ 1400 poise).
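For part (a) the relevant statistic is the one-sample t; a minimal sketch (the function name and the tabulated critical value 2.306 for 8 degrees of freedom are mine, not from the text):

```python
import math

def one_sample_t(xbar, mu0, s, n):
    # t statistic for H0: mu = mu0, from sample mean, standard deviation, size
    return (xbar - mu0) / (s / math.sqrt(n))

t = one_sample_t(1600, 1700, 400, 9)   # part (a): t = -0.75
reject = abs(t) > 2.306                # two-sided critical value, 8 d.f.
```

Since |−0.75| < 2.306, H_1A would not be rejected at α = 0.05.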
A new design has been devised to improve the length
of time a circular-type aspirin pill die can be used
before it has to be replaced. The old die in 10 trials
gave an average life of 4.4 months with a standard
deviation of 0.05 month. The proposed die in 6
trials had an average life of 5.5 months with a standard
deviation of 0.9 months. Has the die been improved?
(Use a = 0.05 as the significance level.)
Two chemical solutions are measured for their index
of refraction with the following results:
A
B
Index of refraction
1.104
1.154
Standard deviation
0.011
0.017
Number in sample
5
4
Do they have the same index of refraction? (Use
a = 0.05.)
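A pooled two-sample t sketch for this comparison (the function name and the critical value 2.365 for 7 degrees of freedom are mine, not from the text):

```python
import math

def two_sample_t(x1, s1, n1, x2, s2, n2):
    # pooled-variance t statistic for H0: mu1 = mu2
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (x1 - x2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

t = two_sample_t(1.104, 0.011, 5, 1.154, 0.017, 4)
reject = abs(t) > 2.365   # two-sided critical value, 5 + 4 - 2 = 7 d.f.
```

Here |t| is about 5.4, which exceeds 2.365, so equality of the two refractive indices would be rejected at α = 0.05.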
3.21 In a test of six samples of oil, the sample mean for the
specific viscosity was 0.7750 cp, with a sample standard
deviation of 1.45 × 10⁻² cp. The specifications
Source 2
64.0
65.0
75.0
67.0
64.5
74.0
75.0
69.0
69.0
61.5
67.5
64.0
Item   Gauge 1   Gauge 2
1       4.4       3.2
2      −1.4       7.1
3       3.2       6.4
4       0.2       2.7
5      −5.0       3.1
6       0.3       0.6
7       1.2       2.6
8       2.2       2.2
9       1.3       2.2
123 4
46 33 38 49
Set   Number of Observations   Variance × 10⁸
1     11                       1.5636
2     8                        1.1250
3     6                        3.7666
4     24                       4.1721
5     15                       4.2666
Circuit A:  0.18  0.24  0.18  0.17  1.03  0.14  0.09
Circuit B:  0.21  0.29  0.14  0.19  0.46  0.08  0.14
II
40
27
39
46
32
46
40
44
48
47
42
41
34
45
52
49
35
43
44
Month   Precipitation (mm)   Runoff (mm)
1       350                  0.0
2       370                  29.6
3       461                  15.2
4       306                  66.5
5       313                  2.3
6       455                  0.5
7       477                  102
8       250                  12
9       546                  6.1
10      274                  6.2
Time
(sec)
6.5
7.0
7.5
8.0
8.5
9.0
9.5
10.0
10.5
11.0
11.5
12.0
12.5
13.0
13.5
14.0
14.5
15.0
15.5
16.0
16.5
17.0
17.5
18.0
18.5
19.0
19.5
20.0
20.5
21.0
Amplitude
(chart
divisions)
7
10
15
13
26
12
10
12
11
12
11
10
Hydrocarbon
(ppm)
Time (p.m.)
1:30
2:00
2:30
3:00
3:30
4:00
4:30
5:00
5:30
6:00
6:30
7:00
7:30
123
179
146
190
141
Frequency
206
407
564
4
9
530
273
22
27
11
7
199
142
3
1
171
BOD
63.2
63.3
63-4
63-5
63-6
63-7
63-8
63-9
63-10
63-11
6.5
7.0
7.0
9.2
6.7
6.7
3
7-8
19
8-9
16
9-10
10
10-11
Mesh
11
11-12
-0.4 to
-0.3 to
-0.2 to
-0.1 to
0.0 to
0.0 to
0.1 to
0.2 to
0.3 to
-0.5
-0.4
-0.3
-0.2
-0.1
0.1
0.2
0.3
0.4
3-4
10
4-6
40
81
115
6-8
6.3
Range of Measurement
(deviation, psia)
8-10
5.8
16.7
6.4
0.0127
0.0163
0.0159
0.0243
10-14
14-20
20-28
28-35
35-48
48-65
65-100
100-150
150-200
200
160
148
132
81
62
41
36
22
19
53
12345
25 17 15 23 24
6
16
100
Number of Particles by Color Distribution

Sample Position   Red   White   Blue
1                 3     9       18
2                 11    1       18
3                 10    5       15
4                 10    5       15
5                 11    5       14
6                 6     4       20
7                 17    2       11
8                 16    4       10
9                 13    6       11
10                8     10      12
11                7     7       16
12                8     2       20
2
330
5
11 23
300 90 46
35
21
55
30
χ² = Σ (Y_e − Y_t)²/Y_t
1
2
1.04
0.57
39.21
57.03
3
4
104
to
Yield (lb)
0-104
0-100
100-150
150-200
>200
3
8
6
8
5
7
6
7
104
to
104
106
6
9
5
10
Product,
Counts
In
Out
pH
12,780
1,100
130
23,780
27,520
3,610
750
60
1,100
1,440
23,780
22,980
15,510
5,200
930
130
3.8
4.0
1.3
1.4
1.9
2.5
2.8
3.4
Shift
A
Shift
B
Shift
C
21
22
23
24
25
26
64
191
154
105
94
57
37
320
240
220
72
85
90
330
250
180
66
140
Sample   Assay (X)
1        0.968
2        0.952
3        0.945
4        0.958
5        0.965
6        0.955
7        0.956
8        0.958
9        0.965
10       0.954
11       0.968
12       0.979
13       0.971
14       0.947
15       0.968
Sum      14.409
[Flow sketch: gas purification of H₂, N₂, CO, CO₂, and inerts.]

TABLE P3.49

(1) Point   (2) Individual   (3) Target   (4) Deviation from   (5) Cumulative
Number      Result           or Mean      Target, D            Deviation, ΣD
1           16               10            6                    6
2           7                10           −3                    3
3           6                10           −4                   −1
4           14               10            4                    3
5           1                10           −9                   −6
6           18               10            8                    2
7           10               10            0                    2
8           10               10            0                    2
9           6                10           −4                   −2
10          15               10            5                    3
11          13               10            3                    6
12          8                10           −2                    4
13          20               10           10                   14
14          12               10            2                   16
15          9                10           −1                   15
16          12               10            2                   17
17          6                10           −4                   13
18          18               10            8                   21
19          14               10            4                   25
20          15               10            5                   30
21          16               10            6                   36
22          9                10           −1                   35
23          6                10           −4                   31
24          12               10            2                   33
25          10               10            0                   33
26          17               10            7                   40
27          13               10            3                   43
28          9                10           −1                   42
29          19               10            9                   51
30          12               10            2                   53
FIGURE P3.50

TABLE P3.50a

Sample Number   Percent N₂   Sample Number   Percent N₂
1               24.5         31              28.3
2               24.2         32              27.3
3               28.3         33              25.8
4               29.8         34              26.0
5               26.4         35              27.5
6               29.0         36              25.2
7               27.0         37              25.8
8               27.0         38              25.5
9               22.4         39              22.8
10              25.3         40              21.7
11              30.9         41              24.7
12              28.6         42              25.6
13              28.0         43              26.5
14              28.2         44              24.6
15              26.4         45              22.0
16              23.4         46              22.7
17              25.1         47              22.0
18              25.0         48              21.0
19              23.3         49              20.7
20              23.0         50              19.6
21              23.2         51              20.6
22              24.9         52              20.0
23              25.2         53              21.2
24              24.4         54              21.4
25              24.1         55              29.6
26              24.0         56              29.4
27              26.6         57              29.0
28              22.1         58              29.0
29              23.2         59              28.5
30              23.1         60 (Saturday 8 a.m.)   28.7
TABLE P3.50b
N 2 STREAM ANALYSIS
Day
Shift
Time
Percent N 2
12 noon
4 p.m ,
8 p.m ,
12 midnight
4a.m.
8 a.m.
12 noon
4 p.m,
8 p.m.
12 midnight
4 a.m .
8 a.m.
27.6
25.6
29.6
30.7
30.0
30.6
31.7
29.6
30.6
28.1
26.5
27.5
Saturday
Saturday
Saturday
Sunday
Sunday
Sunday
Sunday
Sunday
Sunday
Monday
Monday
Monday
7
2
3
3
1
1
2
2
3
3
1
CHAPTER 4

η = α + e^(βx),  η = √(β₀ + β₁x₁ + β₂x₂),  η = e^(β₁x₁),
η = β₀ + β₁f₁(x₁, x₂, …) + ⋯
TABLE 4.1-1

Equation                     x-axis    y-axis        Straight-Line Form                 Remarks
(1)  y = α + β/x             1/x       y             y = α + β(1/x)                     Asymptote: y = α
(2)  1/y = α + βx            x         1/y           1/y = α + βx                       Asymptotes: x = −α/β, y = 0
(3)  y = x/(α + βx)          x         x/y           x/y = α + βx (or 1/y = β + α/x)    Asymptotes: x = −α/β, y = 1/β
(3a) y = x/(α + βx) + γ      x         x/(y − γ)     x/(y − γ) = α + βx                 Asymptote: y = (1/β) + γ
(4a) y = αx^β                log x     log y         log y = log α + β log x
(4b) y = αx^β + γ            log x     log(y − γ)    log(y − γ) = log α + β log x
(5)  y = αβ^x                x         log y         log y = log α + x log β            (equivalent forms y = αγ^(β₂x), y = 10^(α₁+β₁x), y = α(10)^(β₁x))

For Equation 4a, for example,

log y = log α + β log x        (4.1-2)
Solution:
Several differences and ratios can be computed, some of
which are shown in Table 4.1-1.
TABLE 4.1-1

x   y       log y     Δy     Δ²y     x/y       Δ(x/y)
1   62.1    1.79246                  0.01610
2   87.2    1.93962   25.1           0.02293   0.00683
3   109.5   2.03941   22.3   −2.8    0.02739   0.00446
4   127.3   2.10483   17.8   −4.5    0.03142   0.00403
5   134.7   2.12937    7.4   −10.4   0.03712   0.00570
6   136.2   2.13386    1.5   −5.9    0.04405   0.00693
7   134.9   2.13001   −1.3   −2.8    0.05189   0.00784
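The difference and ratio columns can be generated mechanically; a minimal sketch (function and variable names are mine):

```python
def diffs(v):
    # first differences of a sequence
    return [b - a for a, b in zip(v, v[1:])]

x = [1, 2, 3, 4, 5, 6, 7]
y = [62.1, 87.2, 109.5, 127.3, 134.7, 136.2, 134.9]

dy = diffs(y)                                   # first differences of y
d2y = diffs(dy)                                 # second differences of y
x_over_y = [xi / yi for xi, yi in zip(x, y)]    # the ratio x/y
d_x_over_y = diffs(x_over_y)                    # differences of x/y
```

If Δ(x/y) were constant for evenly spaced x, the hyperbolic form x/y = α + βx would fit exactly.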
ℰ{[Y − f(X)]²} = ∫∫ [y − f(x)]² p(x, y) dx dy        (4.2-1)

The function f(X) that minimizes the expectation in
Equation 4.2-1 is the conditional expected value of Y,
given X, i.e., f(X) = ℰ{Y | X}, as can be shown by
introducing the relation p(x, y) = p(y | x)p(x) into the
right-hand side of Equation 4.2-1, so that

Min ℰ{[Y − f(X)]²} = Min ∫∫ (y − [β₀ + β₁x])² p(x, y) dx dy        (4.2-2)

or

β₀ = ℰ{Y} − β₁ℰ{X}        (4.2-3)
108
- =
12
10
f3x
A. ~ = -0.1 + 0.3x
B.
c. !Y =
D.
!Y =: 0.1 + 0.3x
CAB
-2
-4
+ 0.3x
-0.5
BA
-6
-8
! = 0.5 + 0.3x
Y
-10
6
x
10
12
6
x
10
12
6
x
10
Equation (1)
12
y=a+~x
A.
Y= -
10
o. 1 +0.3
B.
2 + -0.3
6
y _
x
4 + -0.3
4
C. y -
x
D.
y -- 6
+ -0.3
Eq uation (2)
12
-:
=
y
+ Bx
10
A. ::
= -0.1 + 0.3x
y
B. -:
= 0.1 +
y
O.3x
C. ::
= -0.4 + O.3x
y
D. ::
= 4 + O.3x
y
Equation (3)
FIGURE
4.1-1
FIGURE 4.1-1 (cont.) Plots of Equation (4), y = αx^β, for α = 4 and β = 0.5, 0.3, −0.3, −0.5, and of Equation (5), y = αβ^x, for α = 2 and β = 0.2, 0.3, 0.8, 0.95, 1.02, 1.04, 1.3.
ℰ{(Y − μ_Y)(X − μ_X)} = β₁ℰ{(X − μ_X)²}  or  β₁ = σ_XY/σ²_X        (4.2-4)
Once f31 is determined, f30 can be determined from Equa
tion 4.2-3.
Now suppose that the probability density p(x, y) is
not known. Instead we. plan to collect some experimental
data and, on the basis of the data, obtain the best esti
mates for the parameters in a linear model. Whether
[Figure: the joint density p(x, y) and the locus of the model
η_i = β₀ + β₁(x_i − x̄); least squares selects the estimates that
minimize Σ (Y_i − η_i)² over i = 1, …, n.]
[Figure: one observation (Y_ij, x_i); the sample mean of the
observations at x_i, (Ȳ_i, x_i); the predicted Ŷ_i at x_i; and the
expected value of Y_i at x_i.]
It is convenient to write the model as

η = β₀ + β₁(x − x̄)        (4.3-1)

instead of as

η = β₀′ + β₁x        (4.3-2)

The weighted sum of squares to be minimized is

φ = Σ p_i(Ȳ_i − η_i)² = Σ p_i[Ȳ_i − β̂₀ − β̂₁(x_i − x̄)]²        (4.3-3)
112
f30 - f3I(XI -
o~o
X)]2}
8~o
x)]
(4.3-4)
i=1
o,p
O{~I PI[l~
In L
.. / n
n In v 277 - - In
= -
ap
8~1
8~1
n
-22>tCYI -
f30 - f3I(XI -
x)] (XI - x)
t=1
8 In L _ 8 In L _ 8 In L _ 0
8po - 8~1 - 8(att )
p;
2p;; = bo 2
i=1
L[{ ; -
+ bl
i=1
2 p ;{x1 - x)
i=1
(4.3-5a)
n
+ bl
i=1
+ PI(X;
- x)]}{pJ] = 0
(4.3-9a)
[{Y'; - [Po
2p;(Y'I)(Xi - x) = bo 2 p;(xl - x)
i=1
[Po
t=1
2PI(xi
X)2
i=1
(4.3-5b)
+ PI(XI
- X)]}{PI(XI - x)}]
(4.3-9b)
- X)]}2 - nul-. = 0
(4.2-9c)
i=1
2p;{; - [Po
+ PI(XI
i=1
Note that
n
2 p ;(xl -
x)
== 0
t=1
ul-. =
~ 2PI{Y'1
i=1
(4.3-6)
as we shall see.
(4.3-7)
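The normal equations can be solved directly; a minimal sketch (function and variable names are mine), where p_i is the number of replicates and Ȳ_i the sample mean at x_i (set means computed from Table E4.3-1a):

```python
def fit_weighted_line(x, ybar, p):
    # solve Equations 4.3-5a,b for the model eta = b0 + b1*(x - xbar)
    sp = sum(p)
    xbar = sum(pi * xi for pi, xi in zip(p, x)) / sp
    b0 = sum(pi * yi for pi, yi in zip(p, ybar)) / sp   # Equation 4.3-7
    b1 = (sum(pi * (xi - xbar) * yi for pi, xi, yi in zip(p, x, ybar))
          / sum(pi * (xi - xbar) ** 2 for pi, xi in zip(p, x)))
    return b0, b1, xbar

# two replicates at each of five x's
b0, b1, xbar = fit_weighted_line(
    [10, 20, 30, 40, 50],
    [11.765, 14.205, 16.885, 16.640, 19.920],
    [2, 2, 2, 2, 2])
```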
4.3-2

The estimates are unbiased:

ℰ{b₀} = ℰ{Σ p_iȲ_i/Σ p_i} = Σ p_iℰ{Ȳ_i}/Σ p_i = Σ p_i[β₀ + β₁(x_i − x̄)]/Σ p_i = β₀

ℰ{b₁} = Σ p_i(x_i − x̄)ℰ{Ȳ_i} / Σ p_i(x_i − x̄)² = β₁

and

Var{b₀} = Var{Σ p_iȲ_i/Σ p_i} = σ²_Ȳ/Σ p_i

while the variance of the estimated slope can be shown to be

Var{b₁} = σ²_Ȳ/Σ p_i(x_i − x̄)²        (4.3-11a)

In forming the sum of squares Σ_i Σ_j (Y_ij − η_i)², the cross-product
terms vanish because

Σ_i Σ_j (Y_ij − Ȳ_i)(Ȳ_i − Ŷ_i) = 0        (4.3-10)

Σ_i p_i(Ȳ_i − Ŷ_i)(Ŷ_i − η_i) = 0        (4.3-11)

because of the second of Equations 4.3-4. After dropping the
cross-product terms, the following is obtained for the sum of squares:

Σ_i Σ_j (Y_ij − η_i)² = Σ_i Σ_j (Y_ij − Ȳ_i)² + Σ_i p_i(Ȳ_i − Ŷ_i)²
    + (b₀ − β₀)² Σ p_i + (b₁ − β₁)² Σ p_i(x_i − x̄)²        (4.3-12)
The deviation-from-regression mean square

s²_D = [1/(n − 2)] Σ p_i(Ȳ_i − Ŷ_i)²        (4.3-13)

satisfies

ℰ{s²_D} = σ²_Ȳ        (4.3-14)

so that s²_D is an unbiased estimate of σ²_Ȳ, if the model is correct,
when we recall from Section 2.3-2 that ℰ{χ²(for d.f. = n)} = n. If s²_D
does not differ significantly from the within-set (error) mean square
s²_e, the two may be pooled:

s̄²_Ȳ = [Σ_i p_i(Ȳ_i − Ŷ_i)² + Σ_i Σ_j (Y_ij − Ȳ_i)²] / [(n − 2) + (Σ p_i − n)]        (4.3-15)
TABLE 4.3-1  Analysis of variance for the model η = β₀ + β₁(x − x̄)

Source of Variation              Sum of Squares                        Degrees of Freedom   Mean Squares
1. Due to b₀                     (b₀ − β₀)² Σ p_i                      1                    (b₀ − β₀)² Σ p_i
2. Due to b₁                     (b₁ − β₁)² Σ p_i(x_i − x̄)²           1                    (b₁ − β₁)² Σ p_i(x_i − x̄)²
3. Deviation from regression     Σ p_i(Ȳ_i − Ŷ_i)²                    n − 2                s²_D
4. Within sets
   (error of experiment)         Σ_i Σ_j (Y_ij − Ȳ_i)²                Σ p_i − n            s²_e

Note that

b₁ = Σ p_i(x_i − x̄)(Ȳ_i − Ȳ) / Σ p_i(x_i − x̄)²

so that

Σ p_i(Ŷ_i − Ȳ)² = b₁² Σ p_i(x_i − x̄)² = [Σ p_i(x_i − x̄)(Ȳ_i − Ȳ)]² / Σ p_i(x_i − x̄)²
The identity

(Y_ij − Ȳ) = (Y_ij − Ȳ_i) + (Ȳ_i − Ŷ_i) + (Ŷ_i − Ȳ)

leads to the analysis of variance of Table 4.3-2.

TABLE 4.3-2

Source of Variation              Sum of Squares              Degrees of Freedom   Mean Squares
1. Due to regression             Σ p_i(Ŷ_i − Ȳ)²            1                    Σ p_i(Ŷ_i − Ȳ)²
2. Deviation from regression     Σ p_i(Ȳ_i − Ŷ_i)²          n − 2                s²_D
3. Within sets (error)           Σ_i Σ_j (Y_ij − Ȳ_i)²      Σ p_i − n            s²_e
4. Total                         Σ_i Σ_j (Y_ij − Ȳ)²        Σ p_i − 1
Also

Σ p_i(Ȳ_i − Ȳ)² = Σ p_i(Ŷ_i − Ȳ)² + Σ p_i(Ȳ_i − Ŷ_i)²

with

Σ p_i(Ŷ_i − Ȳ)² = b₁ Σ p_i(x_i − x̄)Ȳ_i

FIGURE 4.3-1  Experimental data for the test of the hypothesis
β₁ = 0: (a) hypothesis rejected, and (b) hypothesis accepted.
TABLE 4.3-3

Source of Variation              Sum of Squares                Degrees of Freedom   Mean Squares
1. Due to regression: b₀         Ȳ² Σ p_t                     1                    Ȳ² Σ p_t
   b₁                            b₁ Σ p_t(x_t − x̄)Ȳ_t         1                    b₁ Σ p_t(x_t − x̄)Ȳ_t
2. Deviation from regression     Σ p_t(Ȳ_t − Ŷ_t)²            n − 2                Σ p_t(Ȳ_t − Ŷ_t)²/(n − 2)
TABLE E4.3-1a

Data Set   x       Error ε   η       "Observed" Y
1          10.00    0.05     12.00   12.05
2          10.00   −0.52     12.00   11.48
3          20.00   −1.41     14.00   12.59
4          20.00    1.82     14.00   15.82
5          30.00    1.35     16.00   17.35
6          30.00    0.42     16.00   16.42
7          40.00   −1.76     18.00   16.24
8          40.00   −0.96     18.00   17.04
9          50.00    0.56     20.00   20.56
10         50.00   −0.72     20.00   19.28
line, and the sample means, Yt, at each Xi; and (4) to
prepare an analysis of variance.
Solution:
Let the significance level be α = 0.05. The calculations
will be carried out in detail so that the separate steps can be
followed. (See Table E4.3-1b.)
b₀ = Ȳ = Σ p_iȲ_i / Σ p_i = (79.415)(2)/10 = 15.88

b₁ = Σ p_iȲ_i(x_i − x̄) / Σ p_i(x_i − x̄)² = 2(187.4)/2(1000) = 0.1874

Ŷ = 15.88 + 0.1874(x − 30) = 10.26 + 0.1874x

The deviation-from-regression mean square is (1/3)(5.02) = 1.67, and
the within-set (error) mean square is (1/5)(6.95) = 1.39.

TABLE E4.3-1b

x_i   p_i   Ȳ_i      (x_i − x̄)   Ȳ_i(x_i − x̄)   (x_i − x̄)²   Ȳ_i²
10    2     11.765   −20          −235.3          400           138.41
20    2     14.205   −10          −142.1          100           201.78
30    2     16.885     0             0              0           285.10
40    2     16.640    10           166.4          100           276.89
50    2     19.920    20           398.4          400           396.80
Total 10    79.415                 187.4          1000          1298.98
The deviation and within-set mean squares are not significantly
different, and the linear model η = β₀ + β₁x adequately represents the
simulated data. Next, the mean squares are pooled by using Equation
4.3-15:

s̄²_Ȳ = (5.02 + 6.95)/8 = 1.50

Then

Var{b₁} = s̄²_Ȳ / Σ p_i(x_i − x̄)² = 1.50/2000 = 7.5 × 10⁻⁴
Var{b₀} = s̄²_Ȳ / Σ p_i = 1.50/10 = 0.15

Also, if Ŷ = b₀ + b₁(x − x̄),

Var{Ŷ} = Var{b₀} + (x − x̄)² Var{b₁}        (4.3-20)

while for the intercept form Ŷ = b₀′ + b₁x,

Var{b₀′} = s̄²_Ȳ[1/Σ p_i + x̄²/Σ p_i(x_i − x̄)²] = 1.50[1/10 + (30)²/2000] = 0.825        (4.3-21)

The individual confidence intervals are of the form

b₁ − t_(1−α/2)√Var{b₁} ≤ β₁ < b₁ + t_(1−α/2)√Var{b₁}        (4.3-22)

Ŷ − t_(1−α/2) s_Ŷ ≤ η < Ŷ + t_(1−α/2) s_Ŷ        (4.3-24)
If both parameter estimates are considered jointly, the quantity

[(b₀ − β₀)² Σ p_i + (b₁ − β₁)² Σ p_i(x_i − x̄)²] / (2 s̄²_Ȳ)        (4.3-27)

is distributed as F with ν = 2 and (Σ p_i − 2) degrees of freedom, so
that

(b₀ − β₀)² Σ p_i + (b₁ − β₁)² Σ p_i(x_i − x̄)² = 2 s̄²_Ȳ F_(1−α)        (4.3-28)
which represents an ellipse in parameter space, i.e., in
the coordinates (β₀, β₁), for a given 100(1 − α) percent
joint confidence region. Equation 4.3-28 delineates the
locus of the boundary of an area (that is itself a random
variable) that includes the parameters β₀ and β₁ with
100(1 − α) percent confidence. Note that the contour has
been given for Model 4.3-1. If Model 4.3-2 were the one
of interest instead, we know that β₀′ = β₀ − β₁x̄, and
the critical confidence contour in (β₀′, β₁) space could be
computed from the critical contour in (β₀, β₁) space as
given by Equation 4.3-28.
Figure 4.3-3 illustrates a confidence region for the
model η = β₀′ + β₁x in which both points A and B are
within the individual confidence limits but B lies outside
of the joint confidence region for α = 0.05. The principal
axes of the ellipse are at an angle to the coordinate axes
β₀′ and β₁ because the estimates b₀′ and b₁ are correlated.
Figure E4.3-1b in Example 4.3-1 illustrates an elliptical
contour in (β₀, β₁) space that is not rotated.
Figure 4.3-3 illustrates only one contour, that for
α = 0.05. We can break φ, the sum of squares, into two
FIGURE 4.3-3  Individual confidence intervals versus the joint
confidence region for the model η = β₀′ + β₁x. Points A and B both
lie within the individual confidence intervals for β₀′ and β₁, but B
lies outside the joint confidence region.
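Equation 4.3-28 gives a direct numerical test of whether a candidate parameter pair lies inside the joint region; a minimal sketch using the values of Example 4.3-1 (the function name and the tabulated value F_0.95(2, 8) = 4.46 are mine):

```python
def in_joint_region(beta0, beta1, b0, b1, sum_p, sxx, s2, f_crit):
    # left side of Equation 4.3-28 compared with its critical value
    lhs = (b0 - beta0) ** 2 * sum_p + (b1 - beta1) ** 2 * sxx
    return lhs <= 2 * s2 * f_crit

# Example 4.3-1: b0 = 15.88, b1 = 0.1874, sum p_i = 10,
# sum p_i*(x_i - xbar)^2 = 2000, pooled variance 1.50
inside = in_joint_region(15.0, 0.20, 15.88, 0.1874, 10, 2000, 1.50, 4.46)
far = in_joint_region(10.0, 0.50, 15.88, 0.1874, 10, 2000, 1.50, 4.46)
```

The point (15.0, 0.20) lies inside the 95-percent joint region while (10.0, 0.50) does not.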
Finally, if we form
Z = y* - bo
cP for a = 0.50
cP for a = 0.25
cP for a =0.10
cP for a = 0.05
bl{j.t~ - x)
+ Var{bo} + (JL~
Var{Y*}
- x)2Var{b1 }
[ _1
2:
('Vi - 7Ji)2Pi =
= <Pmln
+ (bo -
+ 2s~.FI - =
ePmin
Po)2
ePmin
0:
+ 2 nePmin F I 2
- x)
*-
L Pi
(
X)2
%
_;;....JL_x_-.;..._
L Pi(Xi
X)2
i=1
2:Pi - 2
(4.3-31)
(4.3-28a)
1
__
i=1
_
0:
bl(JL~
y* - bo -
Z - 0
f = ---;;-
y* - bo
b3
tl-~ ba
[(
1+ 2.1) bba
m
Pi
[X(Y*) - X]2] %
)2
~ J.L~ ~
- X
+ 2. Pi (Xi
_
X
y* - bo
s
b2
- tl-~ b-2
where
4.3-5
(4.3-32)
s;.
Y* - bo
bl
or
= 7J
-h + x
f3I
2: Pi (2Xi
f31
= 2," 3
= !1.. _ f3~
or, with 7J = Y
)2
f31
E,
x = y + 8Y +
E'
(4.3-33)
where
(4.3-29)
JL~(Y*)
t~ _~S2
b, = b, - b
(4.3-30)
,
E
= -f31
FIGURE E4.3-1a  Simulated data, the fitted line for the assumed model
η = β₀ + β₁x, the values of Ȳ_i, and the locus of the lower confidence
limits on η.

The confidence interval for β₀′ is

8.16 ≤ β₀′ < 12.36

and that for β₀ is

14.99 ≤ β₀ < 16.77

The confidence interval for β₁ from Inequality 4.3-22 is

0.1874 − 2.306(7.5 × 10⁻⁴)^(1/2) ≤ β₁ < 0.1874 + 2.306(7.5 × 10⁻⁴)^(1/2)

or 0.124 ≤ β₁ < 0.251. The observed values
fall within the confidence interval. Finally, the confidence
interval for η is, from Inequality 4.3-24,

Ŷ − 2.306 s_Ŷ ≤ η < Ŷ + 2.306 s_Ŷ

x = 10:   12.13 − 1.55 ≤ η < 12.13 + 1.55
x = 20:   14.01 − 1.09 ≤ η < 14.01 + 1.09
x = 30:   15.88 − 0.89 ≤ η < 15.88 + 0.89
x = 40:   17.76 − 1.09 ≤ η < 17.76 + 1.09
x = 50:   19.63 − 1.55 ≤ η < 19.63 + 1.55
FIGURE E4.3-1b  Joint confidence contour in (β₀, β₁) space for the
models η = β₀ + β₁(x − x̄) and η = β₀′ + β₁x.
Source of Variation                                        Sum of Squares   d.f.   Mean Squares
Due to regression: b₀: Ȳ² Σ p_i                            252.81           1
b₁, after allowing for b₀: Σ p_i(Ŷ_i − Ȳ)²                 70.28            1      70.28
Deviation of residuals: Σ p_i(Ȳ_i − Ŷ_i)²                  5.02             3      1.67 } pooled
Deviation within sets (error): Σ Σ (Y_ij − Ȳ_i)²           6.95             5      1.39 } 1.50
Total                                                      335.06           10

The test of the hypothesis that β₁ = 0 is

b₁/s_b1 = 0.1874/(7.5 × 10⁻⁴)^(1/2) = 6.85
Example 4.3-2

TABLE E4.3-2a

Data Set   Y (cc/min)   x (in.)
1          112          1.14
2          115          1.37
3          152          1.89
4          199          2.09
5          161          2.45
6          209          2.04
7          237          2.73
8          231          3.04
9          233          3.19
10         259          3.09
11         287          3.05
12         240          3.10
13         281          3.34
14         311          3.75
15         392          4.19
16         357          4.59
The sums needed are:

n = 16,  Σ Y_i = 3776,  Ȳ = 3776/16 = 236
Σ x_i = 45.05,  x̄ = 2.8156
Σ x_i² = 140.454,  (Σ x_i)²/16 = 126.844,  Σ (x_i − x̄)² = 13.61
Σ (x_i − x̄)Y_i = 11,707 − 2.8156(3776) = 1075
Σ Y_i² = 985,740,  (Σ Y_i)²/16 = 891,136,  Σ (Y_i − Ȳ)² = 94,604

Then

b₀ = Ȳ = 236
b₁ = 1075/13.61 = 79.02
b₀′ = 236 − 79.02(2.8156) = 13.51

Ŷ = 236 + 79.02(x − 2.8156) = 13.51 + 79.02x

s² = Σ (Y_i − Ŷ_i)²/(n − 2) = 9616/14 = 687,  s = 26.2
s²_b1 = 687/13.61 = 50.4,  s_b1 = 7.10
s²_b0′ = 687[1/16 + (2.8156)²/13.61] = 443.0,  s_b0′ = 21.1
s_Ŷ = 26.2[1/16 + (x − x̄)²/13.61]^(1/2)

while the confidence intervals for β₀, β₀′, β₁, and η, based on
t_0.975 = 2.145 for 14 degrees of freedom, are:

[236 − 2.145(26.2/√16)] ≤ β₀ < [236 + 2.145(26.2/√16)]
[13.51 − 2.145(21.1)] ≤ β₀′ < [13.51 + 2.145(21.1)]
[79.02 − 2.145(7.10)] ≤ β₁ < [79.02 + 2.145(7.10)]
[Ŷ − 2.145 s_Ŷ] ≤ η < [Ŷ + 2.145 s_Ŷ]
TABLE E4.3-2b

Data Point   Ŷ_i      s_Ŷ_i   (Y_i − Ŷ_i)
1            103.59   29.14     8.40
2            121.76   26.12    −6.76
3            162.85   19.90   −10.85
4            178.66   17.88    20.33
5            207.10   15.11   −46.10
6            174.70   18.36    34.29
7            229.23   14.11     7.76
8            253.73   14.46   −22.73
9            265.58   15.16   −32.58
10           257.68   14.66     1.31
11           254.52   14.49    32.47
12           258.47   14.70   −18.47
13           277.43   16.16     3.56
14           309.83   20.00     1.16
15           344.60   25.21    47.39
16           376.21   30.46   −19.21
The F test of the hypothesis that β₁ = 0 gives

84,988/687 = 123.7

so the hypothesis is rejected.

FIGURE E4.3-2  Experimental data, the fitted regression line, and the
confidence interval for one additional observation at x = 3.00 (flow
rate in cc/min versus X, reading in inches).

The confidence limits for one additional observation at, say,
x_i = 3.00 use

s = 26.2[1 + 1/16 + (3.00 − 2.8156)²/13.61]^(1/2) = 27.03

so that Ŷ = 250.6 and the interval is 250.6 ± 2.145(27.03) = 250.6 ± 58.0.
Source of Variation                                  Sum of Squares   Degrees of Freedom   Mean Square
Due to regression: b₀: Ȳ² Σ p_i                      891,136          1                    891,136
b₁, after allowing for b₀                            84,988           1                    84,988
Deviation about the empirical
regression line: Σ (Y_i − Ŷ_i)²                      9,616            14                   687
Total                                                985,740          16

FIGURE E4.3-3A  Data plotted on logarithmic coordinates versus
Reynolds number, Re.† 

† P. M. Rowe, Trans. …, Mar. 1963.
FIGURE E4.3-3B  Data plotted versus Reynolds number, Re, with possible "rogue" points indicated.
FIGURE E4.3-3C  Data plotted versus Weber number, We, with a possible "rogue" point indicated.
FIGURE E4.3-3D  Data plotted versus Reynolds number, Re.
FIGURE E4.3-3E  Data plotted versus Grashof number, Gr.
For example, for a first-order response,

−dR/dt = k(R − 1),  R(0) = 0

so that

ln (1 − R) = β₀ + β₁t

with β₁ = −k.

FIGURE 4.4-1  Radioactive tracer counting illustrates the increasing
error with time of measurement. The locus of the 95-percent confidence
limits widens relative to the 95-percent confidence interval for
individual data points as time increases.
Suppose, for example, that ℰ{Y | x} = βx and that the error variance
changes with x. If we call w_i = (1/f(x_i))² the weight, and multiply
χ² by σ²_Ȳ, we obtain

σ²_Ȳ χ² = Σ_i Σ_j w_i(Y_ij − η_i)²        (4.4-1)

If σ²_{Y|x} is proportional to x², a weight of 1/x² is indicated; if
proportional to x, a weight of 1/x; and if the error variance is
constant, a weight of 1.

Then the corresponding estimates of β and the variances of b become
(all sums are from i = 1 to n):

x̄_w = Σ w_i p_i x_i / Σ w_i p_i

b₀ = Ȳ_w = Σ w_i p_i Ȳ_i / Σ w_i p_i,  Var{b₀} = σ²_Ȳ/Σ w_i p_i

b₁ = Σ w_i p_i(x_i − x̄_w)Ȳ_i / Σ w_i p_i(x_i − x̄_w)²,
Var{b₁} = σ²_Ȳ/Σ w_i p_i(x_i − x̄_w)²

The required functional dependence of Var{Y} on x can be estimated
from the data, or the estimation of the functional dependence can be
made from analysis of known instrumental errors in the measuring
instruments. An analysis of variance can be carried out which
corresponds to Tables 4.3-1 and 4.3-2. The variance ratio s²_D/s²_e
can be formed exactly as in Section 4.3 and an F test carried out. If
the ratio is not significant, the pooled variance is

s̄²_Ȳ = [Σ_i Σ_j w_i(Y_ij − Ȳ_i)² + Σ_i w_i p_i(Ȳ_i − Ŷ_i)²] / [(Σ p_i − n) + (n − 2)]        (4.4-2)

Example 4.4-1  Weighted Linear Regression

Example 4.3-1 is repeated in part with the revised premise that three
types of weighting are to be compared: (1) weight 1; (2) weight 1/x_i;
and (3) weight 1/x_i² for each Ȳ_i.
Solution:

For each weighting the estimates are computed from the relations
above; the results are summarized in Table E4.4-1.

TABLE E4.4-1

Weight w_i:     1            1/x_i        1/x_i²
Σ w_i p_i:      10           0.4567       0.02927
x̄_w:           30.0         21.9         15.6
b₀ = Ȳ_w:      15.88        14.31        12.99
b₁:             0.1874       0.1959       0.2057
Var{b₀}:        0.1674       0.1132       0.0548
Var{b₁}:        8.3 × 10⁻⁴   6.4 × 10⁻⁴   5.6 × 10⁻⁴
Var{b₀′}:       0.921        0.421        0.193

FIGURE E4.4-1  The fitted lines for the three weightings, the values
of Ȳ_i, and the locus of the lower confidence limits.
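The weighted slopes can be checked numerically; a minimal sketch (function name mine) applied to the ten simulated observations of Example 4.3-1:

```python
def wls_slope(data, weight):
    # weighted least squares slope for Y = b0 + b1*(x - xbar_w)
    w = [weight(x) for x, _ in data]
    sw = sum(w)
    xw = sum(wi * x for wi, (x, _) in zip(w, data)) / sw
    yw = sum(wi * y for wi, (_, y) in zip(w, data)) / sw
    num = sum(wi * (x - xw) * (y - yw) for wi, (x, y) in zip(w, data))
    den = sum(wi * (x - xw) ** 2 for wi, (x, _) in zip(w, data))
    return num / den

data = [(10, 12.05), (10, 11.48), (20, 12.59), (20, 15.82), (30, 17.35),
        (30, 16.42), (40, 16.24), (40, 17.04), (50, 20.56), (50, 19.28)]

b1_unweighted = wls_slope(data, lambda x: 1.0)        # ~0.1874
b1_inv_x2 = wls_slope(data, lambda x: 1.0 / x ** 2)   # ~0.205
```

The small difference from the tabulated 0.2057 for the 1/x² weighting reflects the rounding carried in the original hand computation.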
When both variables are subject to error, maximum likelihood
estimation is based on the bivariate normal density

p(u, v) = [2πσ_u σ_v (1 − ρ²_uv)^(1/2)]⁻¹
    exp{−[(u/σ_u)² − 2ρ_uv(u/σ_u)(v/σ_v) + (v/σ_v)²]/[2(1 − ρ²_uv)]}        (4.5-2)

To save space we do not show the logarithm of the likelihood function
but only list the results after summation over the index j of the
minimization of ln L with Equation 4.5-1 substituted for η_i in
Equation 4.5-2. Partial differentiation with respect to β₀ yields

Σ_i p_i[(Ȳ_i − Ȳ) − β̂₁(X̄_i − X̄)] = 0        (4.5-3)

However, the equation for β̂₁ proves to be of quadratic form,
indicating a more difficult computation than required for simple
estimation with error in only one variable. The sample covariances
are estimated from the replicates:

(ρ̂_uv s_u s_v)_i = Σ_j (X_ij − X̄_i)(Y_ij − Ȳ_i)/(p_i − 1)
Homogeneity of variance can be tested as described in
Chapter 3, and pooled estimates can be formed by sum
ming over the index i, if warranted. If estimates of au,
U V , and Puv cannot be made from the experimental data,
certain assumptions can be presumed concerning these
values or their ratios, and {31 can again be estimated.
Discussion of the confidence intervals for β₀ and β₁ is
beyond our scope here; the interested reader is referred
to the references at the end of the chapter. Extension of
the method of maximum likelihood to multivariate
problems is possible in principle but results in sets of
nonlinear equations, which are often difficult to solve
for the desired parameter estimates. Satisfactory, generally
applicable techniques of estimation for Model 4.3-1
when both variables are stochastic have yet to be devised.
4.6-1  Cumulative Data

Instead of the model Ȳ_i = β₀ + β₁(x_i − x̄) + ε_i, suppose the
increments are modeled as

y_j − y_(j−1) = β(x_j − x_(j−1)) + ε_j,  j = 1, 2, …, n;  x₀ = y₀ = 0        (4.6-1)

Summing the increments gives the cumulative variable

Y_i = Σ_(j=1)^i (y_j − y_(j−1)) = β Σ_(j=1)^i (x_j − x_(j−1)) + Σ_(j=1)^i ε_j
    = βx_i + Σ_(j=1)^i ε_j        (4.6-3)

Least squares estimation applied to the increments gives

b_B = Σ_(j=1)^n (y_j − y_(j−1)) / Σ_(j=1)^n (x_j − x_(j−1)) = Y_n/x_n        (4.6-6)
The interpretation of Equation 4.6-6 is that the best
estimate of the slope of Model B is made by taking the
last value of the dependent variable and dividing it by"
the last value of the independent variable! Although the
intermediate results might seem useless, they are not
because: (1) they help decide if the model is really linear,
and (2) they are needed to reduce the variance of"the
estimator of the slope which is
Var{b_B} = n σ²_ε / x_n²        (4.6-7)

If
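The point of Equation 4.6-6 can be seen numerically; a sketch with made-up increment errors (all values are mine):

```python
# true slope 2 with unit x increments and fixed "errors" on each increment
eps = [0.05, -0.02, 0.03, -0.04, 0.01]
x = [1, 2, 3, 4, 5]
increments = [2 * 1 + e for e in eps]    # y_j - y_{j-1}

# cumulative observations Y_i = beta*x_i + running sum of the errors
Y, total = [], 0.0
for dy in increments:
    total += dy
    Y.append(total)

b_B = Y[-1] / x[-1]    # last cumulative value over last x
```

Only the final cumulative value enters the estimate; the intermediate Y_i serve to check linearity and to estimate the variance.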
131
FIGURE 4.6-1  Fitting experimental data when errors are not
independent: (a) comparison of independent and cumulative data,
n = 50; and (b) comparison of independent and cumulative data,
n = 10. (From J. Mandel, J. Amer. Stat. …)
Var{b_B} can be estimated from the increments as

s²_B = Σ_j [(Y_j − Y_(j−1)) − b_B(x_j − x_(j−1))]²/(n − 1)        (4.6-8)

For the special case in which each increment Δx_j = 1, Equation 4.6-6
becomes

b_B = Y_n/n        (4.6-9)

The relative precision of the two estimators involves

(6/5)(2n² + 2n + 1)/[(n + 1)(2n + 1)]        (4.6-10)

3(n + 2)/[5(n + 1)(2n + 1)]        (4.6-11)

n:       1       10      20      30      40      50      60      70      80
ratio:   1.000   0.954   0.895   0.843   0.791   0.735   0.685   0.628   0.581
                                          b₀′       b₁          Square Root of Sum of
                                                                Squares of Residuals
Treating the cumulative data
as independent:                           1.0024    −0.005303   0.00276
(standard errors)                         0.00170    0.0000357
Correct analysis of the increments:       1.00      −0.005238   0.00469
(standard errors)                         0          0.000166
The intercept has zero error since it is simply the first
measured value. The correctly estimated confidence interval
is −0.00563 ≤ β₁ < −0.00485, illustrating how the precision
determined by the wrong method of analysis appears
to be much greater than it should be.
4.6-2  Correlated Residuals

[Figure: the scale of the statistic D from 0 to 4, symmetric about
D = 2 — region of rejection (positive serial correlation), region of
acceptance, and region of rejection (negative serial correlation),
with boundaries at D_L, D_U, 4 − D_U, and 4 − D_L.]
Example 4.6-2
Flow Rate, x (ft³/sec)   Y
1.1                      8.92
2.3                      15.51
2.9                      20.08
2.5                      16.38
3.5                      19.53
4.0                      22.12
4.7                      24.60
5.0                      25.35
5.1                      25.01
4.5                      23.03
5.5                      29.47
6.0                      32.97
6.3                      35.05
6.5                      36.58
6.7                      38.30
6.9                      40.06

The fitted line is Ŷ = 2.792 + 5.0101x.
D = Σ_(t=2)^n (E_t − E_(t−1))² / Σ_(t=1)^n E_t² = 17.221/51.149 = 0.336        (4.6-16)
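The fit and the statistic of Equation 4.6-16 can be reproduced directly from the data of Example 4.6-2; a minimal sketch (variable names are mine):

```python
x = [1.1, 2.3, 2.9, 2.5, 3.5, 4.0, 4.7, 5.0, 5.1, 4.5, 5.5, 6.0, 6.3, 6.5, 6.7, 6.9]
Y = [8.92, 15.51, 20.08, 16.38, 19.53, 22.12, 24.60, 25.35, 25.01, 23.03,
     29.47, 32.97, 35.05, 36.58, 38.30, 40.06]

n = len(x)
xbar, ybar = sum(x) / n, sum(Y) / n
b1 = (sum((xi - xbar) * yi for xi, yi in zip(x, Y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar                                # ~2.79 and ~5.01

e = [yi - (b0 + b1 * xi) for xi, yi in zip(x, Y)]    # residuals in time order
D = (sum((a - b) ** 2 for a, b in zip(e[1:], e))     # Durbin-Watson statistic
     / sum(r ** 2 for r in e))
```

A value of D near 0.34, far below 2, signals strong positive serial correlation of the residuals.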
When ordinary least squares is nevertheless applied to such data,

b₁ = Σ_t (x_t − x̄)Y_t / Σ_t (x_t − x̄)²        (4.6-18)
b₀′ = Ȳ − b₁x̄        (4.6-19)
x̄ = (1/n) Σ_t x_t,  Ȳ = (1/n) Σ_t Y_t        (4.6-20)

the variance of b₁ must account for the lagged products of the
residuals E_t = Y_t − b₀′ − b₁x_t:

c₀ = Σ_t (x_t − x̄)² E_t²
c_f = Σ_(t=1)^(n−f) (x_t − x̄)(x_(t+f) − x̄) E_t E_(t+f),  f = 1, 2, …

Var{b₁} ≈ [c₀ + 2 Σ_f c_f] / [Σ_t (x_t − x̄)²]²        (4.6-21)
One simple test computes, for each residual,

|E_t − Ē| / s_E        (4.7-1)

which can be compared with the critical values in Table 4.7-1.

TABLE 4.7-1  CRITICAL VALUES OF |E_t − Ē|/s_E

Sample     Significance Level α for One-Sided Test
Size       0.05     0.01
3          12.3     31.4
4          7.17     16.27
5          5.05     9.00
6          4.34     6.85
7          3.98     5.88
8          3.77     5.33
9          3.63     4.98
10         3.54     4.75
15         3.34     4.22
20         3.28     4.02
25         3.26     3.94
TABLE 4.7-2

Number of       Compute   If E₁ is Suspect            If E_n is Suspect
Residuals, n
3 to 7          r₁₀       (E₂ − E₁)/(E_n − E₁)        (E_n − E_(n−1))/(E_n − E₁)
8 to 10         r₁₁       (E₂ − E₁)/(E_(n−1) − E₁)    (E_n − E_(n−1))/(E_n − E₂)
11 to 13        r₂₁       (E₃ − E₁)/(E_(n−1) − E₁)    (E_n − E_(n−2))/(E_n − E₂)
14 to 25        r₂₂       (E₃ − E₁)/(E_(n−2) − E₁)    (E_n − E_(n−2))/(E_n − E₃)
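The ratios of Table 4.7-2 are simple to compute on sorted residuals; a minimal sketch for r₁₀ (the function name and sample values are mine):

```python
def dixon_r10(residuals):
    # r10 ratio (3 <= n <= 7), testing whether the largest value is an outlier
    e = sorted(residuals)
    return (e[-1] - e[-2]) / (e[-1] - e[0])

r = dixon_r10([0.2, -0.1, 0.1, 0.0, 1.5])   # gap of the suspect point over the range
```

The computed ratio is then compared against the tabulated critical value for the chosen significance level.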
Supplementary References
General
Acton, F. S., Analysis of Straight Line Data, John Wiley, New
York, 1959.
Brownlee, K. A., Statistical Theory and Methodology in Science
and Engineering (2nd ed.), John Wiley, New York, 1965.
Cramer, H., Mathematical Methods of Statistics, Princeton Univ.
Press, Princeton N.J., 1954.
Davies, O. L., Statistical Methods in Research and Production
(3rd ed.), Oliver and Boyd, London, 1958.
Feller, W., An Introduction to Probability Theory and Its Appli
cations, John Wiley, New York, 1957.
Fisher, R., Statistical Methods and Scientific Inference, Oliver
and Boyd, London, 1956.
Guest, P. G., Numerical Methods of Curve Fitting, Cambridge
Univ. Press, Cambridge, England, 1961.
Natrella, M. G., "Experimental Statistics," NBS Handbook 91,
U.S. Government Printing Office, Washington, D.C., 1963.
Plackett, R., Principles of Regression Analysis, Oxford Univ.
Press, Oxford, England, 1960.
Williams, E. J., Regression Analysis, John Wiley, New York,
1959.
Goldberger, A. S., Econometric Theory, John Wiley, New York,
1964.
Johnston, J., Econometric Methods, McGraw-Hill, New York,
1963.
Sargan, J. D., "The Maximum Likelihood Estimation of Economic
Relationships with Auto Regressive Residuals," Econometrica
29, 414, 1961.
Zellner, A., "Econometric Estimation with Temporally Dependent
Disturbance Terms," Int. Econ. Rev. 2 (2), 164, 1961.
fl1X1
(a) y = kix, +
{30
+ {31 X1
(b)
8253
8215
8176
8136
8093
(f) Y = flOX~lX~2
Y = e lJi X
1
2
3
4
X2 -
X3Y),
where k =
~ = a{Xl ~ xJ + Xl ~ X2
(a)
-8290
or
In y = 130 + {31 X
1
131
(e) - = 130 +
(b)
0.0245
0.0370
0.0570
0.0855
0.1295
0.2000
0.3035
+ {32 X2
(d)
(a)
(b)
(c)
(d)
(e)
2
4
8
16
32
64
128
(c) Y = e-IJOx+lJl
--
0
20
40
60
80
100
Problems
(b) Y=
(d)
x:   0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9     1.0
y:   1.333   1.143   0.923   0.762   0.645   0.558   0.491   0.438   0.396   0.360
7
9
11
2
5
8
11
14
17
94.8,.
87.9
81.3
74.9
68.7
64.0
1 - xp
r=--
131 + f32 X
138
where
1
{:31 = kKA
{:32 =
1
kKA
+k
+
Kw - K A
kKA p
{:31{:32X
+ {:32X)2
X) ~
(Y
{32,
= ({:31{12) ~, + ({11(:32) ~ x
Rock           Threshold Contact    Shear Strength
               Pressure (10⁻³)      (10⁻³)
Basalt         634                  4.5
Sandstone, A   570                  6.5
Granite        494                  8.5
Dolomite       364                  9.0
Marble         102                  4.6
Sandstone, B   86                   3.0
Limestone      50                   3.0
Shale          3                    1.2
+ {1x
PROBLEMS
4.16
Given a model
'YJ
4.19
TJ
TJ
with a period
W
4.18
5
15
21
22
47
57
4.20
~)2
Y
1.0
1.26
1.86
3.31
7.08
=
=
e(!+13x
e(!+ 131x+13 2x
TJ = ax 13
= 27T
L (Y
10
20
30
40
50
139
4.21
x:
9 8 7 7 6 4 3 3 1 2
Y:
7 9 7 8 7 3 6 1 2 2
0.846
0.573
0.401
0.288
0.209
0.153
0.111
0.078
I1p
a -
aov(!l
.:
140
7.56
6.53
5.09
4.56
3.51
2.56
2.28
200
250
300
350
450
600
800
v
400
470
590
610
620
840
950
1200
1400
1550
0.0125
0.0165
0.0215
0.0225
0.0235
0.0420
0.0530
O~0750
0.0970
0.120
where:
Re = Reynolds number calculated on the reduced
velocity of the liquid phase
λ₀ = coefficient of hydraulic resistance for single-phase flow
λ_mix = coefficient of hydraulic resistance for a
vapor-liquid mixture
Find a linear relation between λ_mix/λ₀ and Re.
4.28 By dimensional analysis for a problem in heat.transfer,
it was shown that
Nu
28
2.11
2.12
2.07
2.34
2.38
2.39
2.47
2.44
2.38
2.38
2.41
2.51
2.62
2.62
2.60
2.48
2.53
2.52
2.55
Nu = α Re^β
2.55
Nu:
2.57
800
900
1200
1600
2100
2750
3750
6000
100 100 200 200 300 300 400 400 500 500
31 36 39 40 40 42 43 45 46 49
p/p₀ = e^(−kz/T)
Level (ft)
3290
3610
3940
4600
4930
5260
27.2
23.4
19.8
14.3
12.2
10.5
PROBLEMS
60
60
60
62
62
62
62
64
64
70
70
70
4.33
4.34
Po
Time (min)
p_N2O5
4.35
(mm Hg)
308.2
254.4
235.5
218.2
202.2
186.8
137.2
101.4
63.6
20
30
40
50
60
100
140
200
120
120
185
k(t - to)
110
160
135
145
170
In L
140
130
135
150
141
E.O.
E.G.
31.2
25.0
18.7
12.5
6.25
3.12
1.27
1.52
1.93
2.62
4.07
5.62
9.53
9.23
8.85
8.03
6.38
4.50
142
x    Response (mv)
51   70
52   64
53   60
54   49
55   47
56   31
57   44
58   38
59   34
Conclusions:
1. The higher the 12-month profit potential of a TPO stock, the better the chance of a short-term profit. This is illustrated by the TPO average performance line.
2. The average appreciation of all stocks was 20.5 percent, 6.5 percent better than the 14-percent rise in the Dow Jones Industrial Average during the same time period.
3. Although individual stocks may or may not live up to their rating, on the average the computer ratings are accurate. Equal dollar investment in 10 stocks is required to give a 9 out of 10 chance that the average price appreciation will be within 5 percent of the average performance line.
4. Since the market went up 14 percent during the same period, the stocks (points) under the dashed line would have shown a loss in a sideways market.
c = constant
~p
0.927
0.907
0.877
(in. Hg)
0.942
0.903
0.823
0.719
0.780
0.761
0.644
0.508
0.757
0.684
0.603
FIGURE  Price appreciation (percent) versus 12-month profit potential; the dashed horizontal line shows the DJIA price appreciation.
CHAPTER 5
ESTIMATION OF PARAMETERS
η = β0 + β1(x1 − x̄1) + ··· + βq(xq − x̄q)   (5.1-1)

Minimize φ = Σ_{i=1}^{n} wi(Yi − ηi)² = Σ_{i=1}^{n} wi εi²   (5.1-3)
1 ≤ i ≤ n,  0 ≤ k ≤ q

or its equivalent

Y = β0 + β1(x1 − x̄1) + β2(x2 − x̄2) + ··· + ε   (5.1-1a, 5.1-2)

Ŷ = b0 + b1(x1 − x̄1) + b2(x2 − x̄2) + ···   (5.1-4)
+ ... + bq(O) =
.L Wi ~
t=l
bo(O)
+ bl[~ WI(Xil -
Xl)(Xil - Xl)]
+ b2[~ WI(XI2 -
X2)(Xil - Xl)]
+ ...
+ bq[~ wl(xlqbo(O)
X2)]
+ b2[~ WI(XI2 -
~ ~(Xil -
i\
~ ~(Xi2 -
X2)
~ WI~(Xiq -
x q)
xq)(xu - Xl)] =
bo(O)
+ bl[~ W.(Xil -
Xl)(Xlq - Xq)]
+ b2[~ Wi(XI2 -
WI
WI
(5.1-5)
Note that inasmuch as

Σ_{i=1}^{n} wi(xik − x̄k) ≡ 0   (5.1-6)

and

Σ_{i=1}^{n} xij xik = 0   if k ≠ j
Y1
)T1
Y=
Y2
)Tn
an n x
matrix
y= Y2
an n x
matrix
Yn
~o
~=
~l
Y = Y + ib where is identical to
column of dummy 1's is deleted.
a q x 1 matrix
X=
Xl)
(Xl2 - X2)
(Xl q
(X2l - Xl)
(X22 - X2)'
(X2Q - Xq )
(Xnl - Xl)
(Xn2 - X2)
(Xnq - Xq)
an n x (q
Xq )
1) matrix
In the first column the 1's are dummy variables which are needed only if Model 5.1-1 is to have an intercept and yet be represented in matrix form as η = xβ, with the corresponding estimated regression equation Ŷ = xb. Each xi vector is
Xi =
xq)]
Wl
W2
Wn
W=
an n : n matrix
E =
[1:1
or

φ = Σ_{i=1}^{n} wi εi² = εᵀWε   (5.1-7)
(Estimate of 132
Yg
t)
an n x 1 matrix
(Third observation
~~
Vector interpretation
_ _
of observations Y
--
~q
(Xll -
b2
- - - - - - - - - - ,
I
I
Estimate of PI,\'
fh
bl
(b)
Yg
(c)
(d)
FIGURE 5.1-1  Interpretation of the vector of residuals as a normal to the surface of estimates of η: (a) observation space (three observations), (b) parameter space (two parameters), (c) normal vector E in observation space (Ŷ is the best estimate of η; Ỹ is a poorer estimate of η), and (d) experimental observation space (two independent variables).
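The least squares machinery of this section can be sketched numerically. The following minimal Python illustration (ours, not the book's; NumPy and the hypothetical data are assumptions of the sketch) builds the design matrix with a leading column of dummy 1's and the x's measured about their means, then solves the weighted normal equations for b:

```python
import numpy as np

# Hypothetical data: two independent variables and unit weights.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
Y = np.array([3.1, 4.9, 7.2, 8.8, 11.1])
w = np.ones(len(Y))

# Design matrix: dummy 1's for the intercept, x's measured about their means
X = np.column_stack([np.ones(len(Y)), x1 - x1.mean(), x2 - x2.mean()])
W = np.diag(w)

# b = (x'Wx)^(-1) (x'WY), the weighted least squares estimate
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)
residuals = Y - X @ b    # the vector of residuals E = Y - xb
```

With the x columns centered and unit weights, the first element of b reduces to Ȳ, consistent with b0 = Ȳ noted in the text.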
Because ε = Y − xβ,

φ = (Y − xβ)ᵀW(Y − xβ)   (5.1-7a)

Setting the derivative of φ with respect to β equal to zero,

∂φ/∂β = −2xᵀW(Y − xβ) = 0   (5.1-8)

gives the normal equations xᵀWxb = xᵀWY, so that

b = (xᵀWx)⁻¹(xᵀWY)   (5.1-9)

provided (xᵀWx) is not singular. Let the variance-covariance matrix of the unobservable errors be

Γ = ℰ{εεᵀ}   (5.1-10, 5.1-11)

Then the logarithm of the likelihood function is

f(β, Γ | Y, x) ≡ ln L = n ln k − ½(Y − xβ)ᵀΓ⁻¹(Y − xβ)   (5.1-11a)

and maximizing ln L with respect to β yields

b = (xᵀΓ⁻¹x)⁻¹(xᵀΓ⁻¹Y)   (5.1-12)

which reduces to Equation 5.1-9 when Γ = σε²I.
E = Y − x(xᵀWx)⁻¹xᵀWY = MY

where

M = I − x(xᵀWx)⁻¹xᵀW

so that E = MY = M(xβ + ε) = Mε. An estimate of the error variance is

sε² = φmin/[n − (q + 1)] = (EᵀWE)/(n − q − 1)   (5.1-15, 5.1-16)
so that ℰ{b} = β from Equation 5.1-13. But the expected value of b, assuming Model II, Y = xβ + x*β* + ε, applies, is

ℰ{b} = (xᵀWx)⁻¹xᵀW ℰ{Y}

If we use Equation 5.1-10, we can show that the expected value of b is β:

ℰ{b} = ℰ{(xᵀWx)⁻¹(xᵀWY)} = ℰ{(xᵀWx)⁻¹[xᵀW(xβ + ε)]} = β   (5.1-13)

If we substitute Y = xβ + ε,

Covar{b} = ℰ{[(xᵀWx)⁻¹(xᵀWY) − β][(xᵀWx)⁻¹(xᵀWY) − β]ᵀ}
         = ℰ{[(xᵀWx)⁻¹xᵀW(xβ + ε) − β][(xᵀWx)⁻¹xᵀW(xβ + ε) − β]ᵀ}
TIl
r)
s;
s;
(b -
~)T.
~)
φmin = Σ_{j=1}^{n} wj(Yj − Ŷj)²

and the relative sensitivity of φmin with respect to an observation Yi is

(∂φmin/φmin)/(∂Yi/Yi) = ∂ ln φmin/∂ ln Yi = (∂φmin/∂Yi)(Yi/φmin)   (5.1-18)
s;
YI
bl
= [(XTWX)-IXTW]
bq
(Y
8~ ~
= [(xTwx)-IXTW]kt(
Y
i
(5.1-19)
(Y
IO) = 2(259)(0.03) = 6 1
04>min
8 flO epmin
(14)(687)
.
(rIO)
(} YIO b;
ohl
(XIO (Xi -
= 2:
X 10- 3
(259)
x)
X)2 79.02
6.58 x 10-
79.02
a =
G ~]
and
-1
-1
(a)
(b)
FIGURE
TABLE E5.1-1
{3~
. [1
x =
19.9]
1 20.0
a = [x/'x] = [ 3
60.0
60.0 ]
1200.02
1 20.1
Calculation of Error
within Sets
Xo
Xl
X2
Ytj=yield
Ll=(ij- ~)
Ll2
1
1
1
-1
-1
-1
-1
1
1
1
1
-1
-1
1
1
-1
-1
1
1
4
5
10
11
24
26
35
38
4.5
-0.5
+0.5
-0.5
+0.5
-1.0
1.0
-1.5
1.5
0.25
0.25
0.25
0.25
1.00
1.00
2.25
2.25
.1
1
1
1
1
Temperature, T (OF)
Pressure, p (atm)
Yield, Y(%)
160
160
160
160
200
200
200
200
1
1
7
7
1
1
7
7
4
5
10
11
24
26
35
38
+ {3IXI + {32 X2
Solution:
The values of the independent variables can be coded so
that the calculations are easier to follow. Let
36.5
a = (xl'x) =
= (XTX)-l
Xl
25
Sum = 7.50
Example 5.1-1
10.5
x1 = (T - 180)/20,    x2 = (p - 4)/3
G = (xTY) =
8.000
0.000
0.000
8.000
0.000]
0.000
0.000
0.000
8.000
0.125 0.000
0.000 0.125
0.000]
0.000
0.000
0.125
0.000
153.0 ]
92.99
35.00
...----r---r---.....,---r------r---~-___.,
220
210
'" '- '- '- 30%
-.....
iL
'--.....
' - .........
200
-.....
C1)
----
-..... ..........
::s
+"
E
0.
20%
--'
----
.......
Yt
-..... ........
ao
+ L
..........'
~ 180
'
'--.....
.........
..........
-.....
..........
---..... -.....
160
-......-2
--
n-l
.........
~---
3
'--
..........
-.....-.....
-.....
---......-- -.....-.....
4
t=o
(c)
n-1
n-1
0, 1, 2, ... , n -
.......... '
-.....
.................... ~
cosjtr +
I =
-.....-
170
(aj
j=1
............................
t=O
t=o
P, Pressure (atrn)
E5.1-1
n-1
n-1
Example 5.1-2
Harmonic Analysis
ao
<Xl
cos
(a)
f3j sinjx
pj sin (jx
t=o
+ 8j )
t=o
n<xo =
Yt
t='o
n-l
aJ
(d)
Yt cosjtr
t=o
n-1
~ 13J
Yt sin jtr
>
1,2, ... ,m
t=o
ao = ao = -;, Lt Y t = Y
(b)
where
(e)
OJ = tan- 1 -!..
f3j
Yt .sinjtr
Et'S
2
aciD
a Yi
= -;
a;t
where
is the variance of Et. An unbiased estimate of O';t
is given by S;t which can be calculated by using Equation
5.1-15 or
n-1
n m
t=O
2:
Y? - nag - -
21=1
(n - 2m -
(ar
b;)
(g)
1)
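The harmonic-analysis estimates used in Example 5.1-2 are the classical least squares formulas â0 = Ȳ, âj = (2/n) Σ Yt cos(2πjt/n), and b̂j = (2/n) Σ Yt sin(2πjt/n). As a sketch (ours, with synthetic data; NumPy assumed), they can be computed and checked against a series built from known harmonics:

```python
import numpy as np

def harmonic_coefficients(Y, m):
    """Least squares estimates of the first m harmonics of a series
    observed at n equally spaced points over one period."""
    Y = np.asarray(Y, dtype=float)
    n = len(Y)
    t = np.arange(n)
    a0 = Y.mean()
    a = np.array([2.0 / n * np.sum(Y * np.cos(2 * np.pi * j * t / n))
                  for j in range(1, m + 1)])
    b = np.array([2.0 / n * np.sum(Y * np.sin(2 * np.pi * j * t / n))
                  for j in range(1, m + 1)])
    return a0, a, b

# Synthetic check: Y_t = 1 + 2 cos(theta_t) - 3 sin(2 theta_t)
n = 12
theta = 2 * np.pi * np.arange(n) / n
Y = 1 + 2 * np.cos(theta) - 3 * np.sin(2 * theta)
a0, a, b = harmonic_coefficients(Y, 2)
```

Because the sines and cosines are orthogonal over the n equally spaced points, the known coefficients (1, 2, and −3) are recovered exactly up to rounding.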
ao
a1
a2
a3
a4
-0.0153
0.9334
0.0391
0.0625
0.0030
b1
b2
= 2.0768
= -2.8978
0.0027
b4 = -0.0377
s;
TABLE E5.1-2b
Variance Ratio
TABLE E5.1-2a
ANALYSIS
Source of
Variation
v=Degree
of Freedom
Sum of
Squares
First harmonic
Second harmonic
2
2
-tn(ar + bi)
in(ai + bi)
-tn(a~ + b~)
-!n(a~ + b~)
2
n-21n-l
-tn(a~ + b~)
-!n(a~ + b~)
(Difference)
S~i
mth harmonic
Residual
Total
n-1
2:
Mean
Square
Y?-na5
Y (volts)
0
7T/6
7T/3
7T/2
27T/3
57T/6
7T
77T/6
47T/3
37T/2
57T/3
117T/6
27T
0.972
-0.653
-0.353
2.063
3.803
2.798
-0.977
-4.391
-4.709
-2.16S
2.324
1.048
0.814
First harmonic
Second harmonic
Third harmonic
Fourth harmonic
*F
O 9 5(2,
Mean Square
~ (a; + br)/s;i
15.552
25.197
11.73
4.29 x 10- 3
Significant*
Significant*
Significant*
Not significant
7) = 4.74
Yq(X)
fioPo,n(x)
Po,n(x)
+ ... + bqPq(x)
(5.1-23)
PI ,n (x) = 1 - 2n
P 2 ,n(x)
3,n
1 - 6
x(x n+ 6 n(n
-
(x) = 1 - 12 ~
n
2:
W(Xk)PtCXk)PI(Xk)
k=l
1)
1)
and in general
~
(m + k)!
X(k)
Pm.n(x) = L (-I)k (m _ k)! (k!)2 n(k)
(5.1-21)
1c=0
Po(X) = 1
P1(x) = (x -CXl)PO(X)
P 2(x)
(x - CX2)P1(X) - f3 lPO(x)
Ps(x)
(x - CXs)P2(x) - f32Pl(X)
2: Pm,n(x)Pq,n(x) = 0,
if q
=1=
ifq
x=O
~
x=O
p 2 (x) = (n
m,n
(2m
1)(m
l)n(m)
nym)
2:
2: W(Xk)[Pj(Xk)]2
k=l
CXl'+ 1 == k_=_;
2:
Y(x)Pm,n(x)
b.; =x- =-o-n----
2:
x=O
2: W(Xk)[PJ(Xk)]2
k=l
f3j = ~-----n
2: W(Xk)[Pl'_1(Xk)]2
k=l
P~,n(x)
Orthogonal Polynomials
5.1-20. The results for the first few polynomials and for
several coefficients are:
TABLE E5.1-3a
X
0
10
20
30
40
50
60
70
80
90
100
110
120
130
140
150
160
170
180
190
200
14.534
15.144
15.831
16.435
17.034
17.567
18.050
18.440
18.764
19.028
19.193
19.248
19.226
19.100
18.880
18.578
18.187
17.748
17.243
16.644
16.072
210
220
230
240
250
260
270
280
290
300
310
320
330
340
350
360
370
380
390
400
15.386
14.716
14.029
13.293
12.590
11.871
11.1-'68
10.393
9.640
8.998
8.311
7.625
6.949
6.301
5.619
5.021
4.389
3.823
3.109
2.603
Coefficients
Polynomials
Po
2x
Pl
1 -
41
6x
P 2 = 1 - 41
6x
41
1)
Li()
(X -
etc.
bo
Yx(l)
= x=o
~ (1)2
x=o
b =
~0
:2
Yx 1 -
x=o
2
3
4
5
6
7
8
9
X
X
X
X
10- 3
10- 5
.10- 5
10- 6
10- 7
10- 7
10- 9
4I
41
Sum of
Squares
Degree of
Freedom, v
- 0)2 41
Removed
Residual
Removed
Residual
Removed
Residual
Removed
Residual
Removed
Residual
Removed
Residual
Removed
Residual
Removed
Residual
Removed
Residual
Removed
Residual
1 - 2X)2
Mean
Square
Variance
Ratio*
Total
0
2X)
TABLE E5.1-3b
:2 (yt
bo = 13.337
b, = -0.391
b 2 = -0.019
b 3 = 0.892
0.801
b4 =
b = -0.999
0.365
b6 =
b 7 = 0.343
be = - 0.122
b g = -0.519
etc.
Solution:
Because the intervals for X are equally spaced, we can use Equation 5.1-22 to estimate the coefficients in Equation
Term
Added
1
40
1
39
1
38
1
37
1
36
1
35
1
34
1
33
i
32
1
31
8449.79
7293.49
1156.30
880.6600
275.6453
236.2725
39.3728
38.1157
1.2571
0.1545
1.0976
1.0356
0.0620
0.0394
0.0226
0.0077
0.0149
0.0013
0.0136
0.00001
0.0136
7293.49
28.907
880.6600
7.678
236.2725
1.0361
38.1157
0.03398
0.01595
0.03049
1.0356
0.01720
0.0394
0.0006647
0.0071
0.0004515
0.0013
0.000425
0.00001
0.000438
124.60
228.04
1121.71
5.23
60.21
59.27
17.05
3.06
0.20
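The "removed/residual" sum-of-squares sequence in a table such as E5.1-3b can be reproduced with ordinary polynomial fits of increasing degree; orthogonal polynomials yield the same removed sums of squares but avoid re-estimating the lower-order coefficients at each step. A minimal sketch with hypothetical equally spaced data (ours; numpy.polyfit assumed):

```python
import numpy as np

# Hypothetical equally spaced data: a smooth curve plus a mild nonlinearity
x = np.arange(0.0, 100.0, 10.0)
Y = 14.5 + 0.12 * x - 0.002 * x ** 2 + np.sin(x / 30.0)

total_ss = float(np.sum((Y - Y.mean()) ** 2))
removed_ss, resid_ss = [], []
prev = total_ss
for degree in range(1, 5):
    coeffs = np.polyfit(x, Y, degree)
    resid = float(np.sum((Y - np.polyval(coeffs, x)) ** 2))
    removed_ss.append(prev - resid)   # SS removed by the newly added term
    resid_ss.append(resid)
    prev = resid
```

Each removed SS, divided by its single degree of freedom and compared with the residual mean square, gives the variance ratios tested in the table.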
+0.7245
-0.3920 X 10-
x)2Var{b]}
+ (Xi -
V for
Model 5.1-1
18 X 8
(5.2-3)
F or a single data set
Var {Vi}
HYPOTHESIS TESTS
Xl) ...
(Xiq -
xq) ]
[1 (Xi!
co.:q]
: Xl)
(Xi! -
Cqq
(X1Q
Xq )
(Pi ---
+ tl-P-it )
v = (n - q - 1)
Because bo = Y,
Sb
.. / - 2 V Sf Coo
t
n - q - 1 (5.2-1)
J2:
-S~t
Wi
+ tl-~Sbo)
(5.2-2)
f3~ =
f30 -
(5.2-4)
f3k Xk
k=l
the variance of b~ is
q
2 x~
Var {bk }
k=l
e.
4
J:,S
106
"'1-a
"'min
[I + --!!!:n-m F
1 - a]
I)F 1 -
(5.2-5)
pq = 0, form the
s~
bTGJq
2=-2
Sy,
Sy,
(5.2' \1)
(bTG)<I+II) - (bTG)I
(5.21)
4,
Sy,VCkk
If t is greater than t_{1−α/2} for (n − q − 1) degrees of freedom, the hypothesis that βk = βk* is rejected.

4. To test the hypothesis that β = β*, where β* is a matrix of specified values of β, form the variance ratio

[(b − β*)ᵀxᵀx(b − β*)/(q + 1)] / s²_{Yi}   (5.2-9)

FIGURE 5.2-2  Erroneous confidence region based on individual confidence limits.
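The t test on an individual coefficient described above can be sketched for the simple model Y = b0 + b1x. The code below (ours; the data are hypothetical, NumPy assumed) forms t = (b1 − β1*)/(s √c11) with ν = n − q − 1 = n − 2:

```python
import numpy as np

def coef_t(x, Y, beta_star=0.0):
    """t statistic for the slope in Y = b0 + b1*x against beta1 = beta_star,
    with nu = n - 2 degrees of freedom."""
    n = len(Y)
    X = np.column_stack([np.ones(n), x])
    C = np.linalg.inv(X.T @ X)          # the matrix of c_jk elements
    b = C @ X.T @ Y
    resid = Y - X @ b
    s2 = float(resid @ resid) / (n - 2)
    t = (b[1] - beta_star) / np.sqrt(s2 * C[1, 1])
    return float(t), n - 2

# Hypothetical data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.3, 11.7])
t, nu = coef_t(x, Y)
# Compare t with the tabulated t_{1-alpha/2} for nu degrees of freedom.
```

For the special case β1* = 0 in simple regression, t is algebraically identical to r√(n − 2)/√(1 − r²), where r is the sample correlation coefficient.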
156
Y:
Y; -
y=
2 f3l xiJ -
Xf)
j=l
i = 1, ... , n
f3j > 0
(5.2-10)
Xij
j = 1, .. . ,q
(xu - Xf )2 = n
i=l
c.,
S2
Yt
= 0,
~ ~ t~
q~
(5.2-14)
j=l
i~l
Pjk
(Xii -
Xj)(Xik -
xk )
so that
n
(Xi! - Xf)(Xik -
xk) =
npfk
(5.2-11)
i=l
qS~i
ajk
npjk
(5.2-12)
Solution:
Because no replicate data are available, it is impossible to obtain an estimate of the experimental error with which to test the hypothesis that a proper model is a linear one. The range of temperatures is quite narrow. Hence the temperatures may prove to have little influence on the corrosion
TABLE E5.2-1a
y
5 percent Crt percent Mo
Corrosion Rate
Day
I
2
3
4
5
6
7
8
9
10
11
(F0)
Temperature
in Cracking
Coil 1
CF)
Xs
Temperature
in Cr acking
Coil 2
CF)
753
748
749
747
745
743
762
760
752
752
749
922
925
925
925
934
940
936
935
928
928
930
885
885
886
887
895
905
904
895
887
887
887
X4
Xl
X2
X3
Flow Rate
at Probe
(bbl/day)
Temperature
of Probe
(in.Jyr)
Total Sulfur
in Feed
Stock
0.117
0.107
0.088
0.077
0.091
0.040
0.048
0.022
0.077
0.121
0.143
0.041
0.041
0.040
0.041
0.042
0.008
0.007
0.008
0.041
0.041
0.044
16.9
17.0
17.1
16.6
17.0
17.5
35.0
34.5
33.8
33.6
33.2
only two levels of sulfur and flow rate are available. Suppose, nevertheless, that a model η = β0 + β1x1 + β2x2 + β3x3 + β4x4 + β5x5 is proposed as the model to fit the data, with the x's designated as in Table E5.2-1a. A computer routine for regression analysis was used with the results indicated in Table E5.2-1b (the numbers have been truncated to save space). The point estimates of the β's, the individual confidence intervals for the β's, and the confidence intervals for the η's (for wi ≡ 1) which are tabulated in Table E5.2-1c indicate the unsatisfactory nature of the experiment with
TABLE E5.2-1b
a=
c=
5.41 X 104
- 9.88 X 103
1.69 X 101
-3.10 x 101
- 4.65 x 101
1.38 X 101
-9.88
3.08
- 1.89
4.87
4.49
2.22
X
X
x
x
x
x
103
103
10
10
10
10
G =
X
X
X
X
X
X
102
101
103
105
105
105
1.69 X 101
-1.89 x 10
8.00 X 10- 3
-1.12 X 10- 2
-1.82 X 10..,2
9.35 X 10- 3
9.31
3.49
2.27
6.99
8.64
8.28
10- 1
x 10- 2
x 101
x 102
x 102
x 102
8.26
2.65
2.04
6.20
7.68
7.36
X
X
X
X
X
X
103 1.02
102 3.28
105 2.53
106 7.68
106 9.51
106 9.11
-3.10 X 101
4.87 x 10
- 1.12 X 10- 2
2.06 X 10- 2
2.70 X 10- 2
-1.06 X 10- 2
b=
X
X
X
X
X
X
104
102
105
106
106
106
9.80
3.14
2.42
7.36
9.11
8.73
- 4.65 X 101
4.49 x 10
- 1.82 X 10- 2
2.70 X 10- 2
6.38 X 10- 2
- 3.68 X 10- 2
0.6751
2.3064
0.0012
-0.0007
-0.0021
0.0020
X
X
X
X
X
X
103
102
105
106
106
106
1.38 X 101
2.22 x 10
9.35 X 10- 3
-1.06 x 10- 2
-3.68 X 10- 2
3.14 X 102
158
TABLE
E5.2-1c
'Y]t'S;
'Y]1 =
'Y]2
'Y]3
'Y]4
'Y]5
'Y]6
'Y]7
'Y]8
'Y]9
'1]10
7]11
15.9
10.8
- 7.4
-29.4
-10.7
20.3
7.3
-42.2
-43.8
9.7
19.8
Xl
X2
X3
X4
, X5
Y? =
YTwY
= (Xb)TW(xb)
E5.2-1d
Source of Variation
Xo
Wi
i=l
TABLE
L:
removed (intercept)
removed
removed
removed
removed
removed
v = Degree
of Freedom
1
1
1
1
1
1
SS
8.416
1.724
1.850
2.246
7.281
1.328
x
x
x
x
x
x
Mean Square
10- 6
10- 3
10- 4
10- 5
10- 5
10- 4
8.416
1.724
1.850
2.246
7.281
1.328
X
X
X
X
X
X
10- 6
10- 3
10- 4
10- 5
10- 5
10- 4
Variance Ratio
1.243 X 10- 2
2.547 x 10
2.733 X 10- 1
3.319 X 10--2
1.075 X 10- 1
1.962 X 10- 1
159
ANALYSIS OF VARIANCE
TABLE
5.3-1
ANALYSIS OF VARIANCE
Degrees of
Freedom (v)
Source of Variation
Sum of Squares
(SS)
Mean Square
Due to regression
L Wt(Yt t=1
Y)2
s~ =
Yt)2
S2r --
LWl~
n - q - 1
L Wt(Yt
n - 1
- YtJA
n - q - 1
L Wt(~
- f)2
sr = L Wt(~
n - 1
Wt(~- Y)2
i=l
n
n
LPt - n
t=1
q-1
'YJ - t(x q
xq )
f30
.L f3k(Xk -
xk)
(5.3-1)
k=1
.c!>q-l
.L [Yl -
('YJ1 -
~(Xiq
[x*xq ]
[::J
x*(3*
+ Xq(3q
Pt
~)2
Se
Pt
L L (Ytj - Yt)2
t=1j=1
Li Pt - n
b~ = b t
- Xq))]2
i=1
y)
f')2
t=1
Subtotal
Cik fb
Ckk' k
.,
g)
(5.3-2)
(5.3-3)
REDUCTION IN SUM OF SQUARES. If φq is the sum of squares for the original model and φ*_{q−1} is the sum of squares for the model with βk = g, then the difference in the sum of squares Σ(Ŷi − Y)² is obtained from

φ*_{q−1} − φq = (g − bk)²/ckk   (5.3-4)
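Equation 5.3-4 can be verified numerically: refitting the model with βk fixed at g and differencing the residual sums of squares reproduces (g − bk)²/ckk. A sketch with hypothetical data and g = 0 (ours; NumPy assumed):

```python
import numpy as np

# Hypothetical data for a model with an intercept and two x's
rng = np.random.default_rng(0)
n = 20
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0, 0.5]) + rng.normal(size=n)

def fitted_sse(X, Y):
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ b
    return float(r @ r), b

phi_q, b = fitted_sse(X, Y)           # full model, phi_q
phi_q1, _ = fitted_sse(X[:, :2], Y)   # refit with beta_k = g = 0 (k = 2)

C = np.linalg.inv(X.T @ X)            # the c_jk matrix
k, g = 2, 0.0
delta = (g - b[k]) ** 2 / C[k, k]     # right-hand side of Equation 5.3-4
# phi_q1 - phi_q matches delta
```

The identity holds exactly (up to rounding) because the reduced fit re-estimates the remaining coefficients, which is what "adjusted for the other variables" means.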
Full Model
(x*Tx*)b* = X*Ty
b*TX*Ty
L (Xtk - Xk) ~ = L
i=1
i=ij=1
k = 1, ... , q (5.3-5)
160
fiq
g:
assigned as
2:
t=1
LL
k =" 1, ... , q - 1
q-l
j=1
1=1
q-1
t=1j=1
Xj)[Yt - (x,q -
(5.3-6)
(q -
2:
1, ... , q - 1 (5.3-10)
xq)]
(Xtk - Xk)(Xtq - x q)
bk =
t=l
L
Ckj L (Xii -"" Xj) Yi
j=1
t=1
q-1
- g
q-1
bq){XTX}kq
L {XTX}kiCbj -
Ckj)
j=1
(g -
b: - bk = L (cZj -
(5.3-6a)
(Xii -
Xj) Yf
L L
C:j
j=1
bf)
L
t=l
Xj)(Xtq - Xq)
(Xii -
t=1
j=1
k = 1, .. . ,q - 1
- Ckq
(Xiq - Xq) Yi
.
t=1
(5.3-7)
(5.3-12)
2:
k,/= 1, ... ,q
j=1
(g -
b q)
q-1
{XTX}qkCkl
k=1
q-1
(b j - b1)
i=1
{XTX}jkCkl
k=1
l = 1, ... , q
b~ - bk =
(g -
bq)[Oql - {xTX}qqCqll =
i=1
l
For I
= q,
= 1, ... , q
(5.3-8)
(g -
bq)[l - {xTx}qqcqql =
- Cqq
L (b
j -
bj){XTX}iq
(5.3-9a)
i=1
(g -
bq)[- {xTX}qqCqll = (b j
bj) - Cql
(b, - bj){xTX},.q
1"=1
I = 1, ... , q - 1 (5.3-9b)
-)
L...t (Xii - x, Yi
t=1
k = 1, ... , q - 1
q-1 [ *
L
cki =
j
CqkCqf]
Ckj + -c-
= g[
(5.3-13)
2:
q- 1
Ct-JCjk
j=1
L=
n
(Xij -
_
Xj) Yi
+ :qk
k = 1, ... , q - 1 (5.3-14)
q-l
qq j = 1
n
~
q-1
=1
~ bl ~
~
L.
C qq
or
q-1
Cqk ~
c* ~ Cqj
C Cqk
S
=1
[ Ckj
* - Ckj
CqjCqk]
+ c - {X TX }fl = 0
qq
k = 1, ... , q - 1 (5.3-15)
l = 1, ... , q - 1 (5.3-2a)
(5.3-16)
k=I, ... ,q-l
* =
Ckm
CqmCqk
Ckm - -
k, m
C q l1
1, ... , q - 1 (5.3-3a)
ePq =
L L
Y? -
bk
i=1
k=1
~:-1
(5.3-17)
zss
A single
b~/Ckk
Xk
q-l
Y? - 2g{x T Y }q -
i=1
A group of p x's
b:
b~C;lbp
k=1
~:= 1
b:
-
~q
(g - bq )2
(5.3-4a)
Cqq
g=
(5.3-19)
The quantity ΔSS is often termed the sum of the squares for xk, adjusted for all the other variables, or the sum of the squares given the other variables. The ΔSS for any group of p variables adjusted for all the others is computed by

ΔSSp = bpᵀ cp⁻¹ bp   (5.3-20)

where bp is a single-column vector (matrix) composed of the selected group of bk's only, and cp is the matrix of the related (cjk) elements.
For example, suppose we want to measure the combined effect of x1, x2, and x4 removed from a model based on x1, x2, x3, x4, ..., xq. Each x is associated with a corresponding b. Then
13k
j=1
{XTX}kjbj
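The group ΔSS formula can likewise be checked against a direct refit that omits the selected x's. A sketch with hypothetical data (ours; NumPy assumed), measuring the combined effect of x1, x2, and x4:

```python
import numpy as np

# Hypothetical model with an intercept and four x's
rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n)] + [rng.normal(size=n) for _ in range(4)])
Y = X @ np.array([1.0, 0.5, -0.3, 2.0, 0.0]) + rng.normal(size=n)

b, *_ = np.linalg.lstsq(X, Y, rcond=None)
C = np.linalg.inv(X.T @ X)

group = [1, 2, 4]                        # columns for x1, x2, and x4
bp = b[group]                            # the selected b_k's
cp = C[np.ix_(group, group)]             # the related (c_jk) elements
delta_ss = float(bp @ np.linalg.solve(cp, bp))   # Delta-SS = bp' cp^-1 bp

# Direct check: refit without the group and difference the residual SS
keep = [j for j in range(X.shape[1]) if j not in group]
b_red, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)
sse_full = float(np.sum((Y - X @ b) ** 2))
sse_red = float(np.sum((Y - X[:, keep] @ b_red) ** 2))
```

The difference sse_red − sse_full equals ΔSSp up to rounding, which is the sense in which the group is "adjusted for all the other variables."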
161
162
TABLE
5.3-2
Degrees of
Freedom (v)
Source of Variation
(e.g., hI
~SSI
~SSI - 0
~SSl+2
=
=
b~C-lb2
= 0)
(e.g., b = 0)
following the removal of Xl
3. Due to removing X2 (e.g., b3 = 0)
following the removal of Xl and X2
2. Due to removing
X2
b~
2
=s
CII
~SSI
1. Due to removing Xl
Mean Square
h~
Cll
~SS2
~SSl
~SS2
-1
~SSI +2 ~SSI
~SS3
-1
etc.
n
L:
4. Subtotal
Wt(:ft -
Y)2
t=l
L: wlY
0)2 = (Y)2
1=1
L:
Wj
t=l
n - q - 1
L:
Wt(
Yi
Yi)2
(Y - Xb)TW(Y - xb)
2 _
Sr
t=l
Wt(Yt -
yt )2
-----
n-q-l
t=l
7. Total
L: Wt~2
t=1
L Wj(Y; -
0)2 = (1')2
i=l
Wj
i=l
L Wj(Y; -
~)2
i=1
Wj(rj -
(Y Y)Tw(Y Y)
1')2
(5.3-21)
(Y - Y)TW(Y Y)
i=l
equation,
ρ̂²i = 1 − 1/(aii cii)   (5.3-22)
ρ̂² = 1 − 1/[(7.52 × 10³)(8.00 × 10⁻³)] = 0.98
Analysis of Variance
= 1
Xu
Xi2
Ei
Yr
3.8
2.9.3.8
/2.1 1.8
/
.%2
0.0
/""
-0.1 -1.9
./
""
/'
-
/ /
/'/
Remove
P11"1 = P1r2 = 0.90
-2.4 -0.6
-1.0
I -0.5
(a)
I
0.0
I
0.5
X2
1.0
Xl
t J.
076
.
'S2
SYi.
S~
Yi
F O9 5 (1, 12)
C2.48 ~ 10
0.76
3.17
4.67
C1.23 ~ 10
0.76
1.53
4.67
007)
Xl
Remove
I
Estimated
S2
/'
_-1_.4.-_1_,8
-1.0
bQ.- O.4
-0.5
""
/ /
3.5
/-
0.5
{3's.
~.2.8
-t
Se
1.0
=
=
Yrv
-1-1
-1
Yu r =
Yu =
Yt
Xl:
X2:
Example 5.2-1
Example 5.3-1
+ Xi2 +
007)
TABLE E5.3-1a
Model
Model
Model
Model
Model
163
Degrees
of Freedom
Sum of Squares
of the Residuals
(v)
(cPmin)
Estimated
Mean Square
13
14
14
15
10.07
12.48
11.23
77.28
0.775
0.89
0.80
5.15
Estimated Multiple
Correlation Coefficient
(p~)
0.87
0.84
0.85
164
1.0
0.08
0.07
0.5
0.06~
X2
~~/
/'
0.056 --...,IC---7'-----.~~_7fIL.-_..:...---
0.0
0.06
/
-0.5
0.07
0.08
7/
~/
~/,~
/
-1.0
-1.0
-0.5
0.0
0.5
1.0
Example 5.3-2
Xl
(b)
1.0
%
,~
\.~
0.5
~fp
~
\:)~
X2
0.0
~\:).
'?>~
~'
/'
f9
/
~~
/
-0.5
~'P
/'%
':\,~
-1.0
-0.5
(c)
0.5
0.0
~.
1.0
Xl
TABLE E5.3-2a
ANALYSIS
OF
VARIANCE
Degrees of Freedom
Source of Variation
Analysis of Variance
-1.0
(v)
Sum of Squares
(SS)
Mean Square.
Variance
Ratio
Due to regression
~ Wt(Yi
Y)2
k.....I
1.031
10- 2
2.062
3.383
10- 3
6.767 x 10- 4
10- 3
L Wt(Yi
Total
Yi ) 2
A
10
1.370 x 10- 2
3.05
165
"E5.3-2b
Source of Variation
Degrees of
Freedom (v)
SS (lOS)
Mean
Square
Variance
Ratio
2.246
2.246
3.319
5.615
5.615
8.297
6.051
6.051
8.941
4.637
6.852
SS (10 4 )
Mean
Square
(10 4 )
Variance
Ratio
1.32
1.32
0.196
0.026
0.026
0.003
0.036
0.036
0.005
2.83
2.83
0.418
removed
removed after
removing Xs
Xs removed after removing
Xs and X4
Xs
X4
X4,
Source of Variation
13.91
Degrees of
Freedom (v)
removed
removed after
removing Xs
X3 removed after
removing X4 and Xs
X2 removed after removing
xs, X4, and Xs
Xl removed after removing
X2, X3, X4, and Xs
Intercept removed after
removing all the x's
Xs
X4
98.6
98.6
787
ℰ{εt} = 0,  ℰ{xt1εt} = ℰ{xt2εt} = ··· = ℰ{xtqεt} = 0   (5.4-1)

Here, as in Section 4.6, the subscript t designates a sequence of sets of data in time, t = 1, 2, ..., n, from a stationary process.

To determine whether or not the estimation procedures of this section are required, a test for serial correlation should be executed, such as the Durbin-Watson test described in Section 4.6. If the test shows little or no serial correlation, the usual least squares procedure should be satisfactory. In matrix notation, the residuals are E = (Y − Ŷ).

If serial correlation is established (with values of Ŷ being calculated by the best regression equation), we can extend the technique described for the model of Section 4.6, which contained a single independent variable, to the case of several independent variables. The estimates of the β's in Model 5.1-2 can be written as follows in a different form than in Section 5.1:

(5.4-2)
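The Durbin-Watson statistic mentioned above is simple to compute from the fitted residuals. A minimal sketch (ours; the residual values are hypothetical):

```python
import numpy as np

def durbin_watson(residuals):
    """d = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2.  d near 2 suggests little
    serial correlation; d well below 2 suggests positive serial correlation;
    d well above 2 suggests negative serial correlation."""
    e = np.asarray(residuals, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

# Hypothetical residuals from a fitted regression, in time order
e = [0.31, 0.45, 0.28, -0.05, -0.22, -0.30, -0.11, 0.08]
d = durbin_watson(e)
```

The computed d is then compared with the tabulated Durbin-Watson bounds for the given n and number of regressors before the serial-correlation procedures of this section are invoked.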
14.6
787
116
(5.4-3)
q
(b~
f3~)
= -
(b; - f3k)X k
(5.4-4)
k=l
where
n
-,
2:
t= 1
Et
E=-
166
Covar {bn b}
s
'"
L L
i-I
-
j-l
~ri ~SjCij
(5.4-5)
- ~2
where.
Cij
gij(O)
glj(k)
gij( - k)
gji(k)
t=1
E t = Yt
bo
b 1 X tl -
. . . -"bqXtq
= Yt
Yt
Y 2i
= 'YJ21,
+ Eli
+ E2i
Yvi
= 'YJvi
+ EVi
Yli = 'YJli
Y1
Y2 =
If{ Y 1 I x}
= 11(3, x)
'YJ2
If{ Y 2 I x}
= 12({3, x)
'YJv
If{ F,
(5.5-1)
I x, (3, arr)
k" exp
(5.5-2)
_ [Yr1 ~ 'l)r1]
Er -
Yrn
TJrn
p(Yl> ... , s;
I x,~, I') =
k* exp {
--Her ..
~]r-1 [::J}
(5.5-3)
I x} = h({3, x)
i = 1,2, ... , n
Outputs
FIGURE
k* exp {
-4 ~ ~ ars[~
(Yr i
'l)rt)( Y
s! -
'l)Si)]}
(5.5-4)
167
L
Srs
'"
= ars
=
(Yrj -
j=1
Y r)( Y Sj -
Y s)
p~
~-----1----
p-
</;1
2: 2:
i=1
Wi( r,
7Jri)2
(5.5-5)
ET~~V]
EfE v
EvE v
~=
[
</;2 =
2: flt
(5.5-7)
r=l
r=1
where
= 1
(5.5-6)
a;
st
tPmin/(n - m)
S2
e
t G.
168
2:
YlkXk
Y2
2:
Y2k X k
where
Yn + I
Lln + l = Ua(l
+ fJa2Y2 + r,
2:
YakXk
+ fJl2Y2 + ... +
1'; =
2:
Y,kXk
l ),
-+ !)%(x~-~ + x~)%
n
(5.6-3)
Xn-q
k=1
fJllYl
Y+
k=1
fJalYl
(n
k=1
fJ2lYl
(5.6-2)
r,
(5.6-1)
k=1
(1 + -n1)(n -n -q -22)
a2
~n + 1
a2
2, the variance
a~n+ 1
is infinite.
Supplementary References

General

Fisher, R., Statistical Methods and Scientific Inference, Oliver and Boyd, London, 1956.
Guest, P. G., Numerical Methods of Curve Fitting, Cambridge Univ. Press, Cambridge, England, 1961.
Mood, A. and Graybill, F., Introduction to the Theory of Statistics, McGraw-Hill, New York, 1963.
Plackett, R., Principles of Regression Analysis, Oxford Univ. Press, Oxford, England, 1960.
Smillie, K. W., An Introduction to Regression and Correlation, Academic Press, New York, 1966.
Williams, E. J., Regression Analysis, John Wiley, New York, 1959.
169
170
~TM~
= a2I.
aYt
[1
Ii + L(xo(x n
X)2 ]
X)2
Problems
5.1 The data in Table P5.1 were collected from different wells. Can you apply the least squares methods of Section 5.1 to the data to estimate the parameters in a linear model? To estimate confidence limits of the parameters?
TABLE
P5.1
Well
Number
Depth
(ft)
Specific
Gravity
Dissolved
Solids
(ppm)
E-71
E-73
1-2
1-22
1-24
1-29
1-46
225
220
203
150
136
140
210
43.1
55.5
34.8
78.8
77.4
20.6
31.5
2320
2320
2660
3060
4460
2160
2540
Total
Hardness
as CaCO s
(ppm)
1030
1140
1280
1140
1640
673
868
Y
Y 1'
Y2
Ys
Y4
r,
5.2 Obtain the least squares estimates of β1 and β2 in the model Y = β1 + e^{−β2x} + ε. Point out some of the difficulties. Let the weights be unity.
5.3 Given the model
Y
L (Y
i -
Yt)2
(Y - Xb)T(Y - xb)
+ E; n =
Ya
Y7
r,
Xl
X2
-1
1
-1
1
1
1
-1
1
Xs
-1
-2
-1
-1
0
0
-2
-2
-1
1
1
1.
-1
= 0.2193(Yl + Y 2 +
- 0.0288( - Yl + Y2
+ ... + Y a)
Ys + .. +
-
- 0.1042( -' Yl
+ ... + Y a)
Ys
Y2
Ya)
+ 0.0814(-2Yl
TABLE P5..9
bo
YI
Y2
Y3
Y4
Y5
r,
Y7
Ya
b2
bl
-0.1275
0.1514
-0.1633
0.1156
0.0831 .
0.0798
--0.2090
0.0699
0.1895
0.2133
0.0853
0.1091
-0.0765
0.0049
0.2253
0.2491
b3
-0.1792
-0.3192
0.0684
-0.0716
0.2444
0.1760
0.1108
-0.0292
-0.0163
0.0618
-0.0847
-0.0066
-0.1597
-0.0750
0.1010
0.1791
T - 190
x
5.10
10
~, in
Covar {bjbk } =
171
a~, (~ aija
fk )
Yield (percent)
200
210
220
230
240
6
7
8
11
18
5.11
TABLE P5.11
Conversion of
n-heptane to
Acetylene (percent)
Reactor
Temperature
(OC)
Ratio of H 2
to n-heptane
(mole ratio)
Contact Time
(sec)
Xl
X2
X3
49.0
50.2
50.5
48.5
47.5
44.5
28.0
31.5
34.5
35.0
38.0
38.5
15.0
17.0
20.5
29.5
1300
1300
1300
1300
1300
1300
1200
1200
1200
1200
1200
1200
1100
1100
1100
1100
7.5
9.0
11.0
13.5
17.0
23.0
5.3
7.5
11.0
13.5
17.0
23.0
5.3
7.5
11.0
17.0
0.012
0.012
0.0115
0.013
0.0135
0.012
0.040
0.038
0.032
0.026
0.034
0.041
0.084
0.098
0.092
0.086
172
C+
f3~(N
- N) +
f3~(A - A)
Area
($)
(A)
Number of
Tubes (N)
310
300
275
250
220
200
190
150
140
100
120
. 130
108
110
84
90
80
55
64
50
550
600
520
420
400
300
230
120
190
100
T(OC)
c., Yo
900
850
800
750
700
650
600
0
1.0
2.0
3.0
4.0
5.0
6.0
C"
Yo
0
6.25
13.00
20.25
28.00
36.25
45.00 .
Pl
P2
Are
(a)
6.p = 21pLv
where
Range
Notation
ttL = liquid viscosity, poise
1.6-20.8
0.041-0.125
. diirnension
. Iess
= re fl ux rauo,
0.83-70
h = height, in.
G = gas velocity based on column
0.25-5
crossection, Ib/(hr)(ft 2 )
where
Nu = Nusselt number for heat transfer
Pr = Prandtl number
Re = Reynolds number
j.t
= viscosity of fluids
a and w
0,0134]
100-2000
CLUVJ
70-609
(p~~J
27.7-520
Y = /31X
+ f32x2 +
[
0.0134 0.0179 0.0126
What are the confidence limits on the estimated
coefficients?
By a theoretical analysis of heat transfer, the
expression for the heat transfer coefficient in the
form of a dimensionless group is
Nu = 0.402 Re% Pr%
e:t
14 2
d.f.
SS
Due to regression
Departure from origin
Deviation about regression line
2
1
33
99,354
103
863
36
--
100,390
Model II
d.f.
Due to regression
Deviation about regression line
2
33
21,621
863
(ttLaVa )
0.044(
--22,484
-35
pLD L
Total
. - -
Model I
Total
0.1749 0.2627 0.0179
173
y=
6.4
5.6
6.0
7.5
6.5
8.3
7.7
11.7
10.3
17.6
x=
Xl
1
1
1
2
2
Xo
1
1
1
1
1
1
1
1
1
3
3
18.0
18.4
174
moving parts.
Preliminary experiments were done with a small
[(P, p, Jk, D)
'* (ML - I T-
2 )G
(M
d-l
2
d-3
, 2
a = --,
b = --,
c = 2 - d
or
which give
Q =
(P p3)O.5
Jk
Qp _
(Pp) o.5) a-1
- KI - (c)
Jk
Jk
The coefficients K1 and d in Equation (c) were estimated from the data in Table P5.21. K1 proved to be 3970 and d proved to be 1.904. A straight line was obtained on a log-log plot which matched the data well. Comment on this experiment and the subsequent statistical analysis. Would the model be an improvement over Q = K2 P^m?
5.22 If the two variables Xl and X 2 are distributed by a
bivariate normal distribution with means Jkl and Jk2
and the variance-covariance matrix
L - 3)b(ML -IT-I)CLd
Jk
(poise)
P
(psig)
Q
(ml/min)
(Pp)O.5
1.199
0.0846
1.164
0.0498
1.122
0.0288
1.000
0.0127
0.989 .
0.0054
2
4
6
8
10
11
4
6
8
10
4
6
8
10
11
4
6
8
10
12
4
6
8
10
12
3550
5030
6150
7110
7910
8230
5070
6130
6940
7680
5060
6060
6970
7620
8040
5000
6060
6760
7470
8180
4540
5480
6240
6850
7540
18.4
25.9
31.8
36.6
41.0
42.9
43.4
53.2
61.2
68.5
73.5
90.1
104
116
122
157
193
223
249 273
369
451
520
583
638
Jk
Qp
Jk
10 - 4
5.03
7.13
8.72
10.1
11.2
11.7
11.9
14.3
16.2
18.0
19.7
23.6
27.2
29.7
31.3
39.4
47.7
53.2
58.8
64.4
83.1
100
114
125
138
and
a12
a21.
10
11
12
13
14
15
Adhesiveness, Y
21
14
74,..-.J"ooor-------,.-------r------,
16
10
14
14
26
40
41
59
74
91
105
175
73
p
c:
Cl)
3-M
(36)
72
Cl)
Cl)
(37)
.e
71
en
6-0
""C
Regression, Y on x
co
00
70
c:
Regression coefficient,
b = 9.5
o
co
:.p
c:
"Eco
69
Cl)
given.
ro
~ 68
67
4-TV
(125)
I -M
(47)
2-D
66
(38)
2.4
2.5
2.6
CHAPTER 6
Nonlinear Models
where
X=
XII
X12
X1q
X21
X22
X2q
X n1
X n2
X nq
(3=
132
13m
1,2, ... , n
(6.1-3)
6.1
131
61
(6.1-4)
Y, x)
INTRODUCTION
Y =
(6.1-1)
1)(Xb ... , X q ;
+e
(6.1-5)
(3)
(6.1-2)
176
= Yl(Xb
... ,
X q ; (31'
... ,
13m)
+E
(6.1-6)
Minimize 4> =
L wi[i -
1Ji(X;, ~)]2
-. . . . . ~ ---..!<Y-Y)
~2
~
~/
Y2
cP
- ~~ - : I
Contours of
constant
(6.2-1)
i= 1
{j2
177
Surface of
esti mates-of 11
",-Y
{jl
FIGURE 6.2-1  Geometric interpretation of least squares for a nonlinear model: (a) observation space (three observations), and (b) parameter space (two parameters).
1. Derivative-free methods:
(a) Simplex method.
(b) Direct search method.
2. Derivative methods:
(a) Gauss-Seidel.
(b) Gradient methods.
(c) Marquardt's method.
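As an illustration of the derivative-free class, a bare-bones axis-by-axis exploratory search with step contraction (in the spirit of the direct search method, though not the book's exact algorithm) can minimize φ for the model η = β1x/(β2 + x). The data below are hypothetical, generated exactly from β = (2, 3); NumPy is assumed:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0])
Y = 2.0 * x / (3.0 + x)          # exact data from beta = (2, 3)

def phi(b):
    """Sum of squares of deviations for trial parameter vector b."""
    return float(np.sum((Y - b[0] * x / (b[1] + x)) ** 2))

def direct_search(phi, b0, step=0.5, tol=1e-8, max_sweeps=10000):
    """Exploratory moves along each axis; contract the step when stuck."""
    b = np.array(b0, dtype=float)
    f = phi(b)
    for _ in range(max_sweeps):
        improved = False
        for j in range(len(b)):
            for delta in (step, -step):
                trial = b.copy()
                trial[j] += delta
                ft = phi(trial)
                if ft < f:
                    b, f, improved = trial, ft, True
                    break
        if not improved:
            step *= 0.5          # no exploratory move helped: contract
            if step < tol:
                break
    return b, f

b_hat, phi_min = direct_search(phi, [1.0, 1.0])
```

No derivatives are evaluated anywhere, which is the defining property of this class; the cost is many more function evaluations than the derivative methods require.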
(6.2-2)
or
X
+ 132
(6.2-3)
<P =
(Yi
TJi)2
i=1
n
- L
i=1
2 R ~ Xi ~
y2
i
fJ1 ~ X.
i=1 J
b ~
1
?J=1 (Xi
(6.2-5)
Xf
~ XiYi - 0 (6.2-6)
+ h2)3 - i=1
~ (Xi + b 2)2
f31 X
+ 132
f31
y= -
b1 ~
x;
_ ~ XiYj = 0
~ (Xi + h2)2
L
Xi + b2
i=1
i=1
Xf
(6.2-4)
1961.
u,
and choose Ij
Example 6.2-1
or

log η = log α + β1 log x1 + β2 log x2
1'79
(a)
(yt _
n ...
Minimize.2
_ _YJ_t)2
t=l
(b)
1]t
Minimize Σ_{t=1}^{n} (log Yt − log ηt)²   (c)
YSimulated
Data
0.48000
0.61094
0.12626
X
X
102
103
104
Y Calculated from
X
X
10- 8
10- 7
10- 6
Percent
Deviation
-0.1857
- 0.3113
-0.3623
X
X
1013
lot3
lot2
TABLE
E6.2-1
= axf 1 xg 2
Direct Search-Initial
Guesses I:
a = 0.05
hl = 4.0
h2 = 0.7
a
hl
h2
Minimum of Equ ation (a)
Number of exploratory
searches
hI
h2
Minimum of Equation (b)
Number of exploratory
searches
a
hI
h2
Minimum of Equation (c)
Number of exploratory
searches
hI
h2
0.1%
1O%
1%
0.988
3.004
0.502
2.15
0.886
3.047
0.521
214.3
370
310
0.996
3.001
0.501
7.7 X 10- 6
150
0.930
3.026
0.518
9.4 X 10- 4
167
0.996
3.001
0.501
7.7 X 10- 6
170
0.931
3.025
0.517
9.4 X 10- 4
157
0.996
3.001
0.501
0.931
3.026
0.618
.273
3.516
0.714
2.1 x 105
161
0.628
3.159
0.629
7.2 X 10- 2
215
0.651
3.142
0.617
7.7 X 10- 2
120
0.652
3.143
0.618
5.543
2.229
0.397
7.8 x 105
489
19.704
1.701
0.244
1.99
231
0.699
3.115
0.610
4.4 X 10- 2
105
0.695
3.118
0.612
Direct Search-Initial
Guesses II :
a.= 5.0
b,
h2
=
=
5.0
5.0
hI
h2
Minimum of Equation (a)
Number of exploratory
searches
hI
h2
Minimum of Equ at ion (b)
Number of explo ratory
searches
hI
h2
Minimum of Equation (c)
Number of exploratory
searches
None
0.998
3.000
0.500
3.5 x 10- 2
149
1.001
2.999
0.499
4.1 xlO - 6
105
0.999
3.000
0.500
9.2 x 10- 8
125
0.1%
0.973
3.011
0.503
4.42
123
0.997
3.000
0.500
7.8 x 10- 6
70
0.995
3.001
0.501
7.7 X 10- 6
127
1 %
0.889
3.046
0.521
214
221
0.930
3.026
0.518
9.4 X 10- 4
127
0.931
3.026
0.517
9.5 X 10- 4
96
10'70
0.276
3.513
0.712
2.1 x 105
183
0.625
3.162
0.630
7.2 X 10- 2
114
0.651
3.143
0.618
7.7 X 10- 2
100
Var { Y} = 10 Var{Y}=loo
1.231
2.908
0.485
761
169
0.717
3.106
0.607
4.5 X 10- 2
80
0.694
3.1'18
0.612
4.4 X 10- 2
109
5.640
2.222
0.394
7.78 x 105
73
19.55
1.704
0.247
1.99
94
181
FIGURE  Convergence of the direct search for criterion (a), starting from the initial guesses a(0) = 0.050, b1(0) = 4.000, and b2(0) = 0.700.
FIGURE 6.2-2  Regular simplexes for two and three independent parameters. A new vertex is projected in the direction of greatest improvement: (a) two-variable simplex, (b) three-variable simplex.
t J.
- - --Indicates projection
(a)
(b)
6.2-3 Sequence of simplexes obtained in minimizing the sum of the squares of the devia
tions: (a) regular simplex, and (b) variable size simplex.
FIGURE
(6.2-7)
6.2-1
Coordinates
Vertex
b1 ,t
b2 ,t
bm- 1,t
bm,t
71'1
71'
71'
71'
71'
71'1
71'
71'
71'
71'
71'1
71'
+1
71'
71'
71'
71'1
where
71'1
=~
mV2
183
b** = Ycbu
+ (1
- Yc)c
(6.2-10)
[vm + 1 + n - 1]
= ~ [v m + 1 - 1]
mV2
a = length of the path between two vertices
'IT
b1 , i
b2 , i
0
0.965
0.259
0
0.259
0.965
2
3
We shall let
Example 6.2-2
'/J
is
b* _= (I
+ 'Yr)c
- Yrbu
(6.2-8)
(6.2-9)
where
{3 = f + (1 - D
Q = predicted excess channel flow rate above the normal
channel flow rate, the dependent variable
Q* = input flow rate, a known value
t = time, an independent variable
FIGURE  Logic diagram for the flexible simplex search: when φ* > φu, a contraction b** = γc bu + (1 − γc)c is calculated; otherwise bu is replaced by b*, and the cycle repeats until the stopping criterion is met.
FIGURE  Reduction in φ with iteration number; φ(0) = 9.55 × 10⁵.
FIGURE  Observed data and predicted dimensionless flow rate versus time, with fitted parameters m = 6.65, n = 14.67, f = 0.719, and 0.440.
TABLE E6.2-3a
Stage
Number *
A
B
C
D
E
5
A
B
C
D
10 A
B
C
D
E
20
A
B
C
D
50 A
B
C
D
184 A
B
C
D
* A, B,
2.443
0.470
2.460
2.768
0.107
2.035
1.077
1.131
1.152
2.000
1.077~
4.035
1.119
1.143
4.000
0.898
0.898
0.155
0.730
0.040
1.013
1.013
1.080
1.979
2.000
O~675
0.471
0.496
0.977
0.489
1.888
1.077
2.410
1.969
1.814
3.895
4.035
5.423
3.973
4.460
0.577
0.898
0.895
0.266
0.652
1.203
1.013
1.220
0.574
0.613
0.480
0.470
0.432
0.470
0.413
1.555
1.077
1.875
1.489
1.675
5.53
4.403
4.924
4.630
4.647
0.970
0.898
0.832
0.751
0.731
0.966
1.013
1.027
0.466
0.853
0.241
0.250
0.225
0.249
0.314
2~354
2.202
2.546
2.393
1.953
6.467
6.177
6.625
6.243
5.972
0.867
0.579
0.836
0.778
0.916
0.366
0.385
0.359
0.489
0.391
0.143
0.169
0.147
0.167
0.126
3.731
3.470
3.670
3.228
4.167
8.587
8.047
8.481
7.822
9.458
0.727
0.730
0.729 '
0.788
0.713
0.472
0.439
0.472
0.384
0.500
0.0219
0.0219
0.0219
0.0219
0.0219
6.651
6.653
6.653
6.651
6.651
14.672
14.676
14.675
14.672
14.671
0.719
0.719
0.719
0.719
0.719
0.440
0.440
0.440
0.440
0.440
Exploratory
Search
Number
o
5
10
20
50
1.067
0.434
0.048
0.026
0.026
ex
2.000
4.400
3.200
3.136
3.133
4.000
7.600
36.40
23.83
23.92
0.040
0.028
0.280
0.399
0.398
2.000
0.800
0.800
0.716
0.717
δb_j^(0), where δ represents a small perturbation. However, it may not always be possible to compute the derivative numerically with the required accuracy. If the regression curve for the model is flat, the quantity in the denominator grows small and the relative error in the approximate derivative can increase drastically. (The same feature is true in the next section in connection with the numerical approximation of the derivatives of φ. As φ approaches its minimum, the relative errors in the numerically computed derivatives become larger. Consequently, the search for the minimum of φ can oscillate and/or become very inefficient.)
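The loss of accuracy described here is easy to demonstrate numerically. The sketch below uses an illustrative quadratic φ (not one from the text) and compares the relative error of a forward-difference derivative far from and near the minimum:

```python
def forward_diff(f, x, h):
    # one-sided finite-difference approximation of f'(x)
    return (f(x + h) - f(x)) / h

# phi is an illustrative quadratic (not from the text); it is flat near
# its minimum at b = 2, so the relative error of the numerical
# derivative grows there, exactly as the passage warns
phi = lambda b: (b - 2.0) ** 2
dphi = lambda b: 2.0 * (b - 2.0)   # exact derivative, for comparison

for b in (0.0, 1.9, 1.999):
    approx = forward_diff(phi, b, 1e-6)
    rel_err = abs(approx - dphi(b)) / abs(dphi(b))
    print(b, rel_err)   # relative error grows as b approaches 2
```

For this φ the relative error of the forward difference is roughly h/|2(b − 2)|, so it blows up as the derivative in the denominator shrinks.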
The objective function is

φ = Σ_{i=1}^n w_i (Y_i − η_i)²    (6.2-11)

where η_i refers to the model with the vector for the ith data set, x_i, introduced; by minimizing φ we can find an improved estimate of β. (If η were truly a linear function, only one step would be needed to reach the minimum of φ.) We expand η as follows:

η = η₀ + (∂η/∂β₁)₀ (β₁ − b₁^(0)) + ⋯ + (∂η/∂β_m)₀ (β_m − b_m^(0))    (6.2-12)

After the linear approximation for η, Equation 6.2-12, is introduced into Equation 6.2-11, the partial derivatives of φ with respect to each of the Δb_j^(0) can be equated to zero, as explained in Section 5.1:

∂φ/∂(Δb_j^(0)) = ∂/∂(Δb_j^(0)) Σ_{i=1}^n w_i [Y_i − (η_i)₀ − Σ_{j=1}^m (∂η_i/∂β_j)₀ Δb_j^(0)]² = 0    (6.2-13)

This yields the set of normal equations, one for each parameter; for β₁,

Δb₁^(0) Σ_{i=1}^n w_i (∂η_i/∂β₁)₀² + Δb₂^(0) Σ_{i=1}^n w_i (∂η_i/∂β₂)₀(∂η_i/∂β₁)₀ + ⋯ + Δb_m^(0) Σ_{i=1}^n w_i (∂η_i/∂β_m)₀(∂η_i/∂β₁)₀ = Σ_{i=1}^n w_i E_i^(0) (∂η_i/∂β₁)₀    (6.2-14)

and similarly for β₂, …, β_m, where E_i^(0) = Y_i − (η_i)₀. Define the n × m matrix of derivatives

[X_ij] = ∂η_i(x_i; b)/∂β_j,  i = 1, 2, …, n;  j = 1, 2, …, m

so that X^(0) is the n × m matrix with elements (∂η_i/∂β_j)₀. In matrix notation the normal equations become, with A^(0) = (XᵀwX)^(0),

(XᵀwX)^(0) Δb^(0) = (XᵀwE)^(0)    (6.2-16)

Then the new estimate of the parameter vector is

b_j^(n+1) = b_j^(n) + Δb_j^(n)    (6.2-17)
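Equations 6.2-13 through 6.2-17 amount to repeatedly solving the linear normal equations for the correction Δb and updating the estimate. A minimal NumPy sketch of this linearization (Gauss) iteration, using an illustrative exponential model rather than one from the text:

```python
import numpy as np

def gauss_newton(eta, jac, x, Y, b0, w=None, cycles=50):
    """Linearization least squares: solve (X^T W X) db = X^T W E per cycle
    (Equations 6.2-16 and 6.2-17)."""
    b = np.asarray(b0, dtype=float)
    W = np.diag(np.ones(len(Y)) if w is None else np.asarray(w, dtype=float))
    for _ in range(cycles):
        E = Y - eta(x, b)            # residual vector E^(n)
        X = jac(x, b)                # n x m matrix of d(eta_i)/d(beta_j)
        db = np.linalg.solve(X.T @ W @ X, X.T @ W @ E)
        b = b + db                   # Equation 6.2-17
        if np.max(np.abs(db / np.where(b == 0.0, 1.0, b))) < 1e-8:
            break
    return b

# illustrative model (an assumption, not from the text): eta = b1 * exp(b2 * x)
eta = lambda x, b: b[0] * np.exp(b[1] * x)
jac = lambda x, b: np.column_stack([np.exp(b[1] * x),
                                    b[0] * x * np.exp(b[1] * x)])
x = np.linspace(0.0, 1.0, 8)
Y = 2.0 * np.exp(-0.5 * x)           # noise-free data generated with (2, -0.5)
b = gauss_newton(eta, jac, x, Y, [1.0, 0.0])
```

With exact data and a mild model the iteration recovers the generating parameters; with a poor initial guess it can oscillate or diverge, which is exactly the behavior Figure 6.2-5 illustrates.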
FIGURE 6.2-5 Effect of the initial guess for β on: (1) convergence to a local minimum in the sum of the squares of the deviations (solid line), and (2) oscillation in φ in the search sequence (dashed line) and subsequent divergence. [Sketch residue, including the global-minimum annotation, removed.]
TABLE E6.2-4a SIMULATED DATA

x₁     x₂     Y (Exact)   Y (Simulated)
0.0    0.0    3.00        2.93
0.0    1.0    1.82        1.95
0.0    2.0    1.10        0.81
0.0    3.0    0.67        0.58
1.0    0.0    6.00        5.90
1.0    1.0    4.82        4.74
1.0    2.0    4.10        4.18
1.0    2.0    4.10        4.05
2.0    0.0    9.00        9.03
2.0    1.0    7.82        7.85
2.0    2.0    7.10        7.22
2.5    2.0    8.60        8.50
2.9    1.8    9.92        9.81
The model, in which β₂ and β₃ both enter nonlinearly, is

η = β₁x₁ + β₂ e^{β₃x₂}

with the partial derivatives

∂η/∂β₁ = x₁
∂η/∂β₂ = e^{β₃x₂}
∂η/∂β₃ = β₂x₂ e^{β₃x₂}

and the parameters are estimated by minimizing

φ = Σ_{i=1}^n [Y_i − η_i(b)]²

with the termination criterion

|b_j^(n) − b_j^(n−1)| / b_j^(n−1) < 10⁻⁶

Initial estimates were obtained graphically. A plot of Y versus x₁ at fixed x₂ (Figure E6.2-4a) gives slopes of about 3.1, 2.9, and 3.2, so that an average value of 3.1 can be used for b₁^(0). Note that if the x₂ values had not been replicated, the above procedure would require considerable interpolation among the data points. To get b₂^(0) and b₃^(0), we can form

ln (η − 3.1x₁) = ln β₂ + β₃x₂

and read the intercept and slope from Figure E6.2-4b, giving b₂^(0) = 2.9 and b₃^(0) = −0.52.

Representative initial derivative evaluations are, at x₁₁ = 0.0 and x₂₁ = 0.0,

(∂η₁/∂β₁)₀ = x₁₁ = 0.0
(∂η₁/∂β₂)₀ = e^{−0.52(0)} = 1
(∂η₁/∂β₃)₀ = (2.9)(0)e^{−0.52(0)} = 0

and, at the second data set with x₂ = 1,

(∂η₂/∂β₂)₀ = e^{−0.52(1)} = 0.594

The residual vector has elements

E₁^(0) = Y₁ − (η₁)₀,  E₂^(0) = Y₂ − (η₂)₀,  etc.

from which the matrix (X^(0)ᵀE^(0)) can be computed. Then B^(0) can be calculated from Equation 6.2-16, and the vector b^(1) can be computed from Equation 6.2-17.

Table E6.2-4b lists the progress of the Gauss-Seidel method by cycles (only four significant figures are shown in the table).

TABLE E6.2-4b

Cycle Number           b₁^(n)   b₂^(n)   b₃^(n)    φ^(n)
0 (initial guesses)    3.1      2.9      −0.52     0.1981
1                      3.017    2.958    −0.5222   0.1573
2                      3.017    2.958    −0.5220   0.1574
3                      3.017    2.958    −0.5220   0.1574

[FIGURES E6.2-4a and E6.2-4b: graphical determination of the initial guesses (slope ≈ b₁^(0); intercept and slope of ln(η − 3.1x₁) versus x₂); axis residue removed.]

Note that with the initial guesses close to the final "estimates" of β and with a well-behaved objective function, only a very few cycles are needed to meet the termination criterion.
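Assuming the model of this example, η = β₁x₁ + β₂e^{β₃x₂}, the estimation can be reproduced from the simulated data of Table E6.2-4a. A NumPy sketch (not the book's program):

```python
import numpy as np

# Table E6.2-4a simulated data for eta = b1*x1 + b2*exp(b3*x2)
x1 = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2.5, 2.9], dtype=float)
x2 = np.array([0, 1, 2, 3, 0, 1, 2, 2, 0, 1, 2, 2, 1.8], dtype=float)
Y = np.array([2.93, 1.95, 0.81, 0.58, 5.90, 4.74, 4.18, 4.05,
              9.03, 7.85, 7.22, 8.50, 9.81])

b = np.array([3.1, 2.9, -0.52])      # graphical initial guesses from the example
for _ in range(10):
    e3 = np.exp(b[2] * x2)
    eta = b[0] * x1 + b[1] * e3
    # columns are d(eta)/d(beta_j): x1, exp(b3*x2), b2*x2*exp(b3*x2)
    X = np.column_stack([x1, e3, b[1] * x2 * e3])
    db, *_ = np.linalg.lstsq(X, Y - eta, rcond=None)
    b = b + db
phi = float(np.sum((Y - (b[0] * x1 + b[1] * np.exp(b[2] * x2))) ** 2))
```

Starting from the graphical guesses (3.1, 2.9, −0.52), the iteration settles near b = (3.017, 2.958, −0.522) with φ ≈ 0.157, in agreement with Table E6.2-4b.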
Example 6.2-5

The model is the sum of three exponential terms,

η = β₁ e^{−β₂x} + β₃ e^{−β₄x} + β₅ e^{−β₆x}

[TABLE 6.2-5 and accompanying FIGURE: cycle-by-cycle parameter estimates b₁^(n) through b₆^(n) and φ for starting vector A (cycles 0, 1, 2, 5, 10) and starting vector B (cycles 0, 1, 2, 4, 9), and rows for the unmodified method (A0†, A2, B1 at cycle numbers 10, 9, and 10, with φ values 3.574, 402.7, and 1012). Column alignment of the remaining entries lost in extraction.
* σ = (φ^(n)/ν)^{1/2}(?) [footnote garbled].
† Values at cycle number n with starting vector A0, etc., as designated in the upper portion of the table.]
191
{jl.
4><n + 1) - 4><n)
4
4><n)
< 10-
~ (4))0 +
6(:t)o(f3j - b~O)
m
-V4>lb(O)
-(~!)
UfJ1
5/h .
(84))
5/3
8fJ2
0
6.2~3).
- ... 2
The magni
(~)
813m
5/3
m
J. 7, 149, 1964.
1963.
vector in
-V4>
II-V4>1I
04>
04>
-~5/31-8i35/32-'"
(6.2-18)
8{32
= 25/31 - 5/32
-2
- Vz
11- Vzll
vS 5/31 + vS 5/32
Marquardt's method interpolates between the linearization method and steepest descent by solving

(A^(n) + λ^(n) I) Δb^(n) = g^(n)    (6.2-19)

where A^(n) = (XᵀwX)^(n) and the elements of g^(n) are

g_j^(n) = Σ_{i=1}^n w_i [Y_i − η_i(b^(n))] (∂η_i(b^(n))/∂β_j)

Because A can be badly scaled, the equations are solved in scaled form,

(A* + λ I) Δb* = g*    (6.2-20)

where the (*) designates the scaled element and the scale factor is ξ_j = (A_jj)^{−1/2}. The scaled elements of Δb* are converted back to the elements of Δb by

Δb_j = Δb_j* ξ_j

[Figure: contours φ = 10, 8, 6 about φ_min, illustrating Marquardt's procedure from point A; residue removed.]
Example 6.2-6 Nonlinear Estimation by Marquardt's Method
The approximate individual confidence limits for the parameters were:

Parameter   Lower   Upper
m           2.65    3.61
n           16.1    31.7
f           0.29    0.49
α           0.64    0.78

TABLE E6.2-6

Cycle Number   φ        b₁     b₂     b₃      b₄      λ       (unlabeled)
0              1.284    6.00   25.0   0.500   1.000   10⁻²    27.3
1              0.831    1.00   19.2   0.432   0.760   10⁻³    29.9
5              0.0284   3.22   25.1   0.371   0.722   10⁻⁷    52.3
10             0.0267   3.13   23.9   0.398   0.717   10⁻⁸    50.5
15             0.0267   3.13   23.9   0.398   0.717   10⁻⁸    50.9

[Parameter-column headings and the label of the last column are lost in extraction.]
[An illustration of scaling with a function of the form φ = 100β₁² + 2β₁β₂ + 0.010β₂²(?) + 10 and substitutions such as β₂ = 10²β̃₂ is garbled in extraction.]

The individual estimates of β₁ and β₂ can range over any series of values for a given estimate of the product β₁β₂. Thus, once a parameter has been assigned a given value, the other parameter will compensate to make the product satisfactory, even though both estimates are badly biased. Scaling is more difficult if interaction exists. Quadratic functions, as explained in Appendix Section B.5, can be transformed to canonical form so that the interaction term is removed. New coordinate axes are defined, as shown in Figure 8.2-2 by the dashed lines, about which the quadratic surface is symmetric. For example, a surface of the form

φ = 7β₁² + 6β₂² + 6β₁β₂(?) − 6β₁ − 24β₂ + 18

can be transformed to the canonical form

φ − φ_s = 3β̃₁² + 9β̃₂²(?)

[coefficients partly garbled in extraction].

As an illustration of near-singularity, consider the rate model

r = k e^{−E/T}    (a)

for which

∂r/∂k = e^{−E/T} = r/k
∂r/∂E = −(k/T) e^{−E/T} = −r/T

Because r_i², (r_i/T_i)², and (r_i²/T_i) are all positive for any range of r_i and T_i, det(A) can be quite small if T takes on only a small range of values, and the matrix A becomes singular as the values of the two terms in the brackets approach each other. On the other hand, a transformation of variable can be carried out so that

T* = (T − T̄)/T = 1 − T̄/T    (b)
[Figure: percent deviation(?) versus temperature for the model r = k e^{−E/T} and for the reparameterized model r = k* e^{−E*[(1/T) − (1/T̄)]}; axis residue removed.]

In terms of the transformed variable the model becomes

r = (k̄ e^{Ē}) e^{−ĒT̄/T}    (c)

with k = k̄ e^{Ē} and E = Ē T̄, so that

∂r/∂Ē = r T*

and the determinant of A,

det A = (1/σ⁴)[Σ r_i² Σ (r_i T_i*)² − (Σ r_i² T_i*)²] ≠ 0    (d), (e)

remains well conditioned. In the numerical example the model parameters were k₁ = 20e^{11.82033} and k₂ = 2e^{47.28132} (ā₁ = 5,000, ā₂ = 20,000), with the variance model σ² = 0.01 + (0.05r)²(?). [Details partly garbled in extraction.]
TABLE E6.3-2

                                      k₁                 ā₁         k₂                  ā₂
Model parameters in Equation (a)      20e^{11.82033}     5,000      2e^{47.28132}       20,000
Estimated parameters at minimum φ     20.085e^{11.7358}  4,964(?)   1.9220e^{47.761(?)} 20,203(?)
Initial guesses                       10(?)              8,460(?)   20(?)               8,460(?)

[Starting guesses and scale factors (entries include 5e²⁰, 10e²⁰, 7,671, 90.7, and 100) and a garbled expression for the scale factors, involving ∂²φ/∂β_j², are misaligned in extraction.]
by letting k′ = ln k was carried out. The starting values for k′ were the logarithms of the original starting values; the scale factors were taken to be 1 for both k₁′ and k₂′. The results were essentially the same as with the transformation of Example 6.3-1, although the number of functional evaluations was slightly greater.

Estimation was also carried out by using fine-mesh forward difference schemes to approximate the derivatives numerically. With a parameter increment of 0.001 times the scale factor, little difference was experienced between the two methods. However, larger meshes indicated that additional functional evaluations were required.

NULL EFFECT. This can be illustrated by using as an example the following objective function:

φ = β₁² + 2β₁β₂ + β₂² + 2 = (β₁ + β₂)² + 2

If the substitution β̃₁ = β₁ + β₂ is made, we find

φ = β̃₁² + 2
Covar{b} ≅ (XᵀwX)⁻¹ σ²_{Y_i} = C σ²_{Y_i}    (6.4-1)

s²_{Y_i} ≡ Var{Y_i} ≅ EᵀwE / (n − m)    (6.4-2)

Covar{b} ≅ (XᵀwX)⁻¹ EᵀwE / (n − m)    (6.4-3)

s²_{b_j} = Var{b_j} ≅ s²_{Y_i} C_jj    (6.4-4)

The approximate confidence region is bounded by

(β − b)ᵀ(XᵀwX)(β − b) = s²_{Y_i} m F₁₋α[m, n − m]    (6.4-5)

or, in terms of the sum of squares,

φ₁₋α = φ_min + s²_{Y_i} m F₁₋α[m, (n − m)]    (6.4-6)

FIGURE 6.4-1 Contours for a true sum of squares surface and the corresponding contours based on Equation 6.4-5. Numbers next to contours indicate the probability for the indicated confidence region. (From G. E. P. Box and W. G. Hunter, Technometrics 4, 301, 1962.) [The figure contrasts sum-of-squares contours for the nonlinear model with contours for the model linearized about φ_min; axis residue removed.]
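Equation 6.4-3's covariance approximation is a one-liner once the derivative matrix X, the weights w, and the residuals E at the minimum are in hand. A hedged NumPy sketch with made-up numbers (the function name and the toy data are assumptions for illustration):

```python
import numpy as np

def approx_covar(X, w, E):
    """Equation 6.4-3: Covar{b} ~ (X^T W X)^-1 * (E^T W E)/(n - m)."""
    n, m = X.shape
    W = np.diag(w)
    s2 = (E @ W @ E) / (n - m)             # Equation 6.4-2
    return np.linalg.inv(X.T @ W @ X) * s2

# toy linearized model with two parameters and four observations
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
w = np.ones(4)
E = np.array([0.1, -0.1, 0.1, -0.1])       # residuals at the minimum
C = approx_covar(X, w, E)
```

The diagonal of `C` gives the variance estimates of Equation 6.4-4, and `C` itself enters the ellipsoidal confidence region of Equation 6.4-5.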
The sum of the squares of the residuals was φ_min = 4.7604 × 10⁻⁶. As an estimate of s²_{Y_i},

s_r² = 4.7604 × 10⁻⁶ / 5 = 0.952 × 10⁻⁶,  s_r = √(0.952 × 10⁻⁶)

For the model

r = β₀P / (1 + β₁P)

the data and residuals were:

P, psia   r         (r − r̂) × 10³
20        0.0680     0.433
30        0.0858    −0.656
35        0.0939    −0.0619
40        0.0999    −0.605
50        0.1130     1.636
55        0.1162     0.282
60        0.1190    −1.006

The least squares estimates were

b = [5.154 × 10⁻³, 2.628 × 10⁻²]ᵀ

with

XᵀX = [0.0191  0.178]       (XᵀX)⁻¹ = [ 2020   −211.1]
      [0.178   1.703]                  [−211.1   26.65]

and a correlation of 0.9902 between the parameter estimates (correlation matrix [1.000, 0.9902; 0.9902, 1.000]). Individual confidence limits are b₀ ± t₁₋α/2 s_r √C₁₁ and b₁ ± t₁₋α/2 s_r √C₂₂. The approximate joint confidence region from Equation 6.4-5, using 2F₁₋α(2, 5) (b), reduces to the contour

β₁² + 18.64β₁β₀ + 89.13β₀² − 0.499β₀ − 4.875β₁ + 0.00698 = 0    (a)

[FIGURE: the joint confidence contour; residue removed.]
In the Williams method the model

η = β₀ + β₁ e^{−κx}    (6.4-7)

is fitted, for a fixed trial value k of the estimate of κ, as a linear model

Y = b₀ + b₁ e^{−kx} + b₂ (·)(?)

where the added term carries the sensitivity to κ, and the estimate of κ is then updated by

k^(1) = k^(0) + b₂/b₁    (6.4-9)

and the iteration continues until b₂^(n+1) → 0.

For the data

x_i:  0.4   1.4   5.4   19.5   48.2   95.9
Y_i:  51.6  53.4  20.0  −4.2   −3.0   −4.8

the first iteration gave b₁ = 65.276 and k = 0.0518; the second iteration gave b₁^(2) = 65.262 and b₂^(2) = −0.0269, with b₀ = −4.85(?) and s_e² = 1108.80/24 = 46.20(?). [Intermediate details garbled in extraction.]
[Figure: the change in the sum of squares versus b₂ near its minimizing value; axis residue removed.]

In the method of Hartley and Booker, the n observations are divided into groups, one per parameter, and the model is required to match each group average. With three parameters and k = 2 observations per group,

Ȳ₁ = ½ Σ_{l=1}^2 Y_{1l} = ½ (Y₁₁ + Y₁₂)

where

Y₁₁ = β₀ + β₁ e^{β₂x₁₁} }  Group 1; k = 2
Y₁₂ = β₀ + β₁ e^{β₂x₁₂} }

Y₂₁ = β₀ + β₁ e^{β₂x₂₁} }  Group 2; k = 2
Y₂₂ = β₀ + β₁ e^{β₂x₂₂} }

Y₃₁ = β₀ + β₁ e^{β₂x₃₁} }  Group 3; k = 2
Y₃₂ = β₀ + β₁ e^{β₂x₃₂} }

and the group-average model values are

f̄₁(β) = ½ Σ_{l=1}^2 f(x₁_l, β)

with similar expressions for the other groups.
201
.. , Pm)] + t (6.5-3)
X = 131 eIJ2Y
and that Y is the dependent random variable available
for fixed values of x. Since log x = log 131 + 132 Y,
J':.
132
132
.
t
--
log
(i)
C)%
1
(R o = vlkK +
132 ~ 132
The calculation ofthe approximate confidence limits ofthe
old parameters can be from those of the new parameters.
L......-
..'
vlkK c
fE'
kKc
Ro -- (1 +
KC)2
kK
+ (1 + KC)2
(R'
'2
oC
kKAPAPB
r=---...;;;........;;;;.....;;....-
1 + KAPA + KBPB
where r is a rate of reaction, k and K are constants, and
P is the pressure, arguments on physical grounds lead to
the conclusion that k, K A , and K B must be nonnegative.
Consequently, fitting the model without restricting the
region of search for the parameters estimates to k ~ 0,
KA ;?: 0, and K B ;?: will lead to unreasonable, often
negative, estimates. An example of constraints
the
independent variables occurs when the independent
variables Xi represent mass fractions, in which case
LXi ==
on
1.
1/02
.1
+ KC)2 +
i = 1, 2, .. . ,n (6.5-4)
kKc
(1
where
t G. E. P. Box and
A constraint of the form β_j > k_j can be eliminated by the transformation

β_j = k_j + e^{β_j*}

where β_j* is the parameter to be estimated. Although β_j* may range from −∞ to +∞, β_j will always be greater than k_j. (For a linear model η = β₀ + β₁x₁ + β₂x₂, the substitution gives η = β₀* + β₁*x₁ + ⋯ in terms of the unconstrained parameters.)

The more general constraints, inequality constraints h_j(β) ≥ k_j for j = 1, 2, …, q, and equality constraints g_j(β) = 0 for j = 1, 2, …, r, can be handled by a penalty function:

Minimize  φ̃ = φ + Σ_i λ_i g_i    (6.6-1)

where, for the inequality constraints,

g_i = 0             if h_i > k_i
g_i = (h_i − k_i)²  if h_i ≤ k_i

TABLE 6.6-1 [Codes for nonlinear estimation subject to constraints]

Name             Technique                                             Reference
POP/360          Iterative linear programming, numerical derivatives   1
Simplex Search   Simplex method                                        2
SUMT             Penalty function, analytic derivatives                [lost]
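The penalty construction of Equation 6.6-1 can be sketched generically. The function names and the toy constraint below are assumptions for illustration, not from the text:

```python
def penalized(phi, constraints, lam=1e3):
    """Exterior penalty in the spirit of Equation 6.6-1:
    phi_tilde(b) = phi(b) + lam * sum of squared violations of h_i(b) >= k_i."""
    def phi_tilde(b):
        p = phi(b)
        for h, k in constraints:
            v = h(b) - k
            if v < 0:                 # constraint violated: add penalty
                p += lam * v * v
        return p
    return phi_tilde

# toy problem: minimize (b + 1)^2 while keeping the estimate nonnegative,
# mimicking the nonnegativity constraints on k, K_A, K_B discussed above
f = penalized(lambda b: (b[0] + 1.0) ** 2, [(lambda b: b[0], 0.0)])
```

Any unconstrained search routine (direct search, simplex, gradient) can then be applied to `f`; the large penalty steers the minimum toward the feasible boundary at b = 0 rather than the unconstrained minimum at b = −1.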
Supplementary References

Bartlett, M. S., "The Use of Transformations," Biometrics 3, 39, 1947.
Beale, E. M. L., "Confidence Regions in Nonlinear Estimation," J. Royal Stat. Soc. B22, 41, 1960.
Box, G. E. P. and Coutie, G. A., "Application of Digital Computers in the Exploration of Functional Relationships," Proc. Inst. Elec. Eng. 103, Pt. B, Supplement 1, 100, 1956.
Box, G. E. P. and Cox, D. R., "An Analysis of Transformations," Dept. of Stats. Tech. Rept. No. 26, Univ. of Wis., Madison, 1964.
Box, G. E. P. and Tidwell, P., "Transformation of the Independent Variables," Technometrics 4, 531, 1962.
Curry, H. B., "The Method of Steepest Descent for Nonlinear Minimization Problems," Quart. Appld. Math. 2, 258, 1944.
Dickinson, A. W., in Chemical Division Transactions of the American Society for Quality Control, 1959, p. 181.
Dolby, J. L., "A Quick Method for Choosing a Transformation," Technometrics 5, 317, 1963.
Goldberger, A. S., Econometric Theory, John Wiley, New York, 1964.
Hartley, H. O., "The Modified Gauss-Newton Method for the Fitting of Nonlinear Regression Functions by Least Squares," Technometrics 3, 269, 1961.
Kale, B. K., "On the Solution of the Likelihood Equation by Iteration Processes," Biometrika 48, 452, 1961.
Kale, B. K., "On the Solution of the Likelihood Equation by Iteration Processes. The Multiparametric Case," Biometrika 49, 479, 1962.
Levenberg, K., "A Method for the Solution of Certain Nonlinear Problems in Least Squares," Quart. Appld. Math. 2, 164, 1944.
Marquardt, D. W., "An Algorithm for Least Squares Estimation of Nonlinear Parameters," J. Soc. Ind. Appld. Math. 11, 431, 1963.
Pereyra, V., "Iterative Methods for Solving Nonlinear Least Squares Problems," SIAM J. Num. Anal. 4, 27, 1967.
Tukey, J. W., "On the Comparative Anatomy of Transformations," Ann. Math. Stat. 28, 602, 1957.
Turner, M. E., Monroe, R. J., and Lucas, H. L., "Generalized Asymptotic Regression and Nonlinear Path Analysis," Biometrics 17, 120, 1961.
Williams, E. J., Regression Analysis, John Wiley, New York, 1959.

Surveys of Optimization Techniques for Nonlinear Objective Functions

Dorn, W. S., "Nonlinear Programming: A Survey," Mang. Sci. 9, 171, 1963.
Spang, H. A., "A Review of Minimization Techniques for Nonlinear Functions," SIAM Rev. 4, 343, 1962.
Wilde, D. J. and Beightler, C. S., Foundations of Optimization, Prentice-Hall, Englewood Cliffs, N.J., 1967.
Wolfe, P., "Methods of Nonlinear Programming," in Nonlinear Programming, ed. by J. Abadie, North-Holland, New York, 1967, Chapter 6.
Zoutendijk, G., "Nonlinear Programming: A Numerical Survey," SIAM J. on Control 4, 194, 1966.

Computer Routines for Nonlinear Estimation

Derivative-type methods:

Booth, G. W. and Peterson, T. I., AIChE Computer Program Manual No. 3, Dec. 1960, AIChE, 345 E. 47th St., New York (FORTRAN).
Efroymson, M. A. and Mathew, D., "Nonlinear Regression Program with Nonlinear Equations," Esso Res. and Develop. Co. (FORTRAN).
Marquardt, D. W., et al., "NLIN, Least Squares Estimation of Nonlinear Parameters," IBM Share Program SD 3094, 1964 (FORTRAN).
Moore, R. H., Zeigler, R. K., and McWilliams, P., "PAKAG," Los Alamos Scientific Laboratory, Albuquerque, N.M. (FORTRAN and FAP).

Derivative-free methods:

Beisinger, Z. E. and Bell, S., "HZ SAND MIN," Sandia Corp. (FORTRAN). Direct search.
Lindamood, G. E., "AP MINS," Johns Hopkins Univ., Baltimore, Md., SD 1259 (FAP). Direct search.
Kaupe, A. F., "Collected Algorithms from the Association for Computing Machinery," Algorithm 178, New York, annually (Algol). Direct search.
Rosen, J. B., "GP90 Gradient Projection Method for Non-linear Programming" (FAP).
Problems

6.1 Using the following data (x_i = independent variable and Y_i = dependent variable):

x_i:  0.4   1.4   5.4   19.5   48.2   95.9
Y_i:  51.6  53.4  20.0  −4.2   −3.0   −4.8

[remainder of the problem statement lost in extraction]

6.2(?) [Data table: constants a′ (values from 0.00150 to 6.47000) and b′ (values from 8.39000 to 790.00000) tabulated against absolute temperature T(°K) from 308.16 to 563.16 in 5-degree increments; column alignment lost in extraction.]
6.3
[Problem 6.3 data and model garbled in extraction; recoverable fragments include values such as 8.0050 × 10⁻⁴ and 6.1111 × 10⁻⁴ and a model of the form Y = A₁x₁ logₑ(·) e^{A₃x₃}/A₄(?).]

6.4 [Data table with columns x₁, x₂, x₃: x₁ values from 0.81028 to 75.000, with x₂ and x₃ alternating between 0.1000 and 1.0000 in blocks of five; row alignment lost in extraction.]
6.5 [Statement and data garbled in extraction; the data include integer Y values such as 75, 68, 39, 16 and x-values spanning several powers of ten.]

6.6 Fit the model

Y = e^{−[a₁x₁(?) + a₂x₂(?) + ⋯]}

by least squares to the data. [Data columns and the exponent's exact form are garbled in extraction.]

Fit each of the following equations of state to p-V-T data by least squares:

(a) Lorentz:  p = (RT/V²)(V + b) − a/V²
(b) Dieterici:  p = [RT/(V − b)] e^{−a/VRT}
(c) Berthelot:  p = RT/(V − b) − a/(TV²)
(d) Clausius:  p = RT/(V − b) − a/[T(V + c)²]
(e) Wohl:  p = RT/(V − b) − a/[V(V − b)] + c/(TV³)
(f) Macleod:  π(V − b′) = RT(?), with π = p + a/V² [form partly garbled]
(g) Keyes:  p = RT/(V − δ) − A/(V + l)²,  δ = β e^{−α/V}
(h) Kammerlingth-Onnes:  pV = RT[1 + B/V + C/V² + ⋯]
(i) Holborn:  pV = RT[1 + B′p + C′p² + ⋯]

[The associated data columns are garbled in extraction.]
(k) Beattie-Bridgeman:

pV = RT + β/V + γ/V² + δ/V³
β = RTB₀ − A₀ − Rc/T²
γ = −RTB₀b + aA₀ + RB₀c/T²
δ = RB₀bc/T²

(l) Benedict-Webb-Rubin:

pV = RT + β/V + γ/V² + δ/V⁴ + η/V⁵
β = RTB₀ − A₀ − C₀/T²
γ = bRT − a + (c/T²) exp(−γ′/V²)(?)
δ = aα
η = cγ′ exp(−γ′/V²)

where p = pressure, V = volume/mole, T = absolute temperature, and R = the gas constant.

6.7 TABLE P6.7 ORIGINAL DATA

Run   B, (meter)²(min)/kg evap.   L, kg/[(meter)²(min)]   G, kg/[(meter)²(min)](?)
1     1.305   1.05   14.1
2     1.90    1.17   42.7
3     2.71    1.06   42.0
4     2.61    1.00   42.5
5     2.48    1.04   42.3
6     3.61    1.13   71.4
7     3.48    1.02   70.0
8     4.95    1.02   117.0
9     4.38    1.00   115.0
10    4.63    1.06   112.0
11    4.65    1.04   114.0
12    3.18    1.32   99.6
13    3.55    1.43   99.6

[Units partly garbled in extraction.]
Fit the model

y = β₁ e^{β₂x} x^{β₃}

to the data:

x, Nitrogen Oxide         y, Concentration of
Absorbed (g/liter)        Product (g/liter)
0.09                      15.1
0.32                      57.3
0.69                      103.3
1.51                      174.6
2.29                      191.5
3.06                      193.2
3.39                      178.7
3.63                      172.3
3.77                      167.5

[A following fragment defines, for a related diffusion model whose equation is garbled in extraction: ṅ = moles absorbed/min, c = concentration, moles/cc, D = diffusion coefficient, cm²/min, and t = time, min.]
[Problem: viscosity data]

T (°K)    η (centipoise)   ρ (g/cc)
288.16    1.0275           1.0114
310.94    0.7268           0.9934
338.72    0.5363           0.9672
366.49    0.4028           0.9392
394.27    0.3266           0.9124
422.05    0.2728           0.8862
449.83    0.2344           0.8575

with candidate variance models such as s²(x) = e^{−x}a(?) and s²(x) = a/(ax + b)(?), and a model of the form (a + b/T + c/T³)(?). [Model forms partly garbled in extraction.]

[Problem:] Fit the equation of state

p = RT/(V − b) − a/(TV²)

for SO₂ to the following data. [V (cm³/g) and p (atm) columns for gas masses of 0.2948 g at t = 50, 75, 100, and 125°C and 1.9533 g at t = 150 and 200°C; column alignment lost in extraction.]

[Problem:] For the rate model (a), where k and the K's are constants, r is the stochastic dependent variable, and the p's are deterministic independent variables, the linearized form is

1/r = 1/(kK₂) + (K₁/(kK₂))p₁ + (1/K₃)(?)p₂ + ⋯    (b)

[Exact form partly garbled in extraction.]
6.14 [Statement garbled in extraction.]

6.15 [Statement garbled in extraction; fragments include φ₁ = Σ (Y_i − η_i)² and a model term of the form β₃ e^{β₄x₂}.]
CHAPTER 7

ANALYSIS OF RESIDUALS

The residuals are the differences between the observed and predicted values, (Y_i − Ŷ_i) = E_i. Certain underlying assumptions have been outlined for regression analysis, such as independence of the unobservable errors ε, constant variance, and a normal distribution for ε. If the model represents the data adequately, the residuals should possess characteristics that agree with, or at least do not refute, the basic assumptions. The analysis of residuals is thus a way of checking that one or more of the assumptions underlying regression analysis is not violated. For example, if the model fits well, the residuals should be randomly distributed about the Y predicted by the regression equation. Systematic departures from randomness indicate that the model is not satisfactory; examination of the patterns formed by the residuals can provide clues as to how the model can be improved.

Examinations of plots of the residuals versus Ŷ_i, x_i, or time, or a plot of the frequency of the residuals versus their magnitude, reveal such departures.
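One simple numerical check of randomness in the spirit of this chapter is to count runs of like signs in the residual sequence; the runs-count heuristic below is a standard device, not a method prescribed by the text:

```python
import numpy as np

def sign_runs(residuals):
    """Count runs of like signs in a residual sequence. Systematic
    (non-random) departures, such as a trend, give few long runs;
    residuals scattered randomly about zero give many short ones."""
    s = np.sign(np.asarray(residuals, dtype=float))
    s = s[s != 0]                       # ignore exact zeros
    return 1 + int(np.sum(s[1:] != s[:-1]))

# illustrative sequences (made up, not from the text)
random_like = np.array([0.5, -0.3, 0.4, -0.6, 0.2, -0.1])   # alternating signs
trended = np.array([-0.5, -0.3, -0.1, 0.1, 0.3, 0.5])       # one sign change
```

A formal runs test compares the observed count against its null distribution, but even the raw count flags the trended sequence (2 runs in 6 residuals) as suspicious relative to the alternating one (6 runs).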
TABLE E7.1-1a [A 6 × 6 grid of observations Y for levels of x₁ (columns) and x₂ (rows), run order shown in parentheses, fitted by Ŷ = 10.8 + 0.40x₁ − 0.20x₂. Column sums: 62.9, 65.4, 68.2, 68.7, 71.4, 79.9 (means 10.48, 10.90, 11.37, 11.45, 11.90, 13.32); row sums: 70.4, 72.9, 69.5, 71.2, 66.4, 66.1 (means 11.73, 12.15, 11.58, 11.87, 11.07, 11.02); overall sum 416.5, mean 11.57. Cell alignment lost in extraction.]
TABLE E7.1-1b ANALYSIS OF VARIANCE

Source of Variation         d.f.   Sum of Squares   Mean Square
x₁                          1      25.51            25.51
x₂                          1      3.68             3.68
Deviation from regression   33     24.05            0.73
Total                       35

TABLE E7.1-1c [ANOVA for the refitted model Ŷ = 11.53(?) + 0.51x₁ (Equation b): recoverable entries include sums of squares 16.01 and 0.75 (variance ratio 14.0), 27.31 and 1.43 (variance ratio 15.7), deviation-from-regression d.f. 33, and totals 37.51/1.14 and 57.44/1.74; row alignment lost in extraction.]

FIGURE E7.1-1a [Residuals versus sequence number; axis residue removed.]
TABLE E7.1-1d [A second 6 × 6 grid of observations Y (x₁ columns, x₂ rows): column sums 80.6, 81.4, 79.7(?), 81.6, 80.1, 76.1 (means 13.43, 13.57, 13.28, 13.60, 13.35, 12.68); row sums 74.4, 74.6, 76.1, 81.2, 84.1, 89.1 (means 12.40, 12.43, 12.68, 13.53, 14.02, 14.85); overall sum 479.5, mean 13.32. Cell alignment lost in extraction.]
[FIGURES E7.1-1b and E7.1-1c: residuals versus sequence number for the refitted models; axis residue removed.]

For the second data set the fitted equation was Ŷ = 11.44 + 0.465x₁ (Equation c), and for the third, Ŷ = 8.39 + 1.19x₁(?) (Equation d).

TABLE E7.1-1f [ANOVA: x₁ SS 22.63 (MS 22.63, variance ratio 7.6); x₂ SS 0.46 (MS 0.46); deviation from regression, 33 d.f., SS 98.24, MS 2.98; total, 35 d.f.]

[A further ANOVA shows x₁ SS 148.93 (MS 148.93, variance ratio 20.6), x₂ SS 20.81 (MS 20.81, variance ratio 2.9), residual MS 7.16, and total SS 236.27; the table label is lost in extraction.]
TABLE E7.1-1g [A third 6 × 6 grid of observations: column sums 62.9, 65.4, 68.2, 68.7, 87.6, 98.6 (means 10.48, 10.90, 11.37, 11.45, 14.60, 16.43); row sums 83.2, 81.9, 70.8, 77.1, 67.3, 72.0; overall sum 452.3, mean 12.56. Cell alignment lost in extraction.]

and x₂ was not. See Table E7.1-1f. (Observe that the mean square of the residuals is now significantly different from 1.) A plot of the residuals based on predicted values of Y from the equation of best fit
TABLE E7.1-1i [Residuals for the grids of Tables E7.1-1a and E7.1-1g by x₂ level, with row ranges including 2.00, 2.80, 3.00, 1.80, 13.80, and 9.70 and column ranges 9.39, 5.52, 3.56, 13.31, 5.06, 5.71; individual entries (e.g., 0.72, 2.02, 0.12, 1.82, 0.72, 0.02 and −1.33, 4.27, −2.13, 7.57, −0.43, −1.63) misaligned in extraction.]

EXAMINATION OF RESIDUALS TO ASCERTAIN IF THEY ARE NORMALLY DISTRIBUTED. Figure E7.1-1d is a plot (according to the method outlined in Example 2.3-3) on normal probability paper of the 36 residuals based on the data of Table E7.1-1a(?).

[FIGURES E7.1-1d and E7.1-1e: normal-probability plot of the residuals and a further residual plot; axis residue removed.]
STEPWISE REGRESSION

For jointly normal variables X₁, X₂, X₃, the conditional density of X₁ given X₃ is normal,

f(X₁ | X₃) = [1 / (σ_{X₁} √(2π) √(1 − ρ²_{X₁X₃}))] exp{−½ (X₁ − E{X₁ | X₃})² / [σ²_{X₁}(1 − ρ²_{X₁X₃})]}    (7.2-1, 7.2-2)

with conditional mean

E{X₁ | X₃} = μ_{X₁} + ρ_{X₁X₃} (σ_{X₁}/σ_{X₃}) (X₃ − μ_{X₃})    (7.2-4)

Define the residual variable

X₁·₃ ≡ X₁ − E{X₁ | X₃}    (7.2-7)

Then

Var{X₁·₃} = σ²_{X₁}(1 − ρ²_{X₁X₃})    (7.2-8)

Similarly, Var{X₂·₃} = σ²_{X₂}(1 − ρ²_{X₂X₃}). Finally,

Covar{X₁·₃, X₂·₃} = E{X₁·₃X₂·₃} − E{X₁·₃}E{X₂·₃} = σ_{X₁}σ_{X₂}(ρ_{X₁X₂} − ρ_{X₁X₃}ρ_{X₂X₃})    (7.2-9)

so that the partial correlation coefficient is

ρ_{X₁X₂·₃} = (ρ_{X₁X₂} − ρ_{X₁X₃}ρ_{X₂X₃}) / √[(1 − ρ²_{X₁X₃})(1 − ρ²_{X₂X₃})]    (7.2-10)
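The partial correlation of Equation 7.2-10 is computed directly from sample correlations; a NumPy sketch (the function name is an assumption):

```python
import numpy as np

def partial_corr(x1, x2, x3):
    """Sample partial correlation of x1 and x2 with x3 held fixed,
    in the form of Equation 7.2-10."""
    r = np.corrcoef(np.vstack([x1, x2, x3]))
    r12, r13, r23 = r[0, 1], r[0, 2], r[1, 2]
    return (r12 - r13 * r23) / np.sqrt((1.0 - r13**2) * (1.0 - r23**2))
```

As a check, if x1 = x3 + e and x2 = x3 − e for some disturbance e, then x1 and x2 are highly correlated yet their association vanishes into x3: the residuals after removing x3 are exact negatives of each other, so the partial correlation is −1, which is what the formula returns.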
† R. L. [footnote truncated in extraction]

The partial correlation ρ_{YX_j·X_k} (Equation 7.2-12) governs which variable enters the regression next.

Stepwise regression example. The model was

Ŷ = Ȳ + β₁(x₁ − x̄₁) + β₂(x₂ − x̄₂) + β₃(x₃ − x̄₃)

where Y = shear stress, x₁ = viscosity, x₂ = temperature, and x₃ = pressure (variable labels partly garbled). For entering x₃ first,

t = (0.857)^{1/2} (5 − 2)^{1/2} / (1 − 0.857)^{1/2} = 4.24

and for the next candidate,

t = [(0.758)(5 − 3) / (1 − 0.758)]^{1/2} = 2.50

The resulting equation retained only x₃:

Ŷ = Ȳ + b₃(x₃ − x̄₃) = Ȳ + 0.38(x₃ − x̄₃)

[The five-observation data table and intermediate sums (28.9, 23.5, 722.0, 91.7, 68.7) and correlations (0.758, 0.057, 0.271, 0.028, 0.857, 0.201) are misaligned in extraction.]
The residual sum of squares for the q-parameter model,

SS_q = Σ_{i=1}^n [Y_i − Ŷ_i(q)]²

leads to the statistic

C_q ≅ SS_q/s² − (n − 2q)    (7.3-2)

where s² is an estimate of σ² from the full model. However, for an adequate model,

C_q ≅ q
TABLE
Model
Variables
IV
None
III
II
Xl> X2
1
3
2
2
100
80 f-
70 f-
o Model IV
Cq
40 -
.d
e df
-.,
. d'f
cd ef
.bcd
.bde
. bde/
"bcdf
bcd e
bcdef
501--
40 I-
30 I-
-T-----~--.
a~. a~a~ ~.~-20 -
a included ~ _
I d d
mcue
I ac
---!:!..- _
abc . ad e
. abe
e ob
aed!
aef
. tJbaJe
abedef .....
.abeef ,
abd
e abde
e abf
.~.--
I
labd
~d~
b 'f_iab~
Cefdf~
o L....====:=i===I==L=-~
o
234567
FIGURE 7.3-1
T
3
. ,od!"
87.6
3.3
4.4
2.8
. bd
a and b
f
cd c
Cq
1010
. cd
. de
70 I
20 I-
77.28
10.07
12.48
11.23
601
50 -
C.
80 I-
Model III
Model II
x Model I
30
60 f-
X2
1',)2
90
Xl
L OF, -
iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiO~;;;;;;;;;;;;;;;
~
To compare two models fitted to the same data, with predicted values Ŷ₁ and Ŷ₂, form the comparison variable

z ≡ Y − ½(Ŷ₁ + Ŷ₂)    (7.4-2)

If model 1, η₁(x, β), is correct, then E{z} = η₁ − ½(η₁ + η₂) = ½(η₁ − η₂), so a plot of z against x indicates which model the data favor.

Example. Flood-damage data:

Discharge, cfs × 10⁻³    Damage, $ × 10⁻³ (1956 price levels)
61      [lost]
64      50
70      100
75      150
83      180
88      210
94      250
100     290
105     340
112     420
120     520
127     670
134     810
142     1200
150     1600
160     2100
170     2500
180     2900
190     3300
200     3700

[The first damage entry is lost in extraction; Table E7.4-1c lists the observed values beginning at 0.]

TABLE E7.4-1a MODELS [Polynomial models in the discharge x, with scaled variables x(10⁻²), x²(10⁻⁴), x³(10⁻⁵). Recoverable coefficient pairs (estimate, standard error): Model 1 (cubic): b₀ = 1990, 1280; b₁ = −55.3, 33.7; b₂ = 0.437, 0.280; a further set 1190, 589; 31.3, 9.36; 0.224, 0.035; −115, 297. Model 2 (quadratic): b₀ = 2840, 1490; b₁ = −74.1, 37.3; b₂ = 0.572, 0.298. Model 3: b₀ = 1050, 452; b₁ = −29.2(?), 7.6; b₂ = 0.217, 0.030. Model 4: Ŷ₄ = b₀ + b₁(·) + b₂(·). Row alignment lost in extraction.]

[A model-comparison summary lists, for four fits, d.f. 15, 16, 17, 16 with sums of squares × 10⁻³ of 120, 151, 178, 171 and mean squares × 10⁻³ of 8.0, 9.4, 10.5, 10.7.]

[FIGURES E7.4-1a and E7.4-1b: the data with Model 1 and its 95-percent confidence limits, and a residual plot annotated "Slope = 0.473"; axis residue removed.]
TABLE E7.4-1c

Y      Ŷ₁     Ŷ₂     Ŷ₂ − Ŷ₁   ½(Ŷ₁ + Ŷ₂)   z = Y − ½(Ŷ₁ + Ŷ₂)
0      −19    118    137       50            −50
50     143    64     −79       103           −53
100    126    96     −30       111           −11
150    110    69     −41       90            40
180    113    88     −25       100           80
210    136    122    −14       129           81
250    186    182    −4        184           66
290    257    261    4         259           31
340    334    341    7         337           3
420    465    475    10        470           −50
520    647    657    10        652           −132
670    833    841    8         837           −167
910    835    888    48(?)     859           51
1200   1305   1304   −1        1305          −105
1600   1593   1587   −6        1590          10
2100   1984   1971   −13       1978          122
2500   2404   2389   −15       2397          103
2900   2847   2834   −13       2840          60
3300   3308   3304   −4        3306          −6
3700   3782   3796   14        3789          −89

[FIGURE E7.4-1c: z versus discharge; axis residue removed.]
7.5-1 [ANOVA for the averaged prediction]

Source of Variation         d.f.        Sum of Squares                          Mean Square
Improvement of Ŷ* over Ȳ   p − 1       Σᵢ(Yᵢ − Ȳ)² − Σᵢ(Yᵢ − Ŷᵢ*)²             s_a² = [Σᵢ(Yᵢ − Ȳ)² − Σᵢ(Yᵢ − Ŷᵢ*)²]/(p − 1)
Deviation from Ŷ*           n − p + 1   Σᵢ(Yᵢ − Ŷᵢ*)²                           s_r² = Σᵢ(Yᵢ − Ŷᵢ*)²/(n − p + 1)

where the prediction averaged over the p models is

Ŷ* = (1/p) Σ_{k=1}^p Ŷ_k    (7.5-1)

and the cross-product sums are

V_{jk} = Σ_{i=1}^n (Yᵢ − Ŷᵢⱼ)(Yᵢ − Ŷᵢₖ),  1 ≤ j ≤ p, 1 ≤ k ≤ p    (7.5-2)

for example,

V₁₁ = Σ_{i=1}^n (Yᵢ − Ŷᵢ₁)²,  V₁₂ = Σ_{i=1}^n (Yᵢ − Ŷᵢ₁)(Yᵢ − Ŷᵢ₂)    (7.5-3)

so that

Σ_{i=1}^n (Yᵢ − Ŷᵢ*)² = (1/p²) Σ_{j=1}^p Σ_{k=1}^p V_{jk}    (7.5-4, 7.5-5)
For example, with two single-variable models

η₁ = b₀₁ + b₁₁x₁, x₁ = x;  η₂ = b₀₂ + b₂₁x₂, x₂ = x²(?)

the fitted equations were

Ŷ₁ = −4.83 + 6.01x,  Ŷ₂ = 2.20 + 1.00x²(?)

with V₁₂ = V₂₁ = 1.1795 × 10³(?) and V₂₂ = 1.1807 × 10³(?), giving the weights

b* = [0.120, 0.880]

[Matrix entries −0.09991 and 0.10065 appear in connection with Equation 7.5-8; their context is lost in extraction.]

To test the hypothesis that b_j* is different from some constant γ, we can compute

t = (b_j* − γ) / s_{b_j*}

where s_{b_j*} involves Var{b₁*} + Var{b₂*} [expression garbled].

The data were

x:  1     2     3     4     5
Y:  3.1   5.9   11.1  17.8  27.2

and a generating model of the form 10 + 100(1 − e^{−0.115x})(?) appears in fragment (a). Candidate models compared included:

(2)(?)  η = β₀ + β₁[x²/(1 + β₂x²)](?)
(3)     η = β₀ + β₁[tan⁻¹(β₂x)]
(4)     η = β₀ + β₁[tanh(β₂x)]
(5)     η = β₀ + β₁ e^{−β₂/x}
[FIGURE E7.5-2: generated data with limits on the true mean and the five regression equations; axis residue removed.]

TABLE E7.5-2 TEST RESULTS OF THE MODEL SELECTION PROGRAM

Degree of random error introduced into true model: 0, 0.5, 1.0, 2.5, 5.0, 10.0
b*:         0.8, 0.5, 0.4, 0.2, 0.2, …
c*:         0.5, 0.6, 0.8, 1.0, 1.0, …
Minimum C:  1.0, 0.8, 0.8, 0.6, 0.4, …

[Additional entries (95.2, 62.8, 94.7, 74.2, 113.0, 100) misaligned in extraction.]

Supplementary References
Data Set Number   Residual
1      0.018
2     −0.093
3     −0.026
4      0.040
5      0.067
6      0.074
7      0.064
8      0.033
9      0.006
10    −0.045
11    −0.127
12    −0.163
13     0.075
14    −0.105
15     0.006
16     0.116
17     0.096
18     0.053
19    −0.008
20    −0.082
Problems

The output below is from a stepwise-regression program. The fitted coefficients were b₀ = 2.8389, b₁ = −7.4057, b₂ = 5.7223, b₃ = −0.8916, b₄ = −0.02669, with a term 1.4795947(PULSE)², and SIGMA = 0.8681. The entry and removal thresholds were F1 (ENTER) = 2.500 and F2 (REMOVE) = 2.500, with B(1) = 0.1483 × 10², B(2) = −0.1120 × 10², B(3) = 0.1479 × 10, B(4) = B(5) = B(6) = 0, and SB(1) = 0.1135 × 10, SB(2) = 0.2477 × 10, SB(3) = SB(4) = SB(5) = SB(6) = 0.

Correlation matrix:

 1.0000   0.9755   0.9313   0.8880   0.8515   0.8224  −0.9410
 0.9755   1.0000   0.9878   0.9647   0.9407   0.9195  −0.8509
 0.9313   0.9878   1.0000   0.9937   0.9812   0.9678  −0.7673
 0.8880   0.9647   0.9937   1.0000   0.9966   0.9897  −0.7007
 0.8515   0.9407   0.9812   0.9966   1.0000   0.9981  −0.6501
 0.8224   0.9195   0.9678   0.9897   0.9981   1.0000  −0.6121
−0.9410  −0.8509  −0.7673  −0.7007  −0.6501  −0.6121   1.0000
..
PROBLEMS
223
VISIBILITY

Number of    Actual     Percent
Pulses       Value      Error
6000         -7.100      -7.4
6000         -6.300     -14.0
 900         -3.300       0.3
 900         -2.800      18.2
 200         -3.000     -57.4
 200          3.200      33.4
  50         10.40       -7.4
  50         -6.000       9.6
  12         -5.100       6.3
  12         -1.200    -110.7
   3          3.400      25.6
F1 (ENTER) and F2 (REMOVE) are the Fisher F-values for adding and removing variables, respectively.

(a)
(b)
7.4

Thrust,    Specific Fuel
t          Consumption, Y
 2,000     1.295
 3,000     1.088
 4,000     1.010
 5,000     0.963
 6,000     0.935
 7,000     0.920
 8,000     0.912
 9,000     0.910
10,000     0.912
11,000     0.918
12,000     0.929
13,000     0.940
14,000     0.952
15,000     0.966
16,000     0.980
17,000     0.994
18,000     1.010
TABLE P7.6b Comparison of fitted models (x = t/1000 − 1)

Point  Y      Fit A    Error    Fit B    Error    Fit C    Error    Fit D    Error
 1     1.295  1.270    0.025    1.314   -0.019    1.299   -0.004    1.294    0.001
 2     1.088  1.123   -0.035    1.060    0.028    1.080    0.008    1.092   -0.004
 3     1.010  1.022   -0.012    0.982    0.028    1.003    0.007    1.005    0.005
 4     0.963  0.959    0.004    0.949    0.014    0.964   -0.001    0.960    0.003
 5     0.935  0.922    0.013    0.933    0.002    0.940   -0.005    0.935   +0.000
 6     0.920  0.906    0.014    0.927   -0.007    0.926   -0.006    0.921   -0.001
 7     0.912  0.903    0.007    0.925   -0.013    0.918   -0.006    0.914   -0.002
 8     0.910  0.909    0.001    0.926   -0.016    0.914   -0.004    0.912   -0.002
 9     0.912  0.918   -0.006    0.930   -0.018    0.915   -0.003    0.915   -0.003
10     0.918  0.928   -0.010    0.935   -0.017    0.918   -0.000    0.920   -0.002
11     0.929  0.938   -0.009    0.941   -0.012    0.925    0.004    0.928    0.001
12     0.940  0.945   -0.005    0.948   -0.008    0.934    0.006    0.938    0.002
13     0.952  0.952   -0.000    0.956   -0.004    0.946    0.006    0.950    0.002
14     0.966  0.959    0.007    0.964    0.002    0.960    0.006    0.963    0.003
15     0.980  0.970    0.010    0.972    0.008    0.977    0.003    0.978    0.002
16     0.994  0.988    0.006    0.981    0.013    0.996   -0.002    0.995   -0.001
17     1.010  1.019   -0.009    0.990    0.020    1.018   -0.008    1.013   -0.003

Note: one fit is the fourth-degree polynomial (coefficients of the fitted polynomial in this form were not calculated).
Fit B: Y = a0/x + a1 + a2x, with a0 = 0.53069, a1 = 0.77242, a2 = 0.010964.
Fit C: Y = a0/x + a1 + a2x + a3x², with a0 = 0.42106, a1 = 0.88852, a2 = -0.011786, a3 = 0.00105556.
x        Y        sᵧ
0.12      3.85    0.09
0.56      9.42   -0.15
0.83     12.90    0.42
1.36     17.36    0.42
1.48     19.31    0.23
1.73     22.73    0.27
2.20     32.89    0.36
2.57     44.51    0.83
2.83     53.01    0.52
3.01     62.09    0.61
3.32     81.00    0.93
3.62    102.11    0.86
3.90    124.00    0.71

m = n − q − 1, where n = 13 = number of data sets, q = number of parameters, and φ = sum of the squares of the residuals.

q    m    φmin
1    11   18.9928
2    10    7.3924
3     9    1.4002
4     8    1.6371
5     7    1.8340

One recovered coefficient set: a0 = 0.91148, a1 = 0.73665, a2 = 0.004888.
TABLE P7.8a DATA ON SPOUT DIAMETER

Columns: Material; Column Diameter, in., dc; Orifice Diameter, in., do; Bed Height, in., L; Mass Velocity, lb/(hr)(sq ft), G; Average Spout Diameter, in., ds. Runs are grouped as G → ds:

Polystyrene 1 (L = 12): 326.0→1.13, 356.2→1.20, 416.3→1.27, 476.1→1.33
  259.5→1.07, 296.4→1.13, 355.8→1.16 (do = 0.50)
  320.2→0.72, 373.6→0.80, 453.6→0.85 (do = 0.25)
  598.9→1.06, 667.2→1.12 (do = 0.50, L = 12)
Polystyrene 2 (L = 12): 595.0→1.47, 654.0→1.52, 713.8→1.58
  506.0→1.38, 536.2→1.40, 594.8→1.50 (do = 0.50)
(continued)
Barley (do = 0.50): 762.7→1.10, 890.0→1.20
Polyethylene (do = 0.50, L = 12): 635.0→1.35, 705.0→1.47, 775.4→1.49
  528.4→1.31, 563.6→1.35, 634.0→1.44
Millet (L = 12): 512.0→1.37, 530.0→1.43, 619.0→1.50, 706.0→1.62, 795.0→1.68, 883.6→1.75
  442.0→1.23, 486.0→1.34, 530.0→1.37, 573.6→1.50, 618.2→1.50
Wheat (L = 12): 386.2→1.02, 416.0→1.10, 575.6→1.16, 535.2→1.25, 594.0→1.28
  326.0→1.00, 356.4→1.05, 416.0→1.10, 475.6→1.14
  455.0→0.80, 567.0→0.85, 624.3→0.90 (do = 0.50)
  704.0→1.39, 776.0→1.45, 845.4→1.50 (do = 0.50, L = 12)
  563.6→1.35, 634.2→1.42, 704.0→1.45
  795.9→1.47, 884.3→1.53 (L = 12)
  663.4→1.34, 707.5→1.44, 795.9→1.53
  954.8→1.12, 1028.2→1.25, 1175.1→1.28 (L = 10; do = 0.25, 0.625)
TABLE P7.8b

Material        Absolute    Bulk        Percent   Particle    Shape
                Density,    Density,    Void      Diameter,   Factor
                Lb/Cu Ft    Lb/Cu Ft              In
Polystyrene 1   66.02       40.00       39.41     0.0616      1.141
Polystyrene 2   66.02       37.00       43.94     0.1254      1.176
Polyethylene    57.58       36.97       35.80     0.1350      1.020
Millet          73.69       45.37       38.44     0.0783      1.070
Rice            90.95       56.47       37.92     0.1071      1.041
Barley          79.87       45.24       43.36     0.1458      1.141
Corn            83.75       46.39       44.60     0.2857      1.500
Flaxseeds       70.51       43.88       37.77     0.0824      1.050
Wheat           87.40       53.98       39.24     0.1420      1.073
The postulated mechanisms include adsorption, desorption, and surface-reaction control; for example,

(c) r = pA/(c1 + c2·pA)²

Fit the models, assuming R = r + ε, and apply the screening method of Gorman and Toman to discriminate among them. All the a's, b's, c's, d's, and e's are nonnegative constant parameters. List the models in two categories: "keep" for further analysis and "reject." How can σ² be estimated? Repeat, using the Wilks procedure.
TABLE P7.9

Columns: R, Reaction Rate, g-moles/(g-cat)(min); Alcohol Partial Pressure, atm, pA; Ether Partial Pressure, atm, pE; Water Partial Pressure, atm, pW.

Run    R            pA        pE        pW
 1     0.00000385   0.27308   0.05391   0.67301
 2     0.00000600   0.33130   0.64390   0.02480
 3     0.00000900   0.38090   0.61910   0.00000
 4     0.00006890   0.73027   0.13487   0.13487
 5     0.00009366   0.87864   0.06068   0.06068
 6     0.00013291   1.00000   0.00000   0.00000
 7     0.00005780   0.65488   0.14601   0.19911
 8     0.00010180   0.83945   0.05373   0.10683
 9     0.00013647   0.94690   0.00000   0.05310
10     0.00007235   0.65848   0.10931   0.23221
11     0.00010550   0.80092   0.03809   0.16099
12     0.00013035   0.87710   0.00000   0.12290
13     0.00010674   0.73596   0.02262   0.24142
14     0.00006986   0.66312   0.05904   0.27784
15     0.00013766   0.78120   0.21880   0.00000
16     0.00006541   0.43042   0.54909   0.02049
17     0.00004646   0.32314   0.60273   0.07413
18     0.00008877   0.47140   0.52860   0.00000
19     0.00004815   0.44593   0.49093   0.06313
20     0.00007427   0.51953   0.45414   0.02634
21     0.00010270   0.57220   0.42780   0.00000
22     0.00005019   0.54041   0.37769   0.08189
23     0.00007652   0.60424   0.34578   0.04998
24     0.00010354   0.70420   0.29580   0.00000
25     0.00007309   0.58620   0.38655   0.02725
26     0.00005695   0.55396   0.40267   0.04337
27     0.00010984   0.64070   0.35930   0.00000
28     0.00006421   0.71662   0.14169   0.14169
29     0.00009257   0.82606   0.08697   0.08697
30     0.00012128   1.00000   0.00000   0.00000
31     0.00003912   0.59116   0.08192   0.32692
32     0.0005768    0.75500   0.00000   0.24500
33     0.0001957    0.52736   0.01222   0.46042
34     0.0001608    0.30069   0.02556   0.47376
35     0.0002430    0.55180   0.00000   0.44820
36     0.0004715    0.52422   0.22389   0.25189
37     0.00005209   0.62260   0.17470   0.20270
Ŷ = 1.867(1 + 0.024x₁)(1 + 1.176x₂)

Recovered data columns (pairing uncertain):
x₂: 0, 1, 1, 2, 2 and 1, 2, 2, 3
levels: 2, 3.75, 4.5, 6, 7
Y: 1.344, 2.972, 4.852, 7.352, 7.017
CHAPTER 8

FIGURE 8.0-1

Statement of Objectives
1. Why is the work to be done? What questions will be answered by the experiment?
2. What are the consequences of a failure to find an effect or to claim one when it does not really exist?
3. What is the experimental space to be covered?
4. What is the time schedule?
5. What is the allowable cost?
6. What previous information is there about the experiment or its results?
7. Is an optimum among the variables sought or only the effect of the variables?

Type of Model(s) to be Used
1. Will empirical or transport phenomena models be used?
2. Is the form of the model correct or is the form to be determined?
3. What will be the independent and dependent variables?

Experimental Program
1. What are the variables to be measured? How will they be measured and in what sequence?
2. Which variables are initially considered most important? Which least important? Can the desired effect be detected?
3. What extraneous or disturbing factors must be controlled, balanced, or minimized?
4. What kind of control of the variables is desirable?
5. Are the variables independent or functions of other variables?
6. How much dispersion can be expected in the test results? Will the dispersion be different at different levels of the variables?

Replication and Analysis
1. What is the experimental unit and how are the experiments to be replicated: all at once, sequentially, or in a group?
2. What are the number and type of tests to be carried out?
3. How are the data to be analyzed and interpreted?
Example 8.1-1 One-Dimensional Experiment

t, Temperature (°F):            60    80   100   120   140
V, Shear Stress (dynes/cm²):    15    17    20    28    45
FIGURE E8.1-1 [V plotted against t with the fitted line V̂ = β₀ + β₁t]

The model is V = β₀ + β₁t + ε. For the coded calculation, let x = (t − 50)/10 and y = V − 15.

Uncoded:
t = [1 60; 1 80; 1 100; 1 120; 1 140]
tᵀt = [5  500; 500  54,000]        tᵀV = [125; 13,920]

Coded:
x = [1 1; 1 3; 1 5; 1 7; 1 9]
xᵀx = [5  25; 25  165]             xᵀy = [50; 392]

TABLE E8.1-1a
Source of Variation                         SS      d.f.  Mean Square  Variance Ratio
Effect of removing b₁                       504.1    1      504.1         16.1
Effect of removing b₀ (with b₁ already
  removed)                                  500      1      500           16.0
Deviation about the empirical
  regression line                            93.9    3       31.3
Total                                      1098.0    5
Note: F₀.₉₅(1, 3) = 10.13

Consequently, the estimated first-order regression equations are, respectively,

Uncoded:  V̂ = −10.5 + 0.355t
Coded:    ŷ = −7.75 + 3.55x

TABLE E8.1-1b
Source of Variation                         SS      d.f.  Mean Square  Variance Ratio
Effect of removing b₁                       504.1    1      504.1         16.1
Effect of removing b₀ (with b₁ already
  removed)                                 3125      1     3125
Deviation about the empirical
  regression line                            93.9    3       31.3
Total                                      3723.0    5
Note: F₀.₉₅(1, 3) = 10.13
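The two regression equations can be checked in a few lines (a sketch in Python; the data are the five (t, V) pairs of Example 8.1-1, and the coding x = (t − 50)/10, y = V − 15 is the one used above):

```python
# Least squares fit of V = b0 + b1*t for the Example 8.1-1 data,
# plus the equivalent fit in the coded variables.
t = [60, 80, 100, 120, 140]
V = [15, 17, 20, 28, 45]
n = len(t)

tbar = sum(t) / n
Vbar = sum(V) / n
Stt = sum((ti - tbar) ** 2 for ti in t)
StV = sum((ti - tbar) * (Vi - Vbar) for ti, Vi in zip(t, V))

b1 = StV / Stt            # uncoded slope
b0 = Vbar - b1 * tbar     # uncoded intercept

x = [(ti - 50) / 10 for ti in t]   # coded factor
y = [Vi - 15 for Vi in V]          # coded response
xbar, ybar = sum(x) / n, sum(y) / n
b1c = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
      sum((xi - xbar) ** 2 for xi in x)
b0c = ybar - b1c * xbar

print(b0, b1)    # approximately -10.5 and 0.355
print(b0c, b1c)  # approximately -7.75 and 3.55
```

The coded slope is simply 10 times the uncoded one, since the coding divides t by 10.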
Example 8.1-2 Replication

Each temperature level is now replicated three times (* = old values).

a Matrix = XᵀX = [1.500 × 10¹   7.500 × 10¹;  7.500 × 10¹   4.950 × 10²]
g Matrix = XᵀY = [1.470 × 10²;  1.153 × 10³]

Regression Coefficients:
bo = -7.616
b1 = 3.483

The corresponding analysis of variance is shown in Table E8.1-2a.

TABLE E8.1-2a
Source of Variation                         SS      d.f.  Mean Square  Variance Ratio
Effect of removing b₁                      1456      1     1456          1560*
Effect of removing b₀ (with b₁ already
  removed)                                 1440      1     1440          1500*
Deviation about the empirical
  regression line                           309.0    3      103.0         110*
Deviation within sets                         9.33  10        0.933
Total                                      3215     15
Note: F₀.₉₅(1, 10) = 4.96
      F₀.₉₅(3, 10) = 3.71
* Significant.
A quick computation provides a result similar to Example 8.1-1, namely the residual sum of squares

309.0 + 9.33 = 318.33

which leads to s² = 318.33/13 = 24.5, compared with 31.3 in Example 8.1-1. The F-test points out that the deviation about the first-order regression line is significantly higher than the experimental error, and that the model can be materially improved.
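The lack-of-fit arithmetic can be sketched as follows (the sums of squares are those of Table E8.1-2a):

```python
# Lack-of-fit F-test: compare the mean square for deviation about the
# regression line with the pure-error mean square from the replicated sets.
ss_lack, df_lack = 309.0, 3    # deviation about the empirical regression line
ss_pure, df_pure = 9.33, 10    # deviation within sets (pure error)

ms_lack = ss_lack / df_lack    # 103.0
ms_pure = ss_pure / df_pure    # 0.933
F = ms_lack / ms_pure          # about 110, versus F_0.95(3, 10) = 3.71

s2 = (ss_lack + ss_pure) / (df_lack + df_pure)  # pooled residual, 318.33/13
print(F, s2)
```

Since F far exceeds the tabulated value, the first-order model is inadequate.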
A second-order model, Y = β₀ + β₁x + β₂x² + ε, can be fitted to the same data:

a Matrix = XᵀX =
[1.500 × 10¹   7.500 × 10¹   4.950 × 10²;
 7.500 × 10¹   4.950 × 10²   3.675 × 10³;
 4.950 × 10²   3.675 × 10³   2.900 × 10⁴]

g Matrix = XᵀY = [1.470 × 10²;  1.153 × 10³;  9.458 × 10³]

Regression Coefficients:
b₀ = 3.210    b₁ = −2.885    b₂ = 0.636

Source of Variation                         SS      d.f.  Mean Square  Variance Ratio
Effect of removing b₁                      1456      1     1456          1560*
Effect of removing b₂ (with b₁ already
  removed)                                  273      1      273           292*
Effect of removing b₀ (with b₁ and b₂
  already removed)                         1440      1     1440          1500*
Deviation about the empirical
  regression line                            36      2       18.2          19.5*
Deviation within sets                         9.33  10        0.933
Total                                      3215     15
Note: F₀.₉₅(1, 10) = 4.96; F₀.₉₅(2, 10) = 4.10
* Significant.

A third-order model, Y = β₀ + β₁x + β₂x² + β₃x³ + ε, might better fit the data and give a nonsignificant F-test for the deviations about the empirical regression line. These calculations will not be shown here in order to save space, but the procedure should now be clear.
Y = β₀x₀ + β₁x₁ + β₂x₂ + ε     (8.1-1)

where x₀ ≡ 1 at all levels, and

x₁ = [t (°F) − 220]/20        x₂ = [p (atm) − 3]/2

Design levels:
x₁:  −1   1  −1   1
x₂:  −1  −1   1   1

For this orthogonal design the least squares estimates are

b₀ = (Σᵢ Yᵢxᵢ₀)/n                 (8.1-2)
b₁ = (Σᵢ Yᵢxᵢ₁)/(Σᵢ xᵢ₁²)         (8.1-3)
b₂ = (Σᵢ Yᵢxᵢ₂)/(Σᵢ xᵢ₂²)         (8.1-4)

[Figure: the 2² design points (±1, ±1) in the coded (x₁, x₂) plane, spanning t = 200–240°F and p = 1–5 atm, together with the rotated design points (±√2, 0) and (0, ±√2) and center points (0, 0).]
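Because the design columns are orthogonal, Equations 8.1-2 through 8.1-4 reduce each estimate to a one-line sum. A sketch (the four response values below are hypothetical, purely for illustration):

```python
# Estimate b0, b1, b2 from a 2^2 factorial via Equations 8.1-2 to 8.1-4.
runs = [(-1, -1), (1, -1), (-1, 1), (1, 1)]   # coded (x1, x2) levels
Y = [52.0, 61.0, 55.0, 64.0]                  # hypothetical responses

n = len(Y)
b0 = sum(Y) / n                                                  # Eq. 8.1-2 (x0 = 1)
b1 = sum(y * r[0] for r, y in zip(runs, Y)) / sum(r[0] ** 2 for r in runs)  # Eq. 8.1-3
b2 = sum(y * r[1] for r, y in zip(runs, Y)) / sum(r[1] ** 2 for r in runs)  # Eq. 8.1-4
print(b0, b1, b2)   # 58.0 4.5 1.5
```

No matrix inversion is needed; orthogonality makes XᵀX diagonal.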
Y = β₀ + β₁x₁ + β₂x₂ + β₁₁x₁² + β₂₂x₂² + β₁₂x₁x₂ + ε     (8.1-5)

For the 2² design augmented with k center points (n + k points in all), the design columns are:

x₀    x₁    x₂    x₁²   x₂²   x₁x₂
1     −1    −1     1     1      1
1      1    −1     1     1     −1
1     −1     1     1     1     −1
1      1     1     1     1      1
1      0     0     0     0      0   (k center points)

The variance of Ŷ at xⱼ is

Var{Ŷⱼ} = σ²_Y [1/(n + k) + x²ⱼ₁/Σᵢx²ᵢ₁ + x²ⱼ₂/Σᵢx²ᵢ₂]
and also yield the same Var{Ŷᵢ} for each design point: Var{Ŷᵢ} = Var{b₀} + 0.375σ²_Y.

Experimental Levels

FIGURE 8.1-5  3² design (nonrotatable):
x₁:  −1   0   1   −1   0   1   −1   0   1
x₂:  −1  −1  −1    0   0   0    1   1   1

Pentagon design:
x₁:  1.000   0.309   −0.809   −0.809    0.309   0
x₂:  0       0.951    0.588   −0.588   −0.951   0
The design consisted of a 2² factorial (x₁, x₂ = ±1), star points at (±1.414, 0) and (0, ±1.414), and replicated center points. The observed responses included Y = 60.141, 54.890, 67.712, 77.870, 78.933, 70.100, 62.020, 79.162, 53.095, 71.328, 38.609, 80.131, and the fitted first-order equation was

Ŷ = b₀ + 12.116x₁ + 9.490x₂

TABLE E8.1-3a
Source of Variation                         SS          Mean Square    Variance Ratio
Effect of removing b₂                        360.3         360.3       Significant
Effect of removing b₁                        587.2         587.2       Significant
Effect of removing b₀                     26,926.1      26,926.1       Significant
Deviation about the empirical
  regression line                          1,103.2         551.6       Significant
Deviation at replicated point                 46.5          23.3
TABLE E8.1-3b (second-order model)
Source of Variation                         SS          Mean Square    Variance Ratio
Effect of removing b₁₂                       130.17        130.17      Not significant
Effect of removing b₂₂                     1,094.86      1,094.86      Significant
Effect of removing b₁₁                       359.94        359.94      Significant
Effect of removing b₂                        886.88        886.88      Significant
Effect of removing b₁                        910.11        910.11      Significant
Effect of removing b₀                     23,567.5      23,567.5       Significant
Deviation about empirical
  regression line                            178.14         59.38      Not significant
Deviation at replicated point                 61.67         20.56
Total (d.f. = 12)
Note: F₀.₉₅(1, 3) = 10.13
The observed responses for the 2² factorial plus center points were Y = 80.8, 85.1, 82.9, 71.9, 82.9, 81.1; the added star points at (±√2, 0) and (0, ±√2) gave Y = 81.7, 82.9, 57.7, 84.7, 83.8, 80.9.

TABLE E8.1-4a
Source of Variation                         SS      d.f.  Mean Square  Variance Ratio
Effect of removing b₀                    39,155.7    1     39,155.7     24,400
Effect of removing b₁                        11.2    1         11.2          7.0
Effect of removing b₂                        30.8    1         30.8         19.2
Deviation about empirical
  regression line                            63.0    2         31.5         19.7
Deviation at replicated point                 1.6    1          1.6
Total                                    39,262.3    6
Note: F₀.₉₅(2, 1) = 200; F₀.₉₅(1, 1) = 161

TABLE E8.1-4b
Source of Variation                         SS      d.f.  Mean Square  Variance Ratio
Effect of removing b₀                    76,225.0    1     76,225.0     Significant
First-order terms                           305.6    2        152.8     Significant
Second-order terms                          184.5    3         61.5     Significant
Deviation about regression line
  (lack of fit)                             157.1    3         52.4     Significant
Deviation at replicated point                 6.0    3          2.0
Total                                    76,880.2   12
Note: F₀.₉₅(3, 3) = 9.28

For reference:
F₀.₉₅(1,1) = 161.00    F₀.₉₅(2,1) = 200.00
F₀.₉₅(1,2) = 18.51     F₀.₉₅(2,2) = 19.00
F₀.₉₅(1,3) = 10.13     F₀.₉₅(2,3) = 9.55
F₀.₉₅(1,4) = 7.71      F₀.₉₅(2,4) = 6.94
F₀.₉₅(1,5) = 6.61      F₀.₉₅(2,5) = 5.79
TABLE 8.1-1

2³ Design:
x₁:  −1   1  −1   1  −1   1  −1   1
x₂:  −1  −1   1   1  −1  −1   1   1
x₃:  −1  −1  −1  −1   1   1   1   1

(Table 8.1-1 also lists the corresponding two-level designs for k = 4 and k = 5, including the fractional replicates marked *, with added center points.)

Y = Σᵢ₌₀ βᵢxᵢ + ε     (8.1-8)

Again, rotatable designs exist which can be used in conjunction with Equation 8.1-9. They can be described geometrically as the vertices of an icosahedron (12 points), a dodecahedron (20 points), and a so-called central composite figure (14 points), the latter being formed from the vertices of a cube (8 points) plus the vertices of an octahedron (6 extra points).

Y = Σᵢ₌₀ βᵢxᵢ + Σᵢ Σⱼ βᵢⱼxᵢxⱼ + ε     (8.1-10)

TABLE 8.1-2
TABLE E8.1-5a  Coded Levels (central composite design, k = 3)

2³ factorial (cube): all combinations of x₁, x₂, x₃ = ±1 (8 points)
Octahedron vertices ("star"): (±1.68, 0, 0), (0, ±1.68, 0), (0, 0, ±1.68) (6 points)
Center points: (0, 0, 0)

For k = 4: a 2⁴ factorial plus a star of 8 points; for k = 5: a 2⁵ factorial plus a star of 10 points at ±2.378 on each axis (the star block is the diagonal matrix 2.378 I₅ and its negative).

The responses measured on each film were:
1. Tensile strength.
2. Tear strength (tab tear).
3. Tear strength (Elmendorf tear).
4. Elongation.
5. Initial modulus.
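The coded levels of Table E8.1-5a can be generated mechanically; a sketch (the point counts, not the run order, are what matter here):

```python
from itertools import product

# Build a central composite design: a 2^k factorial cube, 2k axial ("star")
# points at +/-alpha, and replicated center points.
def central_composite(k, alpha, n_center=1):
    cube = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    star = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            star.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return cube + star + center

design = central_composite(3, 1.68)   # the k = 3 design of Table E8.1-5a
print(len(design))                    # 8 + 6 + 1 = 15 points
```

Calling `central_composite(5, 2.378)` gives the k = 5 design described above: 32 factorial points plus a 10-point star.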
[FIGURE 8.1-4  Parts (a), (b), (c): PVA-H film experiments with blocking.]

TABLE E8.1-5b

Film   Tensile Strength (psi)   Tab Tear   Blocking (duplicate readings)
 1      4,920                    31         3.5, 3.0
 2      5,690                    33         4.0, 3.0
 3      4,920                    24         3.0, 3.0
 4      4,920                    25         3.0, 3.5
 5      4,770                    23         3.5, 4.5
 6      5,500                    38         3.0, 2.5
 7      4,920                    47         2.5, 2.5
 8      7,230                    49         3.0, 3.0
 9      6,620                    55         3.0, 3.0
10      5,830                    56         3.5, 3.5
11      6,920                    18         2.5, 4.0
12      6,620                    19         3.0, 3.5
13      6,770                    19         3.0, 3.0
14     10,170                    18         3.0, 3.0
15      6,670                    18         3.5, 3.0
16      5,690                    24         3.5, 4.0
17      5,330                    24         4.5, 3.0
18      7,540                    20         3.5, 4.0
19      5,080                    25         3.5, 3.0
20      5,080                    23         3.0, 3.5

For films 1–4: solution time (min) = 20, 15, 20, 20; elongation at break (percent) = 219, 234, 243.5, 253; modulus × 10⁻⁶ (psi) = 0.12, 0.12, 0.15, 0.09.

Example 8.1-6

[Hexagonal design run with blocking: the matrix of independent variables has x₀ = 1 and (x₁, x₂) columns at (1.000, 0), (−0.500, 0.866), (−0.500, −0.866), (0.500, 0.866), (0.500, −0.866), (−1.000, 0), plus replicated center points; the vector of observations included Y = 96.0, 76.7, 64.8, 97.4, 93.0, 78.7, 54.6, 78.9, 90.5, 86.3.]
TABLE E8.1-6a
Source of Variation                         SS          Variance Ratio
Due to removing b₀                       36,551
Due to removing b₁                          431.7
Due to removing b₂                           70.8
Deviation about empirical regression
  line                                      370.2        38.24
Deviation at replicated point (error)         9.68
Total                                    37,433
Note: F₀.₉₀(1, 1) = 39.86

A second block of observations, Y = 96.0, 68.7, 76.7, 44.6, 64.8, 68.9, 97.4, 80.5, 93.0, 76.3, was taken at the same hexagon and center settings, and the blocked analysis of variance was:

Source of Variation                         SS          Variance Ratio
Due to blocks (d.f. = 1)                    790.3       Significant
Due to removing b₀                       58,813         Significant
Due to removing linear terms                849.3       Significant
Due to removing second-order terms          718.4       Significant
Deviation about empirical regression
  line                                       18.5       Not significant
Deviation at replicated point (error)         9.25      (mean square 1.60)
Total                                    61,191
8.1-5 Practical Considerations
Ŷ − Ŷₛ = b₁₁X₁² + b₂₂X₂²     (8.2-1)

FIGURE 8.2-1 Transformation of coordinates into the principal axes: translation followed by rotation. (Canonical coordinates are the heavy lines.)

FIGURE 8.2-2 Contours for second-order models with two independent variables. (Adapted from G. E. P. Box, Biometrics 10, 16, 1954.)
TABLE 8.2-1

(Ŷ − Ŷₛ) = b₁₁X₁² + b₂₂X₂²; for three variables, (Ŷ − Ŷₛ) = b₁₁X₁² + b₂₂X₂² + b₃₃X₃².

Case  Signs  Relations   Type of Curves   Geometric Interpretation   Center        Figure 8.2-2
1     − −    b₁₁ = b₂₂   Circles          Circular hill              Maximum       (a)
2     + +    b₁₁ = b₂₂   Circles          Circular valley            Minimum       (a)
3     − −    b₁₁ > b₂₂   Ellipses         Elliptical hill            Maximum       (b)
4     + +    b₁₁ > b₂₂   Ellipses         Elliptical valley          Minimum       (b)
5     + −    b₁₁ = b₂₂   Hyperbolas       Symmetrical saddle         Saddle point  (c)
6     − +    b₁₁ = b₂₂   Hyperbolas       Symmetrical saddle         Saddle point  (c)
7     + −    b₁₁ > b₂₂   Hyperbolas       Elongated saddle           Saddle point  (d)
8            b₂₂ = 0     Straight lines   Stationary ridge           None          (e)
9            b₂₂ = 0     Parabolas        Rising ridge               At infinity   (f)
The fitted second-order model can be written in matrix notation as

Ŷ = b₀ + xᵀb₁ + xᵀb₁₁x     (8.2-4)

where b₁ is the vector of first-order coefficients and b₁₁ the symmetric matrix of second-order coefficients. Setting the gradient to zero,

∂Ŷ/∂x = b₁ + 2b₁₁x = 0     (8.2-6)

x_extremum = −½ b₁₁⁻¹b₁ ≡ xₑ     (8.2-7)

Ŷ_extremum ≡ Ŷₑ = b₀ + b₁ᵀxₑ + xₑᵀb₁₁xₑ     (8.2-8)

with xₑᵀb₁₁xₑ = −½ xₑᵀb₁ (8.2-5), and b₁₁⁻¹ = (adjoint of b₁₁)/|b₁₁| (8.2-9, 8.2-10). Translating the origin to the center, x̃ = x − xₑ, gives

Ŷ − Ŷₑ = x̃ᵀb₁₁x̃     (8.2-12)

and rotation into the principal axes X̃ through the orthogonal matrix V, x̃ = VX̃, yields

Ŷ − Ŷₑ = X̃ᵀVᵀb₁₁VX̃ = λ₁X̃₁² + λ₂X̃₂² + ⋯

Example 8.2-1

For the fitted model with

b₁₁ = [ 7  −2   0
       −2   6  −2
        0  −2   5 ]

|b₁₁| = 162 ≠ 0, so a unique center exists; Equation 8.2-7 gives the translated coordinates x̃ᵢ = xᵢ − xᵢₑ (for example, x̃₂ = x₂ − 2), with Ŷₑ = 18. The eigenvalues follow from det(b₁₁ − λI) = 0:

| (7 − λ)   −2       0    |
|  −2     (6 − λ)   −2    | = 0
|   0       −2     (5 − λ)|

λ₁ = 3,   λ₂ = 6,   λ₃ = 9

For λ = 3, (b₁₁ − 3I)u = 0 gives 4u₁ − 2u₂ = 0, −2u₁ + 3u₂ − 2u₃ = 0, −2u₂ + 2u₃ = 0, so u ∝ (1, 2, 2); similar calculations for λ = 6 and λ = 9 give the remaining columns of V. Consequently,

Ŷ − 18 = 3X̃₁² + 6X̃₂² + 9X̃₃²     (a)

where the X̃ᵢ are the principal axes. The initial and final stages of the translation and rotation are illustrated graphically in Figure E8.2-1.

(b) The new coordinates X̃ can be related to the old coordinates x through the unitary matrix V and the transformation x̃ = VX̃.
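The eigenvalues in Example 8.2-1 can be verified numerically; the following Jacobi-rotation sketch (a standard numerical method, not the book's hand calculation) diagonalizes the symmetric b₁₁ matrix:

```python
import math

# Classical Jacobi method: repeatedly zero the largest off-diagonal element
# of a symmetric matrix with a plane rotation; the diagonal converges to the
# eigenvalues (the canonical coefficients lambda_i).
def jacobi_eigenvalues(A, max_rotations=100, tol=1e-12):
    A = [row[:] for row in A]
    n = len(A)
    for _ in range(max_rotations):
        p, q, big = 0, 1, 0.0
        for i in range(n):                 # find largest off-diagonal entry
            for j in range(i + 1, n):
                if abs(A[i][j]) > big:
                    big, p, q = abs(A[i][j]), i, j
        if big < tol:
            break
        theta = 0.5 * math.atan2(2.0 * A[p][q], A[p][p] - A[q][q])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                 # rotate rows p and q
            apk, aqk = A[p][k], A[q][k]
            A[p][k] = c * apk + s * aqk
            A[q][k] = -s * apk + c * aqk
        for k in range(n):                 # rotate columns p and q
            akp, akq = A[k][p], A[k][q]
            A[k][p] = c * akp + s * akq
            A[k][q] = -s * akp + c * akq
    return sorted(A[i][i] for i in range(n))

b11 = [[7.0, -2.0, 0.0],
       [-2.0, 6.0, -2.0],
       [0.0, -2.0, 5.0]]
eigs = jacobi_eigenvalues(b11)
print(eigs)   # close to [3.0, 6.0, 9.0], as in Equation (a)
```

Accumulating the rotations would give the matrix V as well.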
Example 8.2-2

Ŷ = 2x₁² + 2x₂² + 3x₃² + 4x₁x₂ + 2x₁x₃ + 2x₂x₃ + 4x₁ + 6x₂ − 2x₃     (a)

Solution:
In matrix notation, Equation (a) is equivalent to Equation 8.2-4 with

b₁₁ = [ 2  2  1
        2  2  1
        1  1  3 ]

det(b₁₁ − λI) = 0 has roots λ = 0, 2, and 5, with normalized eigenvectors

λ = 0:  u = (1, −1, 0),   norm = √(1 + 1) = √2
λ = 2:  u = (1, 1, −2),   norm = √(1 + 1 + 4) = √6
λ = 5:  u = (1, 1, 1),    norm = √(1 + 1 + 1) = √3

U = [ 1/√2   1/√6   1/√3
     −1/√2   1/√6   1/√3
      0     −2/√6   1/√3 ]

so that

Ŷ = 2X̃₂² + 5X̃₃² + linear terms     (d)

Since one eigenvalue is zero, the surface is a ridge: the response varies only linearly along the corresponding principal axis.

Example 8.2-3

TABLE E8.2-3  Design values for the six coded variables, including wavelength (4000, 4500, 5000) and the level sets (2.5, 7.5), (60, 30, 45), (0.140, 0.100, 0.120), (330, 340, 350), (150, 50, 100), (280, 310, 340), (10, 19, 28).

The fitted equation in coded variables had the form

t̂ − 349.96 = −1.388x₁² − 0.064x₂² − 1.201x₃² + 0.604x₅ + 12.049x₄² + 9.776x₅² + 7.064x₄ + 7.602x₆² + 0.655x₆ − 0.0428x₃ + 1.847x₅x₆ + ⋯     (b)

with center x₄ₑ = 1.84, x₅ₑ = 4.73, x₆ₑ = 2.87 and t̂ₑ = 349.96. A second reduction in the (x₁, x₂, x₄) subspace gave

t̂ − 353.426 = −0.7477X̃₁² + 11.6025X̃₂² + 1.4812X̃₃² + ⋯     (d)

with center x₁ₑ = −0.643, x₂ₑ = 0.707, x₄ₑ = −0.734 and principal axes

X̃₁ = 0.538(x₁ + 0.643) + 0.681(x₂ − 0.707) + 0.496(x₄ + 0.734)
X̃₂ = 0.568(x₁ + 0.643) + 0.142(x₂ − 0.707) − 0.811(x₄ + 0.734)
X̃₄ = 0.623(x₁ + 0.643) − 0.718(x₂ − 0.707) + 0.310(x₄ + 0.734)     (e)

Rearranging the dominant term, t̂ − 353.4 = 11.6025X̃₂² + ⋯ (f), gives X̃₂² = (t̂ − 353.4)/11.60 (g, h), from which contours of constant t̂ can be traced (i, j).
FIGURE [E8.2-3]  Contour t̂ − t̂ₛ = 30 sketched (a) in the three coded coordinates and (b) projected: x₁ or x₂ correspond to channel depth (+1 = 0.14 in.) and screw speed (−1 = 30 rpm), and x₄ is flow rate (−1 = 50 lb/hr).
The change in response for steps s₁, s₂, … in the independent variables is

δY = (∂Y/∂x₁)s₁ + (∂Y/∂x₂)s₂ + ⋯

The gradient has length

‖∇Y‖ = [Σⱼ (∂Y/∂xⱼ)²]^½     (8.3-1)

and, for the fitted first-order model, the component of the step along the normalized gradient ∇Y/‖∇Y‖ is

(b₁s₁ + b₂s₂)/(b₁² + b₂²)^½     (8.3-2)
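Equation 8.3-2 in code form (a sketch; the three slopes used below are the b₁, b₂, b₃ of Example 8.3-1):

```python
import math

# Direction of steepest ascent: the fitted first-order coefficients,
# normalized to a unit vector (Equation 8.3-2, generalized to three factors).
def ascent_direction(b):
    norm = math.sqrt(sum(bi * bi for bi in b))
    return [bi / norm for bi in b]

d = ascent_direction([215.0, 191.0, 245.0])
print(d)   # unit vector; steps along it increase Y fastest (to first order)
```

In coded units, each new trial is placed at the current center plus a chosen multiple of this direction.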
FIGURE [8.3-2]  (a) Extreme extrapolation beyond the region of experimentation; (b) the region of experimentation in the (x₁, x₂) plane.

For the second-order surface the stationary point is classified by:

for a maximum:  B² − AC < 0  and  A + C < 0
for a minimum:  B² − AC < 0  and  A + C > 0     (8.3-5)
.....
Viscosity (Xl)
Pressure (X2)
Flow rate (X3)
Xt - 50
Zt=~
Xl
X2
X3
786.8
744.1
642.6
684.9
142.7
955.9
1123.5
50.0
50.0
50.0
50.0
25.0
75.0
75.0
25.0
50.0
50.0
50.0
50.0
25.0
75.0
25.0
75.0
50.0
50.0
50.0
50.0
25.0
25.0
75.0
75.0
lQ7~.6
S2
e --
Se
n - 1
= 4030
=,63
bo
b-x,
+ b2X2 +
b3X3
(b)
2011.0
2432.0
2345.6
2391.7
2449.6
2833.7
2494.3
2629.0
2458.0
2129.9
2121.4
2389.5
81.0
89.0
81.0
89.0
81.0
89.0
81.0
89.0
85.0
85.0
85.0
85.0
81.0
81.0
89.0
89.0
81.0
81.0
89.0
89.0
85.0
85.0
85.0
85.0
81.0
81.0
81.0
81.0
89.0
89.0
89.0
89.0
85.0
85.0
85.0
8"5.0
Y=
--
t )2
From the first four data sets, the estimate of the error
variance O'~ can be calculated as
L Y? _ (2: Y
1 to 100
1 to 100
o to 100
--
2SS
(a)
were
b = 823
bl = 215
b2 = 191
b3 = 245
(c)
TABLE E8.3-1a
Source of Variation        SS          ν = d.f.   Mean Square
Due to regression        311,684         3         103,894
Deviation of residuals    76,788         4          19,197
Experimental error        91,326         3          30,440     (d)

TABLE E8.3-1b
Source of Variation        SS             Variance Ratio*
x₀ removed (intercept)   4.7957 × 10⁷     Significant
x₁ removed               1.2142 × 10⁵     Significant
x₂ removed               2.2512 × 10²     Not significant
x₃ removed               1.8801 × 10⁵     Significant

* Using the pooled variance s²ₚ = (76,788 + 91,326)/(4 + 3) = 24,015 and F₀.₉₅(1, 7) = 5.59.

[Additional runs at levels 95.0–100.0 gave responses 3284.9, 3349.2, 3982.2, 3483.9, 3861.5, 3821.0, 3471.7, 3961.5.]
FIGURE 8.3-3 Cost contours ($51–$148.20/kg) and yield contours (64.5–65 percent) as a function of two complexing agents, A and B, for a certain antibiotic. (Adapted from E. E. Lind, J. Goldin, and J. B. Hickman, Chem. Eng. Progr. 56 (11), 62, 1960, with permission of the publisher, The American Institute of Chemical Engineers.)
FIGURE 8.3-4  One cycle of a 2² EVOP design with center point. The five operating conditions are the center (Y₁) and the four corners (Y₂ through Y₅); for the example shown, time runs from 2 hr 55 min (175 min) through 3 hr (180 min) to 3 hr 5 min (185 min), and the tabulated responses are yield, particle size, and percent solvent recovered.

The effects for one cycle are computed as

time effect = ½(Y₃ + Y₄ − Y₂ − Y₅)     (8.3-6a)
temperature effect = ½(Y₃ + Y₅ − Y₂ − Y₄)     (8.3-6b)
interaction effect = ½(Y₂ + Y₃ − Y₄ − Y₅)     (8.3-6c)
change in mean effect = ⅕(Y₂ + Y₃ + Y₄ + Y₅ − 4Y₁)     (8.3-6d)

The standard deviation is estimated from the accumulated squared differences D²ᵢⱼ between each observation and its cycle mean, starting with the second cycle and continuing up to the most recent cycle but one.
At the end of a few cycles, one can get a feeling for the effect on the response of each variable and decide either to accumulate more information or perhaps to make a specific change in one of the controlled variables so as to:

1. Start a new phase about a new center point chosen in the optimal direction.
2. Explore in the optimal direction.
3. Select levels of variables more widely spaced.
4. Substitute new variables for one or more of the previous ones.

When several simultaneous responses are tallied, a compromise is needed. For example, if at the end of several cycles the effect of the design is to: (1) reduce the cost per batch by 0.3 ± 0.4 units, (2) reduce the yield by 0.5 ± 0.06 percent, and (3) increase the particle size by 1.5 ± 1.1 units, it will be necessary to assess the deleterious effects of (2) and (3) compared to the advantage of (1) through use of an objective cost evaluation or subjective judgment. For a given significance level α and operating characteristic β, the number of cycles required to detect an effect of a variable depends on the allowable increase in variability in the process. For a 30-percent increase in σ, four or five cycles are needed for a 2² factorial design, and two to three cycles are needed for a 2³ factorial design if α and β are about 0.05 to 0.10 each.†
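One cycle's worth of EVOP arithmetic can be sketched as follows (the five responses are hypothetical yields; their assignment to the center and corners is an assumption for illustration):

```python
# Effects for a 2^2 EVOP cycle with a center point, Equations 8.3-6a-d.
# Y1 is the center reference; Y2..Y5 are the four corner responses.
Y1, Y2, Y3, Y4, Y5 = 75.4, 83.8, 83.2, 82.8, 75.8   # hypothetical yields

time_effect        = 0.5 * (Y3 + Y4 - Y2 - Y5)            # Eq. 8.3-6a
temperature_effect = 0.5 * (Y3 + Y5 - Y2 - Y4)            # Eq. 8.3-6b
interaction_effect = 0.5 * (Y2 + Y3 - Y4 - Y5)            # Eq. 8.3-6c
change_in_mean     = 0.2 * (Y2 + Y3 + Y4 + Y5 - 4 * Y1)   # Eq. 8.3-6d

print(time_effect, temperature_effect, interaction_effect, change_in_mean)
```

In routine EVOP these sums are entered on a worksheet each cycle and compared with the running estimate of the standard error.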
† W. Spendley, G. R. Hext, and F. R. Himsworth, Technometrics 4, 441, 1962.

In the simplex method, the vertex with the worst response, xⱼ, is rejected and replaced by its mirror image through the centroid of the remaining q vertices:

x_new = (2/q) Σᵢ≠ⱼ xᵢ − xⱼ     (8.3-7)

The centroid of the new simplex is then

C_new = [Σᵢ xᵢ − xⱼ + x_new]/(q + 1)     (8.3-8)

and the response at the new vertex can be anticipated by linear extrapolation,

Ŷ_new = (2/q) Σᵢ≠ⱼ Yᵢ − Yⱼ     (8.3-9)
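The reflection rule of Equation 8.3-7 in code (a sketch; the triangle below is an arbitrary starting simplex in two factors):

```python
# Replace the worst vertex of a simplex by its mirror image through the
# centroid of the remaining q vertices (Equation 8.3-7).
def reflect(vertices, worst):
    q = len(vertices) - 1
    keep = [v for i, v in enumerate(vertices) if i != worst]
    centroid = [sum(coord) / q for coord in zip(*keep)]
    return [2.0 * c - x for c, x in zip(centroid, vertices[worst])]

simplex = [[0.0, 0.0], [1.0, 0.0], [0.5, 0.866]]   # starting triangle
new_vertex = reflect(simplex, 0)                    # vertex 0 gave the worst Y
print(new_vertex)   # [1.5, 0.866]
```

Repeating the rule drives the simplex across the response surface toward improving conditions.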
The experimental program for one cycle is the ordered list of trial vectors x⁽⁰⁾, x⁽⁰⁾ ± δ⁽ⁿ⁾, … together with their responses Yⱼ⁽ⁿ⁾ (8.3-10).

The δ's can be evaluated in almost any way and may constantly be reduced from cycle to cycle, or they can grow and recede depending upon the accumulated successes or failures in improving Y, as in the Hooke and Jeeves technique. Here the δ's are reduced as follows:

δₖ⁽ⁿ⁾ = (−1)ⁿ⁺¹ δₖ⁽¹⁾/n^½     (8.3-11)

with δₖ⁽¹⁾ = 0.1(uₖ − lₖ). As an initial condition for the x's, we let the starting vector lie at the center of the allowable ranges (8.3-12). A series of Yⱼ⁽ⁿ⁾'s is obtained from the series of experiments indicated by the list 8.3-10. From these Yⱼ⁽ⁿ⁾'s, a decision is reached as to how to modify the x's for the next cycle according to rule 8.3-13. The symbol Δₖ⁽ⁿ⁾ is determined on the nth cycle as +1, −1, or 0 according as the difference between the two responses obtained by perturbing xₖ is positive, negative, or zero (8.3-14), and the step multipliers are reduced in the same fashion as the δ's:

aₖ⁽ⁿ⁾ = (−1)ⁿ⁺¹ aₖ⁽¹⁾/n^½     (8.3-15)

From δₖ⁽ⁿ⁾, Δₖ⁽ⁿ⁾, and aₖ⁽ⁿ⁾ the next settings follow as

xₖ(new) = xₖ⁽ⁿ⁾ + (h − 1)Δₖ⁽ⁿ⁾aₖ⁽ⁿ⁾     (8.3-17)
TABLE E8.3-2a
n    δ₁⁽ⁿ⁾   δ₂⁽ⁿ⁾    δ₃⁽ⁿ⁾    a₁⁽ⁿ⁾   a₂⁽ⁿ⁾    a₃⁽ⁿ⁾
1    100     0.20     0.5      200     0.4      1
2    −80    −0.20    −0.4     −120    −0.25    −0.6
3     80     0.15     0.4       80     0.20     0.4
4    −70    −0.15    −0.35     −70    −0.14    −0.4

Example 8.3-2

The three variables are x₁ = wavelength (limits 4000 and 5000, initial value 4500), x₂ (initial value 1), and x₃ (limits 2.5 and 7.5, initial value 5); the table gives the values (rounded off) for δⁿ and aⁿ required in the calculations. For example,

x₃⁽²⁾ = 5 + 1(−1.0) = 4

and further steps of the form xₖ + (4 − 3)(−0.4), 5 + (4 − 3)(−1) = 2 follow; when a computed value falls outside its limits, 0 is used instead. The response obtained for the second cycle was Ŷ⁽²⁾ = 4140.
TABLE E8.3-2b  [Trial-by-trial record over six cycles: for each cycle n and step j, the settings x₁⁽ⁿ⁾ (4500 down to about 3100–3980), x₂⁽ⁿ⁾ (1.35 down to 0), x₃⁽ⁿ⁾ (5.5 down to 0.6), and the extinction Yⱼ⁽ⁿ⁾ (from 2850 up to 4800), using the δ and a values of Table E8.3-2a.]

Derivative estimates from the accumulated points include, for example,

(4350 − 3400)/160 = 5.94        (4700 − 4010)/(−140) = −4.92

FIGURE E8.3-2a  [x₁⁽ⁿ⁾, x₂⁽ⁿ⁾, x₃⁽ⁿ⁾, and Y⁽ⁿ⁾ plotted against cycle n and step j.]
FIGURE E8.3-2b  [Approximate curve for estimation of the derivative of extinction with respect to x₁⁽ⁿ⁾ over 3500–4500.]
Yᵢ = η(xᵢ, β) + εᵢ,   i = 1, 2, …, n     (8.4-1)

with sensitivity coefficients ∂η/∂βⱼ, j = 1, 2, …, m. The prior probability density for the β's is taken as multivariate normal about the vector of initial estimates b⁽⁰⁾ with covariance matrix Q, carrying the normalizing factor (2π)^(m/2)|Q|^½. After the (n + 1)st observation, Bayes' rule gives the posterior density

pₙ₊₁(β | Yₙ₊₁) = L(β | Yₙ₊₁)pₙ(β) / ∫⋯∫ L(β | Yₙ₊₁)pₙ(β) dβ     (8.4-2)

where the likelihood of the observations is

L(β | Yₙ₊ₙ′) ∝ exp{−(1/2σ²_Y) Σᵢ [Yᵢ − ηᵢ(β, xᵢ)]²}     (8.4-6)

Introduction of Equations 8.4-5 and 8.4-6 into Equation 8.4-2 yields the desired posterior probability density for the β's. To make the integrations tractable, the model is linearized about the current estimates b:

ηᵢ(β, xᵢ) ≈ ηᵢ(b⁽ⁿ⁾, xᵢ) + Σⱼ₌₁ᵐ (βⱼ − bⱼ)Xᵢⱼ,    Xᵢⱼ = ∂ηᵢ/∂βⱼ     (8.4-8)

X = [X₁₁  X₁₂  ⋯  X₁ₘ
     X₂₁  X₂₂  ⋯  X₂ₘ
      ⋮
     Xₙ₊ₙ′,₁  Xₙ₊ₙ′,₂  ⋯  Xₙ₊ₙ′,ₘ]

so that Σᵢ₌₁ⁿ⁺ⁿ′ (Yᵢ − ηᵢ)² contains the quadratic form (β − b*)ᵀXᵀX(β − b*), and

pₙ₊ₙ′(β | Yₙ₊ₙ′) = k* exp{−(1/2σ²_Y)(β − b*)ᵀXᵀX(β − b*)}     (8.4-10)

where | | signifies the determinant |XᵀX|, which governs the size of the joint confidence region. If the posterior probability density Equation 8.4-9 is maximized with respect to the β's, the estimates b* are obtained; the next experimental point can then be chosen to make |XᵀX| as large as possible, shrinking the confidence region.
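The determinant criterion can be sketched for a two-parameter model (the candidate settings and the linear sensitivity function below are hypothetical, purely for illustration):

```python
# Sequential design: among candidate settings, pick the next experiment that
# maximizes |X^T X| for the linearized model (m = 2 parameters).
def det_xtx(rows):
    a = b = c = 0.0
    for x1, x2 in rows:           # accumulate X^T X = [[a, b], [b, c]]
        a += x1 * x1
        b += x1 * x2
        c += x2 * x2
    return a * c - b * b

def next_point(done, candidates, sens):
    rows = [sens(x) for x in done]
    return max(candidates, key=lambda x: det_xtx(rows + [sens(x)]))

sens = lambda x: (x, x * x)       # hypothetical sensitivities d(eta)/d(beta_j)
done = [0.5, 1.0]                 # settings already run
candidates = [0.25, 0.75, 1.5, 2.0]
best = next_point(done, candidates, sens)
print(best)   # 2.0: the extreme setting grows the determinant fastest
```

This reproduces the behavior noted in Example 8.4-1: the chosen points tend to fall at the extremes of the allowable ranges.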
Example 8.4-1

For the two-parameter model η = [1 − exp(b₁x₁)](1 − b₂x₂), the elements of X are

Xᵢ₁ = −xᵢ₁[exp(b₁xᵢ₁)](1 − b₂xᵢ₂)
Xᵢ₂ = −xᵢ₂[1 − exp(b₁xᵢ₁)]     (c)

and for two points the determinant to be maximized is |X| = X₁₁X₂₂ − X₁₂X₂₁ (b, e). The first two observations were Y₁ = 0.646 at x₁ = 1.10, x₂ = 144 and Y₂ = 0.194 at x₁ = 1.10, x₂ = 0, with initial estimates b₁⁽⁰⁾ = −0.944 and b₂⁽⁰⁾ = 4.86 × 10⁻³; after three iterations, b₁⁽³⁾ = −0.937 and b₂⁽³⁾ = 4.85 × 10⁻³.

The effect of initial guesses for the x's on the final design is shown in Table E8.4-1a. As might be expected, the design falls at the extremes of the range of the x's: whatever the starting guesses (x₁ over 0.17–1.10 and x₂ over 0–144), the final design settles at x₁ = 1.10 with x₂ = 0 and 144, with Δ⁽³⁾ × 10⁻³ between 2.26 and 2.62.
[Further entries of Tables E8.4-1a and E8.4-1b: varying the initial guesses for x₃ and x₄ over 0.17–1.10 and 0–144, and the prior variances α₁₁, α₂₂ over 0.5–500, left the final design essentially unchanged, the x values settling at 1.04–1.10 and at 0 or 137–144, with Δ⁽⁴⁾ × 10⁻³ between 3.96 and 5.28 and b₂ = 4.96 × 10⁻³.]

† J. R. Kittrell, W. G. Hunter, and C. C. Watson, AIChE J. 12, 5, 1966.
where
r
k
K =
p =
'.
ONE-VARIABLE-AT-A-TIME DESIGN
NO
H2
Simulated
Observed Rates,
0.00922
0.0136
3
4
5
6
7
8
9
10
11
12
0.0280
0.0291
0.0389
0.0485
0.0500
0.0500
0.0500
0.0500
0.0500
0.0500
0.0500
0.0500
0.0500
0.0500
0.0500
0.0500
0.00918
0.0184
0.0298
0.0378
0.0491
2.01
2.52
3.10
3.65
3.82
3.82
4.90
2.02
2.83
3.75
4.32
4.53
Run
Number
0~-t)197
TABLE E8.4-2b RESULTS OF THE UNPLANNED DESIGN COMPARED WITH THOSE FROM THE
SEQUENTIAL DESIGN
k x 104 g-moles
(min)(g catalyst)
K H 2 , atm- 1
*2.9 0.92
t4.7 0.53
38.0 26.1
16:9 4.0
38.0 24.9
20.2 4.2
* The
numbers
ePmin
g-moles
]
x 1011 [ (min)(g
catalyst)
4.62
6.88
. . JXT~J~
5.02 x 10- 1 4
9.02 x 10- 1 3
t Sequential design.
220 24g"
K NO
///
/
'c
\ k = 1.85 X)0-4
\ k = 1.~2 X10- 4
200
220
1//
-----------------------y
KHz
FIGURE E8.4-2a Approximate 95-percent contour for the joint confidence region derived from unplanned experiments. Contours shown on the surface are the loci of lines of constant k in the surface. (Reproduced from J. R. Kittrell, W. G. Hunter, and C. C. Watson, AIChE J. 12, 5, 1966, with permission of the publisher, the American Institute of Chemical Engineers.)
The square root of the determinant Δ is inversely proportional to the size of the confidence region. Figure E8.4-2c contrasts the change in the relative volume of the confidence region under the sequential design procedure with the constant volume of the unplanned design. It can be seen how the sequential design improves the estimates after the fifth point (i.e., the first point chosen by the criterion of maximizing Δ).
5.0 r--------r------r----::---,------,- - - - - , - - - ,
~
>-
'Iii
4.0
a
C>
::::::
Cl>
0
E
3.0
K NO
2l
A - - 2.90 X 10- 4
B - - - - 2.07 x 10- 4
C- - -1.85 X 10- 4
1.0
'Bee
38.0
99.3
203.5
O' - -
PNO, atm
PH" atm
0.01
0.01
0.03
0.03
0.10
0.06
0.10
0.10
0.04
0.10
0.10
0.03
OO'
0.03 } Initial
0.01 design
0.03
0.06
0.10
0.03
0.10
0.10
0.03
0.10
0.03
6.1 x 10'5
0.0003
0.022
0.498
1.14
1.96
3.15
4.60
6.93
8.15
KH
- -'
38.0
111.5
194.0
Cl>
0:
1
2
3
4
5
6
7
8
9
10
12
0
'".....
2.0
Run .
Number
11
C>
---'
0.12
(b)
Volume after 12
[~r:-~-a-time runs
Volume by sequentially
maximizing Al
10 11 12 13
Run number
(c)
FIGURE
E8.4-2c
region.
5
4
10
..
=5.43 X 10-4
=4.70 X 10-4
7]1 =
k = 4.06 X 10- 4
20
7]2
+ f33
We shall designate each response by
60
40
e- 1J 2 x
f31
f32 X2
1 5:. r 5:. v
K NO
20
1 5:. i 5:. n
30
40
50
60
KH2
/
E8.4-2d Confidence region after 12 runs using planned
experiments. (Reproduced from J. R. Kittrell, W. G. Hunter,
and C. C.Watson, AIChE J. 12, 5,1966, with permission of the
publisher, The American Institute of Chemical Engineers.)
FIGURE
0.12,..-----r----.----ro----r--__- _
0.10
+-'
co
0.08
0"
.....0
0.06
e
~
e0.
0.04
co
'E
co
0-
CD
rs)
L((3 I Yn, x, a
Ir- 1I n / 2
=
l-2 r~ ;S
r
(27T)n q /2 exp
1 ~
arsVrs
where
CD
0.02
0.04
0.06
0.08
0.10
VAn)
0.12
(8.4-11)
CD
I'
0.02
2: [Y
n -
7)rl((3, XI)]
i=1
t N.
.1
I Yn +n')
(27Tym+ <n+n')vl/2
-exp
[-4 ~ ~ ~'VT~n+n. )]
(8.4-12)
Note the analogy to Equation 8.4-7.
The remainder of the development is exactly the same as in Section 8.4-1. To maximize p_{n+n'} with respect to β and the vector of new values of the x's for i = n + 1, ..., n + n*, we introduce the equivalent of Equation 8.4-8 into Equation 8.4-12, except that now we must expand each model as
(f3J - bnXT ii
X:,<n+n')1
dTI
X ,l m
X T , 2m
X T 21
XT -
.,',
X r,<n+n')2
j
!
(8.4-14)
which is proportional to the square of the normalization factor that would be obtained after Equation 8.4-13 is introduced into Equation 8.4-12. All the derivatives in X must be evaluated at b^(n), and the elements a_rs are presumed known. Criterion 8.4-14 reduces to the criterion developed by Draper and Hunter and others if the elements of Ω^{-1} are zero. Equation 8.4-14 also appeals to common sense because it weights the X matrices of each model inversely proportional to the error associated with the model.
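The weighting behavior of such a criterion can be illustrated numerically. The sketch below is hypothetical (invented sensitivity matrices X_r and error covariances, not values from the text); it forms det[Σ_r Σ_s a_rs X_r^T X_s], with [a_rs] the inverse of the error covariance matrix, so the X matrix of a noisier model is down-weighted:

```python
import numpy as np

# Hypothetical sensitivity matrices for two rival models: four planned runs,
# three coefficients each (the derivatives would be evaluated at b^(n)).
X1 = np.array([[1.0, 0.2, 0.1], [1.0, 0.5, 0.3], [1.0, 0.8, 0.6], [1.0, 1.1, 0.9]])
X2 = np.array([[1.0, 0.1, 0.4], [1.0, 0.4, 0.2], [1.0, 0.9, 0.5], [1.0, 1.2, 0.7]])

def criterion(X_list, error_cov):
    """det of sum_{r,s} a_rs X_r^T X_s, where [a_rs] = inverse error covariance."""
    a = np.linalg.inv(error_cov)
    m = X_list[0].shape[1]
    M = np.zeros((m, m))
    for r, Xr in enumerate(X_list):
        for s, Xs in enumerate(X_list):
            M += a[r, s] * Xr.T @ Xs
    return np.linalg.det(M)

# Inflating the error variance attached to model 2 shrinks its contribution.
d_equal = criterion([X1, X2], np.diag([0.1, 0.1]))
d_noisy = criterion([X1, X2], np.diag([0.1, 10.0]))
```

With equal, small error variances both models contribute fully; making model 2 noisy leaves the criterion dominated by model 1's X matrix, which is exactly the weighting described above.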
To give a specific example of Δ for the multiresponse situation, let us take the case of two models (v = 2) with three coefficients each (m = 3), n = 0 (no experiments yet completed), all the w's zero, and n* = 4 (four experiments to be planned). Then:
6. = lall(XI ,n+n.)T(XI.n+n')
J=1
where
!i
XT,ll
where
where
T9
.[ds
! -
_
n + n -
x r.ll
XT,12
XT.13]
X T,21
X T, 22
X T, 23
X T,31
X r32
X r,33
Xr 4 1
X T, 4 2
X T, 43
for model r
b1)Xr,iJ]
(f3J - bJ)XS.iJ]
(a)
r= 1 s = l
1=1
L L ({3 -
J=1
T=1 s=1
Ii
b)T(arsX i X s)({3 - b)
(8.4-13)
YB = I
+ (f3AC -
l) xA
(f3BC - l )x B
(b)
I
J
where

A, B, C = the three components, respectively
y = the mole fraction in the vapor phase
x = the mole fraction in the liquid phase
β_ij = relative volatilities, essentially empirical parameters, to be estimated

Although the assumption that x is a deterministic variable and only Y = y + ε is a random variable stretches the truth for any real series of experiments, for this example we shall assume that Y is the random dependent variable. Because we have two parameters to fit, we need to make a minimum of two initial runs. At what values of x_A and x_B should the first two experiments be carried out? Isobaric ternary experiments for the acetone (A)-benzene (B)-carbon tetrachloride (C) system were planned at 760 mm of Hg pressure. Mixtures of A, B, and C could be prepared at selected values of x_j as measured by refractive index, and Y_j was determined by gas chromatographic analysis.

A study of the literature indicated that although the parameters in Equations (a) and (b) had not been specifically reported, values of Y versus x had been reported from which initial estimates of the β's could be computed from two Y versus x measurements. For example, at a temperature of 62.4°C and a pressure of 1 atm,

x_A = 0.389        Y_A = 0.572
x_B = 0.332        Y_B = 0.200
x_C = 0.279        Y_C = 0.228
0.572[1 + (b_AC - 1)(0.389) + (b_BC - 1)(0.332)] = b_AC(0.389)
0.200[1 + (b_AC - 1)(0.389) + (b_BC - 1)(0.332)] = b_BC(0.332)        (c)

b_AC^(0) = 1.85        b_BC^(0) = 0.72
Specifically,
X 1,f2 = [I
_
2 ,fl -
[1
_ [1
2 .12 -
+ ({JAC -
- {JACXfAXIB
+ ({JAC -
-{JBCXtBXjA
(b BC -:1)0.332] = b AC(0.389)
+ (b BC -
1)0.332]
= b BC(0.332)
and
b~d =
0.74
bPd = 1.21
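The pair of equations in (c) can be solved directly for the initial estimates. In the relative-volatility model above, y_C = x_C/D with D the common denominator, so each b_rC is a ratio of K-values; the sketch below uses that closed form. The values it returns (about 1.80 and 0.74) differ slightly from the 1.85 and 0.72 quoted in the text, presumably because of rounding in the reported tie-line data:

```python
# Initial estimates of the relative volatilities from a single reported
# (x, y) tie line, using y_r = b_rC * x_r / D and y_C = x_C / D, so that
# b_rC = (y_r / x_r) / (y_C / x_C).  Data are the tie line quoted above.
xA, yA = 0.389, 0.572
xB, yB = 0.332, 0.200
xC, yC = 0.279, 0.228

b_AC = (yA / xA) / (yC / xC)
b_BC = (yB / xB) / (yC / xC)
print(round(b_AC, 2), round(b_BC, 2))  # close to the text's 1.85 and 0.72
```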
Point
Number
XA
XB
Xc
YA
I
2
3
0.234
0.288
0.624
0.268
0.476
0.206
0.498
0.236
0.170
0.441
0.480
0.723
Point
Number
YB
Yc
bAC
I
2
3
0.169
0.310
0.121
0.390
0.210
0.156
2.41
1.88
1.26
b BC
0.80
0.73
0.64
1966.
TABLE E8.4-3 MAXIMIZATION OF Δ TO OBTAIN THE
(a)
Wll
CO, W 2 2
CO, W12
w 2I
= 0
Objective
Function ,
a~
a~
o
o
o
o
o
103
10- 3
0.5
0.5
0.5
103
10- 3
to..
0.573
0.280 0.720
0.351 o
0.351 o
o
0.582
o
0.582
0.351 o
o
0.585
0.1
1
1
0.1
10- 3
103
1
10- 3
103
(b)
Wll
l,w22
0.280
o
0.280
0.281
0.280
0.269
0.280
0.281
l ,w12
0.720
0.580
0.720
0.719
0.720
0.428
0.720
0.719
W2I
4.40 x 10- 3
2.42 x 10- 3
2.42 x 10- 3
2.20 x 10+3
2.20 xlO +3
2.11 X 10- 3
2.20 x 10+3
2.20 x 10+3
= 0
Objective
Function,
a~
a~
1
1
to..
o
o
o
o
o
103
10- 3
0.5
0.5
0.5
103
10- 3
10- 3
103
0.1
1
(c)
0.1
10- 3
103
0.280
0.280
0.280
0.279
0.282
0.281
0.291
0.282
0.720
0.720
0.720
0.721
0.718
0.719
0.709
0.718
0.280
0.280
0.280
0.347
o
0.284
0.360
= 102 , W2 2 = 10- 2 ,
a.'ll
W1 2
7.720
0.720
0.720
0.001
0.593
0.716
0.005
0.593
W2 I
1.555
1.305
1.305
2.36 x
2.46 X
1.278
2.33 x
2.46 X
103
103
103
103
= 0
Objective
Function,
a~
a~
1
1
0.1
10- 3
103
to..
o
o
o
o
o
10- 3
0.5
0.5
0.5
103
10- 3
10- 3
103
0.1
1
103
(d)
Wll
0.274
0.280
0.279
0.341
0.280
0.351
0.351
0.280
0.726
0.720
0.721
0
0.720
0
0
0.720
0.277 0.723
0.280 0.720
0.279 0.721
0.277 0.723
0.280 0.720
0.351 o
0.280 0.720
0.280 0.720
w2I
8.307
5.020
5.020
5.85 x
3.66 X
4.652
5.85 x
3.65 X
103
103
103
103
= 0
of
Objective
Function,
a~
P 12
o
o
o
o
o
0.5
0.5
0.5
0.1
10- 3
0.1
10 - 3
103
103
10
10- 3
10103
to..
X IB
0.280
0.279
0.280
0.280
0.720
0.721
0.720
0.720
0.577
o
0.280 0.720
0.280 0.720
0.581
o
0.280
0.281
0.280
0.280
0.280
0.280
0.280
0.281
0.720
0.719
0.720
0.720
0.720
0.720
0.720
0.719
49.23
27.52
27.52
2.41 x
2.63 X
25.11
2.41 x
2.63 X
Dependent
variable
~_-'---r Model
104
104
'----:'----~
Xl
X2
10
104
Model B
Independent variable
FIGURE 8.5-1
._---\'
Σ_i [Y_i^(n) - ŷ_{r,i}^(n)]²,   r = 1, 2

I(1:2) = ∫_{-∞}^{∞} p₁(y) ln[p₁(y)/p₂(y)] dy        (8.5-2a)

I(2:1) = ∫_{-∞}^{∞} p₂(y) ln[p₂(y)/p₁(y)] dy        (8.5-2b)

J(1,2) = ∫_{-∞}^{∞} [p₁(y) - p₂(y)] ln[p₁(y)/p₂(y)] dy        (8.5-2c)

p_r(y^(n+1)) = [2π(σ_r² + σ_y²)]^{-1/2} exp{-[y^(n+1) - ŷ_r^(n+1)]²/[2(σ_r² + σ_y²)]},   r = 1, 2        (8.5-3)

The quantities ŷ_r and σ_r² represent the mean and variance of Y when H = H₁ or, alternately, H = H₂. Kullback showed that when Equation 8.5-3 is substituted into Equations 8.5-2, it follows after completing the integrations that

J(1,2) = I(1:2) + I(2:1) = (σ₁² - σ₂²)²/[2(σ₁² + σ_y²)(σ₂² + σ_y²)] + (1/2)(ŷ₁ - ŷ₂)²[1/(σ₁² + σ_y²) + 1/(σ₂² + σ_y²)]
[ I(1:1)  I(1:2)  ...  I(1:v) ] [ p₁^(n) ]
[ I(2:1)  I(2:2)  ...  I(2:v) ] [ p₂^(n) ]
[  ...      ...            ... ] [  ...   ]
[ I(v:1)  I(v:2)  ...  I(v:v) ] [ p_v^(n) ]        (8.5-4)
D = p₁^(n)p₂^(n)J(1,2) + ... + p₂^(n)p_v^(n)J(2,v) + ... + p_{v-1}^(n)p_v^(n)J(v-1,v)

  = Σ_{r=1}^{v-1} Σ_{s=r+1}^{v} p_r^(n)p_s^(n) {(σ_r² - σ_s²)²/[2(σ_r² + σ_y²)(σ_s² + σ_y²)] + (1/2)(ŷ_r - ŷ_s)²[1/(σ_r² + σ_y²) + 1/(σ_s² + σ_y²)]}        (8.5-5)
(Note that J(r, r) = 0.) Equation 8.5-5 is the same equation as that developed by Box and Hill.
One way to obtain the posterior probability that model r is correct after taking n observations is to apply Bayes' theorem successively for each model in the following form:

p_r^(n) = p_r^(n-1) p_r(y^(n)) / [Σ_{r=1}^{v} p_r^(n-1) p_r(y^(n))]        (8.5-6)
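Equation 8.5-6 is simply Bayes' theorem applied once per observation. A minimal sketch, assuming Gaussian predictive densities for each model and invented numbers:

```python
import math

def normal_pdf(y, mean, var):
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def update(priors, y, means, variances):
    """One application of Equation 8.5-6: posterior model probabilities
    after observing y, given each model's predictive mean and variance."""
    likes = [normal_pdf(y, m, v) for m, v in zip(means, variances)]
    unnorm = [p * L for p, L in zip(priors, likes)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two rival models predict 1.0 and 2.0 with the same variance; an
# observation near 1.0 shifts the probability toward model 1.
p = update([0.5, 0.5], y=1.1, means=[1.0, 2.0], variances=[0.2, 0.2])
```

Applied run after run, the same function carries p^(n-1) into p^(n), which is exactly the sequential use the text describes.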
Initial
experiment
1966.
10
Observation number, n
8.5-2
I
11
iII
I: Yl
=
I
II
: Y2
0.1
1.0
2.0
4.0
5.0
6.0
2.0
2.0
EO
+ {34
b s = -2.490 X 10- 2
be = 8.649 x 104
b7 = 3.730 X 10- 2
be = - 1.908 X 105
b = 1.102 X 105
bo = 7919
bl = 0.07937
b2 = 3431
b = 0.3630
b 4 = 4797
2
3
Transformed
x, mm/hr
8140
7430
6310
5510
5390
5250
6140
6490
c
Example 8.5-1
C, mg/liter
s~ =
3.11
4
3
I
4
10
=
X =
...
3.90
.~
~
E 3.85
:3
c:
'"
eE
3.80
's
~
o 3.75
oo
.38
-10.
Initial 8 runs
\~
11
\.~
-, '"
'"
,,"-,
3.70
3.65 '--_--'-_ _"---_--'-_ _I-_--I:>_ _.l-_...J
2.0
o
1.0
3.0
4.0
5.0
6.0
7.0
x, mm/hr
....
Model
.--/2
0.50
pin)
~3
---1
0.25
Model II
rA = -klcA
k lcA - k 2cB
ro = k 2 c B
rA = -k~c~
rB = k~c~ - k~c~
rc =k~d
te =
FIG URE
E8.5-lb
= 0.17
Assumed
.x<9)
Posterior
P2
P3
0.333
0.333
0.333
0.22
0.17
0.40
0.53
0.38
0.30
0.20
0.42
0.38
4.2
Posterior
Assumed
X (9)
PI
r,
4.2
Expected Values
Models
YTI
YT
1
Y=
YTf
YT
Yv
YTU
The additive error for the jth response in the nth run is ε_j^(n), j = 1, ..., u
(8.5-7)
I: y
_r::2:~2
..
alu
a2u
:::]..
!
1
a~
I Y" I:
y)
II:yI-Yo
- - exp [_l(Y<n +l) _ Y
' )TI:-l(y<n +l) _ y)]
(21T)u/2
"2
T
Y
T
(8.5-8)
Yr;(~ ~n ),
xW)
tf{(y<n) _
XrJ =
Xr~~ i
Xm]
[tf{(y<n) _
y~n }
y~n)Y}J(l:~n-l[tf{(Y<n )
+ trace l:~n)(l:~n - 1 =
l'Wl
tf {(y<n) _
y~nl)T(l: ~n
- l(y <n) _
y~n}
Xr~~ ~]
Xr~~h
Xr~~~
! <n+ll(r:s)
1l: ~n+1 )1
=2
trace I,
..
xsn
Tu.m
Iw~n+1)I- Yz
(21T)u/2
-exp [-'!(Yr- y~n+ll)T(w~n+1 -1(Yr- y~n+lJ
(8.5-10)
Finally, after combining Equations 8.5-8 through 8.5-10 and after some extensive manipulations, the probability density function for ŷ_r^(n+1), given Σ_y, is
I I: y)
1I:<n+1)1- Yz
r
(21T)U
y~n +1
y~n +1]
(8.5-12)
s; = t
2: 2:
p~n )p~n )
.{trace
[l:~n +1 )(l:~n +1
+ (y~n +
(8.5-11)
-1
+ l:~n +l)(l:~n +1 -1
- 2Iu J
y~n +1)y
1) _
. [(l:~n+1
x~n+l )M-1(x~n+ll)T
p(y~n + 1 )
y~n+l )Y
1=1 8 =1+1
l:~n)(l:~n-l
trace
w~n+1)
y~n }]
trace I u
Xr~~ h
n+1) =,....
:
:
X<
r
.
.
[
<n+1) x Tu,2
<n+l)..
x rU,l
:
Xr~~)l
-l(y<n) _
y~n )Y(l:~n
-1
y~n+1 }
(8.5-13)
Equation 8.5-13 can be used as a design criterion for selecting the values of the independent variables on the (n + 1)st run after n runs have been completed.
The discrimination procedure is essentially the same as that presented in Section 8.5-1. An initial set of experimental runs is carried out, the parameters in each model are estimated, the posterior probabilities are calculated by Equation 8.5-6, the next experiment(s) is (are) designed, and the next run is carried out. These steps are repeated until the desired degree of discrimination is achieved. Because of the presence of the prior probabilities in the function for K, less emphasis is placed on the poorly fitting models and more emphasis on the better models. Thus, maximizing K leads to experimental designs at conditions at which the maximum discrimination takes place between the best models.
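The looped procedure just described (estimate, compute posteriors by Equation 8.5-6, design the next run, repeat) can be sketched in code. Everything below is hypothetical (two invented rival models, a simplified probability-weighted divergence in place of Equation 8.5-13, and simulated observations), so it illustrates the flow of the procedure rather than the text's exact criterion:

```python
import math
import random

random.seed(0)
sigma2 = 0.05  # assumed measurement variance

models = [lambda x: 1.0 + 0.5 * x,                  # model 1 (true here)
          lambda x: 1.0 + 0.3 * x + 0.1 * x * x]    # model 2
candidates = [0.5 * i for i in range(11)]           # candidate settings of x

def pdf(y, mean):
    return math.exp(-(y - mean) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

p = [0.5, 0.5]  # initial model probabilities
for run in range(30):
    if max(p) > 0.99:          # desired degree of discrimination reached
        break
    # Design: pick x where the probability-weighted squared divergence
    # between the model predictions is largest (a simplified criterion).
    x = max(candidates, key=lambda x: p[0] * p[1] * (models[0](x) - models[1](x)) ** 2)
    # Carry out the run (simulated: model 1 plus noise).
    y = models[0](x) + random.gauss(0.0, math.sqrt(sigma2))
    # Update the posterior probabilities by Equation 8.5-6.
    unnorm = [pi * pdf(y, m(x)) for pi, m in zip(p, models)]
    p = [u / sum(unnorm) for u in unnorm]
```

Because p[0]·p[1] multiplies the divergence, conditions are chosen where the currently plausible models disagree most, which is the common-sense behavior described above.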
TABLE 8.5-2 MEASURED Y1 AND Y2
k1KAKBPAPB
KAPA + K BPB)2
Run
PI
0.333
0.039
0.001
4
5
0.333
0.753 x 10- 6
0.000
0.333
0.961
0.999
k 2KAKBPAPB
r2 = ..,..,..---::--.....:.......:.."-----:"":"
(l
Model II:
KAPA
K BPB)2
k 1 KAKBPAPB
rl
(l
KAPA
K BPB)2
k 2K APAPB
r = (l
K APA
KBPB)
k 1KAPAPB
(l + KAPA)
Model III : rl
r2
k 2KAPAPB
(l + K APA)2
0.5
p 0.4
0.3
0.2
0.1
!
00
15
FIGURE
E8.5-2
E =
L Pin) !iT/ s;
max
T=l
WI
= [v(1 -
p~n )/(V -
1)]h
.1
TABLE E8.5-3b
Run XI = t
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
25
25
125
125
150
25
150
150
25
150
25
150
150
25
150
575
475
475
575
550
525
550
550
525
525
525
525
525
525
525
0.3961
0.7232
0.4215
0.1297
0.1504
0.5565
0.1558
0.0671
0.6196
0.1427
0.5979
0.1717
0.2419
0.5441
0.1977
0.0060
0.0004
0.0001
0.0000
0.0000
0.0000
0.0000
0.0000
0.0000
0.0000
0.0000
0.0000
0.4335
0.5580
0.6278
0.6865
0.9424
0.9895
0.9978
0.9993
0.9997
0.9995
0.9994
0.9996
0.4087
0.3740
0.3446
0.3020
0.0570
0.0105
0.0022
0.0007
0.0003
0.0005
0.0006
0.0004
0.1518
0.0676
0.0275
0.0115
0.0005
0.0000
0.0000
0.0000
0.0000
0.0000
0.0000
0.0000
A-+B :"
Ya = [I
Y4 = [1
+
+
+
(a)
(b)
(c)
o ::;
Xl
-s 150 min
Run
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
XI=t T
25
25
125
125
150
25
150
150
25
150
25
150
25
150
25
575
475
475
575
550
525
450
550
600
450
600
450
600
450
600
TABLE E8.5-3a
Xl (min)
25
25
125
125
0.3961
0.7232
0.4215
0.1297
0.1504
0.5565
0.5542
0.0671
0.3356
0.4842
0.3140
0.5133
0.3500
0.4936
0.3058
PI
P2
475
575
475
575
T CK)
--
Pa
P4
2.5
2.0
.,
0.4335
0.5580
0.6278
0.5011
0.9031
0.9208
0.9312
0.9322
0.9307
0.0002 0.9386
0.0002 0.9402
0.0002 0.9381
0.0060
0.0004
0.0001
0.0001
0.0002
0.0002
0.0002
0.0002
0.0002
0.4087
0.3740
0.3446
0.4541
0.0953
0.0780
0.0680
0.0671
0.0685
0.0607
0.0592
0.0613
0.1518
0.0676
0.0275
0.0447
0.0014
0.0009
0.0006
0.0005
0.0006
0.0005
0.0004
0.0004
o
-; 1.5
<l
1.0
0.5
FIGURE E8.5-3a The value of Δ for sequential designs using two different criteria. (Reproduced from Technometrics 10, 152-159, 1968, with permission of the authors and the American Statistical Association.)
.:
5000
4500
1hz
4000
3500
3000
,For criterion
s,
only
W2
1 -
WI
0.43
Supplementary References
General
Cochran, W. G., and Cox, G. M., Experimental Designs (2nd ed.), John Wiley, New York, 1957.
Cox, D. R., Planning of Experiments, John Wiley, New York, 1958.
Davies, O. L., Design and Analysis of Industrial Experiments, Hafner, New York, 1956.
Davies, O. L., Statistical Methods in Research and Production, Hafner, New York, 1961.
Elfving, G., "Optimum Allocation in Linear Regression Theory," Ann. Math. Stat. 23, 255, 1952.
Kiefer, J. C., "Optimum Experimental Designs," J. Royal Stat. Soc. B21, 272, 1959.
Kiefer, J. C., "Optimum Experimental Designs, with Applications to Systematic and Rotatable Designs," Proceed. Fourth Berkeley Symposium 1, 381, 1961.
Kittrell, J. R., Hunter, W. G., and Watson, C. C., "Obtaining Precise Parameter Estimates for Nonlinear Catalytic Rate Models," AIChE J. 12, 5, 1966.
Stone, M., "Application of a Measure of Information to the Design and Comparison of Regression Experiments," Ann. Math. Stat. 30, 55, 1959.
Wald, A., "On the Efficient Design of Statistical Investigations," Ann. Math. Stat. 14, 134, 1943.
(b)
Xl
X2
Xa
:-1
1
-1
1
-1
-1
1
1
-1
1
1
-1
Xl
X2
-3
-"3
t
I
-~
3
"3
.i,
-3
0
0
3
2
0
0
0
-t
t
0
Xa
X4
-4.84
-2.47
-2.46
-0.08
-4.84
-0.08
-4.85
-0.08
-2.46
-2.48
-2.01
-2.47
-1.99
-2.72
-1.76
-2.25
-2.22
-2.24
Cetane
Number
Volume, Percent
Oletins
16.2-8.0
22.3-49.6
1.3-60.7
Independent Variables
Problems
TABLE P8.4
Temperature (°C)
Yield (%)
1
5
1
5
1
5
1
5
240
240
280
280
240
240
280
280
24
42
3
19
24
46
5
21
Controlled
Variables
Response,
Yield
Tempera
ture
Catalyst
Nitrite
23
27
24
22
32
29
18
31
110
117
113
107
118
117
102
118
0.54
0.63
0.71
0.48
0.57
0.63
0.59
0.60
Coded
Controlled
Variables
-0.55
0.85
0.05
-1.15
1.05
0.85
-2.15
1.05
Xl
temperature - 112.75
5.00
X2
-1.075
0.725
2.325
-2.275
-0.475
0.725
-0.075
0.125
CjN - 0.59375
0.05
Pentagon.
Hexagon.
Central composite.
7.5
Xl =
X2
X3
X4
TABLE P8.6
Factor Levels
Yield
Design Levels
('70)
Temperature (°C)
Time
(hr)
Xl
X2
96.0
78.7
76.7
54.6
64.8
78.9
97.4
90.5
93.0
86.3
75
60
30
15
30
60
45
45
45
45
2.0
2.866
2.866
2.0
1.134
1.134
2.0
2.0
2.0
2.0
1.000
0.500
-0.500
-1.000
-0.500
0.500
0
0
0
0
0
0.866
0.866
0
-0.866
-0.866
0
0
0
0
di
_ temperature - 45
C o mg.
Xl -
30
X2
Observed
Yields
y=
time - 2
1.000
Eng.
1.0
1.7
Xl
X2
-1
-1
-1
-1
Xo
6.0
-1
5.2
-1
-I
7.0
x=
7.9
-1
18.0
19.2
Catalyst Concentration
7
7
7
7
7
7
TABLE P8.7
Xl
X4
Run
Number
(tempera
ture)
(addition
rate)
I
2
-I
-1
+1
+1
-I
+1
+1
-I
+1
- I
+1
+1
-I
-I
+1
+I
+I
+1
-1
-I
+1
+1
-1
-1
+1
+1
-1
+1
-1
-1
+1
+1
-1
+1
+1
+1
+1
+1
-,-1
+1
-1
-1
-1
-1
-1
+1
+1
-1
B.E.T. sur
face area
(sq meters/g)
Fisher
SSS
(microns)
11.0
9.4
3.5
21.1
8.9
13.6
9.2
11.1
12.3
13.0
Sedimen
tation,
Yield
(microns)
Bulk
density
(g/cu in)
2.2
3.0
3.2
2.2
5.0
1.3
2.9
3.4
5.6
2.1
4.2
4.5
4.7
3.6
4.5
4.4
3.5
4.0
4.9
3.9
4.7
5.6
5.3
6.7
9.0
2.8
5.3
5.3
8.8
3.7
97.0
95.9
98.5
90.5
94.4
95.2
94.6
93.4
94.0
92.8
10.9
13.1
10.5
10.8
5.4
11.1
11.2
18.0
15.2
18.4
3.8
2.6
2.7
1.3
4.5
2.2
3.9
4.4
5.1
3.4
5.0
3.9
4.1
3.6
3.2
3.5
6.4
5.6
5.5
2.4
6.4
5.9
6.4
6.2
2.9
8.5
93.0
94.2
95.6
98.3
96.3
95.1
91.8
92.0
94.8
91.6
10.6
11.6
20.9
10.1
13.0
3.8
27.9
7.6
9.4
15.4
2.8
1.4
3.1
3.3
1.6
3.2
3.5
3.8
4.0
3.2
5.8
2.7
3.2
3.4
3.1
5.6
2.9
7.8
5.7
3.6
12.3
5.9
5.0
7.5
3.9
93.3
95.7
90.0
94.5
94.9
94.5
90.0
96.4
95.2
95.3
d op
(%)
Block I
4
5
6
7
8
9
10
-1
-I
-I
o
-1
+I
-1
+1
Block II
1
2
3
4
5
6
7
8
9
10
o
-I
+1
-I
+1
-1
+1
o
o
o
o
o
o
+2
+2
6
7
8
9
10
o
o
o
o
-2
o
o
o
o
o
o
+2
-2
o
o
o
o
+2
(d)
(c)
2.5
1.2
4.0
U.s
1.3
2.9
4.2
1.6
(b)
3.3
Block III
o
o
-2
(a)
+1
-1
-1
o
o
o
o
o
o
-2
..- 0
1
2
3
4
- 3.5l4xi
5.87x~ 6.25xlX2
-0.924x~
+ 2.220XIX2
TABLE P8.9
Y=
57.71
1.94xl
Blend Ratio I
Cat alyst
Concentration
Conversion
ZST
Con version
ZST
- 4.2
0.5
-2.3
-2.4
-0.9
-l.l
-l.l
--
L X2 -1l .5
32.17
L X
--; 15
2
I
- 6
I
-2
I
-
-18
272
1.7
-3.6
0.4
0.7
- 0.9
-3.3
3.2
2
-8
8
-4
- 2
-II
0
-0.9
0.4
0.0
2
0.3
- 2.9
l.l
2.1
--
-0.1
L X2
15.09
LX
0
0
I
6
-4
2
3
L X2
LX
- 1.8
38.44
8
66
-0.7
-0.7
-0.3
0.4
0.3
1.7
-0.1
-
0.6
4.22
7
I
6
6
0
3
-2
0.1
0.6
-2.3
1.6
3.2
- 3.5
3.9
1.4
0.7
1.6
2.7
1.2
1.3
0.3
--
21
135
9.2
15.2
3.6
45.92
-15
- 15
2
0
6
-2
-I
-9
10
6
226
(b) Y
19.43
- 6.25x IX2
8.86xl - 0.145x2
- 2.302x!
(c)
5 .8 7x~
0.000293~
+ 0.04777xI X2
+ IO.019.ra
- 1.376x 3X4 + 6.077x3 + 7.602xi
+ 0.942x4XS + 2.739x4Xa + 7.064x 4
- 0.0428xg - 1.847xsXa + 1.6795xs
+ 2 .33x~ + 3.956x a
0.201x2XS - 3.78x2
0
0.866
0.866
0
-0.866
-0.866
0
0
- 15
273
1.07X3
0.68x~
- 3.09xIX2
- 2.19xIX3 - 1.21x 2x 3
8.13 For the following design matrix and responses, find: (a) the second-degree canonical equation of best fit and (b) the coordinates (y, x1, x2) of the center of the surface.
Y
Xl
X2
- 4
0
- 2
-I
-3
2
-7
+ 0.91x2 +
0.26x~
0.5
. - 0.5
-I
-0.5
0.5
0
0
-93.7
98.5
88.8
85.8
92.4
87.8
97.8
99.0
Temperature
eC)
Time
(min)
1000
2200
1205
1695
10.2
29.8
p = {3o
where :
Xl
(pressure - 1600)/370
X2
(temperature - 1450)/150
X3 = (time - 20.0)/6
p = pressured density (g/cc)
,i
!
460 to 1000°R
1 to 100 atm
0 to 100 lb/min
T- 600
10
p -
50
(XTX)-l =
t 0 0 OJ
o t o o
[ o 0 { 0
o
0 0 t
3142.31J
29.92
xTy =
[
-24.34
97.62
xTx =
7 04 00 0OJ
[0 0 4
0 0
X2
=-2
X3
F - 50
= --2
Response
EXPERIMENTAL DATA
Temperature
(°R)
Pressure
(atm)
Flow Rate
(lb/min)
48
52
48
52
50
50
50
48
48
52
52
50
50
50
590
610
610
590
600
600
600
422.356
425.146
486.126
458.998
449.447
449.962
450.256
F = 70lb/min
12.48T - 8.08p
32.4F
xTy
.'
~989 .l 8J
49.93
-32.33
TABLE P8.17a
I
1
.1
EXPERIMENTAL DATA
Point
Response
Temperature
(°R)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
1915.01
1923.26
1910.08
1918.33
1954.72
1963.18
1951.01
1959.48
1931.61
1945.25
1943.16
1934.62
1905.17
1971.80
1937.72
1938.16
1938.62
1938.98
993
997
993
997
993
997
993
997
991.6
998.3
995
995
995
995
995
995
995
995
Pressure
(atm)
Flow
Rate
(lb/min)
1.5
1.5
2.5
2.5
1.5
1.5
2.5
2.5
2.0
2.0
1.159
2.840
2.0
2.0
2.0
2.0
2.0
2.0
97.0
97.0
97.0
97.0
99.0
99.0
99.0
99.0
98.0
98.0
98.0
98.0
96.318
99.681
98.0
98.0
98.0
98.0
'f
I
,
1
129.53
EXPERIMENTAL DATA
Response
Temperature
(°R)
Pressure
(atm)
Flow Rate
(lb/min)
960.93
969.73
1050. 66
1009.53
999.00
999.52
999.81
790
810
810
790
800
800
800
28
32
28
32
30
30
30
68
68
72
72
70
70
70
p - 2.0
X2=O:S
X3
p = 1 atm
F = 100 lb/rnin
F- 98
- -1
(XTX)-l =
T - 995
2
+ 4.18T -
[~2o- : : :J;
0
o o
2.16p
xTy
20.21F
23,248,55J
33.43
=
[
- 17.27
161.71
PROBLEMS
ANALYSIS OF VARIANCE
TABLE P8.17b
SS
d.f.
Mean
Square
4.50 X 107
139.8
37.2
3270
1
1
1
1
4.50 X to 7
139.8
37.2
3270
7.5653
0.9012
5
3
1.5131
0.3004
Source of
Variation
D ue to b i
D ue to b2
D ue to b3
D ue to b;
D eviation about
regression line
Error
Total
8.18
12
:Y =
+ 0.0091TF + 0.684pF
1
2
3
4
16 to 22
14 12 18
12 11 17
18 11 27
13
13
16
10
16
14
15
15
1
2
3
15.06
19.87
17.29
19.44
9.72
17.30
Design
II:: [J
1
E
.,
....
4
--->~
Time(hrl
FIGURE P8.21
Xl =
X2
X3
X4
column temperature
= column length
= carrier gas flow rate at outlet
= weight of liquid phase per weight of solid phase
TABLE P8.22
Variables
X2
X3
(em)
(mljmin)
45
35
55
230
190
270
20
35
35
25
20
30
25
65
to
150
3to
40
20
80
15
15
35
5
20
80
30
10
PROCESS RESPONSES
First
Half-Replicate
Cycle
u B
to
Date Points
Sum from
previous cycle
A verag e from
previ ous cycle
Second
Half-Replicate
5
68 72 53 71 78
72 62 69 85 66
55 72 58 75 69
10
75 69 65 68 57
59 67 71 60 73
79 86 56 73 62
Zero level 0
Low level -1
Upper level + 1
Starred points :
-2
+2
Variation interval
Number of starred
points
+ 0.0672x2
K = 0 .87 - 0.0408xI
0.002x~
0.003x~
+ 0.0128xlx2 -
0.0675xa
+ O .006x~ -
0.027x~
0.00406xIXa - 0.000313xlx4
+ 0.OO094x2Xa + 0.0303x2X4 -
0.00531xax4
(a)
- 0.495xIX2
0.371xIX4
0.349xaX4
(b)
e- a 2 X )
{Jl{J2X
1] = - -
1] =
+ (J2X
Y1XY2
0.5
5.0
0.5
5.0
2.95
9.78
3.66
7.59
a2 =
0.76
PI = 0.019
0.87
P2 = 0.847
site
(PDPH/KE ) ]
r = kO[PA
-::-,;:....;.:__
1 + KAPA + KDPD
:--='-,;:---"-'=-=:-::::':--="
KDPD
KHDPDPH
r =
kO[PA - (PDPH/KE,')]
.,---::':=':'---"-'--'=:,-'--"
+ KHPH + KHDPDPH
PH
PD
R=r+E;
1.0
1.0
5.0
5.0
1.0
5.0
2.0
0.1
0.1
2.0
2.0
0.1
2.0
0.1
2.0
0.1
2.0
2.0
-4.39
3.24
6.54
14.80
-2.32
7.19
o =:;; PA =:;; 5
o =:;; PH =:;; 2
o =:;; PD -s 3
PI
P2 =
I. Single
adsorbed:
7J = a1(I
r =
1.52x2 - 2.095xa + Ll l x,
0.37x~ - 0.I4x~ + 0.92x~ - 0 .06x~
t = 8.8 - 2.195xI
+ 0.102x4
jI .
'92 = 0.35
Pa
0.132
" ._-
._-------
The differential equations are
dCA
dt
Y
1
2
I
1
1.26
2.19
0.76
1.25
= -kCA
CA(O) = 1
dCB
= kCA
dt
CB(O) = 1
A → B → C
P8.27
Alcohol
Ether
Water
(A)
(E)
(W)
1.00
0.80
0.60
0.70
0.50
0.25
3.00
0.00
0.00
0.00
0.30
0.70
0.75
0.00
0.00
0.20
0.40
0.00
0.00
0.00
0.00
2
3
4
5
6
7
Model
dCA
dt = -
k 1cA
CA(O)
0.Q75
10- 5
0.212
0.090
Response 2:
CB
= _k_1_ (e-k2t
k1 - k2
e-k1t)
(a2)
r = reaction rate
k = reaction rate constant (with subscript)
K = equilibrium constant
K = adsorption constant (with subscript)
P = partial pressure
L = concentration of active sites
L = concentration of active sites
Contrast the experimental design at the end of the
seventh experiment for discrimination with that which
would be used to obtain the best estimates of the
coefficients in Model I at the end of the seventh
experiment.
8.28 Consider the first-order irreversible reaction
A → B
7J
fJl
fJt > 0
dt
- kCA
CA.O(O) = 1
(a)
(b)
or
(b)
(c)
be written as
!
1,
j'
;1
.!
Part III
Topic
Model
identification
Parameter
estimation
Prediction
Inverse problem
Model
Including
Model
Boundary Param- Model
Input
Conditions
eters
..;
..;
"
Model
Output
" "
"" "" ""
~----
"I
i
CHAPTER 9
Flow velocity.
II
Stead y Fl o w in a River
d 2c A
In itial condition :
- kc",
yet) =
Jj7h"=
yet)
f:
x(r)
ea(t- f) dr
(9.1-2)
= y(tt) + E(tt)
(9.1-3a)
= yet) + E{t)
(9.1-3b)
,-(uID ).)
Solution :
(9.1-1)
c",(O) = CA .
+ !!. e<uLID )( 1
Yo eat
Y(tl)
cA = c",.
Yo
dc ,.{L)
~ =I
Solution :
t' d CA
dz' +
y eO)
Mod el :
dc",
til =
= ay(t) + x(t)
t L.
r o
dO
Y(O
x( O ---r----i~
FIGURE 9.1-3
Model
Initial or boundary
conditions
dO
Parameters il
A --(k1)--> B --(k2)--> C
where the k's represent reaction rate coefficients. Then the individual equations and specified initial conditions in Equation 9.1-5 would be
Model
Initial or boundary
conditions
dCA
y(O
(jj" = -k1cA
Ero
Estimated
parameters b
FIGURE 9.1-2
y(t) is the deterministic model output; Y(t) is the experimental process output; Ŷ(t) is the predicted output.
~r =
.2
ar sYs
+ Xr( f )
8= 1
y.(O)
= YrO
1,2, . . . , v (9.1-4)
~ = ClY + X(f)
CB(O)
=0
CeCO)
s n
(9.1-6)
f:
(9.1 -7)
(9.1-5)
yeO) = Yo
where
a1 2
a 21
a 22
Y2
y =
; v
a
v x I
[X'('l]
:
mat rix
[au
.1
matrix
V X
Cc
1- k
1
2 -
k (k 2
1
- k
1
k1 e -
2 )
x lf)
au]
a2v
Cl=
.
a vl
X2(f )
X(f) =
a v2
aw
a
v x v.
matnx
f:'
+ e(lt)
(9.1-8)
~ -
d 2y
dt2
(Xl
dy
dt
+ (X2Y =
x(t)
follows:
J
,
9.1-2
t J. J.
y(t)
or g{t)
---
--
\/---
.....
-...:---~-
,""_......
FIGURE
9.1-4
9.1-1
Order of
Polynomial
dT/dz AT z
= Zo
Approximation of d'I'[dz at z
z~
Order of
Error
Term
(dT/dz)z=o
4.12
T l - To
--h
Numerical
Value of
4.71
2Ts - 9T2
-3T4
18Tl
6h
16Ts - 36T2
12h
lITo
48Tl
5.25
25To
3.47
Var"{D~+l(Y)} =
:2k Var{Y} 2: af
2
(9.1-12)
i=O
I
I
dT )
Var {( dz
.}
Z=Zo
I'V
4490
-4
[12(0.1)]2 (10 ) = 0.311
and
.G(dT/dz)z=o
0.56
or 16 percent of (dT/dz)z=zo'
In other words, if the numerically evaluated derivative is to be used as the response in estimation, the error in the stochastic dependent variable is amplified tremendously. Consequently, we conclude that the general admonition to avoid numerical differentiation of experimental data, if possible, in the estimation of parameters has a sound foundation. In the "equation error" estimation technique, described in Section 9.6, we shall assume that observations of the derivatives themselves are used rather than derivatives calculated from the observations.
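The amplification is easy to demonstrate. A minimal sketch with invented numbers (step size h = 0.1, noise standard deviation 0.01): a central difference passes the smooth part of the response through essentially unchanged but multiplies the noise standard deviation by 1/(h*sqrt(2)), about a factor of 7 here:

```python
import random
import statistics

random.seed(1)
h, sigma = 0.1, 0.01  # step size and measurement noise (assumed)
t = [h * i for i in range(200)]
y_true = [ti ** 2 for ti in t]                     # smooth underlying response
y_obs = [y + random.gauss(0.0, sigma) for y in y_true]

# Central-difference derivative of the noisy observations.
dy = [(y_obs[i + 1] - y_obs[i - 1]) / (2 * h) for i in range(1, len(t) - 1)]
dy_true = [2 * t[i] for i in range(1, len(t) - 1)]  # exact derivative of t^2

err = [a - b for a, b in zip(dy, dy_true)]
amplification = statistics.stdev(err) / sigma
# Theory: std of the differenced noise is sigma / (h * sqrt(2)), about 7.1*sigma here.
```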
m }]
(9.1-11)
Var{D~+l(Y)}
9.2
1> = t
b =
(9.2-2)
Discrete Observations
[~]
Covar {b}
(XTr - 1X)-1
AA
1> = t
:1> = OT = Yo
' =1
l) :'
'I'(a, Yo,
tJ
VJO
(9.2-3)
n
:: = 0 = -
L [Y -'I'(a, Yo, t
iwr-
1(t :a'l'(a,
l)
. [~1]
'I' =
'F v
is differentiated with respect to β1 to give ∂Ψ/∂β1, next with respect to β2 to give the matrix ∂Ψ/∂β2, and so on. The elements ∂Ψ/∂βi assembled as a column vector comprise
0'1'
0(3 =
[ :~]
.
0'1'
opm
+ :0(1
- e"t!)
estimates, we get
Yo, tl)
yet,) - Yo eat!
1=1
L [Y(tl) -
i= l
.i[
Yo eat! + -; (1 -
eat!)] eat, = 0
'=1
n
L [yeti) - Yo eat! + ~o (I - e
1=1
Computational Problems
Data colIected in
this region influence
estimates the most
y=e"t
0-.::::...-------'------
(b)
FIGURE 9.2-1 Characteristics of stable and unstable solutions
for process models : (a) stable solution to deterministic model
and (b) unstable solution to deterministic model.
Experimental design
(Chapter 8)
Data collection:
input x
output Y
Assumption of values
of the parameters
Application of difference
scheme to solve differential
equation for each element of x
~
Nonlinear estimation of
parameters and associated
statistics (Chapter 6)
Analysis
I
(Chapter 5, 6, and 7) I
+ 3~2 + 2
k.
2+4~3+3
k3
3+5~4+4
k.
2+5~3+4
k.
5~2
+4
in which the k's are the forward rate constants and the numbers designate the chemical species. The rate constants were first assumed and later verified to be independent of the composition of the mixture. The six equilibrium constants (K_l) for the reactions were evaluated separately so that only half of the twelve rate constants had to be estimated (the reverse rate constant could be derived from the forward one).
The reaction mechanism indicated that five species existed, 1, 2, 3, 4, and 5, but they were constrained by two total material balance equations based on the original compositions at t = 0, with the consequence that only three independent differential equations were required to make up the model. The remaining components then could be obtained from the total material balances. The three differential equations were
C:+ 1 = Cn -
+ ~h (2C~
C~-l + 2C~-2)
8CIj
8k;
kl
+ !i.k" . . . , k L )
CII(kl , .
kz, ... , k L )
!i.k r
1= 1,2, .. . ,6
dC3
dt
- -;
</>
IBM 7040
Machine Time (sec)
0
1
2
3
4
0.0278
0.0254
0.0215
0.0206
0
100
186
273
360
0.053
0.0418
0.0272
0.0218
0.0208
0.000532
0.0108
0.025
0.0048
0.0085
0.0069
0.0141
0.80
c
0
C4
'B
c;
9.2-3
Continuous Observations
of3J = -k o</>
dt
of3j
(9.2-4)
:LJ=1df3r
ds2 =
0'"
:E
C3
1000
3000
2000
Reaction time(min)
FIGURE
E9.2-1
data :
Experimental points:
= C1 = (rerr-BuO),Ti,
o = C3 =
...
= C2 = (tert-BuOMMe2N)Ti,
= C 4 = (tert-BuO)(Me2NhTi,
= 2.97
10- 1, K 3
= 5.33
= 2.58
= 9.50
x 10-4,
X 10- 5
X
10- 2
(9.2-6)
= Cs = (Me 2N)4Ti
(9.2-5)
C5
C2
D,
=*
o</> df11 +
L...
of1j ds
j= 1
'\[1 _*
(df1j)2]
L... ds
j =1
or
df3j
ds
1 ot/>
j = l, .. . ,m
= 2'\ OXt
(9.2-7)
j
fJ. in the
= l, . . . ,m
(9.2-4)
(~r
j=1 .
df3j
f35n + 1)
di ;;:;'
f3~n)
6.t
_ k ot/>
of3j
or
f35n+ 1)
= [~(Ot/
L. of3j
1=1
2]- Y,(Ot/
0f3j
(9.2-9)
= f35n) - k
(6.t) 0</>
of3t
k 0
- X(t)
Data input
-----+----..----i
-100
fo'dt'= t
FIGURE
E9.2-2
-ko -
ae/>
a{3o
(b 1 )
d{31
dr
-k1 -
ae/>
a{31
(b 2 )
oe/>
o{3o
ae/>
a{31
= f~ (Y
-r-
(C1)
(C2)
[1
dJ: =
-k :~ = k
I:
[Y -
(9.2-10)
(dY)
o~ dt
= ( Of(~, Y,
oY
t)(Oy) .
o~ +
of( ~, Y,
or-
t)
8~
o~
8y
o~
(92-11)
of
= oy u +
Of
8~
u(O)
=0
(9.2-12)
Continuous Estimation
di
dY2
di
= allYl
+ a12Y2 +
X1(t)
= a21Y1
+ a22Y2 +
X2(t)
,
50
40
30
all
20
10
9.5
9.0
a12
- - Experimental response
- - - Predicted response
8.5
8.0
7.5
7.0
40
50
60
70
80
90
100
Time (min)
(c)
2.0
FIGURE
- - Experimental response
- - - Predicted response
1.5
0
~
I 1.0
~
0.5
40 50 60
Time (min)
70
80
90
100
(ci)
FIGURE
t P. C.
E9.2-3a
Data
input
Sensitivity
coefficients
0+-----(
Data
accumulation
Tape
Predicted
response
E9.2-3c
Solution of
differential
equation
(b)
FIGURE
E9.2-3b
Estimated
coefficients
9.2-4
d y
dt 2
dy
+ ao dt
=
alY
+ a2Y 2 + aa
a 1
e,
jl
(0)
= Yo
dy
(dY)
dt - dt 1=0
+ ao(Y
=
al
- Yo)
y - Yo + [aoy o al
aa
:{ (I'i
a. y I y(t)
y(t = p(a.,Yo, y{tn), "" y(tl
p ( , 0 .n , .. . , 1
p(y(tn), .,y(t l
oo
(93-1)
., y(tl
l ), . .,
yell~
(9.3-) ',
= p(a., Yo)
(9.3<:)
p(a., Yo)
p(a., Yo I y{tn), " " y(tl =
( ()l
pytn, .. , y tl
(9.3- 5)
J. Loeb and G. Cahen, Automatisme 8, 479, 1963.
( *) _
1
L*
== Inp(a.,
Yo)
l)
1=1
(9.3-11a)
. ,
y(tl]
(9.3-6)
p(Yo)
and specifically,
p(e(t l
= (21T)v/2I n Yol Y.
independent,
(9.3-8)
where |Γ| is the determinant of Γ, and Γ is the covariance matrix of ε, i.e., of the responses.
After Equation 9.3-7 is substituted into Equation
+ Inp(yo)
-(Yo - y&OTny;,t
= or (9.3-12)
- (a * - a.*(O>yn;/
.r
Substitution of Equation 9.3-8 into the last equation of
9.3-9 makes it possible to solve for A(tl) :
A(tl)
= - (r(tl -le(tl)
= -(r(t.)-l[y(tl) - 'I'(a. , Yo, tl)]
(9.3-10)
= OT
9.4
a.*
SEQUENTIAL ESTIMATION
E{Y} = h E{η} = h μ_η
Covar{Y} = E{Y Y^T} = E{(hη + ε)(hη + ε)^T} = h Q_η h^T + Γ

Therefore, the probability density function for Y is
and α*, the column vector of the elements of α, has been previously defined by Equation 9.3-10. The essential feature of Equation 9.4-1 that makes it tractable for further analysis is that y_r(t_j) is a linear combination of the elements of η. In signal processing, η is regarded as the unknown input to a linear system, and the major emphasis is on the estimation of the value of η through observations of Y. But here we regard η as a function of known form, and we want to estimate the parameters β in the function.
The elements of β can be estimated from a series of observations by a method variously known as the Wiener-Kalman method, the Kalman-Bucy method, Schmidt's method, and other names (see references at the end of the chapter), depending upon the particular derivation and the computational algorithm. One of the easiest developments, but not the only development, of the estimator equations is to use Bayes' theorem as outlined in Section 3.1-3.
We assume that the following information is known (corresponding to the list in Section 3.1-3):
1. A set of observations of Y at successive times t₁, t₂, …, t_j, all of which together will be denoted by Y(t_j).
2. A functional relationship between the observations, η, and ε, namely Equation 9.4-1.
3. The joint density function of η and ε, p(η(t_j), ε(t_j)); here η(t_j) and ε(t_j) are independent so p(η(t_j), ε(t_j)) = p(η(t_j)) p(ε(t_j)). Furthermore

    ℰ{η(t_j)} = μ_η        Covar {η(t_j)} = ℰ{η(t_j) ηᵀ(t_j)} = Q_η
    ℰ{ε(t_j)} = 0          Covar {ε(t_j)} = ℰ{ε(t_j) εᵀ(t_j)} = Γ

and

    p(η) = k₁ exp [−½ (η − μ_η)ᵀ Q_η⁻¹ (η − μ_η)]    (9.4-2)

where k₁ is a normalizing factor that is not needed here.
2. Obtain the density function p(Y | η) from the relation given in Section 3.1-3:

    p(Y | η) = p(ε) = p(Y − hη) = k₂ exp [−½ (Y − hη)ᵀ Γ⁻¹ (Y − hη)]    (9.4-3)
    Y_i = h η_i + ε_i    (9.4-8)
    η_{i+1} = h* η_i + s ε_i    (9.4-9)
    p(ε_j, ε_{j+1}) = p(ε_j) p(ε_{j+1})    (9.4-10)

    p(η_{i+1} | Y_{i+1}) = k₁ exp {−½[(η_{i+1} − h* η̂_i)ᵀ M_{i+1}⁻¹ (η_{i+1} − h* η̂_i)
        + (Y_{i+1} − h η_{i+1})ᵀ Γ⁻¹ (Y_{i+1} − h η_{i+1})
        − (Y_{i+1} − h h* η̂_i)ᵀ (h M_{i+1} hᵀ + Γ)⁻¹ (Y_{i+1} − h h* η̂_i)]}    (9.4-12)

where

    h* Q_{η,i} h*ᵀ + s Γ sᵀ ≡ M_{i+1}    (9.4-13)

and

    η̂_{i+1} = h* η̂_i + M_{i+1} hᵀ (h M_{i+1} hᵀ + Γ)⁻¹ (Y_{i+1} − h h* η̂_i)    (9.4-14)

or

    Q_{η,i+1} = M_{i+1} − M_{i+1} hᵀ (h M_{i+1} hᵀ + Γ)⁻¹ h M_{i+1}    (9.4-16)
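The predict-correct recursion of Equations 9.4-13 through 9.4-16 is easy to exercise numerically. A minimal sketch in scalar form (h*, h, s, and Γ all scalars; the constants below are invented for illustration, and the process-noise covariance is set equal to Γ only to keep the sketch short):

```python
def sequential_update(eta_hat, Q, y_next, h_star=1.0, h=1.0, s=1.0, gamma=0.1):
    """One predict-correct cycle of Equations 9.4-13, 9.4-14, and 9.4-16 in scalar form.
    The process-noise covariance is taken equal to gamma purely for brevity."""
    M = h_star * Q * h_star + s * gamma * s          # Eq. 9.4-13
    K = M * h / (h * M * h + gamma)                  # gain factor in Eq. 9.4-14
    eta_new = h_star * eta_hat + K * (y_next - h * h_star * eta_hat)  # Eq. 9.4-14
    Q_new = M - K * h * M                            # Eq. 9.4-16
    return eta_new, Q_new

# three noisy observations of a level near 1.0; the covariance Q contracts each cycle
eta, Q = 0.0, 1.0
for obs in (1.1, 0.9, 1.0):
    eta, Q = sequential_update(eta, Q, obs)
```

After a few cycles the estimate settles near the observed level while Q, the posterior variance, drops well below its prior value, which is exactly the behavior Figure E9.4-1b displays.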
If the model response η is nonlinear in the parameters β, it is first linearized about the current estimate β̂_i:

    η_i(β) ≈ η_i(β̂_i) + (∂η_i/∂β)(β − β̂_i)    (9.4-17)

or

    δY_i ≡ Y_i − h η_i(β̂_i) = [h (∂η_i/∂β)] δβ + ε    (9.4-18)

so that the estimates are updated through

    β̂_{i+1} = β̂_i + K_{i+1} [Y_{i+1} − h_{i+1} η_{i+1}(β̂_i)]    (9.4-19)

In Example 9.4-1 this scheme was applied to the model

    dy/dt = ay,    y(0) = y₀

with ℰ{a} = ā and ℰ{(a − ā)²} = σ_a², the estimator taking the form ξ̂(t_{i+1}) = ξ̂(t_i) + K(t_{i+1})[Y(t_{i+1}) − …] with gain K(t_{i+1}) = Q(t_{i+1})…{… + Γ(t_{i+1})}⁻¹ and initial covariance Q(0) = diag(σ_a², σ_y₀²).
312
1.005
I-
r--_
1.005 I
/\1'1
,_.,..._
~
..... - -- ....
a 1.000 r---V-----------=.=-..::..::..:~"""---=:----~.-----__..=:_=:;;:_;oro~..___;=_====..._;_--~-_I
\, r
0.995
---..".".,-.. - ---
I-
50
100
150
200
250
300
Number of trials
(a)
FIGURE E9.4-la Evolution of estimates for the unstable process model as a function of the
number of trials.
FIGURE E9.4-1b Evolution of estimated variances for the parameter and initial condition estimates with number of trials (unstable model).
[Figures E9.4-1c and E9.4-1d: comparison of the least squares, maximum likelihood, and sequential estimates of a and y₀ for the stable and unstable process models (n = 20, σ_ε = 0.1, σ_a = 0.1), plotted against the noise standard deviation σ_ε and against the a priori error standard deviation σ_a0 = σ_y0; in each panel the maximum likelihood and sequential estimates essentially coincide. Plot residue omitted.]
For continuous observations, with Y(t) = h(t)y(t) + ε(t), the estimator becomes

    dŷ/dt = f(ŷ) + K(t)[Y(t) − h(t)ŷ(t)]    (9.4-21)

with gain

    K(t) = [Q(t)hᵀ(t) + g(t)S(t)] Γ⁻¹(t)    (9.4-22)

where f(y) is linearized about y⁽⁰⁾,

    f(y) ≈ f(y⁽⁰⁾) + J(y⁽⁰⁾)(y − y⁽⁰⁾)

with J the matrix of partial derivatives ∂fᵢ/∂yⱼ evaluated at y = y⁽⁰⁾, and where the covariance matrix Q(t) satisfies

    dQ/dt = JQ + QJᵀ + gΩgᵀ − [Qhᵀ + gS]Γ⁻¹[Sᵀgᵀ + hQ],    Q(0) = Q₀
316
( j f = f(y(O), x, t)
da2 =
dt
dam = 0
dt
yeO) = Yo
+ J(O)(y<l)' - Y<0)
(9.5-8)
(1)
~: = J(O )y~l)
(9.5-6)
+ f(y(O), x, t)
y~I )(O)
_ J(O)y(OI
=0
y(l)(t) = y~l)(t)
.2 C~l)W )(t)
(9.5-9)
/=1
- - = J (O)h(l)
(9.5-10)
dt
7d
(n+ l )
.2
"lijC j
+ Wj =
1, ... , v
+m
(9.5-11)
j =1
where
;;1
_-
. -
~P
    t       y
    0       0
    1       1.4
    2       6.3
    3       10.5
    4       14.2
    5       17.6
    6       21.4
    7       23.0
    9       27.0
    11      30.5
    14      34.4
    19      48.8
    24      41.6
    29      43.5
    39      45.3

The entire procedure can be repeated to obtain α̂⁽²⁾ and ŷ⁽²⁾, and so on until the change in the estimators (all or some) falls below some prefixed number. The precision in the estimators is calculated in the same way as described in Section 9.1.
Because J⁽ⁿ⁾ contains many zero entries and has a special structure, the computational effort can be reduced on a digital computer by taking advantage of this structure. By any computational method, quasilinearization has quadratic convergence if the procedure converges. But it also exhibits the ills of the Newton-Raphson procedure described in Section 6.2-3, such as convergence toward a local rather than global optimum and oscillation. The remedies discussed in Section 6.2 can be applied here as well.
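The quadratic convergence (and Newton-like fragility) is easy to demonstrate on a toy problem. This is not the chapter's worked example: it fits a single rate constant a in dy/dt = a·y, whose solution y = y₀e^{at} has the analytic sensitivity ∂y/∂a = t·y, using the same linearize-solve-correct cycle quasilinearization uses:

```python
import math

def fit_rate_constant(ts, ys, a0, iters=8):
    """Repeatedly linearize y(t) = y0*exp(a*t) about the current estimate of a
    (sensitivity dy/da = t*y) and apply the Gauss-Newton correction."""
    a, y0 = a0, ys[0]
    history = [a]
    for _ in range(iters):
        num = den = 0.0
        for t, y in zip(ts, ys):
            yhat = y0 * math.exp(a * t)
            sens = t * yhat                 # sensitivity along the current trajectory
            num += sens * (y - yhat)
            den += sens * sens
        a += num / den                      # linearize-solve-correct cycle
        history.append(a)
    return a, history

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * t) for t in ts]        # noise-free data generated with a = 0.7
a_hat, hist = fit_rate_constant(ts, ys, a0=0.3)
```

Started reasonably close, the error roughly squares at each cycle; started far away, the same iteration can overshoot or oscillate, which is the Newton-Raphson behavior the text warns about.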
EXAMPLE 9.5-? Estimation of Kinetic Coefficients by Quasilinearization

The unknown coefficients k₁ and k₂ are adjoined as states with

    dk₁/dt = 0,    dk₂/dt = 0,    y(0) = 0

and the model is linearized about the previous trajectory y₁⁽⁰⁾(t).
    Cycle    k₁                k₂
    0        1 × 10⁻⁶          1 × 10⁻⁴
    1        0.3413 × 10⁻⁵     0.2554 × 10⁻²
    2        0.4859 × 10⁻⁵     0.3683 × 10⁻³
    3        0.4578 × 10⁻⁵     0.2808 × 10⁻³
    4        0.4577 × 10⁻⁵     0.2797 × 10⁻³
318
Y1
ditions
or
d~l
Y1(0) = Y01
11(Y1, Y2, t)
(9.5-12a)
Y2(L)
YL2
(9.5-12b)
w₂ = 1 − w₁. Consequently,
or
(9.5-14)
The functional relation between y₁ and t and y₂ and t for any cycle of calculation after the zeroth cycle can be obtained by using the weights given by Equation 9.5-14 and from
YrCt)
8Y2
d (q )y
CXq dt (q) + . . . +
CX1
dy
dt
(9.5-15)
+ CXoY
dx(t)
I'
( 0)
I'
(0)
- J21Y1
- J22Y2
d(m)x
d (l<)y
Yk
d (k)y
k =O,I, ... , q
= dt (k) = dt (k) + k
j = 0, 1, . .. , m
(assumed)
I
1
(assumed)
1,2
1 12Y2(0)
Y1(O) = Y01
Y2(O) = a2
(9.5-13)
Y1(0) = Y01
YiO) = a1
(Y2 _ y~O
dY2 = J21Y1
I'
+ J22Y2
I'
+ J20
I'
dt
81' (y (O ) y(O ) t)
'J 1 18~1 2 , (Y1 - YiO
dY1 = 1 llY1
. dt
W2
d~2 = 12(Y1,Y2, t)
minimizing the square of the "equation error" ("model residue," "satisfaction error"), ε, defined as follows:
and
j
(9.6-2)
J= 1
k=O
dy
+ floY
fll -dt
319
I, ... ,m
I!
i
I'
,I
2: a"Y + L
k
'flJXJ + X
(9.6-3)
J= 1
k=O
- kq+m +l
db
( 1) ""dt
<X, Yo>
Uo
(9.6-6)
(9.6-4)
by the method of steepest descent, computing (k > 0):
oa"
r
J
l
o dt
oak
(9.6-5)
da; ..
-dao ,_
r
l
Jo
! dt
uak
fl' Y" dt ;: ( , Y k)
0
dt
= O, . . . , q
dt
~ o
320
    (9.6-7)

where

    D_k[Y(t)] = α_q d⁽q⁾Y(t)/dt⁽q⁾ + · · · + α₀Y(t)
    D_j[X(t)] = β_m d⁽m⁾X(t)/dt⁽m⁾ + · · · + X(t)
    λ = a constant
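The gradient components in Equation 9.6-5 are inner products of the equation error with Y and its derivative, so the search is straightforward to implement. A runnable sketch (an assumed first-order model a₁ dy/dt + a₀y = x with a step input; the numbers are invented and this is not the chapter's worked example) that descends the surface φ:

```python
import math

def central_diff(y, dt):
    """Derivative of sampled data: central differences inside, one-sided at the ends."""
    d = [0.0] * len(y)
    for i in range(1, len(y) - 1):
        d[i] = (y[i + 1] - y[i - 1]) / (2.0 * dt)
    d[0] = (y[1] - y[0]) / dt
    d[-1] = (y[-1] - y[-2]) / dt
    return d

def equation_error_descent(t, Y, X, a0, a1, rate=0.05, steps=2000):
    """Steepest descent on phi = integral of eps^2 dt, eps = a1*dY/dt + a0*Y - X."""
    dt = t[1] - t[0]
    dY = central_diff(Y, dt)
    for _ in range(steps):
        g0 = g1 = 0.0
        for yi, dyi, xi in zip(Y, dY, X):
            eps = a1 * dyi + a0 * yi - xi
            g0 += 2.0 * eps * yi * dt    # d(phi)/d(a0): inner product of eps with Y
            g1 += 2.0 * eps * dyi * dt   # d(phi)/d(a1): inner product of eps with dY/dt
        a0 -= rate * g0
        a1 -= rate * g1
    return a0, a1

# synthetic record: a1 = 1, a0 = 2, unit step input, so Y(t) = 0.5*(1 - exp(-2t))
t = [0.01 * i for i in range(301)]
Y = [0.5 * (1.0 - math.exp(-2.0 * x)) for x in t]
X = [1.0] * len(t)
a0_hat, a1_hat = equation_error_descent(t, Y, X, a0=0.0, a1=0.5)
```

Because φ is quadratic in (a₀, a₁) for this model, the descent converges from any start, but the rate depends on the shape of the φ surface, just as the text notes for Figure 9.6-2.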
If ℰ{ε(t)} = 0, ℰ{ε(t)εᵀ(t)} = I, and the Laplace transform of D does not have real negative roots (i.e., Equation 9.6-7 represents a stable process), we can proceed as follows.
Observations Y(t) and X(t) are made of y(t) and x(t), and Astrom gives the logarithm of the likelihood function of the vector of parameters θ (λ is not included in θ) as
FIGURE 9.6-1

Estimates of the coefficients can be obtained after equating the right-hand side of Equation 9.6-6 to zero. How well the estimation procedure converges depends on the shape of the surface φ in parameter space.
Figure 9.6-2 illustrates the progress of the steepest descent search for two models when the error ε is not explicitly a function of time.
Calculation of the bias and precision of the estimates â_k and b̂_j is uncertain inasmuch as the statistics associated with the terms in Equation 9.6-6 are unlikely to be known. Astrom showed that for a particular model of a dynamic system a lower bound can be placed on the covariance matrix of the estimates, using the log likelihood

    ln L = −(1/2λ²) ∫₀^{t_f} ε²(t) dt − t_f ln λ + constant    (9.6-8)
c{e~~L)( O~~Lr}
02 In L
0810 81
.i
r
(
02ln L 021nL
1
8 2.0
L _ _-
-C
.....
., -1.0
3.0 I : : : - - - - - - - - - - - - - - - - = : : J
820
20
30
40
50
021n L 021nL
08m08m 08m 0'\
02ln L
0'\ 081
021n L
0'\ 08m
(9.6-10)
(J21n L
8F""
02ln L
08m081
02 In L
3 t" 2
~ = -,\4)0 .~ (t)dt
+ X2f
:~,
60
,I
t H.
321
.p is not positive
co
c
c(O)
!2
dc
V dt
Fco Fe - Vk,
(a)
or
(b)
where
and
(9.6-12)
=co
eto =
F+ k,V
F
Introduction of Equations
of the coefficients yields
d(t2) =
.dt
_2[*
L..-
k=O
k y2
k
9 .6~12
+ L..~k X2] +
j
o (9.6-13)
ot
j= l
If we let k o = k l = 1 and
; =
l ,
Yo>]
Y1>
    da/dt = −γa − δ    (e)

where a₀₀ and a₁₀ are the selected initial values of the coefficients. The solution of Equation (e) is

    a = e^{−γt} a₀ − γ⁻¹δ(1 − e^{−γt})
or
.<
FIGURE E9.6-1 Input X.

The inner products needed, with ⟨u, v⟩ ≡ ∫₀^{t_f} u v dt, are ⟨Y₀, Y₀⟩, ⟨Y₀, Y₁⟩, ⟨Y₁, Y₁⟩, ⟨X, Y₀⟩, and ⟨X, Y₁⟩. At the minimum of φ, where ∂φ/∂a₀ = ∂φ/∂a₁ = 0, the estimates â₀ and â₁ are ratios of these inner products.    (g)
FIGURE E9.6-2 (a) Least squares estimation. (b) Instantaneous equation error estimation.
Supplementary References
General
Cuenod, M. and Sage, A. P., "Comparison of Some Methods Used for Process Identification," Automatica 4, 235, 1968.
Deutsch, R., Estimation Theory, Prentice-Hall, Englewood Cliffs, N.J., 1965.
Eschenroeder, A. Q., Boyer, D. W., and Hall, J. G., Phys. Fluids 5, 615, 1962.
Eykhoff, P., IEEE Trans. Auto. Control AC-8, 347, 1963.
Eykhoff, P., "Process Parameter and State Estimation," Automatica 4, 205, 1968.
Mayne, D. Q., "Parameter Estimation," Automatica 3, 245, 1966.
Stevenson, P. C., Processing of Counting Data, National Academy of Sciences, NRC, 1966, Chapter 6.

Least Squares Estimation
Aberbach, L. B., "Estimation of Parameters in Differential Equations," Ph.D. Dissertation, Princeton University, Princeton, N.J., 1967.
Bekey, G. A. and McGhee, R. B., "Gradient Methods for the Optimization of Dynamic System Parameters by Hybrid Computation" in Computing Methods for Optimization Problems, ed. by A. V. Balakrishnan and L. W. Neustadt, Academic Press, New York, 1964, p. 305.
Box, G. E. P., "Use of Statistical Methods in the Elucidation of Basic Mechanisms," Bull. Inst. de Statistique 36, 215, 1957.
Box, G. E. P. and Coutie, G. A., "Application of Digital Com-
FIGURE P9.1

    dH/dt = F(t) − kH^{3/2},    with initial condition H(0)
Problems

9.1 A continuous flow, stirred tank reactor (illustrated in Figure P9.1) can be described by the following deterministic model for the component designated by c:

    V dc/dt = F(c_F − c) − VR,    c(0) = 0

where V = …, c = …, c_F = …, t = …

FIGURE P9.2

    F₁ = C_A √(p₁ − p₂)
    F₂ = C_B √(p₂ − p₃)
FIGURE P9.3 [flow system with level H, pressures p₀ and p₂, flow F₁, and valves A and B; graphic residue omitted.]
    d²y/dt² + a₁ dy/dt + a₂y = x(t)

which can be rewritten as the pair of first-order equations

    dy/dt = w
    dw/dt = −a₁w − a₂y + x(t)

For the packed bed (Figure P9.5) the dispersion model is written in terms of

    a = D/vL
    D = dispersion coefficient
    v = velocity
    L = length
    y = dimensionless concentration
    z = dimensionless axial length, 0 ≤ z ≤ 1

FIGURE P9.5 Packed bed with flow entering at z = 0 and leaving at z = 1.

TABLE P9.5 [Boundary conditions and solutions to the model; the flattened layout in the source pairs entrance conditions (y = 1.0, or y = 1 + a dy/dz at z = 0) with exit conditions (dy/dz = 0 at z = 1.0, or y → 0 as z → ∞) and the corresponding solutions], where

    m₁ = (1/2a)(1 − √(1 + 4aβ))
    m₂ = (1/2a)(1 + √(1 + 4aβ))
72
195
377
542
687
783
911
144
216
288
346
432
-k1CB
k 2CE
    R, Initial Rate            F, Feed Rate        Pressure of Alcohol
    (lb moles alcohol/         (100% alcohol)      (atm)
    (hr)(lb catalyst))
    0.0392                     0.01359             1.0
    0.0416                     0.01366             7.0
    0.0416                     0.01394             4.0
    0.0326                     0.01367             10.0
    0.0247                     0.01398             14.5
    0.0415                     0.01389             5.5
    0.0376                     0.01384             8.5
    0.0420                     0.01392             3.0
    0.0295                     0.01362             0.22
    0.0410                     0.01390             1.0

    r = (K_A p_A / 2k_R) {[k_H + k′_H(1 + K_A p_A)²] − {[k_H + k′_H(1 + K_A p_A)²]² − … k′_H}^{1/2}}

by minimization of Σᵢ₌₁ⁿ (rᵢ − Rᵢ)², where Rᵢ = rᵢ + εᵢ.
TABLE P9.8

    Run    t = 80    160     320     640     1280 (sec)
    1      14.7      23.4    34.3    34.6    20.3
    2      3.72      3.81    17.2    20.0    23.9
    3      13.3      27.1    43.0    58.0    49.0
    4      30.8      44.4    46.7    24.9    2.94
    5      62.6      88.0    89.5    43.4    5.80
A+B~C+F
A+ C~D+F
PROBLEMS
dC
as follows:
d(A =
dA
dt
dB
-dt
dC
dt
= k1AB
dCB
d(=
-k1AB
-k1CACB k 2CACD
(a)
-k1CACB
(b)
sc;
-dt = k1CAC B
- k 2AC
(c)
dC
"dtD = k1CACB
dD
di = k 2AC -
k 3AD
=k
AD
Time (min)
A x 103
(mole/liter)
4.50
8.67
12.67
17.75
22.67
27.08
32.00
36.00
46.33
57.00
69.00
76.75
90.00
102.00
108.00
147.92
198.00
241.75
270.25
326.25
418.00
501.00
51.40
14.22
13.35
12.32
11.81
11.39
10.92
10.54
9.780
9.157
8.594
8.395
7.891
7.510
7.370
6.646
5.883
5.322
, 4.960
4.518
4.075
3.715
= 14.7
k = 1.53
k = 0.294
9.10
(e)
Time
0.00
1.00
2.00
3.00
4.00
5.00
6.00
7.00
8.00
k1
(d)
= D(O) = 0
A(O) = 0.02090 mole/liter
B(O) = (t)A(O)
k 2CACD
-dCE = k 2(CACD ) y,
dt
dE
dt
327
9.00
10.00
1.5000
1.1529
1.1652
0.1147
0.9333
0.9251
0.9465
0.7806
0.7941
0.7022
0.6675
0.6803
0.6495
0.5801
0.5724
0.5133
0.5104
0.5103
0.5383
0.4535
0.4528
0.4501
0.4060
0.4072
0.3689
0.3658
0.3705
0.3961
0.3314
0.3276
0.3397
1.0000
0.6747
0.6795
0.6198
0.4944
0.4887
0.5279
0.3828
0.3771
0.4256
0.3083
0.3106
0.2993
0.2558
0.2547
0.2468
0.2173
0.2215
0.1855
0.1881
0.1897
0.2103
0.1653
0.1626
0.1979
0.1473
0.1494
0.1489
0.1327
0.1327
0.1061
0.0000
0.3252
0.3185
0.3111
0.5055
0.5060
0.5352
0.6171
0.6224
0.6145
0.6916
0.6927
0.7034
0.7441
0.7869
0.6495
0.7826
0.7863
0.7144
0.8118
0.8237
0.8014
0.8345
0.8264
0.8312
0.8526
0.8624
0.9286
0.8672
0.8744
0.8383
0.0000
0.3035
0.3056
0.3033
0.4444
0.4442
0.4372
0.5148
0.5136
0.5058
0.5508
0.5570
0.5479
0.5684
0.5688
0.5916
0.5758
0.5790
0.5376
0.5772
0.5750
0.5738
0.5752
0.5766
0.5874
0.5711
0.5736
0.5725
0.5659
0.5613
0.5756
Characterization
of Data
0.0000 True
0.0434 True
0.0430 Good
0.0492 Bad
0.1221 True
0.1201 Good
0.1321 Bad
0.2044 True
0.2090 Good
0.2048 Bad
0.2815 True
0.2806 Good
0.2770 Bad
0.3513 True
0.3536 Good
0.3294 Bad
0.4136 True
0.4103 Good
0.4688 Bad
0.4691 True
0.4707 Good
0.4837 Bad
0.5186 True
0.5135 Good
0.4671 Bad
0.5628 True
0.5699 Good
0.6203 Bad
0.6024 True
0.6028 Good
0.6599 Bad
328
a 2
data
0.07,
0.09 .
(b)
(c)
dS
- k 8US
k gT
k 12W
kllX
+ k 12Y
dT
dt = k 1RS - k sT - kaTS - kgT
dU
dt
dW
dt
k 2RS
k 3RS - k 1 2W
dX
dt = kaTS
dY
+ ksT -
k 7US - kaUS
+ kllX + k 12 Y
k 7US - kss X
..
dt = k 8US - k 12Y
Design a suitable set of experiments to estimate k 1
through k 12 , and carry out the estimation.
Each k can be represented as follows:

    kᵢ = pᵢ exp {bᵢ [(1/T) − (1/493)]}
    i     pᵢ              bᵢ
    1     6.3 × 10⁻³      17,800
    2     2.1 × 10⁻³      18,000
    3     9.0 × 10⁻³      11,100
    4     1.1 × 10⁻³      12,300
    5     2.8 × 10⁻²      13,100
    6     5.0 × 10⁻⁴      14,000
    7     4.9 × 10⁻⁴      20,800
    8     5.6 × 10^…      4,330
    9     6.7 × 10⁻⁴      28,900
    10    2.8 × 10⁻²      16,000
    11    1.03 × 10⁻¹     14,600
    12    3.5 × 10⁻¹      10,800
    t (min)    Y (lb H₂O/lb dry solid)
    0.9        0.1834
    1.0        0.1634
    1.1        0.1460
    1.2        0.1313
    1.3        0.1198
    1.4        0.1117
9.18 In Table P9.18 are yields (concentrations) of the desired component P formed in the following reactions:

    A + B → P
    A + P → R
dCA
di=
I
,.
di=
dCB
-k1cB k 2cp
CA(O) = CAO
-k1cB
CB(O)
cp(O)
=0
de;
di = ksc
- k 2cp
CBO
(a)
(a)
= k ec
CR(O)
=0
CBO
(mole/
Run liter)
I
2
3
4
5
6
7
8
9
10
11
12
dCR
(jf
(b)
329
13
14
15
16
I
I
2
2
I
I
2
2
1
1
2
2
1
1
2
2
t=
t=
t=
t=
t=
1 hr
2 hr
4 hr
8hr
16 hr
3.17
14.7
4.80
23.2
3.72
17.9
8.60
30.9
7.48
25.3
13.3
50.8
9.15
30.8
22.8
62.6
5.39
23.4
10.8
39.0
3.81
28.3
13.3
51.4 .
9.93
35.3
27.1
75.6
15.8
44.4
37.2
88.0
8.66
34.3
22.5
55.6
17.2
40.5
25.9
72.2
20.0
39.1
43.0
84.2
27.5
46.7
57.9
89.5
15.9
34.6
34.6
63.4
20.0
34.2
39.8
76.4
30.9
28.4
58.0
57.0
33.9
24.9
69.1
43.4
22.6
20.3
42.0
41.6
23.9
21.6
50.8
38.9
24.9
7.50
49.4
11.5
23.0
2.94
53.9
5.80
CHAPTER 10

Parameter Estimation in Models Containing Partial Differential Equations

FIGURE 10.1-1 Typical input functions: (a) unit step, (b) impulse, (c) sinusoidal, and (d) random square wave (random binary sequence).
    ∂y/∂t + v ∂y/∂z = α δ(t) δ(z),    y(0⁻, z) = 0    (10.2-1)

with solution

    y(t, z) = (m/A) δ(t − z/v)    (10.2-2)

and boundary input y(t, 0) = β δ(t).

[Input-output residue: input x(t) = a sin ωt, output y(t) = b sin(ωt + ψ).]
In Equation 10.2-1, v represents the velocity of the flowing fluid and y is the dependent variable. Because in the definition of δ(x), ∫δ(x) dx ≡ 1, δ(t) must have the units of time⁻¹, δ(z) has the units of length⁻¹, and the units of α and β must be properly assigned to make the differential equation dimensionally consistent. Suppose that a fixed quantity of tracer material m, uniformly spread over the cross-section A of the duct or channel (in which the flow takes place), is introduced into the duct at a fixed location (z = 0) at t = 0. Then the coefficient in the source term specifically becomes α = m/A.
Process
Heat transfer to a pipe
with flowing fluid
Model of Process
oT (t, z)
ot
+v
Model Solution
oT (t , z) = h('" _ T(
)]
OZ
J a
t, Z
T(O, z) . = Ta[l - e- (h/VlO]
T(t,O) = ToU(t)
VL
Solid:
C
= 0, t
z
<
VL
z ::; 0 -
C(t,O) = cLOU(!)
10 = Bessel function
~
dummy variable
10.2-2
Zo:
source =
A 8(z
- zo) 8(t)
8(z - zo)
Model
Frequency Response
y(t, z) =
y(t, z) =
vz ( 1
exp [2K
G~) \]
y(O, z) = 0
y(t,O) = el"'t
oy (t, O) = 0
oz
oy
ot
oy
02y
v oz = K OZ2 - ky
J1 +
4K(kv + iw)]
2
.
exp (lOOt)
y(O,z) = 0
y(t, 0) = el "' !
oy (t , O) = 0
oz
Y(O,r,z) = 0
oy (t, 0, z) = 0
or
y(t, r, 0) = yoel"'t
y(t, R, z) = 0
,\n
oy (t, r, 0) = 0
oz
oy
H ot
ox
hot
+L
oy
V - = - ka(y - y*)
oz
Y ( t, z ) =
'\1'\2 [
Q
exp ( q2Z0 + q1Z)
ox
- = ka(y - y*)
OZ
x (t , z ) =
1
Q
[,\2 exp (Q2Z0 + Q1Z)
y* = Kx
y(O, z) = 0
y(t, zo) = e -I ",t
x(O, z) = 0
x(t, 0) = el"'!
"12
(10.2-4)
where
PARAMETER MODELS
oy = K 02y
ot
oz 2
333
VQ1,2
Q1 .2 = -
Kka
Hiw
+ ka
f3
= (Vh
VL ;
y = Hhiw + (h + HK)ka
+ LH)iw + (VK + L)ka
+ Q2Z)] el"'t
334
oCt)
8t +
fl OCA
0
8i =
02CA
fll OZ2 + fl2C~
CA(O, z)
= c.
CA(t,O)
= CAo
OCA{t, L)
oz
=0
= c.
CA(O)
= CAo
dCA(L) _ 0
dz
_.~
ot
= flV2y
2:
00
ajUj e-m,Pt
(10.3-1)
j=O
~;::::-r-.--.-------r-.,...-.-----r----r-.-~
0.9
0.8
0.7
Yo = initial temperature
0.6
0.5
0.4
0.3
0.2
J5
>':
.."
i
0.1
g
.;;;
335
= -flmot + constant
(10.3-3)
<:
~ 0.05
0.04
0.03
0.02
oy(O, t)
Infinitely wide
plate
II Infinitely long
square bar
nr Infinitely long
cylinder
IV Cube
V Cylinder of length
equal to diameter
VI Sphere
0.1
oz
(10.3-4)
=0
~ - 1)"+1 ~2
+;s
-cos (A" I) e-I>.;DtIL2] (10.3-5)
_qL [flt
2 - 3z
y(
Z, )
t - Yo - k 2 62
336
v1
ze = - L
'e
'e
6fJ
L2t
(10.3-6)
~
L
.
.= 1
Y(Z., tl ) - Yo
Y (z., ti) - YeO, t l )
6fJ]2
t
L2 l
P=
i (Y
Yo
Y~I - Y2 i
li -
i= 1
)t
tS'{p}
(-..!-)'
i
L: t?
1=1
(10.3-8)
1=1
~ UyCJLxUy -
._I_ exp
.y2;
[_!2 Uy2 -
(ZfLx - fLy)2 2 2]
2pxyuxuyz + Z Ux
(10.3-9)
(Yll
(Yll
Yo)
Y21)
Expected Value
Variance
(h - Yo)
(Yl - Y2)
a2
2a2
(10.3-7)
L: tf
1= 1
Unfortunately, the expected value of β̂ is not β; that is, β̂ is a biased estimator, and the calculation of the extent of the bias is by no means easy.
8c
(a)
= 0
at t
= 0, Z
;::;
C = Co
at z
= L,
= Co
L
t > 0
7T n = 1
C2
(c)
= I
+ exp [ -~ exp [ -
(b)
Cl :
at z = 0, t > 0
337
9(Ir f)t]]
sin 7T~3
+ .. -} (d)
Note that all the even terms cancel and that the odd terms combine during the addition of c₁ to c₂ because

    sin [π(L − a)/L] = sin (πa/L),    sin [2π(L − a)/L] = −sin (2πa/L),    etc.
An additional reduction in the complexity of Equation (d) can be effected by proper choice of the location of the two sampling points. Suppose that the distance a is selected so that

    sin (3πa/L) = 0    or    a = L/3

Then the third term in Equation (d) vanishes, and for any reasonable times the coefficients of the exponents in the expression exp [−25(π/L)²Dt] and in higher order terms are so small that effectively
f)t
. 7TU
-Cl = -U + -2 { -exp [(7T)2]
- smCo
7T
+ ~ exp
:: = L_-_u
Co
[-
, L
(~r f)t]sin
2Z + ...}
Cl +
Co
Ca
Co
C2
= I _
7T
L2
[(7T)
- L 2 ~t ]
f);]sin 7T_(.:....L---::--_u....:.)
- L
+ ~ exp [ - (~r f)t]sin 27T(LL- u) + .. -}
+ 3{-exp
7T
[_ (~)2
L
FIGURE E10.3-1a [diffusion cell with slide and pressure-equalization connection; graphic residue omitted.]

FIGURE E10.3-1b [concentration ratio versus time, 20-100 min: experimental data compared with values predicted by Equation (f).]
at
ar
+ ~ at)
r or
f(R ; I ) = an cos wI
of
no.
02f
OZ2
-=ex
at
I)
f(o, t)
FIGURE
(a)
o f (O, I) = 0
or
where
EIO.3-2a
f(O , I) =
Iber" (R vw/ex)
variable
J':;.)
(b)
Amplitude ratio :
,p = w!!1t =
-A
ao
e - V-L"'/2a
- LJ w
,
2ex
(R~)]
ber(R~)
t = time
a₀ = amplitude of the maximum temperature deviation at z = 0, a deterministic variable
α = thermal diffusivity
ω = frequency of the temperature fluctuations

The time-dependent solution of Equation (a) is quite complicated, but the solution at the point z = L as t becomes large is
an
bei 2 (R vw/ex)] %
_.
tit; I)
ao cos wt
where
(d)
,p =
w!!1t = tan ?
be i (R ~)
~
(ber
(R w/ex)
Amplitude ratio:
Ao =
1
an
[ber 2 (R V w/ex) + bei" (R V w/ex)] %
It is interesting to note from Figure E10.3-2d that ψ becomes a linear function of √ω over certain ranges of ω:

    ψ = b₁√ω − b₀    (g)
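When ψ is linear in √ω, its slope carries the diffusivity. A hedged sketch under the assumption that ψ = L√(ω/2α), the semi-infinite-slab form quoted above; the thickness, frequencies, and "true" diffusivity below are invented for illustration:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares slope and intercept for psi = b1*x - b0 (Equation (g))."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b0 = b1 * mx - my
    return b1, b0

# illustrative (assumed) values: slab thickness L and a "true" thermal diffusivity
L, alpha_true = 0.05, 1.2e-7                    # m, m^2/s
omegas = [0.01, 0.02, 0.04, 0.08]               # rad/s
psis = [L * math.sqrt(w / (2.0 * alpha_true)) for w in omegas]

b1, b0 = fit_line([math.sqrt(w) for w in omegas], psis)
alpha_hat = L ** 2 / (2.0 * b1 ** 2)            # invert the slope b1 = L / sqrt(2*alpha)
```

With measured phase angles in place of the synthetic ψ values, the same two lines give the least squares diffusivity estimate the caption of Figure E10.3-2d refers to.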
339
6
= experimental data
en
c
-c
.s
e
;a
.,
II
Time~
(1)
c,
C
<tJ
<tJ
f =
0.172J~ R ~ 0.631
(h)
0.8
~I<
0.6
.~
(1)
-c
='
0.4
..c
c..
v'W
FIGURE E10.3-2d Phase angle between signal at radius R and response at center of cylinder, calculated by least squares using Equation (g).
oc*
oz*
1 02C*
where
t
c*
=T
t di
. Iess time
.
= V = T rmension
qt
Cay
d kc*(z* s)
k'
8- 0
ds
_ ( l) k
lim c*(z*, s) - mk
variable
1;,number
a dimensionless coefficient called the Peclet
8- 0
m,
m, = fa""
mf
(10.4-2)
t*c* dt*
fa"" (t*
m2
m 1 -
C*(Z*, s)
li 02
+ ml2 = s....
im
o
(10.4-5)
" 2
cJS
(10.4-3)
II
= " Cay
L'" tC dt
t 2C dt
1II
t 10
=-
-e- -
Mf =
/2[fa
(l0.4-6b)
fa""
M1 =
i"" e.dt
12 =
fa"" c* dt *
fa"" c* dt* =
t a
(10.4-4)
(l0.4-6a)
-ml)2c* dt*
-=-1
r oc*(z* , s)
= -Ln;} os
and
fa"" c*(z*, t) dt
11m
z* =
fa"" t kc*(z*, t) dt
~:;:--~--
then
Vt
p =
q2
II
mV
(10.4-7a)
(l0.4-7b)
1963.
I
I
(far]
~ (~r[I2 - 2 ~ If + (~rINo]
t o. Levenspiel and
I
(1O.4-6c)
Ii
I
10.4-1
341
RELATION BETWEEN MOMENTS AND PECLET NUMBER FOR MODELS WITH AXIAL
DISPERSION ONLY
Solution
Configuration
Moments
ml = 1
m#=-+
2
P
r
Response
/; input
+P
c* = eP/2
p2
Response
/; input
an =
~ (-l)n +18a~
~l 4a~ + 4P + p 2
e- ant
+ 4a~
4P
None available
ml =
1
1 + ];[2
.(1 - a)e-PZi - (I -
m#2
2 1
+
P r
= -
Response
/; input
,8)e-P (Z~-Z2 )]
,8)e-P (Z~-Z2)]}
Any input
zt
z~
. Response Response
None available
"TTR
Any input
zj
z~
6.ml = 1 6.mf
1 - ,8
--p- [1 1-,8
Response Response
jjb
,8 = jj
P = Peclet number;
x {4(l
zm
- z:)]
+ ,8)[exp (- P) -
1]
+ 4P(z~
- z:)
[P(z~
- z:)]
342
I:
<p(t)X(t) dt
I:
= I:
tS'{I } =
Var {I}
<P(t)tS'{X(t)} dt
(10.4-8)
<p(t) Var {X (t )} dt
(10.4-9)
P-x(t) =
~ ~ Xj(t)
(10.4-10)
1=1
Var {p,x(t)} =
Var {P-x(t)} =
ai~t) + ~ ~
(1 -
~) [rxx(k, t)
- /Li(t)]
k=l
n- l
uHt)
- n + -n2~
k= l
2
R xx(k , t) - P-x(t)
(10.4-13)
Io'"
tS'{I l } =
tS'{I2} =
Io'" t 2tS'{C} dt
tS'{ C} dt =
LX) c dt
Io'" tc2dt
r'" tcdt = m,
Jo
i tS'(~~)
~2 ~ ~
k = 0,1 ,2
Covar {Xi(t)Xk(t)}
1= 1 k=l
(10.4-12)
m,
2
1+P
After
q2 1
M1 = -mVn
2 In
1=1
<.
Var {M } =
1
Z (Mil
i_=_1
- M 1)2
n- 1
and thus
1'=_2_
343
M1 - 1
Va;{P}
1}
~ e~2r Va:{M
(m1)Z2 - (m1)Zl
(10.4-14)
dm:
(mf)Z2 - (mf)Zl
(10.4-15)
~M1
t R.
/1
+ (82
81 )
344
FIGURE
ElOA-la
12 =
[(Cltl + 1 -
+ tl+1 tl +
C1+1 tl)(tf+1
tJ)
1=0
-be
(M1);\ + (8 2
C1+1 -
t l + l tl
t1+1tl
+ t l3)]
(d)
                      Point 1         Point 2
    I₀ (ppm)(sec)     112.18          112.22
    I₁ (ppm)(sec)²    1.905 × 10³     5.833 × 10³
    I₂ (ppm)(sec)³    3.923 × 10⁴     3.358 × 10⁵

                      Point 1         Point 2
    M₁                16.99           51.98
    M₂                349.7           2992.7

which gave

    ΔM̂₁ = [(51.98 − 16.99) + (78.84 − 32.88)](0.7537)/60.96 = 1.001
TIMES
TABLE El0.41
(continued)
.
,
'
Ii,
r-
0.000
0.012
0.024
0.083
0.282
0.836
1.461
2.061
2.814
3.772
4.832
5.314
5.772
6.189
6.492
6.746
6.913
6.999
6.991
6.913
6.754
6.563
6.304
6.025
5.730
5.078
4.416
3.785
3.220
2.719
2.263
1.900
1.556
1.271
1.056
0.871
0.715
0.590
0.479
0.400
0.330
0.268
0.227
0.194
0.150
0.125
0.109
0.089
0.073
0.063
Time,
t (sec)
0.0
0.6
1.2
1.8
2.4
3.6
4.8
6.0
7.2
8.4
9.6
10.2
10.8
11.4
12.0
12.6
13.2
13.8
14.4
15.0
15.6
16.2
16.8
17.4
.-_..--18;0"-_.
19.2
20.4
21.6
22.8
24.0
25.2
26.4
27.6
28.8
30.0
31.2
32.4
33.6
34.8
36.0
37.2
38.4
39.6
40.8
42.0
43.2
44.4
45.6
46.8
48.0
Time,
t (sec)
0.0
3.6
7.2
9.6
12.0
14.4
16.8
19.2
21.6
25.2
28.8
32.4
36.0
39.6
40.8
42.0
43.2
44.4
45.6
46.8
48.0
49.2
50.4
51.6
52.8
54.0
55.2
57.6
60.0
62.4
66.0
69.6
73.2
76.8
79.2
81.6
84.0
86.4
88.8
91.2
93.6
96.0
98.4
100.8
103.2
106.8
110.4
114.0
117.6
121.2
34S
Concen
tration,
C(ppm)
Time,
t (sec)
0.053
0.039
0.031
0.026
0.020
0.016
0.010
0.004
0.000
49.8
51.6
53.4
55.2
57.0
60.0
66.0
72.0
84.0
Concen
tration,
C(ppm)
Time,
0.015
0.008
0.000
127.2
132.0
134.0
t (sec)
    ΔM̂₂ = 0.0376

and

    P̂ = 2/ΔM̂₂ = 2/0.0376 = 53.2

with

    [Var {P̂}]^{1/2} ≈ (2/ΔM̂₂²)[Var {ΔM̂₂}]^{1/2} = 4.96
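The moment arithmetic above is mechanical once the tracer curves are integrated. A hedged sketch (the pulses below are synthetic, not the table's data): it forms I₀ = ∫C dt, I₁ = ∫tC dt, and I₂ = ∫t²C dt by the trapezoid rule, and estimates the Peclet number from the change in dimensionless variance between two measurement points, using the dispersion-only relation P ≈ 2/Δ(μ₂/m₁²) quoted in Table 10.4-1:

```python
import math

def moments(t, c):
    """I0 = ∫c dt, mean residence time m1 = I1/I0, central second moment mu2."""
    def trapz(f):
        return sum(0.5 * (f[i] + f[i + 1]) * (t[i + 1] - t[i]) for i in range(len(t) - 1))
    I0 = trapz(c)
    I1 = trapz([ti * ci for ti, ci in zip(t, c)])
    I2 = trapz([ti * ti * ci for ti, ci in zip(t, c)])
    m1 = I1 / I0
    mu2 = I2 / I0 - m1 * m1
    return I0, m1, mu2

def peclet_two_point(t, c_up, c_down):
    """P ≈ 2 / Δ(mu2/m1^2), the dimensionless-variance change between two points."""
    _, m1u, vu = moments(t, c_up)
    _, m1d, vd = moments(t, c_down)
    return 2.0 / (vd / m1d ** 2 - vu / m1u ** 2)

# synthetic pulses: the downstream pulse arrives later and is broader
t = [0.5 * i for i in range(200)]
c_up = [math.exp(-(((x - 20.0) / 3.0) ** 2)) for x in t]
c_down = [math.exp(-(((x - 50.0) / 9.0) ** 2)) for x in t]
_, m1_up, _ = moments(t, c_up)
P = peclet_two_point(t, c_up, c_down)
```

Using two measuring points, as in the example, subtracts out whatever spreading the tracer suffered upstream of the first point, which is why the difference of normalized moments appears rather than the moments themselves.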
346
ot
OC
+ v. oz =
- 02C DR 0 ( OC)
D L OZ2 +
or r or
+ m o(z
- zo)f(r)
(10.4-18)
with all the coefficients assumed to be constant. The
last term represents the" source" (injection) of tracer at
a specific point . For simplification, Equation 10.4-18 is
put in dimensionless form as follows:
t* = v.t
R
z* =!...
R
r* =!....
R
C*
=.!!....
Cay
=~R2v.cav - -
:~: - ;L :;~: - ;R r
l*
[0:*
(r*:~:)]
= o(z* - zt)f(r*)
(10.4-19)
j-L
Entrance
Test section
t
section
-1
Tracer
input
First
measurement
point
FIGURE
Second
measurement
point
Exit
section
~.
t
,I
[L
TABLE
10.4-2
347
Response to Input
Experimental Scheme
~~
~/~
~
(1)
Tracer in
Measurement
point
(point source)
Restirction : DI/ =
ii,.:
Tracer in
with
(2)
with
(3)
Measurement
point
No restriction
Tracer in
Restriction : DL
=0
-~
~~
Tracer in
(finite injector Measurement
tube) . . ......point
z=o z=zo
.Z
with
(4)
radius of injector = E; e = E/R
Ze
~?'?2'71""'--
Tracer in
. 'J' ..
1 +-
a~
>
4 Pr.Pn
Measurement
point
For (a) :
Measurement
point
(5)
(a)
....
For N, for (b) , refer to reference 5
Tracer in
Measurement
point
(b)
i .
348
TABLE
10.4-3
1.
DL
-Flow
Boundary
Conditions
Differential Equations
Model
u=
e(oo,O) = finite
u.;. U()
oe
1 oe
1 02e
oe
1 02e
P Peclet Number
e(zlo t) = 1
Dispersion only
2.
Model Coefficients
e(z,O) = 0
oe ; oe . 102e
-+=-
ot
oz P OZ2
Il()
-+--=-ot
h OZ P OZ2
Same as Model 1
P
h = ulu
Dispersion only
3'j)_Flow
L Mass transfer
oe
=Il()
y ot
+ OZ = P OZ2
Same as Modell
10q k Be
~t~_
is
:
P
y = [l
k1
+ (k1/)J > 1
distribution coefficient for
equilibrium adsorption
4.
Same as Model 1
5.
oe 'oe
1 02
1 oq
- + - = - -e - - _.
e ot
ot
oz . P BZ 2
Dispersion plus accumulation
by finite rate adsorption
6.
15 '
~Flow
u=uo
Mass transfer
oq = k .. - e*) - k(e q
- )
ot . 2(e
k1
oe
oe
1+ -OZ
ot
Uniform concentration, C.
Dispersion plus interphase
mass transfer to a
(l - f)
oe
1 02e
= - - (1 P OZ2
oe.
at
oe
Same as Model 1
plus
q(z,O) = 0
P
k1
k 2 = mass transfer coefficient
Same as Modell
plus
e.(z,O) = 0
oe.
I) -ot
ka(e - e.)
1 02 e
I ot + fu = P OZ2
7. Dispersion plus accumulation by
finite rate adsorption plus mass
transfer to a stagnant region
_ (l - /)(1
(1 -
oe,
f).at
+ k 1 ) oe. _
Ie
= k a(e
dt
! oqm =
e,)
ot
1
I
0
.
Same as Model 1
plus
e.(z,O) = 0
qm(Z,O) = 0
ka
349
Deterministic Moments
2
P
w
p
2y 2
wy
p
2j2
wi
1<
w 2k 2
4+
2(1 - 1)2
ka
+p
2(I +kI1)2
2k~
+-+ek P
P [w 2
+ (k 2/k 1 )2]
w 2k a
(k a/(I - 1))2]
4+
P[w 2
w [
p I + (I
- f)[w 2
k~_1
(k a/(I -
1))ijJ
4+ P
k 2w
+ [w2 +
]
(k 2/k 1 )2]
350
TABLE
10.4-3
h = u/uo
Problems
10.1 A packed bed through which a fluid passes in steady
flow is a common piece of chemical processing
equipment and also has significance in oil reservoir
engineering. If heat transfer from the fluid to the
bed is of interest, the process can be described in a
number of ways. Suppose that a multiple gradient
balance is selected with one adjustable parameter in
the following form :
    ρC_p [∂T/∂t + v_z ∂T/∂z] = …

where
    ρ = density
    C_p = heat capacity
    v_z = velocity
    z = axial direction
    T = temperature, the random variable
    r = radial direction
Supplementary References
Astarita, G., Ma ss Transfer with Chemical Reaction, Elsevier
Co., Amsterdam, 1967.
Bellman, R., Detchmendy, D., Kagiwada, H., and Kalaba, R.,
"On the Identification of Systems and Unscrambling of
Data III : One-Dimensional Wave and Diffusion Processes,"
    … = (2h/R)(T_w − T) + ΔH_Rxn [−k₀C exp (−ΔE/RT)]
where
    C = concentration, a random variable
    ΔH_Rxn = heat of reaction, a constant (known)
    ΔE = energy of activation, a constant
    R = ideal gas constant (known)
    k₀ = a constant
    h = interphase heat transfer coefficient, a constant
    T_w = wall temperature (known)
and the remaining notation is the same as in Problem 10.1.
By what type of experiments can estimates of k₀, h, and ΔE be obtained independently? What will be the respective boundary conditions? Note: T and C are quite sensitive to small changes in … and T in certain ranges of these variables for the model as formulated above.
10.3 For incompressible laminar flow, the Navier-Stokes equations are

    ρ Dv/Dt = −∇(p + ρgz) + μ∇²v

where ρ = …, v = …, p = …, g = …, z = …, μ = …, and D/Dt = substantial derivative = ∂/∂t + v·∇.
c* =
z*
WA OO
!,.
Dt
= L 2
02 V
t* < 0,
t* ~ 0,
o .s z* :s;
z* = 0
z* = I
WA
02 V
+oy2
ox 2
+ .. .]
diffusion direction
L = slab width
WA = mass fraction of A
z
25.p)
WA .- 'W A co
WAO -
+ !exp(-9.p)
+ i\ exp ( 2
8c*
02C*
ot * = o.z*
where
351
l ap
(a)
=-
fL 8z
~ ~
V = _ 16b
3
7T fL OZ
L.."=' .3....
[(-1)";'
n3
cosh
'!ji cos
mra
cosh 2fj
n7TY]
2b
352
500 ,----r---.-------r----,,--.,....----,
'!!!!!!]}
.g400
s
c:
Response x 102
1.90
3.80
5.70
7.60
9.50
11.40
13.30
15.20
17.10
19.00
20.90
22.80
24.71
26.61
28.50
30.40
32.30
34.20
36.10
38.10
40.00
41.81
43.71
45.61
47.51
49.41
51.31
53.21
0.000
' 0.000
0.356
1.245
2.194
2.759
3.229
3.431
3.454
3.346
3.197
2.996
2.761
2.527
2.303
2.089
1.876
1.667
1.480
1.315
1.169
1.040
0.926
0.821
0.724
0.644
0.586
0.534
.~ ~
55.11
57.01
58.91
60.81
62.71
64.61
66.51
68.41
70.31
72.21
74.11
76.01
77.92
79.82
81:71
83.62
85.52
.87.42
\ 89.32
91.22
93.12
95.92
96;92
98.82
100.72
102.62
104.52
0.486
0.420
0.363
0.314
0.271
0.241
0.215
0.192
0.171
0.154
0.142
0.131
0.120
0.111
0.094
0.081
0.068
0.0583
0.0496
0.0422
0.0359
0.0305
0.026
0.022
0.018
0.016
0.014
g300
Q)
~ 200
...
.~
~ 100
a:
20
10
30
40
50
60
Time (min)
FIGURE
Water rate
0.94 ft/sec
Air rate
0.40 ft/sec
Volume fraction 0.8
liquid
e (mv) t (sec)
e (mv) t (sec)
(e mv) t (sec)
0
85
200
310
1.22
1.52
1.72
1.92
----
2.12
2.22
2.32
2.52
350
355
340
282
200
128
73
50
0
2.92
3.32
3.72
4.12
6.12
Water and air rates are total volumetric flows into the column divided by the column cross-sectional area.
Based on Table P10.10, determine how well a:
(a) maximum gradient model
oe
ot
ec
+ v. oz
A 8(x) 8(t)
ec
+ v. ox
lh
02e
or
r-"' "
PROBLEMS
TABLE PIO.l1*
= concentration, g/liter
= tracer injection, g
= tube radius, ern
t W. E. Schiesser and
353
-.0,
Pulse Response
eQ/mo, sec ?
Time
(sec) Porous
Non-
porous
Time
Non
(sec) Porous porous
12.5
13.5
14.5
15.5
16.5
17.5
18.5
19.5
20.5
21.5
22.5
23.5
24.5
25.5
26.5
27.5
29.0
31.0
33.0
35.0
37.0
39.0
41.0
43.0
45.0
0.0015
0.0050
0.0100
0.0190
0.0380
0.0620
0.0850
0.1100
0.1030
0.1010
0.0910
0.0780
0.0650
0.0530
0.0420
0.0330
0.0240
0.0140
0.0090
0.0070
0.0050
0.0040
0.0030
0.0020
0.0020
1.0
3.0
5.0
7.0
9.0
11.0
13.0
15.0
17.0
19.0
21.0
23.0
25.0
27.0
29.0
31.0
33.0
35.0
37.0
39.0
41.0
43.0
45.0
0.0015
0.0050
0.0080
0.0160
0.0220
0.0350
0.0460
0.0590
0.0650
0.0670
0.0660
0.0630
0.0580
0.0510
0.0470
0.0400
0.0310
0.0220
0.0170
0.01250
0.0100
0.0075
0.0060
0.0050
0.0030
1.000
1.000
1.000
1.000
1.000
1.000
1.000
0.990
0.950
0.820
0.630
0.620
0.500
0.410
0.340
0.290
0.260
0.220
0.210
0.200
0.190
0.180
Notation:
Q = volumetric flow rate, ft³ liquid/sec
c = concentration
m₀ = moles injected in a pulse input
t = time elapsed since injection of input
1.000
1.000
1.000
1.000
1.000
1.000
1.000
0.950
0.880
0.750
0.450
0.290
0.190
0.120
0.070
0.050
0.040
0.030
0.020
0.020
0.010
0.010
CHAPTER 11
Parameter Estimation in
Transfer Functions
with zero initial conditions

y(0) = dy(0)/dt = d²y(0)/dt² = ⋯ = d^(n−1)y(0)/dt^(n−1) = 0

and, for an impulse input,

x̄(s) = ℒ[x(t)] = ∫₀^∞ e^(−st) δ(t) dt = 1

The transfer function is the ratio

ȳ(s)/x̄(s) = g(s) = (b_m s^m + ⋯ + b₁s + b₀)/(s^n + a_(n−1)s^(n−1) + ⋯ + a₁s + a₀)   (11.0-1)

or

(s^n + a_(n−1)s^(n−1) + ⋯ + a₀)ȳ(s) = (b_m s^m + ⋯ + b₀)x̄(s)   (11.1-1)
where s (in this chapter) is the complex transform parameter and the overbar on the dependent variable signifies that it is in the transform domain rather than the time domain. If we rearrange Equation 11.1-1 into the following form:

ȳ(s) = g(s)x̄(s)   (11.1-4)

then

y(t) = ℒ⁻¹[ȳ(s)] = ℒ⁻¹[g(s)x̄(s)] = ∫₀^t g(t − σ)x(σ) dσ   (11.1-5)
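Equation 11.1-5 is simply the convolution of the impulse response with the input. A minimal numerical sketch (in modern Python rather than the FORTRAN of the period; the first-order g(t) = e^(−t) and the unit-step input are assumed purely for the check):

```python
# Convolution y(t) = ∫ g(t - σ) x(σ) dσ evaluated on a grid
# for g(t) = exp(-t) (impulse response of dy/dt + y = x) and a
# unit-step input, so the exact response is y(t) = 1 - exp(-t).
import math

dt = 0.001
t_end = 5.0
n = int(t_end / dt)
g = [math.exp(-k * dt) for k in range(n)]   # impulse response samples
x = [1.0] * n                               # unit step input

def convolve_at(i):
    """Riemann-sum approximation of the convolution integral at t = i*dt."""
    return sum(g[i - j] * x[j] for j in range(i + 1)) * dt

y = convolve_at(n - 1)                      # response at t = 5
exact = 1.0 - math.exp(-t_end)
```

The sum reproduces the analytic step response to within the step size of the grid.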
g_p(s) = ȳ_p(s)/x̄_p(s)   (11.1-6)

ḡ(s) = ȳ(s)/x̄(s)   (11.1-7)

[Figure: inlet thermal conductivity cell — inlet section — packed section — outlet thermal conductivity cell, with the cell transfer functions acting in series with the packed section]

g_oc(s)ȳ(s) = g_o(s)ȳ_p(s)

Consequently,   (11.1-8)
Equation 11.1-8 can be interpreted as relating the transfer function for the nonobservable packed section of the column to the observable transfer function for the entire column with and (in the denominator) without the packed section being present. By experimenting with and without the packed section, the instrumental end effects can be eliminated.
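At each frequency, this cancellation is just a complex division: the packed-section response is the overall measured response divided by the response measured without the packing. A sketch, with both the cell dynamics and the packing dynamics assumed first order purely for illustration:

```python
# g_total(iw) = g_cells(iw) * g_packed(iw); dividing out the response
# measured without the packed section recovers g_packed(iw).
g_cells = lambda w: 1.0 / complex(1.0, 0.5 * w)        # assumed instrument dynamics
g_packed_true = lambda w: 1.0 / complex(1.0, 2.0 * w)  # assumed packing dynamics

w = 1.3
g_with = g_cells(w) * g_packed_true(w)   # measured with packing
g_without = g_cells(w)                   # measured without packing
g_packed_est = g_with / g_without        # end effects cancel
```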
If the frequency response is desired, it can be obtained from the impulse response and/or the step response by the methods described in Hougen,† Nyquist et al.,‡ and Schechter and Wissler. The frequency response can more easily be obtained directly from the transfer function by replacing the parameter s by iω and separating the real and imaginary parts of g(iω). Figure 11.1-2 shows the relation between the time domain, the Laplace transform domain, and the frequency domain. It is because the inversion process indicated in the figure is so difficult that techniques of process analysis have arisen that make direct use of the parameters in the transfer function, and hence make estimation of the parameters in the transfer function itself of significance.
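The substitution s → iω can be sketched in a few lines (Python; the first-order transfer function below is an assumed example, not one taken from the text):

```python
# Frequency response of g(s) = 1/(tau*s + 1) obtained by setting s = i*omega
# and splitting g(i*omega) into magnitude (amplitude ratio) and phase angle.
import cmath

tau = 2.0
def g(s):
    return 1.0 / (tau * s + 1.0)

omega = 1.0 / tau                  # corner frequency
gj = g(complex(0.0, omega))        # replace s by i*omega
amplitude_ratio = abs(gj)          # = 1/sqrt(2) at the corner frequency
phase_deg = cmath.phase(gj) * 180.0 / cmath.pi   # = -45 degrees
```

At the corner frequency the amplitude ratio is 1/√2 and the phase lag is 45°, the familiar first-order result.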
The general set of first-order linear equations given by Equation 9.1-5, with constant α and zero initial conditions, is

dy/dt + αy = x(t),   y(0) = 0   (11.1-9)

† J. …
‡ J. K. Nyquist et al., Chem. Eng. Progr., Symp. Ser. No. 46, 98, 1963.
…, 1959.
FIGURE 11.1-2 Relations among the time domain, the Laplace transform domain, and the frequency (Fourier transform) domain: direct and inverse transformation between domains; multiplication by the transfer function, ȳ(s) = g(s)x̄(s) and ỹ(ω) = g(ω)x̃(ω); and the equivalent convolution process in the time domain, ∫ g(t − τ)x(τ) dτ.
TABLE 11.1-1 TRANSFER FUNCTION FOR A DISPERSION MODEL

Model:

Test section:  ∂c*/∂t* + ∂c*/∂z* = (1/P) ∂²c*/∂z*²

Exit section:  ∂c_b*/∂t* + ∂c_b*/∂z* = (1/P_b) ∂²c_b*/∂z*²

with

c*(z*, 0) = c_b*(z*, 0) = 0
c*(z_e*−) = c_b*(z_e*+)
c*(z₀, t) = c₀*
c_b*(∞, t) is finite
(1/P)∂c*/∂z* and (1/P_b)∂c_b*/∂z* matched at z_e*

P = Peclet number
* = dimensionless variable

Transfer function: a combination of factors A₁, …, A₄ of the form

A_i = [(1/4 + 1/P)^(1/2) ± 1/2] exp{(z_e* − z₀*)P[1/2 ± (1/4 + 1/P)^(1/2)]}
g(s) = (sI + α)⁻¹

ȳ(s) = g(s)x̄(s)   (11.1-10)

ȳ_j(s) = Σ_{i=1}^n g_ji(s)x̄_i(s),   j = 1, 2, …, n
11.2 Least Squares Estimation

φ = ∫₀^{t_f} [G(t) − g(t, β)]² dt   (11.2-1)

φ = Σ_{i=1} [G(t_i) − g(t_i, β)]²   (11.2-2)

∂φ/∂β_j = 0,   j = 1, 2, …, m

If the transfer function is written as the ratio

ḡ(s) = n̄(s)/d̄(s)

the partial fraction expansion is

ḡ(s) = α₁/(s + s₁) + α₂/(s + s₂) + ⋯   (11.2-3)
in which β has the elements a₁, a₂, and β, the partial fraction expansion can be shown to be

ḡ(s, β) = α₁/(s + a₁) + α₂/(s + a₂ + iβ) + α₃/(s + a₂ − iβ)

where the conditions ∂φ/∂β_j = 0, j = m + 1, …, 2m, can be written out as follows.
∂φ/∂α_j = 0 = (−2) ∫₀^{t_f} {G(t) − ℒ⁻¹[Σ_j α_j/(s + s_j)]} ℒ⁻¹[1/(s + s_j)] dt

∂φ/∂s_j = 0 = (2) ∫₀^{t_f} {G(t) − ℒ⁻¹[Σ_j α_j/(s + s_j)]} ℒ⁻¹[α_j/(s + s_j)²] dt

Successive integrals of the step response are formed:

Y₁(t) = ∫₀^t [1 − y(t)] dt

Y₂(t) = ∫₀^t [Y₁* − Y₁(t)] dt

⋮

Y_{m+n}(t) = ∫₀^t [Y*_{m+n−1} − Y_{m+n−1}(t)] dt   (11.2-7)

In other words,

Y₁* = ∫₀^∞ [1 − y(t)] dt

Y₂* = ∫₀^∞ [Y₁* − Y₁(t)] dt
Since

(1/s)[1 − g(s)]x̄₁(s) …

and

[1 − g(s)]/s → (a₁ − b₁) as s → 0, consequently

Y₁* = a₁ − b₁

Y₂* = Y₁*a₁ − (a₂ − b₂)

etc. Then

a₁ = Y₁* + b₁

a₂ = Y₁*a₁ − Y₂* + b₂

and so on, each successive coefficient following from the lower moments,   (11.2-8)

so that

Var{a₁} ≤ Var{Y₁*}

Var{a₂} ≤ Var{Y₂*} + Var{Y₁*}

etc.
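The first of the moment relations above can be checked numerically: for an assumed first-order model with b₁ = 0, the area above the step response, Y₁*, equals a₁. A sketch using the trapezoidal rule (Python; the value a₁ = 1.5 is an assumption for the check):

```python
# First moment of the step response: for g(s) = 1/(a1*s + 1) the area
# Y1* = integral of [1 - y(t)] from 0 to infinity equals a1, the first
# denominator coefficient (Equation 11.2-8 with b1 = 0).
import math

a1_true = 1.5
dt = 0.001
t_end = 30.0                       # long enough that 1 - y(t) is ~0
n = int(t_end / dt) + 1
f = [math.exp(-k * dt / a1_true) for k in range(n)]   # 1 - y(t)

# trapezoidal integration of 1 - y(t)
Y1_star = dt * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])
a1_est = Y1_star                   # since b1 = 0
```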
Example 11.2-1 Estimation of Parameters in a Transfer Function

A simulated experiment is described in this example to illustrate the use of Equation 11.2-8 in parameter estimation. A transfer function

g(s) = 1/(1 + a₁s + a₂s²)   (a)
From the simulated responses, the estimated moments and their variances were

Y₁* = 0.207,    s²_{Y₁*} = 7.61 × 10⁻⁴
Y₂* = −0.0111,  s²_{Y₂*} = 9.83 × 10⁻⁶,  6.41 × 10⁻⁶   (b)

so that, from Equation 11.2-8,

[1  0; −0.207  1][a₁; a₂] = [0.207; −0.0111]   (c), (d)

or a₁ = 0.207, ⋯

[Figures: the simulated response y(t), 0–1.6, and the integrals Y₁(t), Y₂(t) vs. t, 0–2.8; Y₂* = −0.0105]

Alternatively, the least squares criterion can be posed in the transform domain:

φ(s) = Σ_s [G(s) − ĝ(s)]²   (11.2-9)

or

φ(s) = Σ_s [G(s) − n̂(s)x̄(s)/d̂(s)]²
then the response function G(t) for the same time interval is represented piecewise; between observations t_i and t_{i+1},

G(t) ≈ G(t_i) + {[G(t_{i+1}) − G(t_i)]/(t_{i+1} − t_i)}(t − t_i)

and transforming term by term,

Ḡ(s) = Σ_i [e^{−st_i}(b_i/s² + G_i(s)/s) − e^{−st_{i+1}}(b_i/s² + G_{i+1}(s)/s)]

If we substitute st = z,

g(s) = s⁻¹ ∫ e^{−z}(⋯) dz ≈ Σ_k A_k ψ(t_k)
11.3

ḡ(s) = c n̄(s)/d̄(s) = c(b_n sⁿ + b_{n−1}s^{n−1} + ⋯ + b₁s + 1)/(a_d s^d + a_{d−1}s^{d−1} + ⋯ + a₁s + 1),   d ≥ n + 1   (11.3-1)

where c = a constant.

† R. …, R. E. Kalaba, and M. C. …, Time-Dependent Transport …, … York, 1964; R. Bellman, …, IEEE Trans. Auto. Control …, 1949.
‡ N. N. Puri and C. N. Weygant, "Transfer Function Tracking …"
z_j = ∫₀^∞ g(t, β)f_j(t) dt = (1/2πi) ∫_{γ−i∞}^{γ+i∞} ḡ(s, β)f̃_j(−s) ds,   j = 0, 1, …, p   (11.3-2)
For j = 0, f₀(t) = U(t) and f̃₀(s) = 1/s, so that

z₀ = (1/2πi) ∮ ḡ(s, β)/s ds = n̄(0)/d̄(0) = ∫₀^∞ g(t) dt   (11.3-7)

Z₀ₖ = ∫₀^∞ f^k[g(t, β)]f₀(t) dt = 0 for k < m,  and n̄(0)/d̄(0) = c for k = m

where f₀ is still U(t). The procedure is to evaluate Z₀ₖ for k = 1, 2, 3, … until one finds the first nonzero Z₀ₖ, which becomes the z₀ used. Thus
fro
Jo
fk[g(t, (3)]/o(t) dt = c
zOaf
n(af)
-v-
Zy
d(af)
and the equivalent of Equation 11.3-6 is
Zf
(11.3-8)
hfd(aJ)
[- g(S, (3)]
ds
-h1af
s - y
= 1, 2, ... , p
-h2a~
(11.3-3)
h
h2
(11.3-5)
-h: + da~+d
(11.3-4)
= c -v- =c
Z =_IJ.-g(s,(3)ds==g()
y 2m 'j s - y
y
Zo
= 10 fk[g(t, (3)]/o(t) dt = 0
a'tii(af)
1
s
ZOk
ZOk =
+ y)(s + af)
'. =8
lo(t)
00
y - af
)'
Ji-$) = (s
af < y
Zo
s + af
};(s)
"'::;"e- yt) ,
+ Zy
_ zf
-
in which the a_j's are arbitrary and U(t) is the unit step function. Deex† proposed using the following functions:
jj(t)
hf
f [g(t )] ==
1
h (s )
(11.3-6)
n(af) = hfd(af)
where
Iris) = -s
= U(t)
= (z, + Zy)d(af)
/(s)
I(t)
zo1l(af)
or
(11.3-2)
we find
aT
a~
[
=
hn+: - a:;'+d
J
(11.3-9)
363
CS
ass s
-t
IS
.24
-35
...!
-45
16
hIS + 1
a~s2 + al s
( )
a
15
-'30
hI
-t
(b)
which is equivalent to
des, ~) =
S2
+ ;s + 2
Zoo =
!o'"
JO[g(t)] dt =
k = 1:
ZOl
fa'"
Jl[g(t)] dt
J J.,.t
00
(2e - 21
I''
30
-t
al
-t
-35
-~!
a2
-70
-llo
-320
as
-lS30
(d)
SI
hI
al
"2
a2
as
0 and as
= 0 as in the assumed
(e)
k = 0:
__1
27
-15
(2e- 2 t
e- I) dt = 0
+ E(t)
u:;
.:
e -1) dr dt
z_j = ∫₀^∞ [g(t, β) + ε(t)]f_j(t) dt

ℰ{z_j} = ℰ{∫₀^∞ [g(t, β)e^{−a_j t} + ε(t)e^{−a_j t}] dt} = g(a_j)   (11.3-10)

σ²_{z_j} = ℰ{z_j²} − [ℰ{z_j}]² = ℰ{[∫₀^∞ g(t, β)e^{−a_j t} dt + ∫₀^∞ ε(t)e^{−a_j t} dt]²} − g²(a_j)
OO
reo (2e- 2t
~O
e-t)(e- %t - e- 2t) dt =
.
9~O
+ 6"{Io fo
OO
OO
364
(11.3-11)

so that

ℰ{Z_j} = g(a_j)

Var{Z_j} = σ_ε²(γ² − 2γa_j + 2a_j²)/[2a_jγ(γ + a_j)]
Estimates (b₁, a₁, a₂, ⋯) for three integration intervals:

Δt = 0.01 sec:  8.327, 0.504, 4.163, 4.191
Δt = 0.10 sec:  8.326, 0.509, 4.166, 4.217
Δt = 0.50 sec:  8.306, 0.617, 4.259, 4.770

with t_f = 20 sec, a₁ = 0.3, a₂ = 0.6, a₃ = 0.9, and

ℰ{ε(t)ε(τ)} = 0   (11.3-12)
The choice of f_j(t) = e^{−a_j t} − e^{−γt} is an improvement over the choice f_j(t) = e^{−a_j t}.
Example 11.3-2

The examples given here have been taken from Deex and illustrate the influence of normally distributed noise with zero mean as well as the effect of the selection of the values of Δt, a_j, etc. The necessary integrations were carried out on a digital computer by the trapezoidal rule; matrix inversion of Equation 11.3-9 for the parameter vector was by Gaussian elimination.
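The trapezoidal evaluation of one z_j can be sketched as follows (Python stand-in for the original computations; the impulse response 2e^(−2t) − e^(−t) from the worked example is used, for which z_j should equal ḡ(a_j) = a_j/[(a_j + 1)(a_j + 2)]):

```python
# z_j = integral of g(t)*exp(-a_j*t) over [0, inf) evaluates the transfer
# function at s = a_j.  Trapezoidal rule for the example impulse response
# g(t) = 2e^{-2t} - e^{-t}, whose transform is g(s) = s / ((s + 1)(s + 2)).
import math

def g(t):
    return 2.0 * math.exp(-2.0 * t) - math.exp(-t)

def z(a, dt=1e-3, t_end=40.0):
    n = int(t_end / dt) + 1
    vals = [g(k * dt) * math.exp(-a * k * dt) for k in range(n)]
    return dt * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

a_j = 1.0
z_num = z(a_j)
z_exact = a_j / ((a_j + 1.0) * (a_j + 2.0))   # g(s) evaluated at s = a_j
```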
6
10
20
/),./ =
0.665
' 0.666
0.667
0.01, (t1
Q1
Q2
0.096
0.471
0.500
0.918
1.304
1.333
0.021
0.315
0.333
365
a2
al
Exact solution
2
0.6
0.4
0.9
0.3
i 2.7
0.3
2.7
1.5
6
9
3
5.196
9
3
0.5196 0.90
0.3
0.6
0.9
0.3
0.001
5
10
10
0.001
0.1
0.566
0.8
0.4
0.6
0.8
0.4
0.489
0.6
0.4
0.5
0.6
0.4
0.09
0.03
0.06
0.02
0.03
om
.0.002
0.003
0.001
Absolute Value of
the Determinant
hl
al
a2
- 8.333
-8.333
-8.333
-8.333
-8.333
- 8.333
-8.333
- 8.333
- 8.333
-8.333
-8.333
-8.333
-8.333
-8.333
-8.333
- 8.333
- 8.333
-0.500
-0.500
-0.500
-0.500
-0.501
-0.500
-0.500
-0.500
-0.499
-0.505
-0.500
-0.500
-0.499
-0.499
-0.464
-0.448
- 0.388.
4.166
4.157
4.156
4.157
4.226
4.215
4.155
4.155
4.146
4.141
4.156
4.156
4.156
4.156
4.182
4.198
4.258
4.166
4.166
4.164
4.163
4.155
4.154
4.174
4.173
4.146
4.210
4.172
4.171
4.176
4.176
4.425
4.508
4.791
a3
t, = 20 sec
3.23 x
1.35 x
1.47 x
1.69 x
1.77 x
5.01 x
5.35 x
1.40 x
9.48 x
1.67 x
1.72 x
2.25 x
2.27 x
2.85 x
6.18 x
7.71 x
10- 3
10- 2
10- 2
10- 2
10- 2
10- 4
10- 4
10- 3
10- 5
10- 4
10- 4
10- 5
10- 5
10- 8
10- 11
10- 1 7
11.4 The z-Transform

g(z) = 𝒵[g(nT)] = Σ_{n=0}^∞ g(nT)z⁻ⁿ   (11.4-1)

g(nT) = 𝒵⁻¹[g(z)]   (11.4-2)

For the sampled function

g*(t) = Σ_{n=0}^∞ g(nT)δ(t − nT)

the Laplace transform is

ḡ(s) = Σ_{n=0}^∞ g(nT)e^{−nTs}

FIGURE 11.4-1 A continuous function g(t) and the corresponding sampled function g*(t).
TABLE 11.4-1

Time Domain Function | Laplace Transform | z-Transform
δ(t)    | 1         | 1
U(t)    | 1/s       | z/(z − 1)
t       | 1/s²      | Tz/(z − 1)²
t²/2    | 1/s³      | (T²/2)·z(z + 1)/(z − 1)³
e^{−t}  | 1/(s + 1) | z/(z − e^{−T})
f(t − nT) |         | z⁻ⁿf̄(z)

y(t_k) + a₁y(t_k − T) + ⋯ + a_n y(t_k − nT) = b₀x(t_k) + b₁x(t_k − T) + ⋯ + b_m x(t_k − mT)   (11.4-3)

ȳ(z) + a₁z⁻¹ȳ(z) + ⋯ + a_n z⁻ⁿȳ(z) = b₀x̄(z) + b₁z⁻¹x̄(z) + ⋯ + b_m z⁻ᵐx̄(z)   (11.4-4)

g(z) = ȳ(z)/x̄(z) = (b₀ + b₁z⁻¹ + ⋯ + b_m z⁻ᵐ)/(1 + a₁z⁻¹ + ⋯ + a_n z⁻ⁿ)   (11.4-5)
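The defining series 11.4-1 can be checked against the closed forms of Table 11.4-1. A sketch for the e^(−t) entry (Python; the sample period T = 0.5 and evaluation point z = 2 are arbitrary choices):

```python
# The z-transform of f(t) = e^{-t} sampled at period T is z/(z - e^{-T}).
# Check by summing the defining series (11.4-1) at a point z with
# |z| > e^{-T} so the geometric series converges.
import math

T = 0.5
zval = 2.0
series = sum(math.exp(-n * T) * zval ** (-n) for n in range(200))
closed_form = zval / (zval - math.exp(-T))
```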
And, also as usual, we shall add the error, which incorporates all effects in the process response not attributable to the process input, to the deterministic response
Y(t_k) = y(t_k) + ε(t_k)   (11.4-6)

so that each observation furnishes one equation in the unknown coefficients,

Y(t_k) = −a₁y(t_k − T) − ⋯ − a_n y(t_k − nT) + b₀x(t_k) + ⋯ + b_m x(t_k − mT) + ε_k   (11.4-7)

where

T = sampling period = 2π/[(samples/cycle)(frequency)]

Minimize

φ = Σ_{k=1}^K ε_k²,   k = 1, 2, …, K,   K > n + m + 1

to yield the estimates.   (11.4-8, 11.4-9)
φ = Σ_k (Y_k − Ŷ_k)²   (11.4-10)

and β̂, the estimated coefficient vector, is the eigenvector corresponding to λ = φ_min.

Although the estimated parameters are usually biased, Levin stated that if the errors are small in comparison to the observations, the bias is small compared to the standard deviations of the parameters. Levin also stated that, when overlapping sets of observations are used, the bias is no more than when nonoverlapping vectors are used. An approximation for the variances of the estimated coefficients was given as
(11.4-11)

The likelihood function is proportional to

(2π)^{−K(n+m+2)/2} exp[−Σ_k ⋯]   (a)

ĝ(z) = (b₀ + b₁z⁻¹)/(1 + a₁z⁻¹ + a₂z⁻²)   (b)

ĝ(s) = 40/(s² + 4s + 40)   (c)

† M. J. Levin, IEEE …
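The equation-error least-squares idea of Equation 11.4-7 can be sketched by simulating a noise-free second-order difference equation and recovering its coefficients from the normal equations (Python; the coefficient values below are assumptions for the illustration, not the example's values):

```python
# Least-squares estimation of the coefficients of
#   y_k + a1*y_{k-1} + a2*y_{k-2} = b1*x_{k-1} + b2*x_{k-2}
# from simulated noise-free input/output records.
import random

random.seed(1)
a1, a2, b1, b2 = -1.0, 0.24, 0.5, 0.3          # true (assumed) parameters
x = [random.uniform(-1, 1) for _ in range(300)]
y = [0.0, 0.0]
for k in range(2, 300):
    y.append(-a1 * y[k-1] - a2 * y[k-2] + b1 * x[k-1] + b2 * x[k-2])

# Normal equations (A^T A) p = A^T y for p = (a1, a2, b1, b2)
rows = [[-y[k-1], -y[k-2], x[k-1], x[k-2]] for k in range(2, 300)]
rhs = [y[k] for k in range(2, 300)]
m = 4
ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(m)]

# Gaussian elimination with partial pivoting
for c in range(m):
    p = max(range(c, m), key=lambda r: abs(ata[r][c]))
    ata[c], ata[p] = ata[p], ata[c]
    atb[c], atb[p] = atb[p], atb[c]
    for r in range(c + 1, m):
        f = ata[r][c] / ata[c][c]
        for j in range(c, m):
            ata[r][j] -= f * ata[c][j]
        atb[r] -= f * atb[c]
p_hat = [0.0] * m
for r in range(m - 1, -1, -1):
    p_hat[r] = (atb[r] - sum(ata[r][j] * p_hat[j]
                             for j in range(r + 1, m))) / ata[r][r]
```

With noise-free data the estimates reproduce the true coefficients essentially exactly; added noise would introduce the bias Levin discusses.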
… = 800c₁/(c₁ + c₂)   (e)

where c₁ = −432 and c₂ = 624.

The observation sets can be arranged either as consecutive equations,

[x_{n+2} x_{n+1} y_{n+2} y_{n+1}; x_{n+3} x_{n+2} y_{n+3} y_{n+2}; ⋯][b₁; b₂; −a₁; −a₂] = [y_{n+3}; y_{n+4}; ⋯]   (f)

or by intermittent collocation,

[x_{p+2} x_{p+1} y_{p+2} y_{p+1}; ⋯][b₁; b₂; −a₁; −a₂] = [y_{p+2}; ⋯]   (g)

FIGURE E11.4-1b [percent error in the estimates vs. noise level, logarithmic scales from 10⁻³ to 10²]
A — Consecutive equations, Equation (f)
B — Intermittent collocation, Equation (g)
C — Least squares of equation error, Equation 11.4-7
D — Maximum likelihood, Equation 11.4-10
E — Maximum likelihood

â₁ = −1.824   b̂₁ = 0.030
â₂ = 0.882    b̂₂ = 0.030
[Figures: estimated responses and bias curves plotted over 0 ≤ t ≤ 2.0]
370
IN PARENTHESES)*
Sampling
Interval
0.25
0.50
1.00
2.00
b1
02
01
b 2
II
II
II
II
0.109
(0.22)
0.108
(0.20)
0.116
(0.15)
0.173
(0.17)
1.825
0.125
(0.25)
0.151
(0.25)
0.169
(0.24)
0.578
(0.61)
2.133
0.172
(0.55)
0.187
(0.37)
0.205
(0.25)
0.485
(0.47)
1.457
0.152
(0.50)
0.154
(0.31)
0.138
(0.17)
0.159
(0.17)
1.191
' 0.221
0.137
0.312
0.223
0.175
0.252
0.103
0.148
b1
02
II
II
b2
II
II
Normalized bias
Sample Standard Deviation
32.92
0.040
Overlapping Sequences
-0.184 -28.56
0.186
0.348
0.042
0.333
-0.311
0.030
0.123
0.123
3.801
0.032
−0.166
0.148
Normalized Bias
Sample Standard Deviation
16.76
0.078
Nonoverlapping Sequences
0.205 - 14.49
- 0.203
7.49
0.081
7.438
0.711
0.043
-0.091
0.705
2.37
0.056
0.107
2.25
01
02
b1
b2
0.0648
(0.07 11)
0
(- 0.48 1)
0
( -0.346)
0
(0.276)
0.0902
(0.117)
0.696
(0.933)
- 0.591
( - 0.906)
0.160
0.199
- 0.849
-0.971
0.156
0.191
Supplementary References

Bigelow, S. C. and Ruge, R., "An Adaptive System Using Periodic Estimation of the Pulse Transfer Function," IRE Nat. Convention Rec., Part 4, 24, 1961.
…, 10, 411, 1961.
Shinbrot, M., "A Description and Comparison of Certain Non…
PROBLEMS

11.1

Time   Data   Data
 10    30.5   31.0
 25    27.4   29.2
 50    23.5   27.0
 75    19.9   24.3
100    15.9   22.1
150    10.6   17.3
200     9.3   13.7
250     8.4   12.4
300     8.0   11.5
350     7.5   11.1
400     7.3   10.8
450     7.1   10.4
500     6.6   10.0
550     6.4    9.7

FIGURE P11.2 [vessel with coolant in and coolant out at T]
PI 1.2
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
Coolant in
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
mm
48.5
46.7
44.9
43.3
41.8
40.3
39.0
37.7
36.4
35.4
34.2
33.1
32.1
31.2
30.3
29.4
28.6
28.0
27.1
26.5
25.7
25.1
24.5
23.9
23.2
22.8
22.3
21.8
21.3
20.9
20.4
20.0
19.6
19.3
18.5
18.0
18.0
17.8
17.6
17.3
17.0
16.7
16.5
16.2
16.0
15.7
15.5
15.3
15.1
14.9
No.
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
371
mm
14.7
14.5
14.4
14.2
14.0
13.9
13.7
13.6
13.4
13.3
13.2
13.1
12.9
12.8
12.7
12.6
12.5
12.4
12.5
12.2
12.5
12.3
12.0
11.9
11.8
11.8
11.7
11.6
11.6
11.5
11.5
11.4
11.4
11.3
11.1
11.1
11.0
11.0
11.0
11.0
11.0
11.0
11.0
10.9
10.8
10.8
10.7
10.5
10.5
10.6
372
TLTa(D~-Df-Dr;.-I+IYt-I)-fg(M.-Df)
Steam t.
where
hLs
kLs
L=Z:+Z:+
Ta =
f=
has
G +
kL
mG
Entering air
temperature,~
to
!:.!:..
Lm
~l~
/
k r
g=G
Exit air
temperature,
~
Condensate
FIGURE
PI 1.4
kc
0.766
interphase mass transfer coefficient, grams/
(minjtcrn")
ho , he = total organic, water phase holdup, grams/em",
reported to be 0.075 and 0.5080, respectively
D 1 = T LA1
6.t
..,.-(s) =
f,.l.t,
{tCmCg h h A1 A
1 2
D 2 = T LA 2
of
A
_ (TLTa - fg + 1) v (TLT a - fg + 1)2 - 4TLTa
.
1 ,2 -
2TL
where
6. =
Cm =
Cg =
A1 =
A2 =
K; =
Q =
hI, ha =
t (sec)   C × 10³      t (sec)   C × 10³
   0      0              400      2.250
  20      0.250          420      1.870
  40      0.451          440      1.575
  60      0.750          460      1.298
  80      1.250          480      1.068
 100      1.750          500      0.875
 120      2.500          520      0.726
 140      3.054          540      0.599
 160      3.750          560      0.474
 180      4.580          580      0.376
 200      4.967          600      0.291
 220      5.050          620      0.223
 240      5.063          640      0.169
 260      5.001          660      0.124
 280      4.850          680      0.076
 300      4.586          700      0.045
 320      4.349          720      0.020
 340      3.851          740      0.008
 360      3.510          760      0
 380      2.726
Δ = change in temperature
C_m = 0.61 Btu/°F (metal total heat capacity)
C_g = 0.0021 Btu/°F (air total heat capacity)
A₁ = 0.905 ft² (inside heat transfer surface)
A₂ = 19.7 ft² (outside heat transfer surface)
K_a = 0.0194 × 60 × Q Btu/(°F)(hr)
Q = flow rate, ft³/min
h₁, h₂ = heat transfer coefficients
Δt̄/Δt̄_s(s) = {tC_g[1/(h₁A₁) + 1/(h₂A₂)]s + K_a[1/(h₁A₁) + 1/(h₂A₂)] + 1}⁻¹   (b)

Δt̄/Δt̄_s(s) = …/(h₂A₂ …)   (c)
PROBLEMS
Frequency
(cycles/min),
w
Normalized
Amplitude
Ratio
Phase
Angle
(degrees)
1.00
0.99
0.93
0.87
0.0
5.5
15.3
26.2
41.6
41.1
46.3
50.3
58.7
58.9
73.9
1.17
2.26
3.24
4.62
5.72
6.80
7.90
8.10
10.66
12.50
0.82
0.74
0.68
0.59
0.67
0.56
0.48
ΔP = P_d − P_w

ΔP/P_w = Q(1 − … √(2Q + Q²)) …

where

Q = HγS …/(V + …)
H = Ostwald absorption coefficient = 42.25 (g-mole/cc liquid)/(g-mole/cc in gas) at 25°C, and (42.25)(1.1815) at 20°C
S = 668 sq cm
t = 19.78°C
γ = 1.292
TABLE PI1.S
Frequency
(cps)
0.159
0.250
0.388
0.612
0.636
0.999
-r .
373
1.557
2.445
Pd
(in H 2O)
Pw
(in H 2O)
!1P/P w
10.11
9.83
9.86
9.98
10.01
10.06
10.09
10.14
10.17
10.18
10.16
10.21
10.21
10.23
10.25
10.28
10.26
0.03398
0.03173
0.02695
0.02517
0.02404
0.02271
0.01945
0.01730
0.01539
0.01809
0.01360
0.01569
0.01233
0.01265
0.00949
0.01260
10.11
10.21
10.21
10.27
10.27
10.31
10.31
10.31
10.31
10.33
10.33
10.35
10.35
10.36
10.36
CHAPTER 12

y(t) = ∫₀^∞ g(t − τ)x(τ) dτ   (12.1-1a)
where ω = 2πf has the units of radians per unit time and f has the units of cycles per unit time. The lower limit on the integral can be zero instead of −∞ because g(t) = 0 for t < 0. Only modest restrictions are placed on the nature of g(t), restrictions that can be found in any advanced calculus text. In effect, we have replaced s in the transfer function by iω, as mentioned in Chapter 11 in describing the relation between the frequency response and the transfer function.

The transfer function (frequency response) can be characterized in two ways in the frequency domain. Because g(ω) is complex, one way of representing the transfer function is

g(ω) = ℛ[g(ω)] + i𝒥[g(ω)]   (12.1-2)
[Figure 12.1-1: g(ω) in the complex plane, with magnitude ‖g(ω)‖ and phase angle ψ[g(ω)] measured from the real axis]

ψ = tan⁻¹{𝒥[g(ω)]/ℛ[g(ω)]}   (12.1-3)

g(ω) = r(cos ψ + i sin ψ) = ‖g(ω)‖e^{iψ(ω)}   (12.1-4, 12.1-7)

A least squares criterion in the frequency domain is

φ = (1/π) ∫₀^∞ ‖Ỹ(ω) − ỹ(ω)‖² dω   (12.1-8a)

φ = (1/π) ∫₀^∞ [{ℛ[Ỹ(ω)] − ℛ[ỹ(ω)]}² + {𝒥[Ỹ(ω)] − 𝒥[ỹ(ω)]}²] dω   (12.1-8b)
ỹ(ω) = ‖g(ω)‖‖x̃(ω)‖e^{i(ψ+θ)}

[Figure: (a) the input and output sinusoids in the time domain; (b) one possible locus of g(ω) in the complex plane]

Y(t) = y(t) + ε(t)   (12.1-5)
Ỹ(ω) = ∫₀^∞ Y(t)e^{−iωt} dt

Ỹ(ω) − ỹ(ω) = x̃(ω)[G̃(ω) − g(ω)]

so that Equation 12.1-8a becomes

φ = (1/π) ∫₀^∞ ‖x̃(ω)‖²‖G̃(ω) − g(ω)‖² dω   (12.1-9)

12.2

Y(t) ≅ Σ_{k=1}^K [a_k + b_k(t − t_k) + c_k(t − t_k)² + ⋯]U(t − t_k)   (12.2-2)
(12.2-2)
where U(t - t k ) is the unit step function , t k is the time
location of the observation, and ak, bk, etc., are coeffi
cients selected to best represent the response between
two observations. If only ak and bk are used, the data are
represented by a piecewise series of straight line seg
ments: if Ck is added , the curves are segments of para
bolas, and so forth.
The Fourier transform of Equation 12.2-2 is
(12.2-3)
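The transform of a piecewise-linear input can be checked numerically. A sketch for a unit triangle on [0, 2] — an assumed example in the spirit of Table 12.2-1 — whose Fourier transform is −(1 − e^(−iω))²/ω²:

```python
# Fourier transform of a triangular input: x(t) = t on [0,1], 2 - t on
# [1,2], 0 elsewhere.  The closed form follows because the triangle is the
# convolution of two unit pulses on [0,1].
import cmath

def x(t):
    if 0.0 <= t <= 1.0:
        return t
    if 1.0 < t <= 2.0:
        return 2.0 - t
    return 0.0

def xft(w, dt=1e-4):
    """Midpoint-rule evaluation of the Fourier integral over [0, 2]."""
    n = int(2.0 / dt)
    total = 0.0 + 0.0j
    for k in range(n):
        t = (k + 0.5) * dt
        total += x(t) * cmath.exp(-1j * w * t) * dt
    return total

w = 1.7
numeric = xft(w)
closed = -(1.0 - cmath.exp(-1j * w)) ** 2 / w ** 2
```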
374, 1967.
TABLE 12.2-1

Type of Input | x(t)
Impulse at t = 0 | δ(t)
Impulse at t = t₀ | δ(t − t₀)
Triangular input initiated at t = 0 | c₀tU(t) + c₁(t − t₁)U(t − t₁) + c₂(t − t₂)U(t − t₂)

with the corresponding transforms x̃(ω)
and Ỹ(ω) can be split into its real and imaginary parts for introduction into Equation 12.1-8b:

ℛ[Ỹ(ω)] = Σ_{k=1}^K [a_k/ω + 2c_k/ω³ + ⋯] sin ωt_k + ⋯

𝒥[Ỹ(ω)] = Σ_{k=1}^K ⋯

In Example E12.2-1, the model is subject to

y(z, 0) = 0,   y(0, t) = x(t),   lim_{z→∞} y(z, t) = 0   (a)
Transforming gives

D d²ȳ(z, s)/dz² − v dȳ(z, s)/dz − sȳ(z, s) = 0   (c)

ȳ(0, s) = x̄(s),   lim_{z→∞} ȳ(z, s) = 0   (d)

we can solve for ȳ(z, s) and evaluate the two arbitrary coefficients in the solution by using Equations (d) to obtain

ȳ(z, s) = x̄(s) exp{[v − √(v² + 4Ds)]z/(2D)}   (e)

ȳ(z, s) = x̄(s) exp{(vz/2D)[1 − √(1 + 4Ds/v²)]}   (f)

and the frequency response follows on setting s = iω   (g)
FIGURE E12.2-1 [the input x(t) and the response Y(t)]

x(t) = ⋯ + 2(t − 1)U(t − 1) + ⋯

x̃(ω) = −2/ω² ⋯   (h)
Y(t) = ∫₀^∞ g(λ)X(t − λ) dλ + ε(t)   (12.3-1)

we can minimize

φ = (1/t_f) ∫₀^{t_f} ε²(t) dt   (12.3-2)

(12.3-3)

where

r_xy(τ) = ℰ{X(t)Y(t + τ)}

ℰ{X(t)Y(t + τ)} = ℰ{X(t) ∫₀^∞ g(λ)X(t + τ − λ) dλ} + ℰ{X(t)ε(t + τ)}   (12.3-4)

r_xy(τ) = ∫₀^∞ g(λ)r_xx(τ − λ) dλ   (12.3-5)
r_xy(τ) = (1/2π) ∫_{−∞}^∞ e^{iωτ}s_xy(ω) dω   (12.3-7a, 12.3-7b)

[Figure 12.3-1: typical spectral densities (a)–(d); for a sinusoidal input the spectral density is a Dirac delta function at ω = ω₀]

For band-limited white noise,

s_xx(ω) = k for −ω_c ≤ ω ≤ ω_c,  = 0 for |ω| > ω_c,  with ∫ s_xx(ω) dω → ∞ as ω_c → ∞
FIGURE 12.3-3 Autocorrelation functions and power spectral densities of typical inputs: a random telegraph signal (exponential autocorrelation e^{−c|τ|}), a random binary sequence (triangular autocorrelation; spectral density proportional to sin²(ωt_m/2)/ω²), a sinusoidal input (delta functions at ω = ±ω₀), white noise (autocorrelation kδ(τ); constant spectral density), and a constant (spectral density c²δ(ω)).

Transforming Equation 12.3-5 leads to

s_XY(ω) = s_X(ω)g^T(ω)   (12.3-11)

The transform of a convolution ∫_{−∞}^∞ f₁(λ)f₂(t − λ) dλ is

f̃(ω) = ∫_{−∞}^∞ ∫_{−∞}^∞ f₁(λ)f₂(t − λ)e^{−iωt} dλ dt

With t − λ = τ, the last integral on the right-hand side becomes f̃₁(ω)f̃₂(ω), where f̃₁(ω) and f̃₂(ω) are the Fourier transforms of f₁(t) and f₂(t), respectively.   (12.3-8, 12.3-9)

For white noise of level k,

g(ω) = s_xy(ω)/k   (12.3-13)
[Figure: (a) a random binary input x(t) with switching interval Δt; (b) the resulting triangular autocorrelation]

The coefficients cᵢ = ±a of the periodic pseudo-random sequence satisfy

Σ_{i=0}^{M−1} cᵢ = −a

Σ_{i=0}^{M−1} cᵢc_{(i+j) modulo M} = M for j = 0,  = −a for j ≠ 0

† … communication, 1968.
TABLE 12.3-1 RANDOM INPUTS X(t)
      PROGRAM RANDOM(INPUT,OUTPUT)
C     NX IS THE NUMBER OF DIGITS IN THE SERIES
      DIMENSION IA(3000)
      READ 100,NX
  100 FORMAT(I5)
      IF(NX)2,2,3
    3 PRINT 101,NX
  101 FORMAT(23H TOTAL NO. OF ELEMENTS =,I5//)
      IZ = -1
      DO 5 M = 1,NX
      IA(M) = IZ
    5 CONTINUE
      Y = 0.5*FLOAT(NX)
      LA = 1
      IZ = 1
      A = 1.0
    4 B = A*A
      X = FLOAT(NX)
      D = AMOD(B,X)
      ID = IFIX(D + 0.5)
      IA(ID) = IZ
      LA = LA + 1
      A = FLOAT(LA)
  102 FORMAT(10(10X,15I5/)/)
  103 FORMAT(15I5)
      GO TO 4
    2 CALL EXIT
      END
FIGURE 12.3-5 [block diagram: input X₁(t), output Y(t)]
ance free. t But how to treat the process data for more
R_xy(τ) = [1/(t_f − τ)] ∫₀^{t_f−τ} X(t)Y(t + τ) dt,   R_xx(τ) = [1/(t_f − τ)] ∫₀^{t_f−τ} X(t)X(t + τ) dt

or, from sampled records,

R_xy(τ) = (1/N) Σ_{k=1}^N X(t_k)Y(t_k + τ)

s_xx(ω) = ∫_{−∞}^∞ e^{−iωτ}R_xx(τ) dτ,   s_xy(ω) = ∫_{−∞}^∞ e^{−iωτ}R_xy(τ) dτ

and the smoothed estimates replace R_xx(τ) and R_xy(τ) by w(τ)R_xx(τ) and w(τ)R_xy(τ), equivalent to

ŝ_xx(ω) = W(ω)*s_xx(ω),   ŝ_xy(ω) = W(ω)*s_xy(ω)

FIGURE 12.3-6 Information flow to calculate the transfer function.
[Figure: lag windows w(τ) and their transforms W(ω)]

Constant:  w(τ) = 1 for |τ| ≤ t_m, = 0 for |τ| > t_m;  W(ω) = 2t_m sin(ωt_m)/(ωt_m) ≡ Q₀(ω)

Bartlett:  w₁(τ) = 1 − |τ|/t_m for |τ| ≤ t_m, = 0 for |τ| > t_m;  W₁(ω) = t_m[sin(ωt_m/2)/(ωt_m/2)]²

Hanning 2: w₂(τ) = ½[1 + cos(πτ/t_m)] for |τ| ≤ t_m, = 0 for |τ| > t_m;
           W₂(ω) = ½Q₀(ω) + ¼Q₀(ω + π/t_m) + ¼Q₀(ω − π/t_m)

Hanning 3: w₃(τ) = 0.54 + 0.46 cos(πτ/t_m) for |τ| ≤ t_m, = 0 for |τ| > t_m;
           W₃(ω) = 0.54Q₀(ω) + 0.23[Q₀(ω + π/t_m) + Q₀(ω − π/t_m)]

where Q₀(ω) = 2t_m sin(ωt_m)/(ωt_m)
R_xy(j Δt) ≅ [1/(N − j)] Σ_{k=0}^{N−j} X(k Δt)Y(k Δt + j Δt)   (12.3-15)

ŝ_xy(ω) ≅ Δt Σ_{j=−N}^{N} w(j Δt)R_xy(j Δt)e^{−iωjΔt},   n = 0, 1, 2, …   (12.3-16)

where

i = √−1
j = integer multiplier of Δt used to designate a lag, τ
N = number of time intervals between samples; N + 1 = n = number of data samples
Δt = basic binary interval for switching = sampling interval if one sample is taken in each interval; N Δt = t_f where t_f is the end of the time record
w(j Δt) = lag window
n = integer used with the frequency interval
[Figure: the original function and the aliased version reconstructed from samples]

Y(t) = y(t) + ε(t)

s_XX(ω) = s_xx(ω) + s_ε₀ε₀(ω),   s_YY(ω) = s_yy(ω) + s_εε(ω)   (12.3-18)

s_XY(ω) = s_xy(ω)

γ²_xy(ω) = ‖s_xy(ω)‖²/{[s_xx(ω) + s_ε₀ε₀(ω)][s_yy(ω) + s_εε(ω)]}   (12.3-21)

0 ≤ γ²_xy(ω) ≤ 1   (12.3-19, 12.3-20)

If s_εε(ω) ≪ s_yy(ω), then

γ²_xy(ω) ≈ 1 − s_εε(ω)/s_yy(ω)

[Figure: noise ε₀(t) corrupting the measured input X(t) and ε(t) the measured output Y(t)]
Var{γ̂_xy(ω)} ≈ (1/a_k)[1 + γ̂²_xy(ω)]   (12.3-23)

where

a_k = Σ_{k=−N}^{+N} w_k²   (12.3-24, 12.3-25)

For the example record:

ℛ[ŝ_xy(0.100)] = 0.04822
ŝ_xx(0.100) = 0.09875
ŝ_yy(0.100) = 0.003153
n = 240;  ω = 0.100 radian
Time of record = 1 hr; … = 10 min; maximum time lag for correlation = 20 min

Hence

γ̂²_xy(0.100) = 0.800,   γ̂_xy(0.100) = 0.89

and upper and lower confidence limits for log‖Ĝ(ω)‖ follow from

0.90 = 1 − [(1 − 0.80)/(1 − 0.80 cos²θ)]⁶

from which θ …

γ² = {[ŝ_X(ω)]⁻¹ŝ_XY(ω)}*ŝ_XY(ω)[ŝ_Y(ω)]⁻¹   (12.3-26)
where

t = time, min
U = interphase heat transfer coefficient, Btu/(min)(ft²)(°F)
V = volume of fluid in the tank, ft³
W = mass flow rate of fluid through coils, lb/min
ρ = density of fluid, lb/ft³
A = outside area of coolant coils, ft²
C_p = heat capacity, Btu/(lb)(°F)
F = volumetric flow rate, ft³/min

FIGURE E12.3-1a [stirred tank with cooling coil; each energy balance reads: accumulation = input − output + interphase transfer, the coil balance containing the term + UA(T₂ − T₁)   (a2)]

Equations (a) can be rearranged as follows:

dT₂/dt + [F/V + UA/(ρ₂C_p₂V)]T₂ − [UA/(ρ₂C_p₂V)]T₁ = ρ₁C_p₁FT/(ρ₂C_p₂V)   (b1)

dT₁/dt + [W/M + UA/(MC_p₁)]T₁ − [UA/(MC_p₁)]T₂ = WC_p₃T_a/(MC_p₄)   (b2)
W₁(t) = +a (odd digit), −a (even digit), with (n − 1)t_m < t < nt_m

† D. M. Himmelblau and K. …

[Figure: the input W₁(t) switching between 0.0 and 2.0 gal/min, and the output temperature T₂]
r_ww(τ) = a²(1 − |τ|/t_m)  for |τ| ≤ t_m
        = 0                for |τ| > t_m
The autocorrelation function has a triangular shape similar to the diagram in the second row of Figure 12.3-2, except that the peak is at a². As t_m approaches zero, r_ww(τ) approaches an impulse function, and the crosscorrelation function r_WT between the input and output approaches that of an impulse response function. The input power spectral density is determined by taking the Fourier transform of r_ww(τ):
s_ww(ω) = a²t_m [sin(ωt_m/2)/(ωt_m/2)]²   (c)
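The limiting behavior described above — crosscorrelation against a nearly white binary input reproducing the impulse response — can be sketched as follows (Python; the five-term impulse response is an assumed illustration, not the tank example's):

```python
# With a white random binary input of unit variance, the input-output
# crosscorrelation r_xy(tau) approaches the impulse response h(tau).
import random

random.seed(7)
h = [0.0, 1.0, 0.5, 0.25, 0.125]            # assumed impulse response
N = 200000
x = [random.choice((-1.0, 1.0)) for _ in range(N)]   # binary input, variance 1
y = [sum(h[j] * x[k - j] for j in range(len(h))) for k in range(len(h), N)]

def r_xy(tau):
    """Sample crosscorrelation of x with y at lag tau."""
    return sum(x[k - tau] * y[k - len(h)] for k in range(len(h), N)) / (N - len(h))

h_est = [r_xy(tau) for tau in range(len(h))]
```

The estimates match the assumed impulse response to within the sampling error of the finite record, which is the practical point of the correlation method.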
V = 0.7671 ft³
F = 32.0 lb/min
ρ₁ = ρ₂ = ρ = 62.4 lb/ft³

R_WT(j) ≅ [1/(n − m)] Σ_{i=1}^{n−m} W₁(t_i)T_{i+j}   (e)

ŝ_WT(ω) = Δt Σ_{j=−N}^{N} w₁(τ)[α_j + β_j(τ − jΔt)]e^{−iωτ}   (f)

where

w₁(τ) = the lag window = [1 − (τ/t_m)²]
α_j = the value of a correlation function at time j Δt
β_j = the average slope of the correlation function between time j Δt and (j + 1) Δt
[Figure: the normalized crosscorrelation function and the normalized autocorrelation R_ww(τ)/R_ww(0) plotted against the lag τ, 0 to 180; and the spectral density estimate s_ww(ω) on logarithmic coordinates, 0.01 to 100, for records of 930 and 5180 data pairs]
[Equation (g): the finite-sum expression for the smoothed spectral density in terms of the sampled correlation values]

[FIGURE E12.3-1e: amplitude ratio and phase angle (degrees) of the estimated transfer function vs. frequency, radians per minute, 0.01 to 100, for records of 930, 3180, and 5770 data pairs]
or

g(ω) = [s_xy(ω) − s_xε(ω)]/s_xx(ω)   (h)

[Figure: estimated amplitude ratio vs. frequency compared with the theoretical curve, for 930, 3180, and 5770 data pairs; points from a sinusoidal input shown for comparison]
394
Supplementary References

Input Frequency (cycles/sec)   Magnitude Ratio   Phase Lag (degrees)
0.01   1.00    0
0.04   1.00    3
0.07   0.98    6
0.10   0.97    9
0.20   0.94   13
0.40   0.90   23
0.70   0.84   32
1.00   0.75   36
2.00   0.54   55
4.00   0.34   63
7.00   0.23   64
Problems

g(s) = 1/(τs + 1)

Input Frequency (cycles/sec)   Magnitude Ratio   Phase Lag (degrees)
0.01
0.02
0.04
0.07
0.10
0.20
0.40
0.70
1.00
1.0
1.0
1.0
0.99
0.98
0.94
0.69
0.35
0.17
13
' 15
19
27
48
87
131
164
11
----- _....
PROBLEMS
Number
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31-"
32
33
34
35
36
37
3.8
39
40
41
42
43
44
45
46
47
48
49
50
Data
18.7 + 66.7
13.1
8.0
3.3
43.1 + 22.6
39.1
35.4
32.0
28.9
26.0
23.3
20.8
18.4
16.2
14.2
12.3
10.5
8.9
7.3
5.9
46.9 - 19.8
45.6
44.4
43.3
42.2
41.2
40.2
39.3
38.4
37.6
36.8
36.1
35.4
34.7
34.1
33.5
32.9
32.4
31.9
31.4
30.9
30.4
30.0
29.6
29.2
28.8
28.5
28.1
27.8
27.5
Number   Data
51
52
53
54
55
56
57
58
59
60
61
62
63
27.2
26.9
26.6
26.3
26.1
25.8
25.6
25.4
25.1
24.9
24.7
24.5
24.4
24.2
24.0
23.8
23.8
23.5
23.4
23.3
23.1
23.0
23.0
22.9
22.8
22.8
22.6
22.4
22.3
22.2
22.0
21.9
21.8
21.8
21.7
21.6
21.5
21.5
21.5
21.3
21.4
21.4
21.2
21.3
21.2
21.1
21.0
21.0
20.9
21.0
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
If

dY/dt + aY = X(t)

with X(t) = a₀ + Σ_k a_k cos ω_k t, show that

R_XX(τ) = a₀² + Σ_k (a_k²/2) cos ω_k τ

as t_f becomes large.
12.7 The structure of turbulence can be analyzed by calculating the crosscorrelation function for two fluid velocities in a given coordinate direction observed at different positions. Assume the velocity fluctuations V_A(t) and V_B(t) at positions A and B are ergodic, the estimated crosscorrelation function is R_AB(τ) = ⟨V_A(t)V_B(t)⟩, and the standardized estimated crosscorrelation function is

R* = ⟨V_A(t)V_B(t)⟩ / √(⟨V_A²⟩⟨V_B²⟩)

Let

… = (K² − 1)/(K² + 1)
APPENDIX A
Concepts of Probability
397
P{A₁ | B} = P{B | A₁}P{A₁}/P{B} = P{B | A₁}P{A₁} / Σ_{j=1}^n P{B | A_j}P{A_j}   (A-1)
.'
P(A₁ or A₂ or ⋯ or A_n) = Σ_{k=1}^n P(A_k)   (A-4)

Σ_{k=1}^n q = nq = 1,  so  q = 1/n = P(A_k)   (A-5a)

P(A₁ ∩ A₂ ∩ ⋯ ∩ A_n) = P(A₁)P(A₂)⋯P(A_n)   (A-4a)

P(A | B) = P(A)P(B | A)/P(B)   (A-8a, A-8b)

and, for independent events, P(A | B) = P(A).   (A-9)
Example A.1

The toss of two dice, one red and one blue, provides a simple illustration which makes the above rules more meaningful. When the two dice are tossed, because each has six sides and can yield one number (as the up number), the events which can occur can be portrayed in a two-dimensional array termed the "sample space." For fair dice it is possible to predict in advance the sample space; in other cases the same information must be obtained by experimentation. In Table EA.1, each box represents one possible outcome.

TABLE EA.1 Upper Face (Outcome)

1,1  1,2  1,3  1,4  1,5  1,6
2,1  2,2  2,3  2,4  2,5  2,6
3,1  3,2  3,3  3,4  3,5  3,6
4,1  4,2  4,3  4,4  4,5  4,6
5,1  5,2  5,3  5,4  5,5  5,6
6,1  6,2  6,3  6,4  6,5  6,6

[Figures: Venn diagrams for P(A ∪ B) and P(A ∩ B)]   (A-10)
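The sample-space reasoning can be checked by enumeration (Python); the conditional probability worked in this example, P(B + R < 4 | B = 1) = 1/3, falls out directly:

```python
# Enumerate the 36-point sample space of Table EA.1 and compute
# P(B + R < 4 | B = 1) = P[(B+R<4) and (B=1)] / P(B=1) exactly.
from fractions import Fraction

space = [(b, r) for b in range(1, 7) for r in range(1, 7)]
p = lambda event: Fraction(sum(1 for o in space if event(o)), len(space))

p_b1 = p(lambda o: o[0] == 1)                          # P(B = 1) = 6/36
p_joint = p(lambda o: o[0] == 1 and o[0] + o[1] < 4)   # 2/36: (1,1), (1,2)
p_cond = p_joint / p_b1                                # = 1/3
```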
P[(B + R < 4) ∩ (B = 1)] = P[(1,1), (1,2)] = 2/36 = 1/18

P(B = 1) = 6/36 = 1/6

so that

P[(B + R < 4) | (B = 1)] = (2/36)/(6/36) = 1/3

For B ≤ 3 the favorable outcomes are (1,1), (1,2), …, (3,6), so P(B ≤ 3) = 18/36 = 1/2; similarly P(R ≤ 2) = 12/36 = 1/3, and for the independent dice

P[(B ≤ 3) ∩ (R ≤ 2)] = P(B ≤ 3)P(R ≤ 2) = (1/2)(1/3) = 1/6

P[(B = 1) | (B + R = 4)] = P[(B = 1) ∩ (B + R = 4)]/P(B + R = 4) = (1/36)/(3/36) = 1/3
P(A) + P(B) − P(A ∩ B) = P(A ∪ B):   18(1/36) + 12(1/36) − 6(1/36) = 24/36 = 2/3

Supplementary References
APPENDIXB
Mathematical Tools
(B.1-1)

The equations may be polynomials,

P₁(x) = 2x₁ − …
P₂(x) = x₁ + 2x₂ …
P₃(x) = x₁ + 4x₂ …
⋮
x_n = P_n(x)

or other functions.

The x's are said to be linearly dependent if for some set of the cᵢ's (assuming the cᵢ's are not all zero) the following is true:

c₁x₁ + c₂x₂ + ⋯ = Σ_{i=1} cᵢxᵢ = 0   (B.1-6)

only if the cᵢ's are all zero, then the xᵢ's are said to be linearly independent.

A linear operator dF satisfies:
1. Additivity: dF(f₁ + f₂) = dF(f₁) + dF(f₂)
2. Proportionality: dF(kf) = k dF(f)   (B.2-1, B.2-2)

In general the x's may be expressed in terms of elements s₁, s₂, …:

x₁ = a₁₁s₁ + a₁₂s₂ + ⋯
x_n = a_{n1}s₁ + a_{n2}s₂ + ⋯   (B.1-2 to B.1-5)

For example, with x₁ = s₁ + 2s₂ + 3s₃ and x₂ = 3s₁ + 2s₂ + s₃, the condition c₁x₁ + c₂x₂ = 0 gives

s₁: c₁ + 3c₂ = 0
s₂: 2c₁ + 2c₂ = 0
s₃: 3c₁ + c₂ = 0
TABLE B.2-1

Linear operators:
ℒ(u) ≡ du/dx
ℒ(u) ≡ Q(x)u
ℒ(u) ≡ A(x)(du/dx) + B(x)u
ℒ(u) ≡ C(x) du/dx

Nonlinear operators:
𝒩(u) ≡ R(x)u²
𝒩(u) ≡ (du/dx)u
𝒩(u) ≡ ∫ H(x, s)u(s)u(s + x) ds
𝒩(u) ≡ V(u) ∂ⁿu/∂xⁿ

For a linear system, if y₁(t) = f[x₁(t)] and y₂(t) = f[x₂(t)], then

y₁(t) + y₂(t) = f[x₁(t) + x₂(t)]   (B.3-1 to B.3-3)

whereas, for example, ŷ = [x₁(t) + x₂(t)]² does not satisfy this property.   (B.2-3, B.2-4)

B.4 MATRIX ALGEBRA

a = [a₁₁ a₁₂ ⋯ a₁ₙ
     a₂₁ a₂₂ ⋯ a₂ₙ
     ⋯
     a_{m1} a_{m2} ⋯ a_{mn}]   (B.4-1)

is an m × n matrix.
MATHEMATICAL TOOLS
For example,
1 0
a3 x 3
identity matrix
(B.4-3)
aT
a12
a22
Au
= (1)(4 - 3) = 1
A 12
= (-1)(2 - 0) = -2
A 13 = (1)(3 - 0) = 3
etc.
-(2 - 3y)
0
1
a m2
(B.4-4)
4 '
a mn
a'
~[ ~
-1
0 12]
=
[
1 2 3.,
234
a-I
d
= __
:]
(B.4-5)
cofactor matrix
= a":
[All]
(B.4-6)
det a
. 12.
A 1n
(B.4-7)
56
-28
-12
62
-4]
-12
-28
56
= 392
a-I
(B.4-9)
An illustration is
[""i
-LL
392
-392
-28
..&.L
-12
_ 0_
-28
"392
"
= adjoint matrix =
if det a =I- 0
det a
392
.#
(B.4-8)
(2x - 1)
a- 1a = I
a2n
-I]
Y
(1 - 2..) ]
-(x - y)
2x
For example, if
032
-3x
aln
a.,]
[au a"
=
Y]
1
2 1
a =
[
0]
403
=~
98
392
-"392
-/4
-.J.L
196
38
-9
~r
~\~
-:]
1
7
~]
Column
vector
[1
2 3 4]
Row
vector
Two matrices are multiplied according to

    cij = Σ (l = 1 to n)  ail blj                      (B.4-10)

For example, for 3 × 3 matrices each element of the product ab is a sum of three products, such as (1 + 0 + 4), (0 + 2 + 6), and so on.

B.4-1  Solution of Algebraic Equations

A set of simultaneous linear equations

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2                (B.4-13)
    ...........................................
    an1 x1 + an2 x2 + ... + ann xn = bn

in which the aij's are constant coefficients and the xj's are unknowns, can be written compactly as a x = b, with solution

    x = a⁻¹ b,    if det a ≠ 0                         (B.4-15)
Example B.4-1  Solution of a Set of Simultaneous Independent Linear Equations

Solve the set of four equations

    2x1 + ... = 1
    3x1 + ... = 1
          ... = 1
          ... = 1

Solution:
In compact matrix notation a x = b with b = [1  1  1  1]ᵀ. Computing the inverse a⁻¹ numerically (its elements include 0.050725, −0.188406, −0.176812, 0.237681, ...) and forming x = a⁻¹b gives

    x = [x1  x2  x3  x4]ᵀ = [0.065217  0.008696  0.082609  0.113043]ᵀ
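A system of this kind is solved in one call with a linear-algebra library. The coefficient matrix below is an arbitrary nonsingular stand-in (the original coefficients are not reproduced here); b is the all-ones vector of the example:

```python
import numpy as np

# Solve a x = b directly; this is preferred over forming a^-1 explicitly.
a = np.array([[2.0, 1.0, 0.0, 1.0],
              [3.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0],
              [1.0, 0.0, 1.0, 3.0]])
b = np.ones(4)

x = np.linalg.solve(a, b)
print(np.allclose(a @ x, b))   # True
```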
B.4-2  Eigenvalues and Eigenvectors                    (B.4-16 to B.4-18)

The eigenvalues λ of a square matrix a are defined through

    (a − λI)x = 0                                      (B.4-19)

which has a nontrivial solution x only if

    det (a − λI) = det [ (a11 − λ)   a12    ...    a1n
                           a21    (a22 − λ) ...    a2n
                           an1       an2    ... (ann − λ) ] = 0

Then

    P(λ) = det (a − λI) = 0                            (B.4-20)

and for each eigenvalue λ1,

    (a − λ1 I)x1 = 0                                   (B.4-21)

For example, for

    a = [ 1  2
          2  1 ]                                       (a)

    det [ 1 − λ    2
            2    1 − λ ] = λ² − 2λ − 3 = 0             (b)

with roots λ1 = 3 and λ2 = −1. For λ = −1,

    [ 1 − (−1)     2
        2      1 − (−1) ] [ x1  x2 ]ᵀ = 0              (c)

which yields

    2x1 + 2x2 = 0                                      (d)

or x1 = −x2. For λ = 3,

    [ 1 − 3    2
        2    1 − 3 ] [ x1  x2 ]ᵀ = 0                   (e)

which yields

    2x1 − 2x2 = 0                                      (f)

or x1 = x2.                                            (g)
The scalar multiplier λ we sought is one of the n roots (real or complex) of Equation B.4-20; each of these roots is called an eigenvalue (or characteristic value or latent root) of a and, in general, can be obtained by iterative methods.
If λ1 is an eigenvalue of a, then for this value of λ Equation B.4-19 is satisfied and has a nontrivial solution. Momentarily, let us assume that Equation B.4-20 does not have multiple roots, so that there are n distinct λ's.
The eigenvector for λ = −1 is proportional to [1, −1], with norm √(1² + (−1)²) = √2; for λ = 3 it is proportional to [1, 1], also with norm √2.

If λ1 = λ2 = 1 and λ3 = λ4 = 6 (a case of repeated eigenvalues), the equations (a − λI)x = 0 for λ = 1 are

    4x1 + 2x2 + 0 + 0 = 0
    2x1 +  x2 + 0 + 0 = 0
    0 + 0 + 4x3 − 2x4 = 0
    0 + 0 − 2x3 +  x4 = 0

which yields

    2x1 + x2 = 0
    2x3 − x4 = 0

and the corresponding normalized eigenvectors have elements ±1/√5 and ±2/√5; for λ = 6 the analogous equations again give elements built from 1/√5 and 2/√5.

B.4-3  Normalization

The norm of a vector x is

    Norm x = (xᵀx)^½ = [ Σ (i = 1 to n) xi² ]^½        (B.4-22)

For example, for x = [1  2  −3  0],

    Norm x = √(1² + 2² + (−3)² + 0²) = √14

    Normalized vector (unit vector):  [1/√14,  2/√14,  −3/√14,  0]

A quadratic form in the variables x1, ..., xn is

    q = Σ Σ  aij xi xj = xᵀ a x                        (B.4-23)
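Equation B.4-22 and the unit-vector example translate directly into code (an illustrative sketch):

```python
import math

# Norm and unit vector of Equation B.4-22 for x = [1, 2, -3, 0].
x = [1.0, 2.0, -3.0, 0.0]
norm = math.sqrt(sum(xi ** 2 for xi in x))
unit = [xi / norm for xi in x]

print(round(norm ** 2, 6))                             # 14.0
print(abs(sum(u ** 2 for u in unit) - 1.0) < 1e-12)    # True -> unit length
```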
where a is called the symmetric bilinear form; a crossproduct term is split symmetrically between the two corresponding off-diagonal elements, as in x1² + (3/2)x1x3 + (3/2)x3x1.        (B.4-24b)

B.4-4  Transformation to Canonical Form

Of particular interest in interpreting empirical models is a method of reducing the general quadratic form, which may contain crossproduct terms, to the so-called canonical form, which does not contain crossproduct terms. If the columns of u are the normalized eigenvectors of a, the substitution x = u y gives

    q = yᵀ(uᵀ a u)y = λ1 y1² + λ2 y2² + ... + λn yn²   (B.4-27)

where λ1, ..., λn are the eigenvalues of a. In the case above, with λ1 = 1, λ2 = 1, λ3 = 6, and λ4 = 6, the normalized eigenvectors assemble into

    u = [  1/√5     0     2/√5     0
          −2/√5     0     1/√5     0
             0     1/√5     0     2/√5
             0     2/√5     0    −1/√5 ]
Example  Transformation to Canonical Form

Reduce the quadratic form

    q = 7x1² + 10x2² + 7x3² − 4x1x2 + 2x1x3 − 4x2x3

to canonical form.

Solution:
In matrix notation q = xᵀ a x with

    a = [  7  −2   1
          −2  10  −2
           1  −2   7 ]

Let det (a − λI) = 0:

    det [ 7 − λ   −2      1
           −2   10 − λ   −2     ] = 0
            1     −2    7 − λ

or

    (7 − λ)[(10 − λ)(7 − λ) − (−2)(−2)] + 2[(−2)(7 − λ) + 2] + [4 − (10 − λ)] = 0        (a)

which reduces to

    λ³ − 24λ² + 180λ − 432 = 0

with roots λ1 = λ2 = 6 and λ3 = 12. The canonical form is therefore

    q = 6y1² + 6y2² + 12y3²                            (b)

The new coordinates are related to the old ones by x = u x̄, or

    x1 = u11 x̄1 + u12 x̄2 + u13 x̄3
    x2 = u21 x̄1 + u22 x̄2 + u23 x̄3                     (B.4-27)
    x3 = u31 x̄1 + u32 x̄2 + u33 x̄3

Note that u11, u12, and u13 (the first row) are the cosines of the angles 0x1 makes with 0x̄1, 0x̄2, and 0x̄3, respectively, while u11, u21, and u31 (the first column) are the cosines of the angles 0x̄1 makes with 0x1, 0x2, and 0x3, respectively. One can show that u is an orthogonal matrix (u⁻¹ = uᵀ) since

    Σ (j = 1 to 3)  uij² = 1,    i = 1, 2, 3           (B.4-28a)

and since

    Σ (j = 1 to 3)  uij ukj = 0,    (i ≠ k)            (B.4-28b)

Also, one can solve for x̄ in terms of x and thus relate the new coordinates to the old ones:

    x̄ = uᵀ x                                           (B.4-29)

For λ3 = 12 the equations (a − 12I)u = 0, the first of which is −5u1 − 2u2 + u3 = 0, give u ∝ [1, −2, 1].
The full set of normalized eigenvectors gives

    u = [ 1/√3   1/√2   1/√6
          1/√3    0    −2/√6
          1/√3  −1/√2   1/√6 ]

and, as may be verified by multiplication,

    uᵀ a u = [ 6  0   0
               0  6   0
               0  0  12 ]

B.5  DIFFERENTIAL EQUATIONS
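The multiplication "uᵀ a u" above can be carried out numerically as a check (an illustrative sketch):

```python
import numpy as np

# Check that the eigenvector matrix u of the example diagonalizes
# a = [[7,-2,1],[-2,10,-2],[1,-2,7]] into diag(6, 6, 12).
a = np.array([[7.0, -2.0, 1.0],
              [-2.0, 10.0, -2.0],
              [1.0, -2.0, 7.0]])
s3, s2, s6 = 3 ** 0.5, 2 ** 0.5, 6 ** 0.5
u = np.array([[1 / s3, 1 / s2, 1 / s6],
              [1 / s3, 0.0, -2 / s6],
              [1 / s3, -1 / s2, 1 / s6]])

print(np.allclose(u.T @ u, np.eye(3)))                       # True (orthogonal)
print(np.allclose(u.T @ a @ u, np.diag([6.0, 6.0, 12.0])))   # True
```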
A first-order linear differential equation has the form

    a1(t) dy/dt + a0(t) y = x(t)                       (B.5-1)

A second-order linear equation with constant coefficients,

    d²y/dt² + a dy/dt + b y = f(t)                     (B.5-3)

can be written in operator notation as

    (D² + aD + b) y = f(t)

The complete solution is y = yc + yp(t), where the complementary solution yc is built from the roots of the characteristic equation

    r² + ar + b = 0,    r = [−a ± √(a² − 4b)] / 2      (B.5-2)

TABLE B.5-1

Case                                                 Solution
1. Homogeneous equations
   Real (distinct) roots r1 ≠ r2 (a² − 4b > 0)       y = c1 e^{r1 t} + c2 e^{r2 t}
   Real (equal) roots r1 = r2 = r (a² − 4b = 0)      y = (c1 + c2 t) e^{rt}
   Complex roots r1 = α + βi, r2 = α − βi            y = e^{αt}(c1 cos βt + c2 sin βt)
     (a² − 4b < 0)                                     = A e^{αt} sin (βt + θ)
2. Inhomogeneous equations                           y = yc + yp(t)

Example  An energy balance on a rod that generates energy internally and loses heat from its surface leads to

    d²T/dx² − m² T = −q                                (a)

where

    m² = (h)(2πD)/(Ak),    q = Q/k

    T = temperature
    x = distance from center
    A = cross-section of rod
    D = diameter of rod
    Q = energy generated/volume of rod

with the boundary conditions

    T = T0       at x = 0
    dT/dx = 0    at x = l

The solution of the homogeneous equation is

    T = A e^{−mx} + B e^{mx}                           (b)

and a particular solution is

    T = q/m²                                           (c)

so that

    T = A e^{−mx} + B e^{mx} + q/m²                    (d)

with the constants A and B fixed by the boundary conditions.        (e)

For multiple roots, independent solutions of the form y = f(t) e^{rt} can be employed under certain conditions. Methods of solution using Laplace transforms will be discussed later.
B.6  SETS OF ORDINARY DIFFERENTIAL EQUATIONS

In canonical form a set of first-order ordinary differential equations is written

    dy/dt = f(y, t)                                    (B.6-1a)

or

    y = F(t)                                           (B.6-2a)
To find a specific closed-form solution for a given set of Equations B.6-1 may prove to be most difficult. In another form, a set of differential equations can exist as a single equation of nth order:

    dⁿy/dtⁿ = g(y, dy/dt, d²y/dt², ..., d^{n−1}y/dt^{n−1}, t)      (B.6-3)

This can be transformed to the canonical form by use of the following transformations:

    y = z1
    dy/dt = dz1/dt = z2
    .......................                            (B.6-4)
    d^{n−1}y/dt^{n−1} = dz_{n−1}/dt = zn

so that

    dz1/dt = z2
    dz2/dt = z3
    .......................                            (B.6-5)
    dzn/dt = G[z1, z2, ..., zn, t]

Thus an nth-order differential equation can always be reduced to a system of differential equations equivalent to Equations B.6-1. (The converse is not true: a system such as Equations B.6-1 cannot always be transformed into a single nth-order equation.)
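The reduction of Equations B.6-4 and B.6-5 is exactly what a numerical integrator consumes. A minimal sketch using Euler's method on the arbitrary test equation y″ = −y (so z1 = y, z2 = y′):

```python
# Reduce y'' = g(y, y', t) to z1' = z2, z2' = g(z1, z2, t) and integrate
# with Euler's method.  g here is the test case y'' = -y.
def g(z1, z2, t):
    return -z1

def euler(z1, z2, t, dt, steps):
    for _ in range(steps):
        z1, z2 = z1 + dt * z2, z2 + dt * g(z1, z2, t)
        t += dt
    return z1, z2

# y(0) = 0, y'(0) = 1  ->  exact solution y(t) = sin t
y, yprime = euler(0.0, 1.0, 0.0, 1e-4, 10000)   # integrate to t = 1
print(abs(y - 0.841471) < 1e-3)   # True  (sin 1 = 0.841471...)
```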
Example B.6-1  Reduction of Higher-Order Equations to First-Order Equations

A pair of coupled higher-order equations in u and v, (a) and (b), is reduced to first-order form by defining the new variables

    x1 = u,   x2 = u̇,   x3 = v,   x4 = v̇

together with a fifth variable for the highest remaining derivative. Each original equation then supplies one first-order equation for its highest derivative, and in matrix notation the set becomes ẋ = a x with a constant coefficient matrix a.
A set of linear first-order equations with constant coefficients,

    ẏ1 = a11 y1 + a12 y2 + ... + a1n yn
    ẏ2 = a21 y1 + a22 y2 + ... + a2n yn                (B.6-6)
    .......................................

can be written as

    dy/dt = a y,    y(0) = y0                          (B.6-7)

By analogy with the scalar relation e^x = 1 + x + x²/2! + ..., the solution is y = e^{at} y0, where the matrix exponential is defined by the series

    e^{at} = I + at + (at)²/2! + (at)³/3! + ...        (B.6-12)

Differentiating the series term by term,

    dy/dt = d/dt (e^{at}) y0
          = (a + a²t + a³t²/2! + ...) y0
          = a (I + at + a²t²/2! + ...) y0              (B.6-13)
          = a (e^{at}) y0 = a y

so the series indeed satisfies Equation B.6-7. Next consider an inhomogeneous set of equations where x(t) is the forcing function (or set of inputs) for the set of equations

    dy/dt + a y = x(t)                                 (B.6-10)

As with a single scalar differential equation, an integrating factor e^{a(t−t0)} can be introduced such that

    d/dt [e^{a(t−t0)} y] = e^{a(t−t0)} dy/dt + a e^{a(t−t0)} y = e^{a(t−t0)} x(t)        (B.6-11)

Integrating from t0 to t,

    e^{a(t−t0)} y − I y0 = ∫ (t0 to t)  e^{a(t′−t0)} x(t′) dt′                           (B.6-14)

so that

    y = e^{−a(t−t0)} y0 + ∫ (t0 to t)  e^{a(t′−t)} x(t′) dt′                             (B.6-15, B.6-16)

If a has distinct eigenvalues, the eigenvalue problem det (a − λI) = 0 yields a = h Λ h⁻¹, where Λ is the diagonal matrix of eigenvalues and h the matrix of eigenvectors; then

    a² = (h Λ h⁻¹)(h Λ h⁻¹) = h Λ² h⁻¹,   etc.

and finally

    e^{at} = h e^{Λt} h⁻¹                              (B.6-17)
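The series of Equation B.6-12 can be truncated numerically to solve ẏ = ay (in practice one would call a library routine such as scipy.linalg.expm; the matrix below is an arbitrary test case):

```python
import numpy as np

# e^{at} = I + at + (at)^2/2! + ...  (Equation B.6-12), truncated.
def expm_series(m, terms=40):
    out = np.eye(m.shape[0])
    term = np.eye(m.shape[0])
    for k in range(1, terms):
        term = term @ m / k
        out = out + term
    return out

a = np.array([[0.0, 1.0],
              [-1.0, 0.0]])     # y'' = -y written as a first-order pair
t = 1.0
y0 = np.array([0.0, 1.0])
y = expm_series(a * t) @ y0     # exact answer: [sin 1, cos 1]
print(np.allclose(y, [np.sin(1.0), np.cos(1.0)]))   # True
```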
For a matrix a with distinct eigenvalues λ1, ..., λn, the eigenvectors hi satisfy (a − λi I)hi = 0 and assemble into h, and

    e^{Λt} = [ e^{λ1 t}     0      ...
                  0      e^{λ2 t}              ...     (B.6-18)
                 ...                 e^{λn t} ]

so that the solution of the homogeneous set is

    y = h e^{Λt} h⁻¹ y0                                (B.6-19)

If the roots λi are complex, as long as they occur in conjugate pairs, Equation B.6-19 can be arranged to contain only real numbers including sines and cosines.
".
Example B.6-2 Matrix Solution of Sets of Differential
Equations
Solve y - 3y + 2y = e : ' with initial conditions y = 0
and y = 1 at t = O.
= b eh1th- 1 yo
or
Y1]
Solution:
Let
Y2
Yl = y,
Y2
[ Y3
y,
21 e
[ -te+1+ te
+ i -.
-tel + te2t - ie-I
e- I
I..]
.....
Then
= Y2
Y2
Y, .0 = 0
2Y1
3Y2
+ Y3
Y3 = - Y3
'1
Y2,O
Y3 .0
= 1
or
y= [-:o
~]y
0-1
ATt ~ ATt
in which ATis a q x q matri x (where q is.the multiplicity): .
AT
(0 - ,\)
- 2
(3 - ,\)
1.
= (-
( -1 - ,\)
=0
1 - ,\)(A - 2)('\ - 1)
AT
AT =
(B.6-20)
0
AT
AT
MATHEMATICAL TOOLS
415
The solution is
y(t) =
eatYo
= h e At h - 1 yo
tq-1/(q - I)!
t q-
/(q - 2)!
(B.6-21)
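The closed-form answer of Example B.6-2 can be checked against the original equation and its initial conditions (an illustrative sketch):

```python
import math

# y(t) = -(3/2) e^t + (4/3) e^{2t} + (1/6) e^{-t} should satisfy
# y'' - 3y' + 2y = e^{-t},  y(0) = 0,  y'(0) = 1.
def y(t):
    return -1.5 * math.exp(t) + (4.0 / 3.0) * math.exp(2 * t) + math.exp(-t) / 6.0

def yp(t):
    return -1.5 * math.exp(t) + (8.0 / 3.0) * math.exp(2 * t) - math.exp(-t) / 6.0

def ypp(t):
    return -1.5 * math.exp(t) + (16.0 / 3.0) * math.exp(2 * t) + math.exp(-t) / 6.0

print(abs(y(0.0)) < 1e-12, abs(yp(0.0) - 1.0) < 1e-12)            # True True
t = 0.7
print(abs(ypp(t) - 3 * yp(t) + 2 * y(t) - math.exp(-t)) < 1e-9)   # True
```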
Example B.6-3

Solve the set

    ẏ1 = y2 − y3
    ẏ2 = 2y2 + y3
    ẏ3 = 4y1 − 2y2 + 5y3

with the given initial conditions, of which y3(0) = −2.

Solution:
The coefficient matrix a has the repeated eigenvalue 2 (together with the eigenvalue 3), so in addition to the ordinary eigenvectors h1 and h3 (the latter found from (a − 3I)h3 = 0) a generalized eigenvector h2 is required, obtained from

    (a − 2I)h2 = h1

Assembling h and h⁻¹, and taking the repeated-root block of e^{Λt} from Equation B.6-21, the solution again follows from y = h e^{Λt} h⁻¹ y0 with Λ = h⁻¹ a h.

Example B.6-4

A forced equation such as

    ẏ1 = 3y1 + y2 + e^{2t} + sin t                     (a)

can be brought back to the homogeneous form by appending the forcing functions as additional state variables; for example, z3 = sin t satisfies ż3 = z4 and ż4 = −z3. The enlarged set

    [ẏ*] = [a][y*]                                     (B.6-23a)

has the same form as Equation B.6-7 and is solved as before.
In matrix notation the enlarged set is again ż = d z with a constant matrix d whose rows contain the original coefficients together with the blocks generated by the forcing functions.

PARTIAL DIFFERENTIAL EQUATIONS

Classical linear partial differential equations include:

1. Laplace's equation
2. Poisson's equation
3. Diffusion equation
4. Wave equation
5. Damped-wave equation (telegraph equation)
6. Helmholtz's equation
B.8  LAPLACE TRANSFORMS

The Laplace transform of f(t) is

    ℒ[f(t)] = f̄(s) = ∫ (0 to ∞)  e^{−st} f(t) dt       (B.8-1)

Some useful properties are:

1. Linearity. If f̄1(s) = ℒ[f1(t)] and f̄2(s) = ℒ[f2(t)], then

    ℒ[c1 f1(t) + c2 f2(t)] = c1 ℒ[f1(t)] + c2 ℒ[f2(t)] = c1 f̄1(s) + c2 f̄2(s)        (B.8-3)

2. Transform of a derivative.

    ℒ[df(t)/dt] = s f̄(s) − f(0)                        (B.8-4)

3. Transform of an integral.

    ℒ[∫ (0 to t) f(t′) dt′] = f̄(s)/s                   (B.8-5)

4. Complex translation. If f̄(s) = ℒ[f(t)], then

    f̄(s − a) = ℒ[e^{at} f(t)]                          (B.8-6)

5. Real translation.

    ℒ[f(t − τ) U(t − τ)] = e^{−sτ} f̄(s)

6. Differentiation of a transform.

    dⁿ f̄(s)/dsⁿ = ℒ[(−t)ⁿ f(t)]                        (B.8-7)

   Also, f̄(s) → 0 as s → ∞.                            (B.8-8)

7. Integration of a transform.

    ∫ (s to ∞) f̄(x) dx = ℒ[f(t)/t]                     (B.8-9)
Two limit relations are also useful:

    Initial value:   lim (s → ∞)  s f̄(s) = lim (t → 0)  f(t)       (B.8-11)
    Final value:     lim (s → 0)  s f̄(s) = lim (t → ∞)  f(t)       (B.8-12)

Example  Find the transforms of:

1.  f(t) = U(t), the unit step, where U(t − τ) = 0 for 0 < t < τ and U(t − τ) = 1 for t > τ (Figure B.8-1)
2.  f(t) = e^{at}
3.  f(t) = cos at

Solution:

1.  ℒ[U(t)] = ∫ (0 to ∞) e^{−st} (1) dt = −(1/s)[e^{−st}] (0 to ∞) = 1/s,   s > 0

    and similarly ℒ[U(t − τ)] = e^{−sτ}/s.

2.  ℒ[e^{at}] = ∫ (0 to ∞) e^{−st} e^{at} dt = ∫ (0 to ∞) e^{(a−s)t} dt
             = [e^{(a−s)t}/(a − s)] (0 to ∞) = 1/(s − a),   s > a
3.  ℒ[cos at] = ℒ[(e^{iat} + e^{−iat})/2]
             = (1/2)[e^{(ia−s)t}/(ia − s) + e^{(−ia−s)t}/(−ia − s)] (0 to ∞)
             = (1/2)[1/(s − ia) + 1/(s + ia)]
             = s/(s² + a²)

The derivative property follows from integration by parts:

    ℒ[f′(t)] = ℒ[df(t)/dt] = ∫ (0 to ∞) e^{−st} f′(t) dt
             = [e^{−st} f(t)] (0 to ∞) + s ∫ (0 to ∞) e^{−st} f(t) dt = s f̄(s) − f(0)

Example  Solve ℒ{y″(t) + k² y(t)} = 0 with y(0) = C1 and y′(0) = C2.

Solution:

    s² ȳ(s) − s C1 − C2 + k² ȳ(s) = 0

    ȳ(s) = (s C1 + C2)/(s² + k²) = C1 s/(s² + k²) + C2/(s² + k²)

Next,

    y(t) = C1 cos kt + (C2/k) sin kt
Evaluation of the inverse of Laplace transforms of functions (a topic of considerable importance) may be exceedingly difficult to accomplish. In determining a function f(t) from a function f̄(s), as a practical matter we usually first seek a table of Laplace transforms and try to match the transform of interest with one in the table. If this is successful, the inverse is immediately found. Other ways to evaluate ℒ⁻¹ include partial-fraction expansion and the complex inversion integral.
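A table entry such as ℒ[cos at] = s/(s² + a²) can be spot-checked by evaluating the defining integral of Equation B.8-1 numerically; a sketch with arbitrary values of a and s:

```python
import math

# Numerical check of L[cos at] = s/(s^2 + a^2) with a trapezoidal rule.
def laplace_cos(a, s, upper=60.0, n=600000):
    h = upper / n
    total = 0.5 * (1.0 + math.exp(-s * upper) * math.cos(a * upper))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * math.cos(a * t)
    return total * h

a, s = 2.0, 1.5
print(abs(laplace_cos(a, s) - s / (s**2 + a**2)) < 1e-6)   # True
```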
Example  Transform Solution of a Partial Differential Equation

Solve the diffusion equation

    ∂c/∂t = 𝒟 ∂²c/∂x²

(with 𝒟 the diffusion coefficient) for t > 0 and x ≥ 0, subject to

    c(x, 0) = 0
    (∂c/∂x) at x = 0 held constant                     (3)
    c(x, t) → 0 as x → ∞

Solution:
Taking the Laplace transform with respect to t,

    s c̄(x, s) − c(x, 0) = 𝒟 d²c̄(x, s)/dx²             (a)

with the transformed boundary conditions

    ℒ[(∂c/∂x)|x=0] = (dc̄/dx)|x=0 = ℒ(k)/𝒟? = −k/s      (3′)
    c̄(x, s) → 0 as x → ∞                               (b)
TABLE B.8-1  SELECTED LAPLACE TRANSFORM PAIRS

Function f(t)                             Transform f̄(s)

U(t)  (unit step)                         1/s
δ(t)  (unit impulse)                      1
tⁿ⁻¹/(n − 1)!                             1/sⁿ
1/(πt)^½                                  1/s^½
tⁿ⁻¹ e^{at}/(n − 1)!                      1/(s − a)ⁿ
(e^{at} − e^{bt})/(a − b),  a ≠ b         1/[(s − a)(s − b)]
(1/a) sin at                              1/(s² + a²)
cos at                                    s/(s² + a²)
(1/a) sinh at                             1/(s² − a²)
cosh at                                   s/(s² − a²)
erfc(k/(2√t))                             e^{−k√s}/s
(1/√(πt)) exp(−k²/4t)                     e^{−k√s}/√s
f(t − b) U(t − b)                         e^{−bs} f̄(s)
f′(t)                                     s f̄(s) − f(0)
f⁽ⁿ⁾(t)                                   sⁿ f̄(s) − sⁿ⁻¹ f(0) − ... − f⁽ⁿ⁻¹⁾(0)
tⁿ f(t)                                   (−1)ⁿ f̄⁽ⁿ⁾(s)
Now, from Table B.8-1,

    c̄(x, s) = (k 𝒟^{1/2} / s^{3/2}) e^{−(s/𝒟)^{1/2} x}                       (e)

so that

    c(x, t) = 2k (𝒟t/π)^{1/2} e^{−x²/(4𝒟t)} − k x erfc [x/(2√(𝒟t))]          (f)

If instead the concentration at x = 0 is suddenly raised to c0 at t = 0 and held there for t > 0, we would have

    c̄(x, s) = (c0/s) e^{−(s/𝒟)^{1/2} x}

and

    c(x, t) = c0 ℒ⁻¹[(1/s) e^{−(s/𝒟)^{1/2} x}] = c0 erfc [x/(2√(𝒟t))]        (i)

B.9  GENERALIZED FUNCTIONS

The unit step function U(t − τ) (Figure B.9-1) is

    U(t − τ) = { 0,   t < τ
               { 1,   t > τ                            (B.9-1)

The unit impulse function δ(t − τ) (Figure B.9-2) is zero everywhere except at t = τ and satisfies

    ∫ δ(t − τ) dt = 1,    δ(t − τ) = 0 for t ≠ τ       (B.9-2)

so that ℒ[U(t − τ)] = e^{−sτ}/s and ℒ[δ(t − τ)] = e^{−sτ}.
dX1 + dX2
OX1
OX2
= 0,
+ . . . .+ -0'1"1
=0
OX
n
0'1"2
0'1"2
.
0'1"2
- d x1 + - d X2 + . .. + OX1
OX2
OX"
O'Yp d
Xl
OX1
FIGURE
=' 0 (B.Io 4)
o'Y p d
o'Y p
0
+ -X2 + . .. + - =
OX2
OX" .
+(
p
Of +A1 0'Y 1+ A2 0'1"2+ .. . +A p O'Y ) dX2+'" =0
OX2
OX2
OX2
OX2
dt
(B.I0-5)
(B.10-1)
of .
-=0
oX2 ,
. : -. ~
of dX2
OX2
+ .. . +
.
of dx;
ox"
(B.I0-3)
Xn )
0'1"2
OXm
o'l"p
OXn
0'1"1
OX"
O'Y p
OXm
etc.
df = 0 = of dX1
OX1 .
421
+ Al 0'1"1 + A2 0'1"2 + . . .)
OXm
OXm
(B.10-6)
(
of
OX"
OX"
+ Al 0'1"1 + A2 0'1"2 + . .. +
OX1
of + Al 0'1"1
OX2
.o X2
OX1
Ap o'Y p = 0
OX1
(B.1O-7)
o'Y p = 0
" POX2
+ A2 0'1"2 + .. . + A
OX2
etc.
If we simultaneousl y solve Equations B.10-7 together
with Equations B.1O-4 and B.1O-6, we can find the values
of the x' s at the extremum and also the ,\'s. If J( x m , . , x n )
= 0, it may prove feasible to interchange the role of some
Example B.10-1

Find the points on the curve 5x² + 6xy + 5y² − 8 = 0 nearest to and farthest from the origin.

Solution:
The distance (function to be optimized), as shown in Figure EB10.1, is

    d = √(x² + y²),    f(x, y) = x² + y²

subject to

    ψ = 5x² + 6xy + 5y² − 8 = 0                        (a)

Equations B.10-6 and B.10-7 are

    ∂f/∂x + λ ∂ψ/∂x = 0,   or   2x + λ(10x + 6y) = 0   (c)
    ∂f/∂y + λ ∂ψ/∂y = 0,   or   2y + λ(6x + 10y) = 0   (d)

Solving (a), (c), and (d) simultaneously gives two solutions:      (e)

    x² = 1/2,   so that   x² + y² = 1/2 + 1/2 = 1,   and   d = 1 (min)
    x² = 2,     so that   x² + y² = 2 + 2 = 4,       and   d = 2 (max)
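Conditions (c) and (d) are an eigenvalue problem in disguise: writing the constraint as xᵀAx = 8 with symmetric A, the stationary values of f = xᵀx are 8/λi, with λi the eigenvalues of A. A sketch verifying the answers of Example B.10-1:

```python
import numpy as np

# 5x^2 + 6xy + 5y^2 = x^T A x with A = [[5, 3], [3, 5]].
A = np.array([[5.0, 3.0],
              [3.0, 5.0]])
lam = np.linalg.eigvalsh(A)               # eigenvalues 2 and 8

f_values = sorted((8.0 / lam).tolist())   # stationary values of x^2 + y^2
print([round(v, 6) for v in f_values])    # [1.0, 4.0] -> d = 1 (min), d = 2 (max)
```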
Supplementary References
Matrix Methods
Amundson, N. R., Mathematical Methods in Chemical Engineering, Prentice-Hall, Englewood Cliffs, N.J., 1966.
Bellman, R., Introduction to Matrix Analysis, McGraw-Hill, New York, 1960.
Frazer, R. A., Duncan, W. J., and Collar, A. R., Elementary Matrices, Cambridge Univ. Press, 1960.
Hohn, F. E., Elementary Matrix Algebra, Macmillan, New York, 1958.

Ordinary Differential Equations
Ayres, F., Theory and Problems of Differential Equations, Schaum, New York, 1952.
Davis, H. T., Introduction to Nonlinear Differential Equations, Govt. Printing Office, Washington, D.C., 1960.
Kaplan, W., Ordinary Differential Equations, Addison-Wesley, Reading, Mass., 1958.
Murphy, G. M., Ordinary Differential Equations and Their Solutions, D. Van Nostrand, New York, 1960.
Struble, R. A., Nonlinear Differential Equations, McGraw-Hill, New York, 1962.

Partial Differential Equations
Greenspan, D., Introduction to Partial Differential Equations, McGraw-Hill, New York, 1961.
Moon, P., and Spencer, D. M., Field Theory for Engineers, D. Van Nostrand, New York, 1961.
Sagan, H., Boundary and Eigenvalue Problems in Mathematical Physics, John Wiley, New York, 1961.
Sneddon, I. N., Elements of Partial Differential Equations, McGraw-Hill, New York, 1957.

Operational Calculus
APPENDIX C

Tables

TABLE C.1  THE NORMAL PROBABILITY DISTRIBUTION

    P{U ≤ u} ≡ P(u) = (1/√(2π)) ∫ (−∞ to u)  e^{−(u′)²/2} du′

(Body of the table: P(u), with u to one decimal given by the row and the second decimal by the column head.)
0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.0
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3.0
3.1
3.2
3.3
3.4
0.00
0.5000
0.5398
0.5793
0.6179
0.6554
0.6915
0.7257
0.7580
0.7881
0.8159
0.8413
0.8643
0.8849
0.9032
0.9192
0.9332
0.9452
0.9554
0.9641
0.9713
0.9772
0.9821
0.9861
0.9893
0.9918
0.9938
0.9953
0.9965
0.9974
0.9981
0.9987
0.9990
0.9993
0.9995
0.9997
0.01
0.02
0.5040
0.5080
0.5438
0.5478
0.5832
0.5871
0.6217
0.6255
0.6591
0.6628
0.6950
0.6985
0.7291
0.7324
0.7611
0.7642
0.7910
0.7939
0.8186
0.8212
0.8438
0.8461
0.8665
0.8686
0.8869
0.8888
0.9049
0.9066
0.9207
0.9222
0.9345
0.9357
0.9463
0.9474
0.9564
0.9573
0.9649
0.9656
0.9719
0.9726
0.9778 " 0.9783
0.9826
0.9830
0.9864
0.9868
0.9896
0.9898
0.9920
0.9922
0.9940
0.9941
0.9955
0.9956
0.9966
0.9967
0.9975
0.9976
0.9982
0.9982
0.9987
0.9987
0.9991
0.9991
0.9993
0.9994
0.9995
0.9995
0.9997
0.9997
0.03
0.04
0.5120
0.5517
0.5910
0.6293
0.6664
0.7019
0.7357
0.7673
0.7967
0.8238
0.8485
0.8708
0.8907
0.9082
0.9236
0.9370
0.9484
0.9582
0.9664
0.9732
0.9788
0.9834
0.9871
0.9901
0.9925
0.9943
0.9957
0.9968
0.9977
0.9983
0.9988
0.9991
0.9994
0.9996
0.9997
0.5160
0.5557
0.5948
0.6331
0.6700
0.7054
0.7389
0.7704
0.7995
0.8264
0.8508
0.8729
0.8925
0.9099
0.9251
0.9382
0.9495
0.9591
0.9671
0.9738
0.9793
0.9838
0.9875
0.9904
0.9927
0.9945
0.9959
0.9969
0.9977
0.9984
0.9988
0.9992
0.9994
0.9996
0.9997
0.05
0.5199
0.5596
0.5987
0.6368
0.6736
0.7088
0.7422
0.7734
0.8023
0.8289
0.8531
0.8749
0.8944
0.9115
0.9265
0.9394
. 0.9505
0.9599
0.9678
0.9744
0.9798
0.9842
0.9878
0.9906
0.9929
0.9946
0.9960
0.9970
0.9978
0.9984
0.9989
0.9992
0.9994
0.9996
0.9997
0.06
0.5239
0.5636
0.6026
0.6406
0.6772
0.7123
0.7454
0.7764
0.8051
0.8315
0.8554
. 0.8770
0.8962
0.9131
0.9279
0.9406
0.9515
0.9608
0.9686
0.9750
0.9803
0.9846
0.9881
0.9909
0.9931
0.9948
0.9961
0.9971
0.9979
0.9985
0.9989
0.9992
0.9994
0.9996
0.9997
u in table
0.07
0.08
0.5279
0.5675
0.6064
0.6443
0.6808
0.7157
0.7486
0.7794
0.8078
0.8340
0.8577
0.8790
0.8980
0.9147
0.9292
0.9418
0.9525
0.9616
0.9693
0.9756
0.9808
0.9850
0.9884
0.9911
0.9932
0.9949
0.9962
0.9972
0.9979
0.9985
0.9989
0.9992
0.9995
0.9996
0.9997
0.09
0.5319
0.5359
0.5714
0.5753
0.6103
0.6141
0.6480
0.6517
0.6844
0.6879
0.7190
0.7224
0.7517
0.7549
0.7823
0.7852
0.8106
0.8133
0.8365
0.8389
0.8599
0.8621
0.8810
0.8830
0.8997
0.9015
0.9162
0.9177
0.9306
0.9319
0.9429
0.9441
0.9535
0.9545
0.9625
0.9633
0.9699
0.9706
0.9761
0.9767
0.9812
0.9817
0.9854
0.9857
0.9887
0.9890
0.9913
0.9916
0.9934
0.9936
0.9951
0.9952
0.9963
0.9964
0.9973
0.9974
0.9980
0.9981
0.9986
0.9986
0.9990
0.9990
0.9993
0.9993
0.9995
0.9995
0.9996
0.9997
0.9997
0.9998
Selected quantiles of the normal distribution:

P(u):     0.75   0.90   0.95   0.975  0.99   0.995  0.999  0.9995  0.99995  0.999995
2(1−P):   0.50   0.20   0.10   0.05   0.02   0.01   0.002  0.001   0.0001   0.00001
u:        0.674  1.282  1.645  1.960  2.326  2.576  3.090  3.291   3.891    4.417
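Values of P(u) in Table C.1 can be regenerated from the error function of the standard library, which is useful for checking a reading against the table (an illustrative sketch):

```python
import math

# P(u) = Phi(u) = (1 + erf(u / sqrt(2))) / 2, the integral of Table C.1.
def phi(u):
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

print(round(phi(0.0), 4))    # 0.5
print(round(phi(1.0), 4))    # 0.8413  (matches the table at u = 1.00)
print(round(phi(1.96), 4))   # 0.975
```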
TABLE C.2  THE χ² DISTRIBUTION *

    P{χ² ≤ χ*²} ≡ P(χ*²) = ∫ (0 to χ*²)  p(χ²) dχ²

(Body of the table: values of χ*² for the probability P(χ*²) at the head of each column and the degrees of freedom d.f. at the left.)
d.f.
0.005
1
0.04392
0.010
0.072
0.207
0.412
0.01
0.02
0.025
0.05
0.10
0.20
0.25
0.30
0.50
0.0³157
0.0201
0.115
0.297
0.544
0.03628
0.0404
0.185
0.429
0.752
0.03982
0.051
0.216
0.484
0.831
0.00393
0.103
0.352
0.711
1.145
0.0158
0.211
0.584
1.064
1.610
0.0642
0.466
1.005
1.649
2.343
0.101
0.575
1.213
1.923
2.675
0.148
0.713
1.424
2.195
3.000
0.455
1.386
2.366
3.357
4.351
3.070
3.822
4.594
5.380
6.179
3.455
4.255
5.071
5.899
6.737
3.828
4.671
5.527
6.393
7.267
5.348
6.346
7.344
8.343
9.342
10
0.676
0.989
1.344
1.735
2.156
0.872
1.239
1.646
2.088
2.558
1.134
1.564
2.032
2.532
3.059
1.237
1.690
2.180
2.700
3.247
1.635
2.167
2.733
3.325
3.940
2.204
2.833
3.490
4.168
4.865
11
12
13
14
15
2.603
3.074
3.565
4.075
4.601
3.053
3.571
4.107
4.660
5.229
3.609
4.178
4.765
5.368
5.985
3.816
4.404
5.009
5.629
6.262
4.575
5.226
5.892
6.571
7.261
5.578
6.304
7.042
7.790
8.547
6.989
7.807
8.634
9.467
10.307
7.584
8.438
9.299
10.165
11.037
8.148
9.034
9.926
10.821
11.721
10.341
11.340
12.340
13.339
14.339
16
17
18
19
20
5.142
5.697
6.265
6.844
7.434
5.812
6.408
7.015
7.633
8.260
6.614
7.255
7.906
8.567
9.237
6.908
7.564
8.231
8.906
9.591
7.962
8.672
9.390
10.117
10.851
9.312
10.085
10.865
11.651
12.443
11.152
12.002
12.857
13.716
14.578
11.912
12.792
13.675
14.562
15.452
12.624
13.531
14.440
15.352
16.266
15.338
16.338
17.338
18.338
19.337
,21
22
23
24
25
8.034
8.643
9.260
9.886
10.520
8.897
9.542
10.196
10.856
11.524
9.915
10.600
11.293
11.992
12.697
10.283
10.982
11.689
12.400
13.120
11.591
12.338
13.091
13.848
14.611
13.240
14.041
14.848
15.659
16.473
15.445
16.314
17.187
18.062
18.940
16.344
17.240
18.137
19.037
19.939
17.182
18.101
19.021
19.943
20.867
20.337
21.337
22.337
23.337
24.337
26
27
28
29
30
11.160
11.808
12.461
13.121
13.787
12.198
12.879
13.565
14.256
14.953
13.409
14.125
14.847
15.574
16.306
13.844
14.573
15.308
16.047
16.791
15.379
16.151
16.928
17.708
18.493
17.292
18.114
18.939
19.768
20.599
19.820
20.703
21.588
22.475
23.364
20.843
21.749
22.657
23.567
24.478
21.792
22.719
23.647
24.577
25.508
25.336
26.336
27.336
28.336
29.336
(continued)
TABLES

TABLE C.2 (continued)
P(x~)
0.70
0.75
0.80
0.90
0.95
0.975
0.98
0.99
0.995
0.999
1
2
3
4
5
1.074
2.408
3.665
4.878
6.044
1.323
2.772
4.108
5.385
6.626
1.642
3.219
4.642
5.989
7.289
2.706
4.605
6.251
7.779
9.236
3.841
5.991
7.815
9.488
11.070
5.024
7.378
9.348
11.143
12.833
5.412
7.824
9.837
11.668
13.388
6.635
9.210
11.345
13.277
15.086
7.879
10.597
12.838
14.860
16.750
10.827
13.815
16.268
18.465
20.517
6
7
, 8
9
10
7.231
8.383
9.524
10.656
11.781
7.841
9.037
10.219
11.389
12.549
8.558
9.803
11.030
12.242
13.442
10.645
12:017
13.362
14.684
15.987
12.592
14.067
15.507
16.919
18.307
14.449
16.013
17.535
19.023
20.483
15.033
16.622
18.168
19.679
21.161
16.812
18.475
20.090
21.666
23.209
18.548
20.278
21.955
23.589
25.188
22.457
24.322
26.125
27.877
29.588
11
12
13
14
15
12.899
14.011
15.119
16.222
17.322
13.701
14.845
15.984
17.117
18.245
14.631
15.812
16.985
18.151
19.313
17.275
18.549
19.812
21.064
22.307
19.575
21.026
22.362
23.685
24.996
21.920
23.337
24.736
26.119
27.488
22.618
24.054
25.472
26.873
28.259
24.725
26.217
27.688
29.141
30.578
26.757
28.299
29.819
31.319
32.801
31.264
32.909
34.528
36.123
37.697
16
17
18
19
20
18.418
19.511
20.601
21.689
22.775
19.369
20.489
21.605
22.718
23.828
20.465
21.615
22.760
23.900
25.038
23.542
24.769
25.989
27.204
28.412
26.296
27.587
28.869
30.144
31.410
28.845
30.191
31.526
32.852
34.170
29.633
30.995
32.346
33.687
35.020
32.000
33.409
34.805
36.191
37.566
34.267
35.719
37.156
38.582
39.997
39.252
40.790
42.312
43.820
45.315
21
22
23
24
25
23.858
24.939
26.018
27.096
28.172
24.935 "
26.039
27.141
28.241
29.339
26.171
27.301
28.429
29.553
30.675
29.615
30.813
32.007
33.196
34.382
32.671
33.924
35.172
36.415
37.652
35.479
36.781
38.076
39.364
40.647
36.343
37.659
38.968
40.270
41.566
38.932
40.289
41.638
42.980
44.314
41.401
42.796
44.181
45.559
46.928
46.797
48.268
49.728
51.179
52.620
26
27
28
29
30
29.246
30.319
31.391
32.461
33.530
30.435
31.528
32.621
33.711
34.800
31.795
32.912
34.027
35.139
36.250
35.563
36.741
37.916
39.087
40.256
38.885
40.113
41.337
42.557
43.773
41.923
43.194
44.461
45.722
46.979
42.856
44.140
45.419
46.693
47.962
45.642
46.963
48.278
49.588
50.892
48.290
49.645
50.993
52.336
53.672
54.052
55.476
56.893
58.302
59.703
* Adapted from Table IV of R. A. Fisher and F. Yates, Statistical Tables for Biological, Agricultural and Medical Research, Oliver & Boyd, Ltd., Edinburgh and London, 1953, by permission of the authors and publishers.
d.f. = degrees of freedom = ν. For 30 < ν < 100, linear interpolation where necessary will give four significant figures. For ν > 100, take χ*² = ½(t* + √(2ν − 1))².
TABLE C.3  THE t-DISTRIBUTION *

    P{t ≤ t*} ≡ P(t*) = ∫ (−∞ to t*)  p(t) dt

(Body of the table: values of t* for the probability P(t*) at the head of each column and the degrees of freedom d.f. at the left.)
d.f.
0.975
0.99
0.995
12.706
4.303
3.182
2.776
2.571
31.821
6.965
4.541
3.757
3.365
63.657
9.925
5.841
4.604
4.032
636.619
31.598
12.941
8.610
6.859
2.447
2.365
2.306
2.262
2.228
3.143
2.998
2.896
2.821
2.764"
3.707
3.499
3.355
3.250
"3.169
5.959
5.405
5.041
4.781
4.578
1.088 1.363
1.083 1.356
1.079 1.350
1.076 1.345
1.074 1.341
1.796
.1.782
1.771
1.761
1.753
2.201
2.179
2.160
2.145
2.131
2.718
2.681
2.650
2.624
2.602
3.106
3.055
3.012
2.977
2.947
4.437
4.318
4.221
4.140
4.073
1.337
1.333
1.330
1.328
1.325
1.746
1.740
1.734
1.729
1.725
2.120
2.110
2.101
2.093
2.086
2.291
2.583
2.898
2.567
2.552
2.878
2.539 2.861
2.528
2.845
4.015
3.965
3.922
3.883
3.850
0.686
0.686
1.323
1.321
1.319
1.318
1.316
1.721
1.717
1.714
1.711
1.708
2.080
2.074
2.069
2.064
2.060
2.518
2.508
2.500
2.492
2.485
2.831
2.819
2.807
3.819
3.792
3.767
2.797
3.745
2.787
3.725
0.684 0.856
0.684 0.855
0.683 0.855
0.683 0.854
0.683 0.854
1.058 1.315
1.057 1.314
1.056 1.313
1.055 1.311
1.055 1.310
1.706
1.703
1.701
1.699
1.697
2.056
2.052
2.048
2.045
2.042
2.479
2.473
2.467
2.462
2.457
2.779
2.771
2.763
2.756
2.750
3.707
3.690
3.674
3.659
3.646
0.681 0.851
0.679 0.848
0.677 0.845
0.674 0.842
1.050 1.303
1.046 1.296
1.041 1.289
1.036 1.282
1.684
1.671
1.658
1.645
2.021
2.000
1.980
1.960
2.423
2.390
2.358
2.326
2.704
2.660
2.617
2.576
3.551
3.460
3.373
3.291
0.55
0.60
0.65
0.70
0.75
1
2
3
4
5
0.158
0.142
0.137
0.134
0.132
0.325
0.289
0.277
0.271
0.267
0.510
0.445
0.424
0.414.
0.408
0.727
0.617
0.584
0.569
0.559
6
7
8
9
10
0.131
0.130
0.130
0.129
0.129
0.718 0.906
0.711 0.896
0.706 0.889
0.703 0.883
0.700 0.879
11
12
13
14
15
0.129
0.128
0.128
0.128
0.128
0.260 0.396
0.259 0.395
0.259 0.394
0.258 0.393
0.258 0.393
0.540
0.539
0.538
0.537
0.536
0.697
0.695
0.694
0.692
0.691
16
17
18
19
20
0.128 0.258
0.128 0.257
0.127 0.257
0.127 0.257
0.127 0.257
0.392
0.392
0.392
0.391
0.391
0.535
0.534
0.534
0.533
0.533
21
22
23
24
25
0.127
0.127
0.127
0.127
0.127
0.257
0.256
0.256
0.256
0.256
0.257
0.3,90
0.390
0.390
0.390
0.532
26
27
28
29
30
0.127
0.127
0.127
0.127
0.127
0.256 0.390
0.256 0.389
0.256 0.389
0.256 0.389
0.256 0.389
40
60
120
0.126
0.126
0.126
0.126
0.255
0.254
0.254
0.253
00
0.388
0.387
0.386
0.385
0.532
0.532 0.686
0.531 0.685
0.531 0.684
0.531
0.531
0.530
0.530
0.530
0.529
0.527
0.526
0.524
0.80
0.876
0.873
0.870
0.868
0.866
0.859
0.858
0.858
0.857
0.856
0.85
1.063
1.061
1.060
'1.059
1.058
0.90
0.95
0.9995
...
* Adapted from Table III of R. A. Fisher and F. Yates, Statistical Tables for Biological, Agricultural and Medical Research, Oliver &
Boyd, Ltd., Edinburgh and London, 1963, by permission of the authors and publishers.
TABLE C.4a (continued)
10
12
15
20
24
30
40
60
120
00
1
2
3
4
2.0419
1.3450
1.1833
1.1126
2.0674
1.3610
1.1972
1.1255
2.0931
1.3771
1.2111
1.1386
2.1190
1.3933
1.2252
1.1517
2.1321
1.4014
1.2322
1.1583
2.1452
1.4096
1.2393
1.1649
2.1584
1.4178
1.2464
1.1716
2. 1716
1.4261
1.2536
1.1782
2. 1848
1.4344
1.2608
1.1849
2.1981
1:4427
1.2680
1.1916
5
6
7
8
9
1.0730
1.0478
1.0304
1.0175
1.0077
' 1.0855
1.0600
1.0423
1.0293
1.0194
1.0980
1.0722
1.0543
1.0412
1.0311
1.1106
1.0845
1.0664
1.0531
1.0429
1.1170
1.0907
1.0724
1.0591
1.0489
1.1234
1.0969
1.0785
1.0651
1.0548
1.1297
1.1031
1.0846
1.0711
1.0608
1.1361
1.1093
1.0908
1.0771
1.0667
1.1426
1.1156
1.0969
1.0832
1.0727
1.1490
1.1219
1.1031
1.0893
1.0788
10
11
12
13
14
1.0000
0.99373
0.98856
0.98421
0.98051
1.0116
1.0052
1.0000
0.99560
0.99186
1.0232
1.0168
1.0115
,1.0071
1.0033
1.0349
1.0284
1.0231
1.0186
1.0147
1.0408
1.0343
1.0289
1.0243
1.0205
1.0467
1.0401
1.0347
1.0301
1.0263
1.0526
1.0460
1.0405
1.0360
1.0321
1.0585
1.0519
1.0464
1.0418
1.0379
1.0645
1.0578
1.0523
1.0476
1.0437
1.0705
1.0637
1.0582
1.0535
1.0495
15
16
17
18
19
0.97732
0.97454
0.97209
0.96993
0.96800
0.98863
0.98582
0.98334
0.98116
0.97920
1.0000
0.99716
0.99466
0.99245
0.99047
1.0114
1.0086
1.0060
1.0038
1.0018
1.0172
1.014,3
1.0117
1.0095
1.0075
1.0229
1.0200
1.0174
1.0152
1.0132
1.0287
1.0258
1.0232
1.0209
1.0189
1.0345
1.0403
1.0461
1.0373
1.0315
1.0431
1.0289
1.0347
1.0405
1.0267 1.0324 1.0382
1.0246
1.0304
1.0361
20
21
22
23
24
0.96626
0.96470
0.96328
0.96199
0.96081
0.97746
0.97587
0.97444
0.97313
0.97194
0.98870
0.98710
0.98565
0.98433
0.98312
1.0000
0.99838
0.99692
0.99558
0.99436
1.0057
1.0040
1.0026
1.0012
1.0000
1.0114
1.0097
1.0082
1.0069
1.0057
1.0171
1.0154
1.0 139
1.0126
1.0113
1.0228
1.0211
1.0196
1.0183
1.0170
1.0285
1.0268
1.0253
1.0240
1.0227
1.0343
1.0326
1.0311
1.0297
1.0284
25
26
27
28
29
0.95972
0.95872
0.95779
0.95694
0.95614
0.97084
0.96983
0.96889
0.96802
0.96722
0.98201
0.98099
0.98004
0.97917
0.97835
0.99324
0.99220
0.99125
0.99036
0.98954
0.99887
0.99783
0.99687
0.99598
0.99515
1.0045
1.0035
1.0025
1.0016
1.0008
1.0102
1.0091
1.0082
1.0073
1.0064
1.0159
1.0148
1.0138
1.0129
1.0121
1.02 15
1.0205
1.0195
1.0186
1.0177
1.0273
1.0262
1.0252
1.0243
1.0234
30
40
60
120
0.95540
0.95003
0.94471
0.93943
0.93418
0.96647
0.96104
0.95566
0.95032
0.94503
0.97759
0.97211
0.96667
0.96128
0.95593
0.98877
0.98323
0.97773
0.97228
0.96687
0.99438
0.98880
0.98328
0.97780
0.97236
1.0000
0.99440
0.98884
0.98333
0.97787
1.0056
1.0000
0.99441
0.98887
0.98339
1.0113
1.0056
1.0000
0.99443
0.98891
1.0170
1.0113
1.0056
1.0000
0.99445
1.0226
1.0169
1.0112
1.0056
1.0000
00
* Reproduced by permission of E. S. Pearson from "Tables of Percentage Points of the Inverted Beta (F) Distribution," Biometrika 33, 73-88, 1943, by Maxine Merrington and Catherine M. Thompson.
Where necessary, interpolation should be carried out using the reciprocals of the degrees of freedom. The function 120/ν is convenient for this purpose. ν1 = numerator, ν2 = denominator.
TABLE C.4b (continued)
10
12
15
20
24
30
40
60
120
∞
1
2
3
4
9.3202
3.3770
2.4447
2.0820
9.4064
3.3934
2.4500
2.0826
9.4934
3.4098
2.4552
2.0829
9.5813
3.4263
2.4602
2.0828
9.6255
3.4345
2.4626
2.0827
9.6698
3.4428
2.4650
2.0825
9.7144
3.4511
2.4674
2.0821
9.7591
3.4594
2.4697
2.0817
9.8041
3.4677
2.4720
2.0812
9.8492
3.4761
2.4742
2.0806
5
6
7
8
9
1.8899
1.7708
1.6898
1.6310
1.5863
1.8877
1.7668
1.6843
1.6244
1.5788
1.8851
1.7621
1.6781
1.6170
1.5705
1.8820
1.7569
1.6712
1.6088
1.5611
1.8802
1.7540
1.6675
1.6043
1.5560
1.8784
1.7510
1.6635
1.5996
1.5506
1.8763
1.7477
1.6593
1.5945
1.5450
1.8742
1.7443
1.6548
1.5892
1.5389
1.8719
1.7407
1.6502
1.5836
1.5325
1.8694
1.7368
1.6452
1.5777
1.5257
10
11
12
13
14
1.5513
1.5230
1.4996
1.4801
1.4634
1.5430
1.5140
1.4902
1.4701
1.4530
1.5338
1.5041
1.4796
1.4590
1.4414
1.5235
1.4930
1.4678
1.4465
1.4284
1.5179
1.4869
1.4613
1.4397
1.4212
1.5119
1.4805
1.4544
1.4324
1.4136
1.5056
1.4737
1.4471
1.4247
1.4055
1.4990
1.4664
1.4393
1.4164
1.3967
1.4919
1.4587
1.4310
1.4075
1.3874
1.4843
1.4504
1.4221
1.3980
1.3772
1.4263
1.4014
1.3911
1.3819
1.4127
1.3990
1.3869
1.3762
1.3666
1.4052
1.3913
1.3790
1.3680
1.3582
_1.3973
1.3830
1.3704
1.3592
1.3492
1.3888
1.3742
1.3613
1.3497
1.3394
. 1.3796
1.3646
1.3514
1.3395
1.3289
1.3698
1.3543
1.3406
1.3284
1.3174
1.3591
1.3432
1.3290
1.3162
1.3048
1.2943
1.2848
1.2761
1.2681
1.2607
15
16
17
18
19
.. L4"491 -- - 1.4383
1.4366
1.4255
1.4256
1.4142
1.4042 1.4159
1.4073
1.3953
1.4130
20
21
22
23
24
1.3995
1.3925
1.3861
1.3803
1.3750
1.3873
1.3801
1.3735
1.3675
1.3621
1.3736
1.3661
1.3593
1.3531
1.3474
1.3580
1.3502
1.3431
1.3366
1.3307
1.3494
1.3414
1.3341
1.3275
1.3214
1.3401
1.3319
1.3245
1.3176
1.3113
1.3301
1.3217
1.3140
1.3069
1.3004
1.3193
1.3105
1.3025
1.2952
1.2885
1.3074
1.2983
1.2900
1.2824
1.2754
25
26
27
28
29
1.3701
1.3656
1.3615
1.3576
1.3541
1.3570
1.3524
1.3481
1.3441
1.3404
1.3432
1.3374
1.3329
1.3288
1.3249
1.3252
1.3202
1.3155
1.3112
1.3071
1.3158
1.3106
1.3058
1.3013
1.2971
1.3056
1.3002
1.2953
1.2906
1.2863
1.2945
1.2889
1.2838
1.2790
1.2745
1.2823
1.2765
1.2712
1.2662
1.2615
1.2698
1.2538
1.2628
1.2474
1.2572
1.2414
1.2519 1.2358
1.2470
1.2306
30
40
60
120
1.3507
1.3266
1.3026
1.2787
1.2549
1.3369
1.3119
1.2870
1.2621
1.2371
1.3033
1.2758
1.2481
1.2200
1.1914
1.2933
1.2649
1.2361
1.2068
1.1767
1.2823
1.2529
1.2229
1.1921
1.1600
1.2703
1.2397
1.2081
1.1752
1.1404
1.2571
1.2249
1.1912
1.1555
1.1164
1.2424
1.2080
1.1715
1.1314
1.0838
∞
1.3213
1.2952
1.2691
1.2428
1.2163
1.2256
1.1883
1.1474
1.0987
1.0000
Reproduced by permission of E. S. Pearson from "Tables of Percentage Points of the Inverted Beta (F) Distribution," Biometrika 33, 73-88, 1943, by Maxine Merrington and Catherine M. Thompson.
Where necessary, interpolation should be carried out using the reciprocals of the degrees of freedom. The function 120/ν is convenient for this purpose. ν1 = numerator, ν2 = denominator.
432
APPENDIX C
TABLES
433
TABLE
C.4c (continued)
12
15
20
24
30
40
60
00
60.195
9.3916
5.2304
3.9199
60.705
9.4081
5.2156
3.8955
61.220
9.4247
5.2003
3.8689
61.740
9.4413
5.1845
3.8443
62.002
9.4496
5.1764
3.8310
62.265
9.4539
5.1681
3.8174
62.529
9.4663
5.1597
3.8036
62.794
9.4746
5.1512
3.7896
63.061
9.4829
5.1425
3.7753
63.328
9.4913
5.1337
3.7607
5
6
7
8
9
3.2974
2.9369
2.7025
2.5380
2.4163
3.2682
2.9047
2.6681
2.5020
2.3789
3.2380
2.8712
2.6322
2.4642
2.3396
3.2067
2.8363
2.5947
2.4246
2.2983
3.1905
2.8183
2.5753
2.4041
2.2768
3.1741
2.8000
2.5555
2.3830
2.2547
3.1573
2.7812
2.5351
2.3614
2.2320
3.1402
2.7620
2.5142
2.3391
2.2085
3.1228
2.7423
2.4928
2.3162
2.1843
3.1050
2.7222
2.4708
2.2926
2.1592
10
11
12
13
14
2.3226
2.2482
2.1878
2.1376
2.0954
2.2841
2.2087
2.1474
2.0966
2.0537
2.2435
2.1671
2.1049
2.0532
2.0095
2.2007
2.1230
2.0597
2.0070
1.9625
2.1784
2.1000
2.0360
1.9827
1.9377
2.1554
2.0762
2.0115
1.9576
1.9119
2.1317
2.0516
1.9861
1.9315
1.8852
2.1072
2.0261
1.9597
1.9043
1.8572
2.0818
1.9997
1.9323
1.8759
1.8280
2.0554
1.9721
1.9036
1.8462
1.7973
15
16
17
18
19
2.0593
2.0281
2.0009
1.9770
1.9557
2.0171
1.9854
1.9577
1.9333
1.9117
1.9722
1.9399
1.9117
1.8868
1.8647
1.9243
1.8913
1.8624
1.8368
1.8142
1.8990
1.8656
1.8362
1.8103
1.7873
1.8728
1.8388
1.8090
1.7827
1.7592
1.8454
1.8108
1.7805
1.7537
1.7298
1.8168
1.7816
1.7506
1.7232
1.6988
1.7867
1.7507
1.7191
1.6910
1.6659
1.7551
1.7182
1.6856
1.6567
1.6308
20
21
22
23
24
1.9367
1.9197
1.9043
1.8903
1.8775
1.8924
1.8750
1.8593
1.8450
1.8319
1.8449
1.8272
1.8111
1.7964
1.7831
1.7938
1.7756
1.7590
1.7439
1.7302
1.7667
1.7481
1.7312
1.7159
1.7019
1.7382
1.7193
1.7021
1.6864
1.6721
1.7083
1.6890
1.6714
1.6554
1.6407
1.6768
1.6569
1.6389
1.6224
1.6073
1.6433
1.6228
1.6042
1.5871
1.5715
1.6074
1.5862
1.5668
1.5490
1.5327
25
26
27
28
29
1.8658
1.8550
1.8451
1.8359
1.8274
1.8200
1.8090
1.7989
1.7895
1.7808
1.7708
1.7596
1.7492
1.7395
1.7306
1.7175
1.7059
1.6951
1.6852
1.6759
1.6890
1.6771
1.6662
1.6560
1.6465
1.6589
1.6468
1.6356
1.6252
1.6155
1.6272
1.6147
1.6032
1.5925
1.5825
1.5934
1.5805
1.5686
1.5575
1.5472
1.5570
1.5437
1.5313
1.5198
1.5090
1.5176
1.5036
1.4906
1.4784
1.4670
30
40
60
120
1.8195
1.7627
1.7070
1.6524
1.5987
1.7727
1.7146
1.6574
1.6012
1.5458
1.7223
1.6624
1.6034
1.5450
1.4871
1.6673
1.6052
1.5435
1.4821
1.4206
1.6377
1.5741
1.5107
1.4472
1.3832
1.6065
1.5411
1.4755
1.4094
1.3419
1.5732
1.5056
1.4373
1.3676
1.2951
1.5376
1.4672
1.3952
1.3203
1.2400
1.4989
1.4248
1.3476
1.2646
1.1686
1.4564
1.3769
1.2915
1.1926
1.0000
* Reproduced by permission of E. S. Pearson from "Tables of Percentage Points of the Inverted Beta (F) Distribution," Biometrika 33, 73-88, 1943, by Maxine Merrington and Catherine M. Thompson.
Where necessary, interpolation should be carried out using the reciprocals of the degrees of freedom. The function 120/v is convenient for this purpose. v1 = numerator, v2 = denominator.
120
1
2
3
4
00
10
TABLE
C.4d (continued)
V\l
10
12
15
20
24
30
40
60
120
00
1
2
3
4
241.88
19.396
8.7855
5.9644
243.91
19.413
8.7446
5.9117
245.95
19.429
8.7029
5.8578
248.01
19.446
8.6602
5.8025
249.05
19.454
8.6385
5.7744
250.09
19.462
8.6166
5.7459
251.14
19.471
8.5944
5.7170
252.20
19.479
8.5720
5.6878
253.25
19.487
8.5494
5.6581
254.32
19.496
8.5265
5.6281
5
7
8
9
4.7351
4.0600
3.6365
3.3472
3.1373
4.6777
3.9999
3.5747
3.2840
3.0729
4.6188
3.9381
3.5108
3.2184
3.0061
4.5581
3.8742
3.4445
3.1503
2.9365
4.5272
3.8415
3.4105
.3.1152
2.9005
4.4957
3.8082
3.3758
3.0794
2.8637
4.4638
3.7743
3.3404
3.0428
2.8259
4.4314
3.7398
3.3043
3.0053
2.7872
4.3984
3.7047
3.2674
2.9669
2.7475
4.3650
3.6688
3.2298
2.9276
2.7067
10
11
12
13
14
2.9782
2.8536
2.7534
2.6710
2.6021
2.9130
2.7876
2.6866
2.6037
- 2.5342
2.8450
2.7186
2.6169
2.5331
2.4630
2.7740
2.6464
2.5436
2.4589
2.3879
2.7372
2.6090
2.5055
2.4202
2.3487
2.6996
2.5705
2.4663
2.3803
2.3082
2.6609
2.5309
2.4259
2.3392
2.2664
2.6211
2.4901
2.3842
2.2966
2.2230
2.5801
2.4480
2.3410
2.2524
2.1778
2.5379
2.4045
2.2962
2.2064
2.1307
74035
2.3275
2.2756
2.2304
2.1906
2.1555
2.2878
2.2354
2.1898
2.1497
2.1141
2.2468
2.1938
'2.1477
2.1071
2.0712
2.2043
2.1507
2.1040
2.0629
2.0264
2.1601
2.1058
2.0584
2.0166
1.9796
2.1141
2.0589
2.0107
1.9681
1.9302
2.0658
2.0096
1.9604
1.9168
1.8780
2.0825
2.0540
2.0283
2.0050
1.9838
2.0391
2.0102
1.9842
1.9605
1.9390
1.9938
1.9645
1.9380
1.9139
1.8920
1.9464
1.9165
1.8895
1.8649
1.8424
1.8963
1.8380
1.8128
1.7897
1.8432
1.8117
1.7831
1.7570
1.7331
15
16
17
18
19
2.5437 2.4753
2.4935
2.4247
2.4499
2.3807
2.3421
2.4117
2.3779
2.3080
2.3522
2.3071
2.26~.6
2.2341
20
21
22
23
24
2.3479
2.3210
2.2967
2.2747
2.2547
2.2776
2.2504
2.2258
2.2036
2.1834
2.2033
2.1508
2.1282
2.1077
2.1242
2.0960
2.0707
2.0476
2.0267
25
26
27
28
29
2.2365
2.2197
2.2043
2.1900
2.1768
2.1649
2.1479
2.1323
2.1179
2.1045
2.0889
2.0716
2.0558
2.0411
2.0275
2.0075
1.9898
1.9736
1.9586
1.9446
1.9643
1.9464
1.9299
1.9147
1.9005
1.9192
1.9010
1.8842
1.8687
1.8543
1.8718
1.8533
1.8361
1.8203
1.8055
1.8217
1.8027
1.7851
1.7689
1.7537
1.7684
1.7488
1.7307
1.7138
1.6981
1.7110
1.6906
1.6717
1.6541
1.6377
30
40
60
120
2.1646
2.0772
1.9926
1.9105
1.8307
2.0921
2.0035
1.9174
1.8337
1.7522
2.0148
1.9245
1.8364
1.7505
1.6664
1.9317
1.8389
1.7480
1.6587
1.5705
1.8874
1.7929
1.7001
1.6084
1.5173
1.8409
1.7444
1.6491
1.5543
1.4591
1.7918
1.6928
1.5943
1.4952
1.3940
1.7396
1.6373
1.5343
1.4290
1.3180
1.6835
1.5766
1.4673
1.3519
1.2214
1.6223
1.5089
1.3893
1.2539
1.0000
00
2~\l757
1.8657
* Reproduced by permission of E. S. Pearson from "Tables of Percentage Points of the Inverted Beta (F) Distribution," Biometrika 33, 73-88, 1943, by Maxine Merrington and Catherine M. Thompson.
Where necessary, interpolation should be carried out using the reciprocals of the degrees of freedom. The function 120/v is convenient for this purpose. v1 = numerator, v2 = denominator.
10
12
15
20
24
30
40
60
120
00
1
2
3
4
968.63
39.398
14.419
8.8439
976.71
39.415
14.337
8.7512
984.87
39.431
14.253
8.6565
993.10
39.448
14.167
8.5599
5
6
7
8
9
6.6192
5.4613
4.7611
4.2951
3.9639
6.5246
5.3662
4.6658
4.1997
3.8682
6.4277
5.2687
4.5678
4.1012
3.7694
6.3285
5.1684
4.4667
3.9995
3.6669
6.2780
5.1172
4.4150
3.9472
3.6142
6.2269
5.0652
4.3624
3.8940
3.5604
6.1751
5.0125
4.3089
3.8398
3.5055
6.1225
4.9589
4.2544
3.7844
3.4493
6.0693
4.9045
4.1989
3.7279
3.3918
6.0153
4.8491
4.1423
3.6702
3.3329
10
11
12
13
14
3.7168
3.5257
3.3736
3.2497
3.1469
3.6209
3.4296
3.2773
3.1532
3.0501
3.5217
3.3299
3.1772
3.0527
2.9493
3.4186
3.2261
3.0728
2.9477
2.8437
3.3654
3.1725
3.0187
2.8932
2.7888
3.3110
3.1176
2.9633
2.8373
2.7324
3.2554
3.0613
2.9063
2.7797
2.6742
3.1984
3.0035
2.8478
2.7204
2.6142
3.1399
2.9441
2.7874
2.6590
2.5519
3.0798
2.8828
2.7249
2.5955
2.4872
15
16
17
18
19
3.0602
2.9862
2.9222
2.8664
2.8173
2.9633
2.8890
2.8249
2.7689
2.7196
2.8621
2.7875
2.7230
2.6667
2.6171
2.7559
2.6808
2.6158
2.5590
2.5089
2.7006
2.6252
2.5598
2.5027
2.4523
2.6437
2.5678
2.5021
2.4445
2.3937
2.5850
2.5085
2.4422
2.3842
2.3329
2.5242
2.4471
2.3801
2.3214
2.2695
2.3953
2.4611
2.3831
2.3163
2.2474
2.3153
2.2558 2.1869
2.1333
2.2032
20
21
22
23
24
2.7737
2.7348
2.6998
2.6682
2.6396
2.6758
2.6368
2.6017
2.5699
2.5412
2.5731
2.5338
2.4984
2.4665
2.4374
2.4645
2.4247
2.3890
2.3567
2.3273
2.4076
2.3675
2.3315
2.2989
2.2693
2.3486
2.3082
2.2718
2.2389
2.2090
2.2873
2.2465
2.2097
2.1763
2.1460
2.2234
2.1819
2.1446
2.1107
2.0799
2.1562
2.1141
2.0760
2.0415
2.0099
2.0853
2.0422
2.0032
1.9677
1.9353
25
26
27
28
29
2.6135
2.5895
2.5676
2.5473
2.5286
2.5149
2.4909
2.4688
2.4484
2.4295
2.4110
2.3867
2.3644
2.3438
2.3248
2.3005
2.2759
2.2533
2.3224
2.2131
2.2422
2.2174
2.1946
2.1735
2.1540
2.1816
2.1565
2.1334
2.1121
2.0923
2.1183
2.0928
2.0693
2.0477
2.0276
2.0517
2.0257
2.0018
1.9796
1.9591
1.9811
1.9545
1.9299
1.9072
1.8861
1.9055
1.8781
1.8527
1.8291
1.8072
30
40
60
120
2.5112
2.3882
2.2702
2.1570
2.0483
2.4120
2.2882
2.1692
2.0548
1.9447
2.3072
2.1819
2.0613
1.9450
1.8326
2.1952
2.0677
1.9445
1.8249
1.7085
2.1359
2.0069
1.8817
1.7597
1.6402
2.0739
1.9429
1.8152
1.6899
1.5660
2.0089
1.8752
1.7440
1.6141
1.4835
1.9400
1.8028
1.6668
1.5299
1.3883
1.8664
1.7242
1.5810
1.4327
1.2684
1.7867
1.6371
1.4822
1.3104
1.0000
00
1001.4
1005.6
1009.8
1014.0
997.25
1018.3
39.456
39.465
39.473
39.481
39.490
39.498
14.124
14.037
13.992
14.081
13.947
13.902
8.4111
8.5109
8.4613
8.3604
8.3092
8.2573
Reproduced by permission of E. S. Pearson from "Tables of Percentage Points of the Inverted Beta (F) Distribution," Biometrika 33, 73-88, 1943, by Maxine Merrington and Catherine M. Thompson.
Where necessary, interpolation should be carried out using the reciprocals of the degrees of freedom. The function 120/v is convenient for this purpose. v1 = numerator, v2 = denominator.
TABLE
C.4f
THE F-DISTRIBUTION: P(F0) = 0.99
P{F <= F0} = integral from 0 to F0 of p(F) dF = P(F0)
5403.3
99.166
29.457
16.694
4
5624.6
99.249
28.710
15.977
5
5763.7
99.299
28.237
15.522
5928.3
99.356
27.672
14.976
5
6
7
8
9
16.268
13.745
12.246
11.259
10.561
13.274
10.925
9.5466
8.6491
8.0215
12.060
9.7795
8.4513
7.5910
6.9919
11.392
9.1483
7.8467
7.0060
6.4221
10.967
8.7459
7.4604
6.6318
6.0569
10.672
8.4661
7.1914
6.4707
5.8018
10.456
8.2600
6.9928
6.1776
5.6129
10.289
8.1016
6.8401
6.0289
5.4671
10.158
7.9761
6.7188
5.9106
5.3511
10
11
12
13
14
10.044
9.6460
9.3302
9.0738
8.8616
7.5594
7.2057
6.9266
6.7010
6.5149
6.5523
6.2167
5.9526
5.7394
5.5639
5.9943
5.6683
5.4119
5.2053
5.0354
5.6363
5.3160
5.0643
4.8616
4.6950
5.3858
5.0692
4.8206
4.6204
4.4558
5.2001
4.8861
4.6395
4.4410
4.3779
5.0567
4.7445
4.4994
4.3021
4.1399
4.9424
4.6315
4.3875
4.1911
4.0297
4.8932
4.7726
4.6690
4.5790
4.5003
4.5556
4.4374
4.3359
4.2479
4.1708
4.3183
4.2016
4.1015
4.0146
3.9386
4.1415
4.0259
3.9267
3.8406
3.7653
4.0045
3.8896
3.7910
3.7054
3.6305
3.8948
3.7804
3.6822
3.5971
3.5225
5.4170
5.2922
5.1850
5.0919
5.0103
5981.6
99.374
27.489
14.799
4052.2
98.503
34.116
21.198
6.3589
6.2262
6.1121
6.0129
, 5.9259
5859.0
99.332
27.911
15.207
1
2
3
4
15
8.6831
16
8.5310
17
8.3997
8.2854
18
19
8.1850
4999.5
99.000
30.817
18.000
6022.5
99.388
27.345
14.659
20
21
22
23
24
8.0960
8.0166
7.9454
7.8811
7.8229
5.8489
4.7804
5.7190
5.6637
5.6136
4.9382
4.8740
4.8166
4.7649
4.7181
4.4307
4.3688
4.3134
4.2635
4.2184
4.1027
4.0421
3.9880
3.9392
3.8951
3.8714
3.8117
3.7583
3.7102
3.6667
3.6987
3.6396
3.5867
3.5390
3.4959
3.5644
3.5056
3.4530
3.4057
3.3629
3.4567
3.3981
3.3458
3.2986
3.2560
25
26
27
28
29
7.7698
7.7213
7.6767
7.6356
7.5976
5.5680
5.5263
5.4881
5.4529
5.4205
4.6755
4.6366
4.6009
4.5681
4.5378
4.1774
4.1400
4.1056
4.0740
4.0449
3.8550
3.8183
3.7848
3.7539
3.7254
3.6272
3.5911
3.5580
3.5276
3.4995
3.4568
3.4210
3.3882
3.3581
3.3302
3.3239
3.2884
3.2558
3.2259
3.1982
3.2172
3.1818
3.1494
3.1195
3.0920
30
40
60
120
7.5625
7.3141
7.0771
6.8510
6.6349
5.3904
5.1785
4.9774
4.7865
4.6052
4.5097
4.3126
4.1259
3.9493
3.7816
4.0179
3.8283
3.6491
3.4796
3.3192
3.6990
3.5138
3.3389
3.1735
3.0173
3.4735
3.2910
3.1187
2.9559
2.8020
3.3045
3.1238
2.9530
2.7918
2.6393
3.1726
2.9930
2.8233
2.6629
2.5113
3.0665
2.8876
2.7185
2.5586
2.4073
00
(continued)
TABLE
C.4f (continued)
10
1 6055.8
2
99.399
3
27.229
14.546
4
12
6106.3
99.416
27.052
14.374
15
6157.3
99.432
26.872
14.198
20
6208.7
99.449
26.690
14.020
24
6234.6
99.458
26.598
13.929
30
6260.7
99.466
26.505
13.838
40
6268.8
99.474
26.411
13.745
60
6313.0
99.483
26.316
13.652
120
6339.4
99.491
26.221
13.558
00
6366.0
99.501
26.125
13.463
5
6
7
8
9
10.051
7.8741
6.6201
5.8143
5.2565
9.8883
7.7183
6.6591
5.6668
5.1114
9.7222
7.5590
6.3143
5.5151
4.9621
9.5527
7.3958
6.1554
5.3591
4.8080
9.4665
7.3127
6.0743
5.2793
4.7290
9.3793
7.2285
5.9921
5.1981
4.6486
9.2912
7.1432
5.9084
5.1156
4.5667
9.2020
7.0568
5.8236
5.0316
4.4831
9.1118
6.9690
5.7372
4.9460
4.3978
9.0204
6.8801
5.6495
4.8588
4.3105
10
11
12
13
14
4.8492
4.5393
4.2961
4.1003
3.9394
4.7059
4.3974
4.1553
3.9603
3.8001
4.5582
4.2509
4.0096
3.8154
3.6557
4.4054
4.0990
3.8584
3.6646
3.5052
4.3269
4.0209
3.7805
3.5868
3.4274
4.2469
3.9411
3.7008
3.3476
4.1653
3.8596
3.6192
3.4253
3.2656
4.0819
3.7761
3.5355
3.3413
3.1813
3.9965
3.6904
3.4494
3.2548
3.0942
3.9090
3.6025
3.3608
3.1654
3.0040
15
16
17
18
19
3.8049
3.6909
3.5931
3.5082
3.4338
3.6662 3.5222
3.5527 3.4089
3.4552
3.3117
3.3706
3.2273
3.2965
3.1533
3.3719
3.2588
3.1615
3.0771
3.0031
3.2940
3.1808
3.0835
2.9990
2.9249
3.2141
3.1007
3.0032
2.9185
2.8442
3.1319
3.0182
2.9205
2.8354
2.7608
3.0471
2.9330
2.8348
2.7493
2.6742
2.9595
2.8447
2.7459
2.6597
2.5839
2.8684
2.7528
2.6530
20
21
22
23
24
3.3682
3.3098
3.2576
3.2106
3.1681
3.2311
3.1729
3.1209
3.0740
3.0316
3.0880
3.0299
2.9780
2.9311
2.8887
2.9377
2.8796
2.8274
2.7805
2.7380
2.8594
2.8011
2.7488
2.7017
2.6591
2.7785
2.7200
2.6675
2.6202
2.5773
2.6947
2.6359
2.5831
2.5355
2.4923
2.6077
2.5484
2.4951
2.4471
2.4035
2.5168
2.4568
2.4029
2.3542
2.3099
2.4212
2.3603
2.3055
2.2559
2.2107
25
26
27
28
29
3.1294
3.0941
3.0618
3.0320
3.0045
2.9931
2.9579
2.9256
2.8959
2.8685
2.8502
2.8150
2.7827
2.7530
2.7256
2.6993
2.6640
2.6316
2.6017
2.5742
2.6203
2.5848
2.5522
2.5223
2.4946
2.5383
2.5026
2.4699
2.4397
2.4118
2.4530
2.4170
2.3840
2.3535
2.3253
2.3637
2.3273
2.2938
2.2629
2.2344
2.2695
2.2325
2.1984
2.1670
2.1378
2.1694
2.1315
2.0965
2.0642
2.0342
30
40
60
120
2.9791
2.8005
2.6318
2.4721
2.3209
2.8431
2.6648
2.4961
2.3363
2.1848
2.7002
2.5216
2.3523
2.1915
2.0385
2.5487
2.3689
2.1978
2.0346
1.8783
2.4689
2.2880
2.1154
1.9500
1.7908
2.3860
2.2034
2.0285
1.8600
1.6964
2.2992
2.1142
1.9360
1.7628
1.5923
2.2079
2.0194
1.8363
1.6557
1.4730
2.1107
1.9172
1.7263
1.5330
1.3246
2.0062
1.8047
1.6006
1.3805
1.0000
00
3.5070
2.5660
2.4893
* Reproduced by permission of E. S. Pearson from "Tables of Percentage Points of the Inverted Beta (F) Distribution," Biometrika 33, 73-88, 1943, by Maxine Merrington and Catherine M. Thompson.
Where necessary, interpolation should be carried out using the reciprocals of the degrees of freedom. The function 120/v is convenient for this purpose. v1 = numerator, v2 = denominator.
TABLE
CRITICAL VALUES OF r FOR THE SIGN TEST
(a = confidence level)
0.01
0.05
0.10
0.01
0.25
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
0.005
0.025
0.05
0
0
0
0
1
1
1
2
2
2
3
3
3
4
4
4
5
5
6
6
6
7
7
7
8
8
9
9
9
10
10
11
11
11
12
12
13
13
0
0
0
1
I
1
2
2
2
3
3
4
4
4
5
5
5
6
6
7
0.005
0.025
0.05
0.125
13
0
0
0
46
47
48
49
50
14
14
15
15
15
16
16
17
17
16
17
17
18
18
18
19
19
19
20
15
16
16
17
17
18
18
18
19
19
19
19
20
20
20
20
21
21
22
22
20
20
21
21
21
21
21
22
22
23
23
23
24
24
25
22
22
23
23
24
24
25
25
25
26
26
27
27
28
28
23
24
24
24
25
25
26
26
27
27
25
25
26
26
27
27
28
28
29
29
28
28
28
29
29
28
29
29
30
30
30
30
31
31
32
30
30
31
31
32
32
32
33
33
34
81
82
83
84
85
17
18
18
19
19
20
20
20
21
21
22
22
22
23
23
24
24
25
25
25
26
26
27
27
28
28
28
29
29
30
32
33
33
33
34
86
87
88
89
90†
30
31
31
31
32
31
31
32
32
32
33
33
34
34
35
2
2
3
3
3
3
3
3
4
4
5
5
6
6
6
65
72
10
73
74
75
76
11
10
11
12
12
13
11
12
13
11
12
12
12
12
13
13
14
14
14
15
15
16
16
17
17
18
13
64
10
10
14
14
15
15
61
62
63
66
67
68
69
70
71
10
11
13
14
14
15
15
16
16
51
52
53
54
55
56
57
58
59
60
7
7
8
8
9
9
7
7
8
8
9
9
9
10
13
0.25
1
2
2
10
0.10
0.125
0
0
1
1
1
4
4
5
5
5
6
6
7
7
7
8
8
9
9
0.05
10
11
77
78
79
80
34
35
35
36
36
34
35
35
36
36
37
37
38
38
39
Adapted with permission from W. J. Dixon and F. J. Massey, Jr., Introduction to Statistical Analysis (2nd ed.), McGraw-Hill, New York, 1957.
† For values of n larger than 90, approximate values of r may be found by taking the nearest integer less than (n - 1)/2 - k*sqrt(n + 1), where k is 1.2879, 0.9800, 0.8224, 0.5752 for the 1, 5, 10, 25 percent values, respectively.
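The large-n approximation in the footnote is easy to evaluate directly. A minimal sketch (the function name and the lookup dictionary are my own framing; the constants k and the formula come from the footnote):

```python
import math

# k values from the footnote, keyed by two-sided percent level
K = {1: 1.2879, 5: 0.9800, 10: 0.8224, 25: 0.5752}

def approx_sign_test_r(n, percent):
    """Nearest integer less than (n - 1)/2 - k*sqrt(n + 1), for n > 90."""
    x = (n - 1) / 2 - K[percent] * math.sqrt(n + 1)
    # "nearest integer less than x": strictly below x, even when x is integral
    return math.ceil(x) - 1
```

For example, at n = 100 and the 5 percent level, (99)/2 - 0.98*sqrt(101) is about 39.65, so the approximate critical value is 39.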
TABLE C.6
m= 4
m=3
~
0
1
2
0.250
0.500
0.750
0.100
0.200
0.400
0.050
0.100
0.200
0.600
0.350
3
4
5
I~
0
1
2
0.200
0.400
0.600
0.067
0.133
0.267
0.028
0.057
0.114
0.014
0.029
0.057
0.400
0.600
0.200
0.314
0.100
0.171
0.429
0.571
0.243
0.343
3
4
0.500
0.650
5
6
7
8
0.443
0.557
m= 5
~
0
1
2
3
4
5
6
7
8
9
10
11
12
13
1
0.167
0.333
0.500
0.667
2
0.047
0.095
0.190
0.286
m=6
3
0.018
0.036
0.071
0.125
0.008
0.016
0.032
0.056
0.004
0.008
0.016
0.028
0.196 0.095
0.286 0.143
0.048
0.075
0.393
0.500
0.607
0.206
0.278
0.365
0.111
0.155
0.210
0.452
0.548
0.274
0.345
- - -
- - 0.429
0.571
- -
- - - -
- -
-
0.421
0.500
0.579
I~
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
0.143
0.286
0.428
0.571
0.036
0.071
0.143
0.214
0.012
0.024
0.048
0.083
0.005
0.010
0.019
0.033
0.002
0.004
0.009
0.015
0.001
0.002
0.004
0.008
0.321
0.429
0.571
0.131
0.190
0.274
0.057
0.086
0.129
0.026
0.041
0.063
0.021
0.032
0.357
0.452
0.548
0.176
0.238
0.305
0.089
0.123
0.165
0.047
0.066
0.090
0.381
0.457
0.545
0.214
0.268
0.331
0.120
0.155
0.197
0.396
0.465
0.535
0.242
0.294
0.350
- - - -
- - - - -
0.013
- - - - -
- - - - -
- - - - -
- - -
0.409
0.469
- 0.531
(continued)
TABLE
C.6 (continued)
m = 7
<;
0
1
2
3
4
0.125
0.250
0.375
0.500
0.625
0.028
0.056
0.111
0.167
0.250
0.008
0.017
0.033
0.058
0.092
0.003
0.006
0.012
0.021
0.036
0.001
0.003
0.005
0.009
0.015
0.001
0.001
0.002
0.004
0.007
0.000
0.001
0.001
0.002
0.003
0.333
0.444
0.556
0.133
0.192
0.258
0.055
0.082
0.115
0.024
0.037
0.053
0.011
0.017
0.026
0.006
0.009
0.013
0.333
0.417
0.500
0.583
0.158
0.206
0.264
0.324
0.074
0.101
0.134
0.172
0.037
0.051
0.069
0.090
0.019
0.027
0.036
0.049
0.394
0.464
0.538
0.216
0.265
0.319
0.117
0.147
0.183
0.064
0.082
0.104
0.378
0.438
0.500
0.562
0.223
0.267
0.314
0.365
0.130
0.159
0.191
0.228
0.418
0.473
0.527
0.267
0.310
0.355
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
0.402
0.451
0.500
0.549
(continued)
TABLE
C.6 (continued)
m=8
~
0
1
2
3
4
5
6
7
8
normal
0.111
0.222
0.333
0.444
0.556
0.022
0.044
0.089
0.133
0.200
0.006
0.012
0.024
0.042
0.067
0.002
0.004
0.008
0.014
0.024
0.001
0.002
0.003
0.005
0.009
0.000
0.001
0.001
0.002
0.004
0.000
0.000
0.001
0.001
0.002
0.000
0.000
0.000
0.001
0.001
3.308
3.203
3.098
2.993
2.888
0.001
0.001
0.001
0.001
0.002
0.267
0.356
0.444
0.556
0.097
0.139
0.188
0.248
0.036
0.055
0.077
0.107
0.015
0.023
0.033
0.047
0.006
0.010
0.015
0.021
0.003
0.005
0.007
0.010
0.001
0.002
0.003
0.005
2.783
2.678
2.573
2.468
0.003
0.004
0.005
0.007
0.315
0.387
0.461
0.539
0.141
0.184
0.230
0.285
0.064
0.085
0.111
0.142
0.030
0.041
0.054
0.071
0.014
0.020
0.027
0.036
0.007
0.010
0.014
0.019
2.363
2.258
2.153
2.048
0.009
0.012
0.016
0.020
0.341
0.404
0.467
0.533
0.177
0.217
0.262
0.311
0.091
0.114
0.141
0.172
0.047
0.060
0.076
0.095
0.025
0.032
0.041
0.052
1.943
1.838
1.733
1.628
0.026
0.033
0.041
0.052
0.362
0.416
0.472
0.528
0.207
0.245
0.286
0.331
0.116
0.140
0.168
0.198
0.065
0.080
0.097
0.117
1.523
1.418
1.313
1.208
0.064
0.078
0.094
0.113
0.377
0.426
0.475
0.525
0.232
0.268
0.306
0.347
0.139
0.164
0.191
0.221
1.102
0.998
0.893
0.788
0.135
0.159
0.185
0.215
0.389
0.433
0.478
0.522
0.253
0.287
0.323
0.360
0.683
0.578
0.473
0.368
0.247
0.282
0.318
0.356
0.399
0.439
0.480
0.520
0.263
0.158
0.052
0.396
0.437
0.481
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
Reproduced by permission of the publisher from H. B. Mann and D. R. Whitney, Annals Math. Stat. 18, 52-54, 1947.
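The null probabilities in Table C.6 can be regenerated by brute-force enumeration for small m and n. A sketch (the code is my own, not from the book): under H0 every one of the C(m+n, n) orderings of the pooled sample is equally likely, and U counts the x-before-y pairs.

```python
from itertools import combinations
from math import comb

def u_cdf(m, n, u):
    """P{U <= u} under H0 for sample sizes m and n, by full enumeration."""
    at_most = 0
    for ys in combinations(range(m + n), n):
        # a y at pooled position p with i earlier y's is preceded by (p - i) x's
        u_stat = sum(p - i for i, p in enumerate(ys))
        if u_stat <= u:
            at_most += 1
    return at_most / comb(m + n, n)
```

For m = 3, n = 1 this reproduces the first column of the table: P{U <= 0} = 0.250, P{U <= 1} = 0.500, P{U <= 2} = 0.750; for m = 3, n = 2 it gives P{U <= 2} = 0.400.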
~ U at }
= ce,
nl +2 n2 )
= n2 = -
1 - : 0.01
0.025
0.05
0.95
0.975
0.99
n/2
o : 0.99
0.975
0.95
0.05
0.025
0.01
5
6
7
8
9
10
11
12
2
2
3
4
4
5
6
7
7
8
9
10
11
13
17
21
25
30
2
3
3
4
5
6
7
7
8
9
10
11
12
14
18
22
27
31
36
40
45
49
54
58
63
68
3
3
4
5
6
6
7
8
9
10
11
11
8
10
11
12
13
15
16
17
18
19
20
22
24
26
32
37
43
48
54
59
65
70
75
81
86
91
97
102
107
113
9
10
12
13
14
15
16
18
19
20
21
22
25
27
33
39
9
11
12
13
14
15
16
18
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
34
38
43
47
52
56
61
65
70
74
79
84
72
77
82
86
13
15
19
24
28
33
37
42
46
51
56
60
65
70
74
79
84
88
44
50
55
61
66
72
77
83
88
93
99
104
109
115
13
15
16
17
18
20
21
22
23
26
28
34
40
46
51
57
63
68
74
79
85
90
96
101
107
112
117
Reproduced from J. S. Bendat and A. G. Piersol, Measurement and Analysis of Random Data, John Wiley, New York, 1966, with permission.
TABLE C.8
CRITICAL VALUES FOR THE SUM OF n: P{N >= Na} <= a
a
0.10
0.05
0.025
0.01
3
4
5
6
7
8
9
10
11
12
13
14
15
18
26
34
38
46
54
62
68
76
84
92
98
106
18
32
38
44
52
60
70
78
86
94
102
110
118
18
32
42
50
60
68
78
86
96
104
112
122
130
18
32
50
58
68
80
90
100
108
118
128
136
146
C.9
CRITICAL
DISTRIBUTION
0.99
0.975
0.95
10
12
14
16
18
20
30
40
50
60
70
80
90
100
9
16
24
34
45
59
152
290
473
702
977
1299
1668
2083
11
18
27
38
50
64
162
305
495
731
1014
1344
1721
2145
13
21
30
41
54
69
171
319
514
756
1045
1382
1766
2198
0.05
0.025
0.01
31
33
47
63
81
102
125
272
474
729
1038
1400
1815
2283
2804
35
49
66
85
107
130
282
489
751
1067
1437
1860
2336
2866
44
60
78
98
120
263
460
710
1013
1369
1777
2238
2751
TABLE C.10
D = sum_{t=2}^{n} (E_t - E_{t-1})^2 / sum_{t=1}^{n} E_t^2
Values of
Sample
Size
n
Probability
in Upper
Tail†
15
0.01
0.025
0.05
0.81
0.95
1.08
1.07
1.23
1.36
0.70
0.83
0.95
1.25
1.40
1.54
0.59
0.71
0.82
1.46
1.61
1.75
0.49
0.59
0.69
1.70
1.84
1.97
0.39
0.48
0.56
1.96
2.09
2.21
20
0.01
0.025
0.05
0.95
1.08
1.20
1.15
1.28
1.41
0.86
0.99
1.10
1.27
1.41
1.54
0.77
0.89
1.00
1.41
1.55
1.68
0.68 1.57
0.79 1.70
0.90 1.83
0.60
0.70
0.79
1.74
1.87
1.99
25
0.01
0.025
0.05
1.05 1.21
1.18 1.34
1.29 1.45
0.98 1.30
1.10 1.43
1.21 1.55
0.90
1.02
1.12
1.41
1.54
1.66
0.83 1.52
0.94 1.65
1.04 1.77
0.75
0.86
0.95
1.65
1.77
1.89
30
0.01
0.025
0.05
1.13
1.25
1.35
1.26
1.38
1.49
1.07
1.18
1.28
1.34
1.46
1.57
1.01
1.12
1.21
1.42
1.54
1.65
0.94 1.51
1.05 1.63
1.14 1.74
0.88
0.98
1.07
1.61
1.73
1.83
0.01
1.25
1.35
1.44
1.34
1.45
1.54
1.20
1.30
1.39
1.40
1.51
1.60
1.15
1.25
1.34
1.46
1.57
1.66
1.10 1.52
1.20 1.63
1.29 1.72
1.05
1.15
1.23
1.58
1.69
1.79
1.24 1.49
1.34 1.59
1.42 1.67
1.20 1.54
1.30 1.64
1.38 1.72
1.16
1.26
1.34
1.59
1.69
1.77
40
0.025
0.05
K= I
DL Du
1.28 1.45
1.38 1.54
1.46 1.63
50
0.01
0.025
0.05
60
0.01
0.025
0.05
1.38
1.47
1.55
1.45
1.54
1.62
1.35
1.44
1.51
1.48
1.57
1.65
1.32
1.40
1.48
1.52
1.61
1.69
1.28 1.56
1.37 1.65
1.44 1.73
1.25
1.33
1.41
1.60
1.69
1.77
80
0.01
0.025
0.05
1.47
1.54
1.61
1.52
1.59
1.66
1.44
1.53
1.59
1.54
1.62
1.69
1.42 1.57
1.49 1.65
1.56 1.72
1.39 1.60
1.47 1.67
1.53 1.74
1.36
1.44
1.51
1.62
1.70
1.77
0.01
1.52
1.59
1.65
1.56
1.63
1.69
1.50 1.58
1.57 1.65
1.63 1.72
1.48
1.55
1.61
1.46 1.63
1.53 1.70
1.59 1.76
1.44 1.65
1.51 1.72
1.57 1.78
100
0.025
0.05
1.32 1.40
1.42 1.50
1.50 1.59
1.60
1.67
1.74
From C. F. Christ, Economic Models and Methods, John Wiley, 1966, with permission, based on J. Durbin and G. S. Watson, Biometrika 37, 409, 1950; 38, 159, 1951.
† The probability shown in the second column is the area in the upper tail. K is the number of independent variables in addition to the constant term.
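The statistic tabulated here, D = sum of (E_t - E_{t-1})^2 over the sum of E_t^2, is straightforward to compute from a residual series. A minimal sketch (the function name is my own):

```python
def durbin_watson(e):
    """D = sum_{t=2}^{n} (e[t] - e[t-1])**2 / sum_{t=1}^{n} e[t]**2."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(x * x for x in e)
    return num / den

# A perfectly alternating residual series of length 4 gives D = 3.0;
# uncorrelated residuals give D near 2, and the table's D_L, D_U bounds
# are what the computed D is compared against.
d = durbin_watson([1.0, -1.0, 1.0, -1.0])
```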
TABLE C.11
Number of
Observations,
1- a
0.70
0.80
0.90
0.95
0.98
0.99
0.995
'1 0
3
4
5
6
7
0.684
0.471
0.373
0.318
0.281
0.781
0.560
0.451
0.386
0.344
0.886
0.679
0.557
0.482
0.434
0.941
0.765
0.642
0.560
0.507
0.976
0.846
0.729
0.644
0.586
0.988
0.889
0.780
0.698
0.637
0.994
0.926
0.821
0.740
0.680
'11
8
9
10
0.318
0.288
0.265
0.385
0.352
0.325
0.479
0.441
0.409
0.554
0.512
0.477
0.631
0.587
0.551
0.683
0.635
0.597
0.725
0.677
0.639
11
12
13
0.391
0.370
0.351
0.442
0.419
0.399
0.517
0.490
0.467
0.576
0.546
0.521
0.638
0.605
0.578
0.679
0.642
0.615
0.713
0.675
0.649
14
15
16
17
18
19
20
21
22
23
24
25
0.370
0.353
0.338
0.325
0.314
0.304
0.295
0.287
0.280
0.274
0.268
0.262
0.421
0.402
0.386
0.373
0.361
0.350
0.340
0.331
0.323
0.316
0.310
0.304
0.492
0.472
0.454
0.438
0.424
0.412
0.401
0.391
0.382
0.374
0.367
0.360
0.546
0.525
0.507
0.490
0.475
0.462
0.450
0.440
0.430
0.421
0.413
0.406
0.602
0.579
0.559
0.542
0.527
0.514
0.502
0.491
0.481
0.472
0.464
0.457
0.641
0.616
0.595
0.577
0.561
0.547
0.535
0.524
0.514
0.505
0.497
0.489
0.674
0.647
0.624
0.605
0.589
0.575
0.562
0.551
0.541
0.532
0.524
0.516
Statistic
' 21
' 22
Adapted by permission from W. J. Dixon and F. J. Massey, Jr., Introduction to Statistical Analysis (2nd ed.), McGraw-Hill, New York, 1957.
APPENDIX D
Notation
GENERAL CONVENTIONS
b**
and (s).
b*
b*
b*
a
ali
a'
A
Ali
A~
All
ARL
e
e
e
eA
ciO)
elf
elf
random variable .
fi~ + fi1X
element of b,
e*
c
c
Cp
Covar
d(s)
d.f.
D
jj
!i)
f
tff
E
R~
/(r:s)
I
I(xj)
, fmax
Imln
f(lX, Y, t) ;
F
function in general'
tTm
/*
J( )
"
get)
get)
g(w)
(.P[g(t)))
g(z)
ofg[td)
g(s)
G(t)
G(s)
transfer matrix
ing function)
transform of G(t)
I
L
L(β | Yn)
~
.
integrals of experimental data in
Section 10.4
.
Kullback criterion for model dis
crimination in Section 8.5
number of inversions
imaginary part
identity matrix
Kullback criterion for model dis
crimination in Section 8.5
Bessel functions in Tables 10.2-2 and
10.4-2
m
m
m
P
P
P(x)
peE)
P(x; t)
P(X I ; t)
ml
calculated (experimental) estimate of
mf
n(s)
N
N
N
N
P
PI
p(x)
p(x, t)
P(XI I X3)
I Yn)
Pn<r~
. ]J1
in
p (n )
r
P{A
I B}
q
'r
r
rl
rlj
r*
rxx(tl> t 2 )
probability in general
dimensionless parameter termed the
Peclet number in Chapter 10; sub
scripts designate axial (L) and radial
(R) directions
probability distribution function of
Xin general
probability of event E
the first-order probability distri
bution function P {X(t) ~ x} or the
probability that the random variable
X(t) is less than or equal to the
value of a deterministic variable x
marginal probability distribution
function of X(ti)
the first-order probability function
for a discrete variable X(t k )
the second-order probability distri
bution function defined by Equation
2.1-2
orthogonal polynomial term defined
by Equation 5.1-21
probability of model r being the
correct model after n experiments
have been carried out
probability of A given B
volumetric flow rate in Chapter 10
number of independent variables in
an empirical model
(x - u)Tr-l(x - u) in Equation 2.3-6
radius
magnitude (modulus) of a complex
number in Chapter 12
reaction rate,_deterministic variable
Dixon criterion in the test of outliers
dimensionless radial coordinate
ensemble autocorrelation function for
the nonstationary random variable
X(t)
ensemble autocorrelation function for
the stationary random variable X(t)
ensemble crosscorrelation function
for the nonstationary random vari
ables X(t l ) and Y(t 2 )
ensemble crosscorrelation function
for the stationary random variables
X(t) and yet)
ideal gas constant
range for a random variable
reaction rate random variable
finite time or empirical continuous
variable estimate of the ensemble
autocorrelation function
&I( )
U*
9.2
s
Sj
SXY
sxx(w)
sxy(w)
si
s~
S~l
S
S*
Sxx(w)
SS
t*
tf
-to
T
T
T
Tx,TlI
T*
u
u
U
U(t)
U(x)
time
the Student t used in the t-test, random variable
relative temperature (°C or °F)
dimensionless time
time at the end of a time record
initial time
period of input in Section 12.3
temperature, absolute temperature
Wilcoxon T, the rank sum
Wilcoxon T for x or y
number of times a larger number is
followed by a smaller number in a
sequence
transformation operator
upper bound for a parameter hi or a
variable x,
sensitivity coefficient matrix
matrix of upper bounds
standardized normal random variable
defined in Equation 2.3-2
unit step function
unit step function
V*
j
Vlj
V
V
V
v:en)
T8
Var
WI
w(r)
w(w)
W
W(t)
w~n+l)
x(t)
X TIQ
x(S)
x(z)
x(w)
x
x(t)
statistic
unitary matrix
velocity of flow
element in V
outliers
variance
weight
lag window
spectral window
matrix of weights
a random variable in general
compact notation for x(n+1) M-1 (x(n+1))T in Section 8.5
general deterministic independent
variable
ith independent variable
additional value of -x (in regression
analysis)
deterministic process or model input
element of XT,I; r denotes the model
number, i the experimental run num
ber, and q the independent variable
number
Laplace transform of x(t)
z-transform of x(tk)
process or model input in the fre
quency domain
a vector of deterministic variables, of
independent deterministic variables
vector composed of all the indepen
dent variables in a model evaluated
on the ith run
matrix of deterministic process or
model inputs
the matrix x at an extremum
the alias matrix defined in Equation
5.1-17
x*
X(n)
X(n)
rJ
Xli' Xr.1
xes)
X
X
X(t)
x
X*
X r IJ
x(n)
rJ.k
Xes)
X
x(n)
Xr
Y
yet)
y(tl)
yeO)
YrJ
h
L
,L i
Yr
yew)
yes)
y(z)
y
Yr
Yp
yeO)
Yo
y*
y(n)
Y
Y(t l)
yet)
Yn*+ l
Yes)
Y(w)
Ylx
y*
y(n)
f r(n)
a difference
delta function
Dirac delta function
Kronecker delta: δkl = 0 if k ≠ l; δkl = 1 if k = l
difference, as ΔX = Xi+1 - Xi or
Δbj
z*
z~
~(n), ~ , ~l ' ~r
Greek
α
α
α
α*
β
β
yet)
Yc
Ye
Yr
Yx(t)
y2(W)
yh(w)
Γ(x)
Γ
Γ
constant
blunder in Section 4.7
contraction .coefficient
expansion coefficient
reflection coefficient
ensemble coefficient of variation of
the stochastic variable X(t)
coherency matrix
coherence function
gamma function
covariance matrix among models in
Chapter 8
covariance .matrix .among observa
tions
"Iri
'I i
8
Y
= βj - bj
II
constant
λ
λ
λ
eigenvalue of a matrix
Lagrangian multiplier
parameter in the Marquardt least
squares method, a parameter in
general
random variable for Bartlett's test
(Equation 3.6-2)
P12 3
ai(t)
Section 6.2-1
density in Chapter 10
ensemble correlation coefficient for
the stationary random variables X(/)
and yet)
P"
',
PXY
et
covariance matrix of
multiple product
matrix defined by Equation 9.4-6a
PXY
II
~
~(s)
φ*, φ**
able
phase angle
functions in general
frequency in general
element of the matrix n
frequency in cycles/time
Nyquist frequency in radians
covariance matrix for the model
parameters, ~ ; subscripts designate a
covariance matrix for other param
eters
covariance matrix defined in Equa
tion 9.4-16
OVERLAYS
estimated
Fourier or Laplace transform
sample average
in canonical coordinates
incorrectly estimated, approximate
SUPERSCRIPT
(n)
*
T
VI!
= [Vlil- 1
V-I
OTHER
[x]
Il fll
fn
u
<>
Author Index
English, J. E., 67
Ewan, W. D., 88
Ezekiel, M., 40
Ferrell, E. B., 27
Ferris, C. D., 58
Freund, R. A., 86
Geary, R. C., 336
Johnson, N. L., 53
Kerridge, D., 168
King, J. R., 27
Kloot, N. H., 216
Krutchkoff, R. G., 120
Law, V. J., 195
Lehmann, E. L., 75
Leone, F. C., 53
Ramachandran, G., 72
Ranganathan, J., 72
Rao, C. V., 273
Ratkowsky, D. A., 38
Reeves, C. M., 191
Reiner, A. M., 275
Reswick, J. B., 382, 387
Roberts, S. W., 86, 91, 92
Rogers, A. E., 367
Rose, H. E., 67
Rosenblatt, J., 76
Rosenblatt, M., 132
Roth, R. W., 241
Rowe, P. M., 124
Salzer, H. E., 361
Savage, I. R., 68
Schechter, R. S., 178, 355
Schiesser, W. E., 334
Schmidt, G. L., 309
Shewhart, W. A., 80
Siegel, S., 71
Sliepcevich, C. M., 388
Smiley, K. W., 148
Smith, F. W., 367, 369
Smith, H., 209
Smith, W. E., 110
Smith, W. K., 340
Spendley, N., 181, 259
Steiglitz, K., 367
Strand, T. G., 190
Strotz, R. H., 168
Stuart, A., 50
Subbarao, B. Y., 273
Sweeny, H. C., 259
Thompson, W. R., 135
Thornber, H., 213
Tidwell, P. W., 201
Toman, R. J., 163, 164, 214
Subject Index
Estimation (cont.)
interval, 49, 53
parameter, 49
subject to constraints, 201
techniques (see Particular technique)
Event, 10
Evolutionary operation (EVOP), 257
Expansion in partial fractions, 357
Expectation, 14
Expected value, of a derivative, 15
of an integral, 15
of a sum, 16
of a sum of independent variables, 16
of the sample variance, 33
Experimental design, blocking, 242
central composite, 241
EVOP, 257
factorial, 234, 237
for discrimination, 274, 280
for parameter estimation, 266, 280
for partial differential equation models ,
334
one-dimensional models, 231
orthogonal designs, 234
processes with random inputs, 380
practical consideration, 245
response surface methods, 230
three-dimensional models, 239
rotatable, 236, 237
to increase precision in parameter
estimates, 263
triangular, 235
two-dimensional models, 234
Experimentation checklist, 231
efficient, 230 and following
process optimization, 252
Exponentially moving average control
chart, 86
Exponential probability density function, 25
. Extremes (see Outliers)
Extremum, 254
Efficiency of estimates, 50
Efficient estimate, 49
Eigenvalues, 405
Eigenvectors, 405
Ellipse, confidence region , 119
Empirical models, 3
Ensemble, 6
autocorrelationfunction, 17
autocovariance, 19
coefficient of variation, 19
crosscorrelation function , 19
expectation, 14
mean, 16
mean of a dynamic model, 16
parameters, table of, 22
variance, 18
variance of sample variance, 33
Equation error, 318, 366
Error, experimental, 5
first and second kind, 57
introduction into model, 296, 297, 298
random, 5
systematic, 5
Error, numerical, 31, 148
round off, 31
Error sum of squares, 113
Estimate, biased, 49, 114
consistent, 49
efficient, 49
joint, 50
sufficient , 50
unbiased, 49, 113, 114
Estimated regression equation, 110, 143
harmonic, 150
Estimated regression line, 110
Estimation, errors not independent, 164
F, 36
Factor, definition of, 234
Factorial experiment (design), definition of, 234
types, 234, 235, 237, 240
F-distribution, 35
tables, 427
F-test, 65 (see Analysis of variance)
Filtering, 295
First-order model, 234
Fitting of a function (see Regression)
Fitting of a model (see Parameter
estimation)
Folding, 385
Forward multiple regression, 212
Fourier transform, 374
Fractional representation, 239
Freedom, degree of (see Degrees of freedom)
Frequency, cumulative, 10
relative, 10
Frequency domain, estimation in, 374
Frequency response, 332, 334, 355, 374
Functional relation, selection of, 105
Hotelling's T², 93
Hypothesis, alternative, 56
  null, 57
  for variability, 64
  in regression, 154
Level of significance, 56
Likelihood, maximum, 50
control, 80
mean of, 16
variance of, 18
Mann-Whitney test, 69
tables, 440
Matrix, adjoint, 403
conformable, 403
derivative, 404
Hessian, 254
identity, 403
ill-conditioned, 148
integral, 404
inverse, 403
Jacobian, 315
multiplication, 404
normal equations, 146
orthogonal, 403
singular, 403
square, 402
symmetric, 403
transpose, 403
unitary, 407
variance-covariance, 29,146,166,264,309
vector , 403
Mean, 16
  confidence limits, 54
  ensemble, 16
  hypothesis test, 61
  sample, 31
  distribution of, 63
Models, 3, 237
deterministic, 3, 7
dimension, 234
distributed parameter, 4
lumped parameter, 4
population balance, 3
random, 3
screening, 214
steady state, 4
stochastic, 7, 8
transport phenomena, 3, 4
Moments, 22
  central, 22
  method of, 51
  normal distribution, 24
  raw, 22
estimation, 143, 166, 271, 278
function, 28
Nonlinear operator, 5
Nonlinear system, 5
Nonstationary, 15
Norm, 406
Normalization, 406
Normal distribution, bivariate, 29
  mean, 24, 26
  multidimensional, 28
  standardized distribution, 25
  table, 423
  variance, 24, 26
Null hypothesis, 57
Number of degrees of freedom (see Degrees of freedom)
Numerical error, 31
Operating characteristic, 57
Orthogonal matrix, 403
Outcome, experimental, 10
random, 10
Outliers, definition, 77
moments, 339
responses, 332
Point estimation, 49
Poisson distribution, 16
Population, 10
coefficient)
Posterior density, 53
Power of a test, 57
Power spectrum (power spectral density),
379
Precision, 5
Probability, density function, 11, 24
  interpretation of, 10
  paper, normal, 28
  second-order distribution, 11, 14
Process, 3
Process analysis, 3
Process control charts (see Control charts)
Process models (see Models)
Process optimization, 252
Process records, 12
Producers' risk, 86
Propagation of error, 36
linear relations, 36
nonlinear relations, 38
Range, 80
Ranked data, 69
distribution), 75
Reflection, 181
Region, acceptance, 56
  confidence, 118
  critical, 56
  rejection, 56
Regression, assumptions, 110
  outliers, 209
  trend, 210
s² (see Variance)
s²-distribution, mean, 33
  variance, 33
Sample, average, 10, 30, 31
  correlation coefficient, 40
  mean, 31
  random, 30
  size, determination, 59
  standard deviation, 32
  statistic, 30
  variance, 31
Sample functions, 6
Sample space, 10
Sampling, sequential testing, 60
Scaling, 193
Scatter diagrams, 40
Screening models , 214
Search methods, 178
equations, 308
Sequential testing, 60
Shewhart control chart, 80
Shifts in level of control chart, 85
Siegel-Tukey test, 71
Significance level, 56
Significance tests, power of, 57
Sign test, 68
table, 439
Simplex designs, 259
Simplex method, 181
Size of experiment, effect on precision, 57
Slope (see Estimation of parameters)
Slope, estimate of, 114
Spectral density, 379
estimation of, 382
Spectral window , 383
Standard deviation, definition of, 19
ensemble, 19
estimation of, 31
sample , 32
Stationarity, tests for, 71
Stationary, weakly, 18
Stochastic variable, 5 (see Random)
Sufficient estimate, 50
variables, 161
Sum of squares, of errors, 113
  surface, 120
System, 5
  linear, 5
  nonlinear, 5
Systematic error, 5
t, distribution, 34
  probability density, 34
  table, 426
T² (Hotelling's T²), 93
Test, characteristics, 49
for randomness, 74
for stationarity, 71
t (see t)
u (see u)
unbiased, 147
variance ratio, 65
Testing, hypothesis, 49
Testing, sequential, 60
Time average, 10
Transformation, of a probability density, 67
Trend, 210
Two-way classification, 76
Uncorrelated, 19
Variable, continuous, 7
  dependent, 105
  discontinuous, 7
  discrete, 7
  independent, 105
  random, 5
  stochastic, 5
Variance, definition, 18
  distribution of sample, 33
  ensemble, 18
  of a function, 37
  of a sum, 18
  pooled, 33
  sample, 31
  distribution of, 36
  tables, 427
  (see Analysis of variance)
Variance reduction, 33
Wald-Wolfowitz test, 72
Weight, 126
Wilcoxon test, 70