With $f(y) = 3y^2$ for $0 < y < 1$ and $U = 2Y + 3$, we have
$$P\left(Y \le \frac{u-3}{2}\right) = \int_0^{(u-3)/2} f(y)\, dy = \int_0^{(u-3)/2} 3y^2\, dy = \left(\frac{u-3}{2}\right)^3,$$
so
$$F_U(u) = \begin{cases} 0 & u < 3 \\ \left(\dfrac{u-3}{2}\right)^3 & 3 \le u \le 5 \\ 1 & u > 5 \end{cases}$$
and
$$f_U(u) = \frac{dF_U}{du} = \begin{cases} \dfrac{3}{8}(u-3)^2 & 3 \le u \le 5 \\ 0 & \text{otherwise.} \end{cases}$$
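A quick numerical check of this density (a sketch, assuming numpy is available; the sampling step uses the inverse cdf $y = v^{1/3}$ of $Y$):

import numpy as np

rng = np.random.default_rng(0)

# Y has pdf 3y^2 on (0, 1), so its cdf is y^3 and Y = V**(1/3) for V ~ U(0, 1).
y = rng.uniform(size=100_000) ** (1 / 3)
u = 2 * y + 3                      # the transformed variable U = 2Y + 3

# Empirical cdf of U against F_U(u) = ((u - 3)/2)^3 at a few points.
for point in (3.5, 4.0, 4.5):
    print((u <= point).mean(), ((point - 3) / 2) ** 3)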
The general method, then, is to find the cdf of the transformed variable and differentiate it. The cdf method is particularly useful for dealing with the squares of random variables. Suppose $U = X^2$. Then
$$\begin{aligned}
F_U(u) &= P(U \le u) \\
&= P(X^2 \le u) \\
&= P(-\sqrt{u} \le X \le \sqrt{u}) \\
&= \int_{-\sqrt{u}}^{\sqrt{u}} f(x)\, dx \\
&= F_X(\sqrt{u}) - F_X(-\sqrt{u}),
\end{aligned}$$
and differentiating with respect to $u$ gives
$$f_U(u) = \frac{1}{2\sqrt{u}}\left[f_X(\sqrt{u}) + f_X(-\sqrt{u})\right].$$
For example, if $X$ has pdf $f_X(x) = (x+1)/2$ for $-1 < x < 1$ and $0$ otherwise, and $U = X^2$, then
$$\begin{aligned}
f_U(u) &= \frac{1}{2\sqrt{u}}\left[f_X(\sqrt{u}) + f_X(-\sqrt{u})\right] \\
&= \frac{1}{2\sqrt{u}}\left[\frac{\sqrt{u}+1}{2} + \frac{-\sqrt{u}+1}{2}\right] \\
&= \frac{1}{2\sqrt{u}}, \qquad 0 < u < 1.
\end{aligned}$$
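As a check of this example (a sketch assuming numpy; $X$ is sampled by inverting its cdf $F_X(x) = (x+1)^2/4$):

import numpy as np

rng = np.random.default_rng(1)

# X has pdf (x + 1)/2 on (-1, 1): cdf (x + 1)^2 / 4, so X = 2*sqrt(V) - 1.
x = 2 * np.sqrt(rng.uniform(size=200_000)) - 1
u = x ** 2

# Histogram-based density of U against the formula 1/(2*sqrt(u)).
heights, edges = np.histogram(u, bins=10, range=(0.05, 0.95), density=True)
mids = (edges[:-1] + edges[1:]) / 2
print(np.column_stack([mids, heights, 1 / (2 * np.sqrt(mids))]))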
Example 2.2. As a more important example suppose $Z \sim N(0, 1)$, so that
$$f_Z(z) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{z^2}{2}\right), \qquad -\infty < z < \infty.$$
Then if $U = Z^2$,
$$\begin{aligned}
f_U(u) &= \frac{1}{2\sqrt{u}}\left[f_Z(\sqrt{u}) + f_Z(-\sqrt{u})\right] \\
&= \frac{1}{2\sqrt{u}}\left[\frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u}{2}\right) + \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u}{2}\right)\right] \\
&= \frac{1}{\sqrt{2\pi u}} \exp\left(-\frac{u}{2}\right), \qquad u > 0.
\end{aligned}$$
This is the pdf of a $\mathrm{Ga}(1/2, 1/2)$ random variable, since it can be written $\frac{(1/2)^{1/2}}{\Gamma(1/2)} u^{1/2 - 1} e^{-u/2}$ with $\Gamma(1/2) = \sqrt{\pi}$, where
$$\Gamma(\alpha) = \int_0^\infty y^{\alpha - 1} \exp(-y)\, dy$$
is the Gamma function.
Note that some textbooks define the Gamma distribution with $1/\beta$ in place of $\beta$; it does not really matter which convention is used.
Now we can see that
$$\Gamma(1) = \int_0^\infty \exp(-y)\, dy = \left[-e^{-y}\right]_0^\infty = 1.$$
Also, if we integrate $\Gamma(\alpha)$ by parts we see that
$$\begin{aligned}
\Gamma(\alpha) &= \int_0^\infty y^{\alpha-1} \exp(-y)\, dy \\
&= \left[y^{\alpha-1}(-e^{-y})\right]_0^\infty + \int_0^\infty (\alpha - 1) y^{\alpha-2} \exp(-y)\, dy \\
&= 0 + (\alpha - 1) \int_0^\infty y^{\alpha-2} \exp(-y)\, dy \\
&= (\alpha - 1)\Gamma(\alpha - 1).
\end{aligned}$$
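These two facts are easy to check numerically (a sketch, assuming scipy is available):

from scipy.special import gamma

# Check Gamma(1) = 1 and the recursion Gamma(a) = (a - 1) * Gamma(a - 1).
print(gamma(1.0))
for a in (1.5, 2.0, 3.7):
    print(gamma(a), (a - 1) * gamma(a - 1))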
So we have proved the following theorem.
Theorem 2.1. If the random variable $Z \sim N(0, 1)$ then $Z^2 \sim \chi^2_1$.
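A simulation check of the theorem (a sketch assuming numpy and scipy):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
u = rng.standard_normal(100_000) ** 2      # Z^2 for Z ~ N(0, 1)

# Empirical cdf of Z^2 against the chi-squared(1) cdf.
for point in (0.5, 1.0, 2.0):
    print((u <= point).mean(), stats.chi2.cdf(point, df=1))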
If $X$ is a continuous rv with cdf $F_X$ and we set $Y = F_X(X)$, then
$$\begin{aligned}
P(Y \le y) &= P(F_X(X) \le y) \\
&= P(X \le F_X^{-1}(y)) \\
&= F_X(F_X^{-1}(y)) \\
&= y, \qquad 0 < y < 1,
\end{aligned}$$
so $Y \sim U(0, 1)$. Conversely, if $U \sim U(0, 1)$ and we set $X = F_X^{-1}(U)$, then
$$\begin{aligned}
P(X \le x) &= P(F_X^{-1}(U) \le x) \\
&= P(U \le F_X(x)) \\
&= F_X(x),
\end{aligned}$$
so $X$ has cdf $F_X$. This gives a general way of simulating from any distribution whose inverse cdf we can compute, as in the sketch below.
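A minimal sketch of this inverse transform method, assuming numpy and taking an Exp(2) target as an arbitrary example:

import numpy as np

rng = np.random.default_rng(3)

# Inverse transform sampling: if U ~ U(0, 1), then F^{-1}(U) has cdf F.
# For an Exp(rate) rv, F(x) = 1 - exp(-rate*x), so F^{-1}(u) = -log(1 - u)/rate.
rate = 2.0
x = -np.log(1 - rng.uniform(size=100_000)) / rate

print(x.mean(), 1 / rate)   # sample mean against E[X] = 1/rate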
2.2 Method of direct transformation
If $Y = g(X)$ with $g$ monotone, then
$$f_Y(y) = f_X\left(g^{-1}(y)\right) \left|\frac{d}{dy} g^{-1}(y)\right|.$$
For example, if $X$ has pdf $f_X(x) = \alpha x^{-(\alpha+1)}$ for $x > 1$ and we set $Y = \log X$, then $x = e^y$ with $dx/dy = e^y$, so
$$f_Y(y) = \alpha \left(e^y\right)^{-(\alpha+1)} e^y = \alpha e^{-\alpha y}, \qquad y > 0,$$
that is, $Y \sim \mathrm{Exp}(\alpha)$.
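A simulation check of this example (a sketch assuming numpy; the Pareto sample again uses the inverse-cdf method):

import numpy as np

rng = np.random.default_rng(4)
alpha = 3.0

# X ~ Pareto(alpha) on (1, inf): F(x) = 1 - x**(-alpha), so X = V**(-1/alpha).
x = rng.uniform(size=100_000) ** (-1 / alpha)
y = np.log(x)

print(y.mean(), 1 / alpha)  # an Exp(alpha) rv has mean 1/alpha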
Suppose $X_1, X_2$ have joint pdf $f_{X_1,X_2}(x_1, x_2)$ with support $A = \{(x_1, x_2) : f(x_1, x_2) > 0\}$. We are interested in the random variables $Y_1 = g_1(X_1, X_2)$ and $Y_2 = g_2(X_1, X_2)$. Suppose the transformation $y_1 = g_1(x_1, x_2)$, $y_2 = g_2(x_1, x_2)$ is a 1-1 transformation of $A$ onto $B$, so that there is an inverse transformation $x_1 = g_1^{-1}(y_1, y_2)$, $x_2 = g_2^{-1}(y_1, y_2)$. Let $J$ be the Jacobian determinant
$$J = \begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[2mm] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{vmatrix}.$$
Then
$$f_{Y_1,Y_2}(y_1, y_2) = |J|\, f_{X_1,X_2}\left(g_1^{-1}(y_1, y_2),\, g_2^{-1}(y_1, y_2)\right), \qquad (y_1, y_2) \in B.$$
For example, suppose $X_1$ and $X_2$ have joint pdf
$$f(x_1, x_2) = \exp(-(x_1 + x_2)), \qquad x_1 \ge 0,\ x_2 \ge 0.$$
Note in this example that, as we started with two random variables, we have to transform to two random variables. If we are only interested in one of them we can integrate out the other, as in the sketch below.
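A symbolic sketch of such a transformation for this pdf, assuming sympy is available (the particular choice $Y_1 = X_1 + X_2$, $Y_2 = X_1/(X_1 + X_2)$ is mine, for illustration):

import sympy as sp

y1, y2 = sp.symbols("y1 y2", positive=True)

# Y1 = X1 + X2 and Y2 = X1/(X1 + X2) inverts to
# x1 = y1*y2 and x2 = y1*(1 - y2).
x1, x2 = y1 * y2, y1 * (1 - y2)

J = sp.Matrix([[sp.diff(x1, y1), sp.diff(x1, y2)],
               [sp.diff(x2, y1), sp.diff(x2, y2)]]).det()

f_joint = sp.Abs(J) * sp.exp(-(x1 + x2))   # |J| * f_X(x1(y), x2(y))
print(sp.simplify(f_joint))                # y1*exp(-y1), for 0 < y2 < 1

The result $y_1 e^{-y_1}$ factorises into a $\mathrm{Ga}(2, 1)$ pdf for $Y_1$ and a $U(0, 1)$ pdf for $Y_2$, so the two are independent.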
Example 2.7. X1 and X2 have joint pdf
Recall that the moment generating function (mgf) of $X$ is $M_X(t) = E[e^{tX}]$. First note that $M_X(0) = 1$. Differentiating $M_X(t)$ with respect to $t$, assuming $X$ is continuous, we have
$$M_X'(t) = \frac{d}{dt} \int e^{tx} f(x)\, dx = \int x e^{tx} f(x)\, dx,$$
so
$$M_X'(0) = \int x f(x)\, dx = E[X],$$
where we assume we can take $\frac{d}{dt}$ inside the integral.
Similarly
$$M_X''(t) = \frac{d^2}{dt^2} \int e^{tx} f(x)\, dx = \int x^2 e^{tx} f(x)\, dx,$$
so
$$M_X''(0) = \int x^2 f(x)\, dx = E[X^2].$$
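These moment calculations are easy to reproduce symbolically (a sketch assuming sympy; the Exp($\beta$) mgf used here is the $\alpha = 1$ case of Example 2.10 below):

import sympy as sp

t, beta = sp.symbols("t beta", positive=True)
M = beta / (beta - t)                 # mgf of an Exp(beta) rv

EX = sp.diff(M, t).subs(t, 0)         # M'(0)  = E[X]
EX2 = sp.diff(M, t, 2).subs(t, 0)     # M''(0) = E[X^2]
print(sp.simplify(EX), sp.simplify(EX2))   # 1/beta and 2/beta**2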
Suppose $X \sim \mathrm{Bin}(n, p)$. Then the moment generating function is
$$\begin{aligned}
M(t) &= \sum_{x=0}^n e^{tx} \binom{n}{x} p^x (1-p)^{n-x} \\
&= \sum_{x=0}^n \binom{n}{x} (pe^t)^x (1-p)^{n-x} \\
&= \left(pe^t + (1-p)\right)^n.
\end{aligned}$$
Thus
$$M'(t) = n\left(pe^t + (1-p)\right)^{n-1} pe^t,$$
so $M'(0) = np = E[X]$. Also
$$M''(t) = n(n-1)\left(pe^t + (1-p)\right)^{n-2} p^2 e^{2t} + n\left(pe^t + (1-p)\right)^{n-1} pe^t.$$
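From this, $M''(0) = n(n-1)p^2 + np$, so $\mathrm{Var}[X] = M''(0) - M'(0)^2 = np(1-p)$. A symbolic check of both moments (a sketch assuming sympy):

import sympy as sp

t, p, n = sp.symbols("t p n", positive=True)
M = (p * sp.exp(t) + 1 - p) ** n      # the binomial mgf just derived

mean = sp.diff(M, t).subs(t, 0)
second = sp.diff(M, t, 2).subs(t, 0)
print(sp.simplify(mean))              # n*p
print(sp.simplify(second - mean**2))  # n*p*(1 - p), the binomial variance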
Example 2.10. Suppose $X$ has a Gamma distribution, $\mathrm{Ga}(\alpha, \beta)$. We find the mgf of $X$ as follows.
$$\begin{aligned}
M_X(t) &= \int_0^\infty e^{tx} \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} \exp(-\beta x)\, dx \\
&= \int_0^\infty \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} \exp(-x(\beta - t))\, dx \\
&= \frac{\beta^\alpha}{(\beta - t)^\alpha} \int_0^\infty \frac{(\beta - t)^\alpha}{\Gamma(\alpha)} x^{\alpha-1} \exp(-x(\beta - t))\, dx.
\end{aligned}$$
Now the integral is of the pdf of a $\mathrm{Ga}(\alpha, \beta - t)$ random variable and so is equal to 1. Note we have to have $t < \beta$ to make this work. That is fine, so long as we can let $t \to 0$, which we can.
Thus the mgf for a $\mathrm{Ga}(\alpha, \beta)$ random variable $X$ is
$$M_X(t) = \left(\frac{\beta}{\beta - t}\right)^\alpha.$$
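A Monte Carlo check of this mgf at a single point $t < \beta$ (a sketch assuming numpy; the parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(5)
alpha, beta, t = 2.5, 1.5, 0.4        # any t < beta will do

x = rng.gamma(shape=alpha, scale=1 / beta, size=500_000)
print(np.exp(t * x).mean(), (beta / (beta - t)) ** alpha)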
For $X \sim N(\mu, \sigma^2)$, completing the square in the exponent of $\int e^{tx} f(x)\, dx$ shows that
$$tx - \frac{(x - \mu)^2}{2\sigma^2} = -\frac{1}{2\sigma^2}\left[x - (\mu + \sigma^2 t)\right]^2 + \frac{2\mu\sigma^2 t + \sigma^4 t^2}{2\sigma^2},$$
and as the final term does not depend on $x$ we can take it outside the integral to give
$$M_X(t) = \int \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2}\left[x - (\mu + \sigma^2 t)\right]^2\right) dx \cdot \exp\left(\frac{2\mu\sigma^2 t + \sigma^4 t^2}{2\sigma^2}\right).$$
Now the function inside the integral is the pdf of a $N(\mu + \sigma^2 t, \sigma^2)$ rv and so the integral is equal to one. Thus
$$M_X(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right).$$
Now differentiating the mgf we find
$$M'(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)(\mu + \sigma^2 t),$$
so $M'(0) = \mu = E[X]$, and
$$M''(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)(\mu + \sigma^2 t)^2 + \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)\sigma^2,$$
so $M''(0) = \mu^2 + \sigma^2$ and hence $\mathrm{Var}[X] = \mu^2 + \sigma^2 - \mu^2 = \sigma^2$.
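The same moments fall out symbolically (a sketch assuming sympy):

import sympy as sp

t, mu = sp.symbols("t mu")
sigma = sp.symbols("sigma", positive=True)

M = sp.exp(mu * t + sigma**2 * t**2 / 2)   # the N(mu, sigma^2) mgf
mean = sp.diff(M, t).subs(t, 0)
var = sp.simplify(sp.diff(M, t, 2).subs(t, 0) - mean**2)
print(mean, var)                           # mu and sigma**2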
The moment generating function is also useful for proving other results, for example results about sums of random variables. Suppose $X_1, X_2, \ldots, X_n$ are independent rvs with mgfs $M_{X_i}(t)$, $i = 1, \ldots, n$, and let $Y = \sum_i X_i$. Then
$$M_Y(t) = \prod_{i=1}^n M_{X_i}(t).$$
To see this, write
$$\begin{aligned}
M_Y(t) &= E[e^{tY}] \\
&= E\left[e^{t \sum_i X_i}\right] \\
&= E\left[e^{tX_1} e^{tX_2} \cdots e^{tX_n}\right] \\
&= E[e^{tX_1}]\, E[e^{tX_2}] \cdots E[e^{tX_n}] \qquad \text{by independence} \\
&= M_{X_1}(t) M_{X_2}(t) \cdots M_{X_n}(t).
\end{aligned}$$
For example, if $X_1, \ldots, X_n$ are independent $\mathrm{Exp}(\beta)$ rvs, each has mgf $\beta/(\beta - t)$. Thus
$$M_Y(t) = \prod_{i=1}^n \frac{\beta}{\beta - t} = \left(\frac{\beta}{\beta - t}\right)^n,$$
which is the mgf of a $\mathrm{Ga}(n, \beta)$ distribution, so $Y \sim \mathrm{Ga}(n, \beta)$.
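A simulation check of this result (a sketch assuming numpy and scipy; $n$ and $\beta$ are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, beta = 5, 2.0

# Sum of n independent Exp(beta) rvs against the Ga(n, beta) cdf.
y = rng.exponential(scale=1 / beta, size=(200_000, n)).sum(axis=1)
for point in (1.0, 2.5, 4.0):
    print((y <= point).mean(), stats.gamma.cdf(point, a=n, scale=1 / beta))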
We give the next result, about sums of squares of standard normal rvs, as a theorem, as the result is important.
Theorem 2.4. Suppose $Y_1, \ldots, Y_n$ are independent, normally distributed with means $E[Y_i] = \mu_i$ and variances $\mathrm{Var}[Y_i] = \sigma_i^2$. Let $Z_i = (Y_i - \mu_i)/\sigma_i$, so that $Z_1, \ldots, Z_n$ are independent and each has a $N(0, 1)$ distribution. Then $\sum_i Z_i^2$ has a $\chi^2_n$ distribution.
Proof. We have seen before that each $Z_i^2$ has a $\chi^2_1$ distribution, so $M_{Z_i^2}(t) = (1 - 2t)^{-1/2}$. Let $V = \sum_i Z_i^2$. Then
$$M_V(t) = \prod_{i=1}^n M_{Z_i^2}(t) = \frac{1}{(1 - 2t)^{n/2}} = \left(\frac{1/2}{\frac{1}{2} - t}\right)^{n/2},$$
which is the mgf of a $\mathrm{Ga}(n/2, 1/2)$, that is a $\chi^2_n$, distribution.
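A final simulation check of Theorem 2.4 (a sketch assuming numpy and scipy; $n = 4$ is arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 4

# V = sum of n squared N(0, 1) rvs against the chi-squared(n) cdf.
v = (rng.standard_normal((100_000, n)) ** 2).sum(axis=1)
for point in (2.0, 4.0, 8.0):
    print((v <= point).mean(), stats.chi2.cdf(point, df=n))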