
Transformations

Dear students,

Since we have covered the mgf technique extensively already, here we only review the cdf and the pdf techniques, first for univariate (one-to-one and more-to-one) and then for bivariate (one-to-one and more-to-one) transformations.

1. The cumulative distribution function (cdf) technique
Suppose Y is a continuous random variable with cumulative distribution function (cdf) 𝐹𝑌 (𝑦) ≡ 𝑃(𝑌 ≤ 𝑦). Let 𝑈 = 𝑔(𝑌) be a function of Y, and our goal is to find the distribution of U. The cdf technique is especially convenient when the cdf 𝐹𝑌 (𝑦) has a closed-form analytical expression. This method can be used for both univariate and bivariate transformations.

Steps of the cdf technique:


1. Identify the domain of Y and U.
2. Write 𝐹𝑈 (𝑢) = 𝑃(𝑈 ≤ 𝑢), the cdf of U, in terms of 𝐹𝑌 (𝑦), the cdf of Y.
3. Differentiate 𝐹𝑈 (𝑢) to obtain the pdf of U, 𝑓𝑈 (𝑢).

Example 1. Suppose that 𝑌 ~ 𝑈(0,1). Find the distribution of 𝑈 = 𝑔(𝑌) = − ln 𝑌.
Solution. The cdf of 𝑌 ~ 𝑈(0,1) is given by

F_Y(y) = \begin{cases} 0, & y \le 0 \\ y, & 0 < y \le 1 \\ 1, & y \ge 1 \end{cases}

The domain (that is, the region where the pdf is non-zero) for 𝑌 ~ 𝑈(0,1) is 𝑅𝑌 = {𝑦: 0 < 𝑦 < 1}; thus, because 𝑢 = − ln 𝑦 > 0 for 0 < 𝑦 < 1, the domain for U is 𝑅𝑈 = {𝑢: 𝑢 > 0}.
The cdf of U is:

F_U(u) = P(U \le u) = P(-\ln Y \le u)
       = P(\ln Y > -u)
       = P(Y > e^{-u}) = 1 - P(Y \le e^{-u})
       = 1 - F_Y(e^{-u})

Because 𝐹𝑌 (𝑦) = 𝑦 for 0 < y < 1, and 0 < e^{-u} < 1 whenever u > 0, we have, for u > 0,

F_U(u) = 1 - F_Y(e^{-u}) = 1 - e^{-u}

Taking derivatives, we get, for u > 0,

f_U(u) = \frac{d}{du} F_U(u) = \frac{d}{du}\left(1 - e^{-u}\right) = e^{-u}
Summarizing,

f_U(u) = \begin{cases} e^{-u}, & u > 0 \\ 0, & \text{otherwise.} \end{cases}

This is an exponential pdf with mean λ = 1; that is, U ~ exponential(λ = 1). □
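
As a quick numerical sanity check (an added sketch, not part of the original derivation; it assumes numpy and scipy are available), we can simulate U = − ln Y for Y ~ U(0,1) and compare the sample with the exponential(1) distribution just derived:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.uniform(0.0, 1.0, size=100_000)   # Y ~ U(0, 1)
u = -np.log(y)                            # U = g(Y) = -ln(Y)

# Kolmogorov-Smirnov test against the exponential(1) cdf derived above;
# a small statistic / large p-value is consistent with U ~ exponential(1).
print(stats.kstest(u, stats.expon(scale=1.0).cdf))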

Example 2. Suppose that 𝑌 ~ 𝑈(− 𝜋⁄2 , 𝜋⁄2). Find the distribution of the random variable defined by U = g(Y) = tan(Y).
Solution. The cdf of 𝑌 ~ 𝑈(− 𝜋⁄2 , 𝜋⁄2) is given by

F_Y(y) = \begin{cases} 0, & y \le -\pi/2 \\ \dfrac{y + \pi/2}{\pi}, & -\pi/2 < y \le \pi/2 \\ 1, & y \ge \pi/2 \end{cases}

The domain for Y is 𝑅𝑌 = {𝑦: − 𝜋⁄2 < 𝑦 < 𝜋⁄2}. Sketching a graph of the tangent function from − 𝜋⁄2 to 𝜋⁄2, we see that −∞ < 𝑢 < ∞. Thus, 𝑅𝑈 = {𝑢: −∞ < 𝑢 < ∞} ≡ ℛ, the set of all reals. The cdf of U is:
F_U(u) = P(U \le u) = P[\tan(Y) \le u]
       = P[Y \le \tan^{-1}(u)]
       = F_Y[\tan^{-1}(u)]
Because 𝐹𝑌 (𝑦) = (𝑦 + 𝜋⁄2)⁄𝜋 for − 𝜋⁄2 < 𝑦 < 𝜋⁄2, and tan^{-1}(u) lies in this interval for every 𝑢 ∈ ℛ, we have

F_U(u) = F_Y[\tan^{-1}(u)] = \frac{\tan^{-1}(u) + \pi/2}{\pi}
The pdf of U, for 𝑢 ∈ ℛ, is given by

f_U(u) = \frac{d}{du} F_U(u) = \frac{d}{du}\left[\frac{\tan^{-1}(u) + \pi/2}{\pi}\right] = \frac{1}{\pi(1 + u^2)}.
Summarizing,

f_U(u) = \begin{cases} \dfrac{1}{\pi(1 + u^2)}, & -\infty < u < \infty \\ 0, & \text{otherwise.} \end{cases}

A random variable with this pdf is said to have a (standard) Cauchy distribution. One interesting fact about a Cauchy random variable is that none of its moments are finite. Thus, if U has a Cauchy distribution, E(U), and all higher-order moments, do not exist.
Exercise: If U is standard Cauchy, show that 𝐸(|𝑈|) = +∞. □
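
The non-existence of the mean can also be seen empirically. The sketch below (an added illustration, assuming numpy is available) simulates U = tan(Y) and tracks the running sample mean, which keeps jumping around rather than converging, unlike what happens for a distribution with a finite mean:

import numpy as np

rng = np.random.default_rng(2)
y = rng.uniform(-np.pi / 2, np.pi / 2, size=1_000_000)  # Y ~ U(-pi/2, pi/2)
u = np.tan(y)                                           # U = tan(Y), standard Cauchy

running_mean = np.cumsum(u) / np.arange(1, u.size + 1)
# Running means after 10^3, 10^4, 10^5, and 10^6 draws; they do not settle down.
print(running_mean[[10**3 - 1, 10**4 - 1, 10**5 - 1, 10**6 - 1]])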

2. The probability density function (pdf) technique, univariate

Suppose that Y is a continuous random variable with cdf 𝐹𝑌 (𝑦) and domain 𝑅𝑌 , and let 𝑈 = 𝑔(𝑌), where 𝑔: 𝑅𝑌 → ℛ is a continuous, one-to-one function defined over 𝑅𝑌 . Examples of such functions include continuous (strictly) increasing/decreasing functions. Recall from calculus that if 𝑔 is one-to-one, it has a unique inverse 𝑔^{-1}. Also recall that if 𝑔 is increasing (decreasing), then so is 𝑔^{-1}.

Derivation of the pdf technique formula using the cdf method:
Suppose that 𝑔(𝑦) is a strictly increasing function of y defined over 𝑅𝑌 . Then, it follows that 𝑢 = 𝑔(𝑦) ⇔ 𝑔^{-1}(𝑢) = 𝑦 and
𝐹𝑈 (𝑢) = 𝑃(𝑈 ≤ 𝑢) = 𝑃[𝑔(𝑌) ≤ 𝑢]
= 𝑃[𝑌 ≤ 𝑔−1 (𝑢)] = 𝐹𝑌 [𝑔−1 (𝑢)]

Differentiating 𝐹𝑈 (𝑢) with respect to u, we get

f_U(u) = \frac{d}{du} F_U(u) = \frac{d}{du} F_Y[g^{-1}(u)]
       = f_Y[g^{-1}(u)] \, \frac{d}{du} g^{-1}(u)   (by the chain rule)

Now, as 𝑔 is increasing, so is 𝑔^{-1}; thus, \frac{d}{du} g^{-1}(u) > 0. If 𝑔(𝑦) is strictly decreasing, then F_U(u) = 1 - F_Y[g^{-1}(u)] and \frac{d}{du} g^{-1}(u) < 0, which gives

f_U(u) = \frac{d}{du} F_U(u) = \frac{d}{du}\left\{1 - F_Y[g^{-1}(u)]\right\}
       = -f_Y[g^{-1}(u)] \, \frac{d}{du} g^{-1}(u)

Combining both cases, we have shown that the pdf of U, where nonzero, is given by

f_U(u) = f_Y[g^{-1}(u)] \left| \frac{d}{du} g^{-1}(u) \right|.

It is again important to keep track of the domain for U. If 𝑅𝑌 denotes the domain of Y, then 𝑅𝑈 , the domain for U, is given by 𝑅𝑈 = {𝑢: 𝑢 = 𝑔(𝑦); 𝑦 ∈ 𝑅𝑌 }.

Steps of the pdf technique:


1. Verify that the transformation u = g(y) is continuous and
one-to-one over 𝑅𝑌 .
2. Find the domains of Y and U.
3. Find the inverse transformation 𝑦 = 𝑔−1 (𝑢) and its
derivative (with respect to u).
4. Use the formula above for 𝑓𝑈 (𝑢).
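
These steps can also be carried out symbolically. The following sketch (an added illustration, not part of the original notes; it assumes sympy is available) applies the formula f_U(u) = f_Y[g^{-1}(u)] |d g^{-1}(u)/du| to Example 2, where f_Y equals 1/π on the domain and g^{-1}(u) = tan^{-1}(u), and recovers the Cauchy pdf:

import sympy as sp

u = sp.symbols('u', real=True)

# Example 2 revisited with the pdf technique:
# Y ~ U(-pi/2, pi/2) has pdf 1/pi on its domain, and g(y) = tan(y)
# has inverse g^{-1}(u) = arctan(u).
f_Y_at_g_inv = 1 / sp.pi          # f_Y evaluated at g^{-1}(u)
g_inv = sp.atan(u)                # inverse transformation

f_U = f_Y_at_g_inv * sp.Abs(sp.diff(g_inv, u))
print(sp.simplify(f_U))           # expected: 1/(pi*(u**2 + 1))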

Example 3. Suppose that Y ~ exponential(β); i.e., the pdf of Y is

f_Y(y) = \begin{cases} \dfrac{1}{\beta} e^{-y/\beta}, & y > 0 \\ 0, & \text{otherwise.} \end{cases}

Let 𝑈 = 𝑔(𝑌) = √𝑌. Use the method of transformations to find the pdf of U.
Solution. First, we note that the transformation 𝑔(𝑦) = √𝑦 is a continuous, strictly increasing function of y over 𝑅𝑌 = {𝑦: 𝑦 > 0}, and, thus, 𝑔 is one-to-one. Next, we need to find the domain of U. This is easy since y > 0 implies 𝑢 = √𝑦 > 0 as well.
Thus, 𝑅𝑈 = {𝑢: 𝑢 > 0}. Now, we find the inverse
transformation:

g(y) = u = \sqrt{y} \;\Longleftrightarrow\; y = g^{-1}(u) = u^2

and its derivative:

\frac{d}{du} g^{-1}(u) = \frac{d}{du}\left(u^2\right) = 2u.
Thus, for u > 0,

f_U(u) = f_Y[g^{-1}(u)] \left| \frac{d}{du} g^{-1}(u) \right|
       = \frac{1}{\beta} e^{-u^2/\beta} \times |2u| = \frac{2u}{\beta} e^{-u^2/\beta}.

Summarizing,

f_U(u) = \begin{cases} \dfrac{2u}{\beta} e^{-u^2/\beta}, & u > 0 \\ 0, & \text{otherwise.} \end{cases}

This is a Weibull pdf (with shape parameter 2 and scale parameter √β). The Weibull family of distributions is common in life science (survival analysis), engineering, and actuarial science applications. □
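
As an added numerical check (a sketch assuming numpy is available, not part of the original example), the sample mean of U = √Y can be compared with the exact mean E(U) = √β Γ(3/2) = √(πβ)/2 of the Weibull pdf above:

import numpy as np

beta = 3.0
rng = np.random.default_rng(3)
y = rng.exponential(scale=beta, size=200_000)   # Y ~ exponential with mean beta
u = np.sqrt(y)                                  # U = sqrt(Y)

# Sample mean of U versus the exact mean sqrt(pi * beta) / 2 of the
# Weibull (shape 2, scale sqrt(beta)) distribution derived above.
print(u.mean(), np.sqrt(np.pi * beta) / 2)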

Example 4. Suppose that Y ~ beta(α = 6, β = 2); i.e., the pdf of Y is given by

f_Y(y) = \begin{cases} 42 y^5 (1 - y), & 0 < y < 1 \\ 0, & \text{otherwise.} \end{cases}

What is the distribution of U = g(Y) = 1 − Y?
Solution. First, we note that the transformation g(y) = 1 − y is a continuous, decreasing function of y over 𝑅𝑌 = {𝑦: 0 < 𝑦 < 1}, and, thus, g(y) is one-to-one. Next, we need to find the domain of U. This is easy since 0 < y < 1 clearly implies 0 < u < 1. Thus, 𝑅𝑈 = {𝑢: 0 < 𝑢 < 1}. Now, we find the inverse transformation:

g(y) = u = 1 - y \;\Longleftrightarrow\; y = g^{-1}(u) = 1 - u

and its derivative:

\frac{d}{du} g^{-1}(u) = \frac{d}{du}(1 - u) = -1.
Thus, for 0 < u < 1,

f_U(u) = f_Y[g^{-1}(u)] \left| \frac{d}{du} g^{-1}(u) \right|
       = 42(1 - u)^5\,[1 - (1 - u)] \times |-1| = 42\,u(1 - u)^5.

Summarizing,

f_U(u) = \begin{cases} 42\,u(1 - u)^5, & 0 < u < 1 \\ 0, & \text{otherwise.} \end{cases}

We recognize this as a beta distribution with parameters α = 2 and β = 6. □
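
A brief added check (assuming numpy and scipy are available; not part of the original notes) that 1 − Y behaves like a beta(2, 6) random variable:

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.beta(6, 2, size=100_000)   # Y ~ beta(alpha = 6, beta = 2)
u = 1.0 - y                        # U = 1 - Y

# Kolmogorov-Smirnov test of U against the beta(2, 6) cdf derived above.
print(stats.kstest(u, stats.beta(2, 6).cdf))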

More-to-one transformation: What happens if u = g(y) is not a one-to-one transformation? In this case, we can still use the method of transformations, but we have to “break up” the transformation 𝑔: 𝑅𝑌 → 𝑅𝑈 into disjoint regions where g is one-to-one.

RESULT: Suppose that Y is a continuous random variable with pdf 𝑓𝑌 (𝑦) and that U = g(Y) is a continuous, but not necessarily one-to-one, function of y over 𝑅𝑌 . Suppose that we can partition 𝑅𝑌 into a finite collection of sets, say, A0, A1, A2, …, Ak, where 𝑃(𝑌 ∈ 𝐴0 ) = 0 and 𝑃(𝑌 ∈ 𝐴𝑖 ) > 0 for all i ≠ 0, and 𝑓𝑌 (𝑦) is continuous on each Ai, i ≠ 0. Furthermore, suppose that the transformation is one-to-one from each Ai (i = 1, 2, …, k) to B, where B is the domain of U = g(Y), so that 𝑔𝑖^{-1}(∙), the inverse of g restricted to Ai, is a one-to-one mapping from B to Ai.

Then, the pdf of U is given by

f_U(u) = \begin{cases} \sum_{i=1}^{k} f_Y[g_i^{-1}(u)] \left| \dfrac{d}{du} g_i^{-1}(u) \right|, & u \in R_U \\ 0, & \text{otherwise.} \end{cases}

That is, the pdf of U is obtained by adding up the terms f_Y[g_i^{-1}(u)] \left| \frac{d}{du} g_i^{-1}(u) \right| corresponding to each disjoint set Ai, i = 1, 2, …, k.

Example 5. Suppose that Y ~ N(0, 1); that is, Y has a standard normal distribution; i.e.,

f_Y(y) = \begin{cases} \dfrac{1}{\sqrt{2\pi}} e^{-y^2/2}, & -\infty < y < \infty \\ 0, & \text{otherwise.} \end{cases}

Consider the transformation: 𝑈 = 𝑔(𝑌) = 𝑌 2 .

Solution 1 (the pdf technique):

This transformation is not one-to-one on 𝑅𝑌 = ℛ = {𝑦: −∞ < 𝑦 < ∞}, but it is one-to-one on A1 = (−∞, 0) and A2 = (0, ∞) (separately), since 𝑔(𝑦) = 𝑦² is decreasing on A1 and increasing on A2, and A0 = {0}, where 𝑃(𝑌 ∈ 𝐴0 ) = 𝑃(𝑌 = 0) = 0. Furthermore, note that A0, A1, and A2 partition 𝑅𝑌 .
Summarizing,

Partition        Transformation     Inverse transformation
A1 = (−∞, 0)     g(y) = y² = u      g_1^{-1}(u) = −√u = y
A2 = (0, ∞)      g(y) = y² = u      g_2^{-1}(u) = √u = y

And, on both sets A1 and A2,

\left| \frac{d}{du} g_i^{-1}(u) \right| = \frac{1}{2\sqrt{u}}.
Clearly, 𝑢 = 𝑦 2 > 0; thus, 𝑅𝑈 = {𝑢: 𝑢 > 0}, and the pdf of U is
given by
f_U(u) = \begin{cases} \dfrac{1}{\sqrt{2\pi}} e^{-(-\sqrt{u})^2/2} \left(\dfrac{1}{2\sqrt{u}}\right) + \dfrac{1}{\sqrt{2\pi}} e^{-(\sqrt{u})^2/2} \left(\dfrac{1}{2\sqrt{u}}\right), & u > 0 \\ 0, & \text{otherwise.} \end{cases}
Thus, for u > 0, and recalling that Γ(1/2) = √π, f_U(u) collapses to

f_U(u) = \frac{2}{\sqrt{2\pi}} e^{-u/2} \left(\frac{1}{2\sqrt{u}}\right)
       = \frac{1}{\sqrt{2\pi}} u^{1/2 - 1} e^{-u/2}
       = \frac{1}{\Gamma(1/2)\, 2^{1/2}} u^{1/2 - 1} e^{-u/2}.
Summarizing, the pdf of U is

f_U(u) = \begin{cases} \dfrac{1}{\Gamma(1/2)\, 2^{1/2}} u^{1/2 - 1} e^{-u/2}, & u > 0 \\ 0, & \text{otherwise.} \end{cases}

That is, U ~ gamma(1/2, 2). Recall that the gamma(1/2, 2)
distribution is the same as a 𝜒 2 distribution with 1 degree of
freedom; that is, 𝑈 ~ 𝜒 2 (1). □

Solution 2 (the cdf technique):

F_U(u) = P(U \le u) = P(Y^2 \le u) = 1 - P(Y^2 > u)
       = 1 - P(Y > \sqrt{u} \text{ or } Y < -\sqrt{u})
       = 1 - P(Y > \sqrt{u}) - P(Y < -\sqrt{u})
       = P(Y \le \sqrt{u}) - P(Y \le -\sqrt{u}) = F_Y(\sqrt{u}) - F_Y(-\sqrt{u})

Taking derivatives with respect to 𝑢 on both sides, we have, for u > 0:

f_U(u) = f_Y(\sqrt{u}) \frac{d\sqrt{u}}{du} + f_Y(-\sqrt{u}) \frac{d\sqrt{u}}{du}
       = \frac{1}{\sqrt{2\pi}} e^{-(\sqrt{u})^2/2} \left(\frac{1}{2\sqrt{u}}\right) + \frac{1}{\sqrt{2\pi}} e^{-(-\sqrt{u})^2/2} \left(\frac{1}{2\sqrt{u}}\right)
       = \frac{1}{\sqrt{2\pi}} u^{1/2 - 1} e^{-u/2} = \frac{1}{\Gamma(1/2)\, 2^{1/2}} u^{1/2 - 1} e^{-u/2}.

That is, U ~ gamma(1/2, 2). Recall that the gamma(1/2, 2) distribution is the same as a 𝜒² distribution with 1 degree of freedom; that is, 𝑈 ~ 𝜒²(1). □
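
Both derivations can be checked against simulation. The sketch below (added here, assuming numpy and scipy are available) compares sample quantiles of Y² with the corresponding chi-square(1) quantiles:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
u = rng.standard_normal(200_000) ** 2      # U = Y^2 with Y ~ N(0, 1)

probs = np.array([0.1, 0.25, 0.5, 0.75, 0.9, 0.99])
# Sample quantiles of Y^2 should agree closely with chi-square(1) quantiles.
print(np.quantile(u, probs))
print(stats.chi2(df=1).ppf(probs))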

3. The probability density function (pdf) technique, bivariate
Here we discuss transformations involving two random variables, 𝑌1 and 𝑌2 . The bivariate transformation is
𝑈1 = 𝑔1 (𝑌1 , 𝑌2 )
𝑈2 = 𝑔2 (𝑌1 , 𝑌2 )
Assuming that 𝑌1 and 𝑌2 are jointly continuous random
variables, we will discuss the one-to-one transformation first.
Starting with the joint distribution of 𝒀 = (𝑌1 , 𝑌2 ), our goal is
to derive the joint distribution of 𝑼 = (𝑈1 , 𝑈2 ).

Suppose that 𝒀 = (𝑌1 , 𝑌2 ) is a continuous random vector with joint pdf 𝑓𝑌1 ,𝑌2 (𝑦1 , 𝑦2 ). Let 𝑔: ℛ² → ℛ² be a continuous one-to-one vector-valued mapping from 𝑅𝑌1 ,𝑌2 to 𝑅𝑈1 ,𝑈2 , where 𝑈1 = 𝑔1 (𝑌1 , 𝑌2 ) and 𝑈2 = 𝑔2 (𝑌1 , 𝑌2 ), and where 𝑅𝑌1 ,𝑌2 and 𝑅𝑈1 ,𝑈2 denote the two-dimensional domains of 𝒀 = (𝑌1 , 𝑌2 ) and 𝑼 = (𝑈1 , 𝑈2 ), respectively. Suppose that 𝑔1^{-1}(𝑢1 , 𝑢2 ) and 𝑔2^{-1}(𝑢1 , 𝑢2 ) have continuous partial derivatives with respect to both 𝑢1 and 𝑢2 , and that the Jacobian, J, satisfies (with “det” denoting “determinant”)

J = \det \begin{pmatrix} \dfrac{\partial g_1^{-1}(u_1, u_2)}{\partial u_1} & \dfrac{\partial g_1^{-1}(u_1, u_2)}{\partial u_2} \\ \dfrac{\partial g_2^{-1}(u_1, u_2)}{\partial u_1} & \dfrac{\partial g_2^{-1}(u_1, u_2)}{\partial u_2} \end{pmatrix} \ne 0.

Then

f_{U_1,U_2}(u_1, u_2) = \begin{cases} f_{Y_1,Y_2}[g_1^{-1}(u_1, u_2),\, g_2^{-1}(u_1, u_2)]\,|J|, & (u_1, u_2) \in R_{U_1,U_2} \\ 0, & \text{otherwise,} \end{cases}

where |J| denotes the absolute value of J.

RECALL: The determinant of a 2 × 2 matrix is

\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.

Steps of the pdf technique:


1. Find 𝑓𝑌1 ,𝑌2 (𝑦1 , 𝑦2 ), the joint distribution of 𝑌1 and 𝑌2 . This may
be given in the problem. If Y1 and Y2 are independent, then
𝑓𝑌1 ,𝑌2 (𝑦1 , 𝑦2 ) = 𝑓𝑌1 (𝑦1 )𝑓𝑌2 (𝑦2 ).
2. Find 𝑅𝑈1 ,𝑈2 , the domain of 𝑼 = (𝑈1 , 𝑈2 ).
3. Find the inverse transformations 𝑦1 = 𝑔1 −1 (𝑢1 , 𝑢2 ) and 𝑦2 =
𝑔2 −1 (𝑢1 , 𝑢2 ).
4. Find the Jacobian, J, of the inverse transformation.
5. Use the formula above to find 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ), the joint
distribution of 𝑈1 and 𝑈2 .

NOTE: If desired, the marginal distributions 𝑓𝑈1 (𝑢1 ) and 𝑓𝑈2 (𝑢2 ) can be found by integrating the joint distribution 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ).

Example 6. Suppose that 𝑌1 ~ gamma(α, 1), 𝑌2 ~ gamma(β, 1), and that 𝑌1 and 𝑌2 are independent. Define the transformation

U_1 = g_1(Y_1, Y_2) = Y_1 + Y_2
U_2 = g_2(Y_1, Y_2) = \frac{Y_1}{Y_1 + Y_2}.
Find each of the following distributions:
(a) 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ), the joint distribution of 𝑈1 and 𝑈2 ,
(b) 𝑓𝑈1 (𝑢1 ), the marginal distribution of 𝑈1 , and
(c) 𝑓𝑈2 (𝑢2 ), the marginal distribution of 𝑈2 .

Solutions. (a) Since 𝑌1 and 𝑌2 are independent, the joint
distribution of 𝑌1 and 𝑌2 is
f_{Y_1,Y_2}(y_1, y_2) = f_{Y_1}(y_1)\, f_{Y_2}(y_2)
= \frac{1}{\Gamma(\alpha)} y_1^{\alpha-1} e^{-y_1} \times \frac{1}{\Gamma(\beta)} y_2^{\beta-1} e^{-y_2}
= \frac{1}{\Gamma(\alpha)\Gamma(\beta)} y_1^{\alpha-1} y_2^{\beta-1} e^{-(y_1+y_2)},

for 𝑦1 > 0, 𝑦2 > 0, and 0, otherwise. Here, 𝑅𝑌1 ,𝑌2 = {(𝑦1 , 𝑦2 ): 𝑦1 > 0, 𝑦2 > 0}. By inspection, we see that 𝑢1 = 𝑦1 + 𝑦2 > 0, and 𝑢2 = 𝑦1 /(𝑦1 + 𝑦2 ) must fall between 0 and 1. Thus, the domain of 𝑼 = (𝑈1 , 𝑈2 ) is given by 𝑅𝑈1 ,𝑈2 = {(𝑢1 , 𝑢2 ): 𝑢1 > 0, 0 < 𝑢2 < 1}.
The next step is to derive the inverse transformation. It follows that

u_1 = y_1 + y_2, \quad u_2 = \frac{y_1}{y_1 + y_2} \;\Longrightarrow\; y_1 = g_1^{-1}(u_1, u_2) = u_1 u_2, \quad y_2 = g_2^{-1}(u_1, u_2) = u_1 - u_1 u_2
The Jacobian is given by

J = \det \begin{pmatrix} \dfrac{\partial g_1^{-1}(u_1, u_2)}{\partial u_1} & \dfrac{\partial g_1^{-1}(u_1, u_2)}{\partial u_2} \\ \dfrac{\partial g_2^{-1}(u_1, u_2)}{\partial u_1} & \dfrac{\partial g_2^{-1}(u_1, u_2)}{\partial u_2} \end{pmatrix} = \det \begin{pmatrix} u_2 & u_1 \\ 1 - u_2 & -u_1 \end{pmatrix} = -u_1 u_2 - u_1(1 - u_2) = -u_1.
We now write the joint distribution for 𝑼 = (𝑈1 , 𝑈2 ). For 𝑢1 > 0 and 0 < 𝑢2 < 1, we have that

f_{U_1,U_2}(u_1, u_2) = f_{Y_1,Y_2}[g_1^{-1}(u_1, u_2),\, g_2^{-1}(u_1, u_2)]\,|J|
= \frac{1}{\Gamma(\alpha)\Gamma(\beta)} (u_1 u_2)^{\alpha-1} (u_1 - u_1 u_2)^{\beta-1} e^{-[(u_1 u_2) + (u_1 - u_1 u_2)]} \times |{-u_1}|
= \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1}\, u_2^{\alpha-1} (1 - u_2)^{\beta-1}.

Note: We see that 𝑈1 and 𝑈2 are independent, since the domain 𝑅𝑈1 ,𝑈2 = {(𝑢1 , 𝑢2 ): 𝑢1 > 0, 0 < 𝑢2 < 1} does not constrain 𝑢1 by 𝑢2 (or vice versa) and since the nonzero part of 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ) can be factored into the two expressions ℎ1 (𝑢1 ) and ℎ2 (𝑢2 ), where

h_1(u_1) = u_1^{\alpha+\beta-1} e^{-u_1} \quad\text{and}\quad h_2(u_2) = \frac{u_2^{\alpha-1}(1 - u_2)^{\beta-1}}{\Gamma(\alpha)\Gamma(\beta)}.
(b) To obtain the marginal distribution of 𝑈1 , we integrate the joint pdf 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ) over 𝑢2 . That is, for 𝑢1 > 0,

f_{U_1}(u_1) = \int_{u_2=0}^{1} f_{U_1,U_2}(u_1, u_2)\, du_2
= \int_{0}^{1} \frac{u_2^{\alpha-1}(1 - u_2)^{\beta-1}}{\Gamma(\alpha)\Gamma(\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1}\, du_2
= \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1} \int_{0}^{1} u_2^{\alpha-1}(1 - u_2)^{\beta-1}\, du_2
= \frac{1}{\Gamma(\alpha)\Gamma(\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1} \times \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)}
= \frac{1}{\Gamma(\alpha + \beta)}\, u_1^{\alpha+\beta-1} e^{-u_1},

where the second-to-last equality uses the fact that u_2^{\alpha-1}(1 - u_2)^{\beta-1} is the beta(α, β) kernel, so its integral over (0, 1) equals Γ(α)Γ(β)/Γ(α + β).
Summarizing,

f_{U_1}(u_1) = \begin{cases} \dfrac{1}{\Gamma(\alpha + \beta)}\, u_1^{\alpha+\beta-1} e^{-u_1}, & u_1 > 0 \\ 0, & \text{otherwise.} \end{cases}

We recognize this as a gamma(𝛼 + 𝛽, 1) pdf; thus, marginally, 𝑈1 ~ gamma(𝛼 + 𝛽, 1).
(c) To obtain the marginal distribution of 𝑈2 , we integrate the joint pdf 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ) over 𝑢1 . That is, for 0 < 𝑢2 < 1,

f_{U_2}(u_2) = \int_{u_1=0}^{\infty} f_{U_1,U_2}(u_1, u_2)\, du_1
= \int_{0}^{\infty} \frac{u_2^{\alpha-1}(1 - u_2)^{\beta-1}}{\Gamma(\alpha)\Gamma(\beta)}\, u_1^{\alpha+\beta-1} e^{-u_1}\, du_1
= \frac{u_2^{\alpha-1}(1 - u_2)^{\beta-1}}{\Gamma(\alpha)\Gamma(\beta)} \int_{0}^{\infty} u_1^{\alpha+\beta-1} e^{-u_1}\, du_1
= \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, u_2^{\alpha-1}(1 - u_2)^{\beta-1},

since the last integral is the gamma(α + β, 1) kernel, which equals Γ(α + β).
Summarizing,

f_{U_2}(u_2) = \begin{cases} \dfrac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, u_2^{\alpha-1}(1 - u_2)^{\beta-1}, & 0 < u_2 < 1 \\ 0, & \text{otherwise.} \end{cases}

Thus, marginally, U2 ~ beta(𝛼, 𝛽). □
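
The three conclusions of this example (the gamma marginal, the beta marginal, and the independence of U1 and U2) can be checked by simulation. A minimal added sketch, assuming numpy and scipy are available:

import numpy as np
from scipy import stats

alpha, beta_ = 2.0, 5.0
rng = np.random.default_rng(7)
y1 = rng.gamma(alpha, 1.0, size=200_000)   # Y1 ~ gamma(alpha, 1)
y2 = rng.gamma(beta_, 1.0, size=200_000)   # Y2 ~ gamma(beta, 1)

u1 = y1 + y2
u2 = y1 / (y1 + y2)

print(stats.kstest(u1, stats.gamma(alpha + beta_).cdf))   # U1 ~ gamma(alpha + beta, 1)
print(stats.kstest(u2, stats.beta(alpha, beta_).cdf))     # U2 ~ beta(alpha, beta)
print(np.corrcoef(u1, u2)[0, 1])                          # near 0, consistent with independence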

REMARK: Suppose that 𝒀 = (𝑌1 , 𝑌2 ) is a continuous random vector with joint pdf 𝑓𝑌1 ,𝑌2 (𝑦1 , 𝑦2 ), and suppose that we would like to find the distribution of a single random variable 𝑈1 = 𝑔1 (𝑌1 , 𝑌2 ). Even though there is no 𝑈2 present here, the bivariate transformation technique can still be useful. In this case, we can devise an “extra variable” 𝑈2 = 𝑔2 (𝑌1 , 𝑌2 ), perform the bivariate transformation to obtain 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ), and then find the marginal distribution of 𝑈1 by integrating 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ) out over the dummy variable 𝑢2 . While the choice of 𝑈2 is arbitrary, there are certainly bad choices. Stick with something easy; usually 𝑈2 = 𝑔2 (𝑌1 , 𝑌2 ) = 𝑌2 does the trick.

Exercise (Homework 3, Question 1): Suppose that 𝑌1 and 𝑌2 are random variables with joint pdf

f_{Y_1,Y_2}(y_1, y_2) = \begin{cases} 8 y_1 y_2, & 0 < y_1 < y_2 < 1 \\ 0, & \text{otherwise.} \end{cases}

Find the pdf of 𝑈1 = 𝑌1 /𝑌2 .

More-to-one transformation: What happens if the transformation of Y to U is not a one-to-one transformation? In this case, similar to the univariate transformation, we can still use the pdf technique, but we have to “break up” the transformation 𝒈: 𝑅𝒀 → 𝑅𝑼 into disjoint regions where g is one-to-one.

RESULT: Suppose that 𝒀 = (𝑌1 , 𝑌2 ) is a continuous bivariate random vector with pdf 𝑓𝑌1 ,𝑌2 (𝑦1 , 𝑦2 ) and that 𝑈1 = 𝑔1 (𝑌1 , 𝑌2 ), 𝑈2 = 𝑔2 (𝑌1 , 𝑌2 ), where 𝑼 = (𝑈1 , 𝑈2 ) is a continuous, but not necessarily one-to-one, function of (𝑦1 , 𝑦2 ) over 𝑅𝒀 = 𝑅𝑌1 ,𝑌2 . Furthermore, suppose that we can partition 𝑅𝒀 into a finite collection of sets, say, A0, A1, A2, …, Ak, where 𝑃(𝒀 ∈ 𝐴0 ) = 0 and 𝑃(𝒀 ∈ 𝐴𝑖 ) > 0 for all i ≠ 0, and 𝑓𝑌1 ,𝑌2 (𝑦1 , 𝑦2 ) is continuous on each Ai, i ≠ 0. Furthermore, suppose that the transformation is one-to-one from each Ai (i = 1, 2, …, k) to B, where B is the domain of 𝑼 = (𝑈1 = 𝑔1 (𝑌1 , 𝑌2 ), 𝑈2 = 𝑔2 (𝑌1 , 𝑌2 )), so that (𝑔1𝑖^{-1}(∙), 𝑔2𝑖^{-1}(∙)) is a one-to-one inverse mapping from B to Ai.

Let Ji denote the Jacobian computed from the ith inverse, i = 1, 2, …, k. Then, the pdf of U is given by

f_{U_1,U_2}(u_1, u_2) = \begin{cases} \sum_{i=1}^{k} f_{Y_1,Y_2}[g_{1i}^{-1}(u_1, u_2),\, g_{2i}^{-1}(u_1, u_2)]\,|J_i|, & (u_1, u_2) \in B = R_{U_1,U_2} \\ 0, & \text{otherwise.} \end{cases}

Example 7. Suppose that 𝑌1 ~ N(0, 1), 𝑌2 ~ N(0, 1), and that 𝑌1 and 𝑌2 are independent. Define the transformation

U_1 = g_1(Y_1, Y_2) = \frac{Y_1}{Y_2}
U_2 = g_2(Y_1, Y_2) = |Y_2|.
Find each of the following distributions:

(a) 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ), the joint distribution of 𝑈1 and 𝑈2 ,
(b) 𝑓𝑈1 (𝑢1 ), the marginal distribution of 𝑈1 .
Solutions. (a) Since 𝑌1 and 𝑌2 are independent, the joint
distribution of 𝑌1 and 𝑌2 is
f_{Y_1,Y_2}(y_1, y_2) = f_{Y_1}(y_1)\, f_{Y_2}(y_2) = \frac{1}{2\pi} e^{-y_1^2/2} e^{-y_2^2/2}

Here, 𝑅𝑌1 ,𝑌2 = {(𝑦1 , 𝑦2 ): −∞ < 𝑦1 < ∞, −∞ < 𝑦2 < ∞}.

The transformation of Y to U is not one-to-one because the points (𝑦1 , 𝑦2 ) and (−𝑦1 , −𝑦2 ) are both mapped to the same point (𝑢1 , 𝑢2 ). But if we restrict consideration to either positive or negative values of 𝑦2 , then the transformation is one-to-one. We note that the three sets below form a partition of 𝑅𝑌1 ,𝑌2 , with 𝐴1 = {(𝑦1 , 𝑦2 ): 𝑦2 > 0}, 𝐴2 = {(𝑦1 , 𝑦2 ): 𝑦2 < 0}, and 𝐴0 = {(𝑦1 , 𝑦2 ): 𝑦2 = 0}.

The domain of U, 𝐵 = {(𝑢1 , 𝑢2 ): −∞ < 𝑢1 < ∞, 𝑢2 > 0}, is the image of both 𝐴1 and 𝐴2 under the transformation. The inverse transformations from 𝐵 to 𝐴1 and from 𝐵 to 𝐴2 are given by:
𝑦1 = 𝑔11 −1 (𝑢1 , 𝑢2 ) = 𝑢1 𝑢2
𝑦2 = 𝑔21 −1 (𝑢1 , 𝑢2 ) = 𝑢2
and
𝑦1 = 𝑔12 −1 (𝑢1 , 𝑢2 ) = −𝑢1 𝑢2
𝑦2 = 𝑔22 −1 (𝑢1 , 𝑢2 ) = −𝑢2

The Jacobians from the two inverses are 𝐽1 = 𝐽2 = 𝑢2 .

The pdf of U on its domain B is thus:

f_{U_1,U_2}(u_1, u_2) = \sum_{i=1}^{2} f_{Y_1,Y_2}[g_{1i}^{-1}(u_1, u_2),\, g_{2i}^{-1}(u_1, u_2)]\,|J_i|

Plugging in, we have:

f_{U_1,U_2}(u_1, u_2) = \frac{1}{2\pi} e^{-(u_1 u_2)^2/2} e^{-u_2^2/2}\,|u_2| + \frac{1}{2\pi} e^{-(-u_1 u_2)^2/2} e^{-(-u_2)^2/2}\,|u_2|

Simplifying, we have:

f_{U_1,U_2}(u_1, u_2) = \frac{u_2}{\pi} e^{-(u_1^2 + 1)u_2^2/2}, \quad -\infty < u_1 < \infty,\; u_2 > 0
(b) To obtain the marginal distribution of 𝑈1 , we integrate the joint pdf 𝑓𝑈1 ,𝑈2 (𝑢1 , 𝑢2 ) over 𝑢2 . That is,

f_{U_1}(u_1) = \int_{0}^{\infty} f_{U_1,U_2}(u_1, u_2)\, du_2 = \frac{1}{\pi(u_1^2 + 1)}, \quad -\infty < u_1 < \infty

Thus, marginally, U1 follows the standard Cauchy distribution. □
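
As an added empirical cross-check of this conclusion (a sketch assuming numpy and scipy are available), sample quantiles of the ratio Y1/Y2 can be compared with standard Cauchy quantiles (sample means are not useful here, since the Cauchy mean does not exist):

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
y1 = rng.standard_normal(200_000)
y2 = rng.standard_normal(200_000)
u1 = y1 / y2                               # U1 = Y1 / Y2

probs = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
print(np.quantile(u1, probs))              # sample quantiles of Y1/Y2
print(stats.cauchy().ppf(probs))           # standard Cauchy quantiles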

REMARK: The transformation method can also be extended to handle n-variate transformations. Suppose that 𝑌1 , 𝑌2 , … , 𝑌𝑛 are continuous random variables with joint pdf 𝑓𝒀 (𝑦) and define

U_1 = g_1(Y_1, Y_2, \ldots, Y_n)
U_2 = g_2(Y_1, Y_2, \ldots, Y_n)
\vdots
U_n = g_n(Y_1, Y_2, \ldots, Y_n).

Example 8. Given independent random variables 𝑋 and 𝑌, each with a uniform distribution on (0, 1), find the joint pdf of U and V defined by U = X + Y, V = X − Y, and the marginal pdf of U.

The joint pdf of 𝑋 and 𝑌 is 𝑓𝑋,𝑌 (𝑥, 𝑦) = 1, 0 ≤ 𝑥 ≤ 1, 0 ≤ 𝑦 ≤ 1. The inverse transformation, written in terms of observed values, is

x = \frac{u + v}{2} \quad\text{and}\quad y = \frac{u - v}{2}.
It is clearly one-to-one. The Jacobian is

J = \frac{\partial(x, y)}{\partial(u, v)} = \det \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix} = -\frac{1}{2}, \quad\text{so}\quad |J| = \frac{1}{2}.

We will use 𝒜 to denote the range space of (𝑋, 𝑌), and ℬ to denote that of (𝑈, 𝑉); these are shown in the diagrams below. Firstly, note that there are 4 inequalities specifying the ranges of 𝑥 and 𝑦, and these give 4 inequalities concerning 𝑢 and 𝑣, from which ℬ can be determined. That is,
𝑥 ≥ 0 ⇒ 𝑢 + 𝑣 ≥ 0, that is, 𝑣 ≥ −𝑢
𝑥 ≤ 1 ⇒ 𝑢 + 𝑣 ≤ 2, that is 𝑣 ≤ 2 − 𝑢
𝑦 ≥ 0 ⇒ 𝑢 − 𝑣 ≥ 0, that is 𝑣 ≤ 𝑢
𝑦 ≤ 1 ⇒ 𝑢 − 𝑣 ≤ 2, that is 𝑣 ≥ 𝑢 − 2

Drawing the four lines

v = -u, \quad v = 2 - u, \quad v = u, \quad v = u - 2

on the graph enables us to see the region specified by the 4 inequalities.

Now, we have

f_{U,V}(u, v) = 1 \times \frac{1}{2} = \frac{1}{2}, \quad \begin{cases} -u \le v \le u, & 0 \le u \le 1 \\ u - 2 \le v \le 2 - u, & 1 \le u \le 2 \end{cases}
The importance of having the range space correct is seen when we find the marginal pdf of 𝑈:

f_U(u) = \int_{-\infty}^{\infty} f_{U,V}(u, v)\, dv
= \begin{cases} \int_{-u}^{u} \frac{1}{2}\, dv, & 0 \le u \le 1 \\ \int_{u-2}^{2-u} \frac{1}{2}\, dv, & 1 \le u \le 2 \\ 0, & \text{otherwise} \end{cases}
= \begin{cases} u, & 0 \le u \le 1 \\ 2 - u, & 1 \le u \le 2 \end{cases}
= u\, I_{[0,1]}(u) + (2 - u)\, I_{(1,2]}(u), \quad\text{using indicator functions.}
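
A quick added check of this triangular shape (a sketch assuming numpy is available): compare a histogram of simulated X + Y values with the density u on [0, 1] and 2 − u on [1, 2]:

import numpy as np

rng = np.random.default_rng(9)
x = rng.uniform(size=300_000)
y = rng.uniform(size=300_000)
u = x + y                                          # U = X + Y

hist, edges = np.histogram(u, bins=40, range=(0, 2), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
triangle = np.where(mids <= 1, mids, 2 - mids)     # f_U(u) = u on [0,1], 2-u on [1,2]
print(np.max(np.abs(hist - triangle)))             # small for large samples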

Example 9. Given that 𝑋 and 𝑌 are independent random variables, each with pdf f_X(x) = \frac{1}{2} e^{-x/2}, x ∈ [0, ∞), find the distribution of (𝑋 − 𝑌)/2.

We note that the joint pdf of 𝑋 and 𝑌 is

f_{X,Y}(x, y) = \frac{1}{4} e^{-(x+y)/2}, \quad 0 \le x < \infty,\; 0 \le y < \infty.
Define 𝑈 = (𝑋 − 𝑌)/2. Now we need to introduce a second random variable 𝑉 which is a function of 𝑋 and 𝑌. We wish to do this in such a way that the resulting bivariate transformation is one-to-one and our actual task of finding the pdf of U is as easy as possible. Our choice for 𝑉 is, of course, not unique. Let us define 𝑉 = 𝑌. Then the inverse transformation is (using 𝑢, 𝑣, 𝑥, 𝑦, since we are really dealing with the range spaces here)

x = 2u + v
y = v
From it, we find the Jacobian,

J = \det \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix} = 2.
To determine ℬ, the range space of 𝑈 and 𝑉, we note that

x ≥ 0 ⇒ 2u + v ≥ 0, that is, v ≥ −2u
x < ∞ ⇒ 2u + v < ∞
y ≥ 0 ⇒ v ≥ 0
y < ∞ ⇒ v < ∞

So ℬ = {(u, v): v ≥ 0 and v ≥ −2u}, as indicated in the diagram below.

Now, we have

f_{U,V}(u, v) = \frac{1}{4} e^{-(2u + v + v)/2} \times 2 = \frac{1}{2} e^{-(u+v)}, \quad (u, v) \in ℬ.

The marginal pdf of 𝑈 is obtained by integrating 𝑓𝑈,𝑉 (𝑢, 𝑣) with respect to 𝑣, giving

f_U(u) = \begin{cases} \int_{-2u}^{\infty} \frac{1}{2} e^{-(u+v)}\, dv, & u < 0 \\ \int_{0}^{\infty} \frac{1}{2} e^{-(u+v)}\, dv, & u > 0 \end{cases}
= \begin{cases} \frac{1}{2} e^{u}, & u < 0 \\ \frac{1}{2} e^{-u}, & u > 0 \end{cases}
= \frac{1}{2} e^{-|u|}, \quad -\infty < u < \infty

[This is sometimes called the folded (or double) exponential distribution; it is also known as the Laplace distribution.]
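
One last added sanity check (a sketch assuming numpy and scipy are available): simulate (X − Y)/2 for the exponential pdf above and compare against the standard Laplace (double exponential) distribution:

import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.exponential(scale=2.0, size=200_000)   # X with pdf (1/2) e^{-x/2}
y = rng.exponential(scale=2.0, size=200_000)   # Y with the same pdf, independent of X
u = (x - y) / 2.0                              # U = (X - Y)/2

# The derivation gives f_U(u) = (1/2) e^{-|u|}, the standard Laplace density.
print(stats.kstest(u, stats.laplace(loc=0.0, scale=1.0).cdf))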

Homework #3.
Question 1 is the exercise (finding the pdf of 𝑈1 = 𝑌1 /𝑌2 ) stated after Example 6 in this handout.
Questions 2-9 (from our textbook): 2.9, 2.15, 2.24, 2.30, 2.31,
2.32, 2.33, 2.34

