
Random Samples

$X_1, \dots, X_n$ - i.i.d. variables (independent, identically distributed), or a random sample: observations independently selected from the same population, or resulting from analogous statistical experiments.

Definition. A statistic is any function of observations from a random sample that does not involve population parameters. Distributions of statistics are called sampling distributions.

Examples:
\[ \bar X = \tfrac{1}{n}(X_1 + \dots + X_n), \qquad S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2, \qquad \min(X_1, \dots, X_n). \]

Statistics are random variables.

Theorem. Let $X_1, \dots, X_n$ be a random sample from a distribution with $E(X_i) = \mu$ and $\mathrm{Var}(X_i) = \sigma^2$. Then
\[ E(\bar X) = \mu, \quad \mathrm{Var}(\bar X) = \frac{\sigma^2}{n}, \quad E(X_1 + \dots + X_n) = n\mu, \quad \mathrm{Var}(X_1 + \dots + X_n) = n\sigma^2. \]
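The theorem is easy to illustrate with a quick Monte Carlo check; a minimal sketch (the Exp(1) population, sample size, seed, and replication count are arbitrary choices, not from the text):

```python
import numpy as np

# For i.i.d. Exp(1) draws, mu = 1 and sigma^2 = 1, so the theorem
# predicts E(Xbar) = 1 and Var(Xbar) = 1/n.
rng = np.random.default_rng(0)
n, reps = 25, 200_000
xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

print(xbar.mean())  # should be close to mu = 1
print(xbar.var())   # should be close to sigma^2 / n = 0.04
```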

Distributions of selected statistics in random samples from normal populations

Theorem. Let $X_1, \dots, X_n$ be a random sample from a $N(\mu, \sigma^2)$ distribution. The statistics $\bar X$ and $U = \frac{(n-1)S^2}{\sigma^2} = \frac{1}{\sigma^2}\sum_i (X_i - \bar X)^2$ have distributions $N(\mu, \frac{\sigma^2}{n})$ and $\chi^2_{n-1}$, respectively.

Theorem. If $X_1, \dots, X_n$ is a random sample from a $N(\mu, \sigma^2)$ distribution, then the variables $\bar X$ and $U$ are independent. Conversely, if the variables $\bar X$ and $U$ are independent, then the sample $X_1, \dots, X_n$ is selected from a normally distributed population.
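The chi-square part of the theorem can be checked numerically; a sketch (the values of mu, sigma, n and the seed are arbitrary choices), using the facts that a chi-square variable with $n-1$ degrees of freedom has mean $n-1$ and variance $2(n-1)$:

```python
import numpy as np

# For normal samples, U = (n-1)S^2/sigma^2 should be chi^2_{n-1},
# hence E(U) = n - 1 and Var(U) = 2(n - 1).
rng = np.random.default_rng(1)
mu, sigma, n, reps = 5.0, 2.0, 10, 200_000
x = rng.normal(mu, sigma, size=(reps, n))
u = (n - 1) * x.var(axis=1, ddof=1) / sigma**2  # ddof=1 gives S^2

print(u.mean())  # should be close to n - 1 = 9
print(u.var())   # should be close to 2(n - 1) = 18
```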


Definition. If independent variables $Z$ and $U$ have distributions $N(0,1)$ and $\chi^2_\nu$, respectively, then the variable $X = \frac{Z}{\sqrt{U/\nu}}$ has Student's $t$ distribution with $\nu$ degrees of freedom. The density of the variable $X$ is
\[ f(x; \nu) = \frac{\Gamma((\nu+1)/2)}{\sqrt{\nu\pi}\,\Gamma(\nu/2)}\Big(1 + \frac{x^2}{\nu}\Big)^{-(\nu+1)/2}. \]

Theorem. Student's $t_n$ distribution approaches $N(0,1)$ as the number $n$ of degrees of freedom increases.

Definition. If independent variables $U$ and $V$ have distributions $\chi^2_{\nu_1}$ and $\chi^2_{\nu_2}$, respectively, then $X = \frac{U/\nu_1}{V/\nu_2}$ has the $F$ distribution with $\nu_1$ and $\nu_2$ degrees of freedom, $F(\nu_1, \nu_2)$.

Problem 10.2.2, p. 318. $X \sim F(\nu_1, \nu_2)$. Since $P(X < x) = \alpha = P\big(\frac{1}{X} > \frac{1}{x}\big)$, it follows that
\[ P\Big(\underbrace{\tfrac{1}{X}}_{\sim F(\nu_2,\nu_1)} < \tfrac{1}{x}\Big) = 1 - \alpha. \]

Problem 10.2.3, p. 318. $X \sim N(0,1)$, $Y \sim N(1,1)$, $W \sim N(2,4)$, so $X^2 + (Y-1)^2 \sim \chi^2_2$ and $(W-2)^2/4 \sim \chi^2_1$.
\[ P\Big( \frac{X^2 + (Y-1)^2}{X^2 + (Y-1)^2 + (W-2)^2/4} > k \Big) = P\Big( \frac{1}{1 + \frac{(W-2)^2/4}{X^2 + (Y-1)^2}} > k \Big) = \star \]
Since $\frac{1}{1+a} > k$ is equivalent to $1 > k + ak$ and to $a < \frac{1-k}{k}$,
\[ \star = P\Big( \underbrace{\frac{(W-2)^2/4}{[X^2 + (Y-1)^2]/2}}_{\sim F(1,2)} < \frac{2(1-k)}{k} \Big) = 0.05. \]
Now
\[ \frac{2(1-k)}{k} = F_{0.95}(1,2) = \frac{1}{F_{0.05}(2,1)} = \frac{1}{199.5}, \]
and $k = 0.9975$.

Order Statistics

Let $X_1, \dots, X_n$ be independent, identically distributed random variables (or a random sample) with a common density $f(x)$ and cdf $F(x)$.

$X_{1:n} \le X_{2:n} \le \dots \le X_{n:n}$ are the order statistics; the $k$th order statistic is the $k$th in magnitude.

$n = 2$: two observations $X_1, X_2$ and two order statistics $X_{1:2} = \min(X_1, X_2)$, $X_{2:2} = \max(X_1, X_2)$.

$n = 5$: if $x_1 = 4$, $x_2 = 3$, $x_3 = 1$, $x_4 = 6$, $x_5 = 3$, then $1 < 3 = 3 < 4 < 6$, i.e. $x_{1:5} = 1$, $x_{2:5} = x_{3:5} = 3$, $x_{4:5} = 4$, $x_{5:5} = 6$.

Distribution of Order Statistics

Let $G_k(t)$ be the cdf of $X_{k:n}$ at $t$. Then $G_k(t) = P(X_{k:n} \le t) = P(\text{at least } k \text{ observations are} \le t)$
$= P(\text{exactly } k) + P(\text{exactly } k+1) + \dots + P(\text{exactly } n)$
\[ = \binom{n}{k}[F(t)]^k[1-F(t)]^{n-k} + \binom{n}{k+1}[F(t)]^{k+1}[1-F(t)]^{n-k-1} + \dots + \binom{n}{n}[F(t)]^n = \sum_{r=k}^{n}\binom{n}{r}[F(t)]^r[1-F(t)]^{n-r}. \]

$k = 1$:
\[ G_1(t) = \sum_{r=1}^{n}\binom{n}{r}[F(t)]^r[1-F(t)]^{n-r} = \sum_{r=0}^{n}\binom{n}{r}[F(t)]^r[1-F(t)]^{n-r} - \binom{n}{0}[F(t)]^0[1-F(t)]^n \]
\[ = [F(t) + 1 - F(t)]^n - [1-F(t)]^n = 1 - [1-F(t)]^n. \]

Also $G_1(t) = P(X_{1:n} \le t) = 1 - P(X_{1:n} > t) = 1 - P(X_1 > t, \dots, X_n > t) = 1 - [P(X_i > t)]^n = 1 - [1-F(t)]^n$, so
\[ g_1(t) = \frac{d}{dt}G_1(t) = n[1-F(t)]^{n-1}\,\frac{d}{dt}F(t) = nf(t)[1-F(t)]^{n-1}. \]

$k = n$:
\[ G_n(t) = \sum_{r=n}^{n}\binom{n}{r}[F(t)]^r[1-F(t)]^{n-r} = \binom{n}{n}[F(t)]^n[1-F(t)]^0 = [F(t)]^n. \]
Also, $G_n(t) = P(X_{n:n} \le t) = P(X_1 \le t, \dots, X_n \le t) = [P(X_i \le t)]^n = [F(t)]^n$. Finally, $g_n(t) = nf(t)[F(t)]^{n-1}$.

In general,
\[ g_k(t) = k\binom{n}{k}\,f(t)\,[F(t)]^{k-1}[1-F(t)]^{n-k}. \]
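The formula for $G_k(t)$ can be verified against an empirical cdf; a sketch with a $U[0,1]$ sample, so that $F(t) = t$ (the values of $n$, $k$, $t$ and the seed are arbitrary choices):

```python
import numpy as np
from math import comb

# G_k(t) = sum_{r=k}^n C(n,r) F(t)^r (1 - F(t))^{n-r}
def G(k, n, Ft):
    return sum(comb(n, r) * Ft**r * (1 - Ft)**(n - r) for r in range(k, n + 1))

rng = np.random.default_rng(2)
n, k, t, reps = 7, 3, 0.4, 100_000
kth = np.sort(rng.uniform(size=(reps, n)), axis=1)[:, k - 1]  # k-th smallest

print(G(k, n, t))         # exact cdf value
print((kth <= t).mean())  # empirical cdf, should agree
```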


Example. $X_i \sim \mathrm{EXP}(1)$: $f(x) = e^{-x}$ for $x > 0$ and $0$ otherwise, $F(x) = 1 - e^{-x}$ for $x > 0$ and $0$ otherwise.
\[ G_1(t) = 1 - [1 - (1 - e^{-t})]^n = 1 - e^{-nt}, \quad g_1(t) = ne^{-nt}, \quad G_n(t) = (1 - e^{-t})^n, \quad g_n(t) = ne^{-t}(1 - e^{-t})^{n-1}. \]

Example. Let $X_1, \dots, X_n$ be a random sample selected from the $U[0,\theta]$ distribution. Find:
(i) densities $g_1(t)$ and $g_n(t)$ of $X_{1:n}$ and $X_{n:n}$, respectively;
(ii) cdfs $G_1(t)$ and $G_n(t)$;
(iii) means and variances;
(iv) the covariance of $X_{1:n}$ and $X_{n:n}$.

(i) $X_i \sim U[0,\theta]$: $f(x) = \frac{1}{\theta}$ for $0 < x < \theta$ and $0$ otherwise,
\[ F(x) = \begin{cases} 0 & \text{for } x < 0, \\ x/\theta & \text{for } 0 \le x < \theta, \\ 1 & \text{for } x \ge \theta, \end{cases} \]
so
\[ g_1(t) = \frac{n}{\theta}\Big(1 - \frac{t}{\theta}\Big)^{n-1}, \qquad g_n(t) = \frac{n}{\theta}\Big(\frac{t}{\theta}\Big)^{n-1}, \qquad 0 < t < \theta. \]

(ii) $G_1(t) = 1 - \big[1 - \frac{t}{\theta}\big]^n$, $\qquad G_n(t) = \big(\frac{t}{\theta}\big)^n$.

(iii)
\[ E(X_{1:n}) = \int_0^\theta t\,\frac{n}{\theta}\Big(1 - \frac{t}{\theta}\Big)^{n-1}dt = \star. \]
If $w = \frac{t}{\theta}$, then $t = \theta w$, $dt = \theta\,dw$, and
\[ \star = \int_0^1 \theta w\,n(1-w)^{n-1}dw = \theta n\underbrace{\int_0^1 w^{2-1}(1-w)^{n-1}dw}_{B(2,n)} = \theta n\,\frac{1!\,(n-1)!}{(n+1)!} = \frac{\theta}{n+1}. \]
\[ E(X_{n:n}) = \int_0^\theta t\,\frac{n}{\theta}\Big(\frac{t}{\theta}\Big)^{n-1}dt = \frac{n}{\theta^n}\int_0^\theta t^n\,dt = \frac{n}{\theta^n}\cdot\frac{t^{n+1}}{n+1}\Big|_0^\theta = \frac{n\theta^{n+1}}{(n+1)\theta^n} = \frac{n\theta}{n+1}. \]

\[ E(X_{1:n}^2) = \int_0^\theta t^2 g_1(t)\,dt = \int_0^\theta t^2\,\frac{n}{\theta}\Big(1 - \frac{t}{\theta}\Big)^{n-1}dt = \star. \]
If $1 - \frac{t}{\theta} = u$, then $t = \theta(1-u)$, $dt = -\theta\,du$, and
\[ \star = \int_0^1 \theta^2(1-u)^2\,\frac{n}{\theta}\,u^{n-1}\,\theta\,du = n\theta^2\underbrace{\int_0^1 u^{n-1}(1-u)^2\,du}_{B(n,3)} = n\theta^2\,\frac{(n-1)!\,2!}{(n+2)!} = \frac{2\theta^2}{(n+1)(n+2)}. \]

Finally,
\[ \mathrm{Var}(X_{1:n}) = E(X_{1:n}^2) - (E(X_{1:n}))^2 = \frac{2\theta^2}{(n+1)(n+2)} - \Big(\frac{\theta}{n+1}\Big)^2 = \theta^2\,\frac{2(n+1) - (n+2)}{(n+1)^2(n+2)} = \frac{n\theta^2}{(n+1)^2(n+2)}. \]

$\mathrm{Var}(X_{n:n})$:
\[ E(X_{n:n}^2) = \int_0^\theta t^2 g_n(t)\,dt = \frac{n}{\theta^n}\int_0^\theta t^{n+1}\,dt = \frac{n}{\theta^n}\cdot\frac{t^{n+2}}{n+2}\Big|_0^\theta = \frac{n\theta^2}{n+2}. \]
Now,
\[ \mathrm{Var}(X_{n:n}) = E(X_{n:n}^2) - (E(X_{n:n}))^2 = \frac{n\theta^2}{n+2} - \Big(\frac{n\theta}{n+1}\Big)^2 = n\theta^2\,\frac{(n+1)^2 - n(n+2)}{(n+1)^2(n+2)} = \frac{n\theta^2}{(n+1)^2(n+2)}. \]
Notice that $\mathrm{Var}(X_{1:n}) = \mathrm{Var}(X_{n:n})$.
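These closed forms are easy to confirm by simulation; a sketch (theta = 2, n = 5, the seed, and the replication count are arbitrary choices):

```python
import numpy as np

# U[0, theta] order-statistic moments:
# E(X_{1:n}) = theta/(n+1), E(X_{n:n}) = n*theta/(n+1),
# Var(X_{1:n}) = Var(X_{n:n}) = n*theta^2 / ((n+1)^2 (n+2)).
rng = np.random.default_rng(3)
theta, n, reps = 2.0, 5, 300_000
x = rng.uniform(0, theta, size=(reps, n))
lo, hi = x.min(axis=1), x.max(axis=1)

v = n * theta**2 / ((n + 1)**2 * (n + 2))
print(lo.mean(), theta / (n + 1))      # both about 0.3333
print(hi.mean(), n * theta / (n + 1))  # both about 1.6667
print(lo.var(), hi.var(), v)           # all three about 0.0794
```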


(iv) The joint density of the $k$th and $l$th order statistics ($l > k$) is given by the formula
\[ g_{k,l}(s,t) = \frac{n!}{(k-1)!\,(l-k-1)!\,(n-l)!}\,[F(s)]^{k-1}[F(t)-F(s)]^{l-k-1}[1-F(t)]^{n-l}\,f(s)f(t) \]
for $s \le t$, and $0$ otherwise.

For $k = 1$ and $l = n$:
\[ g_{1,n}(s,t) = \frac{n!}{0!\,(n-2)!\,0!}\,[F(s)]^0[F(t)-F(s)]^{n-2}[1-F(t)]^0\,f(s)f(t) = n(n-1)[F(t)-F(s)]^{n-2}f(s)f(t). \]

For $X_1, \dots, X_n$ selected from the $U[0,\theta]$ distribution,
\[ g_{1,n}(s,t) = n(n-1)\Big[\frac{t}{\theta} - \frac{s}{\theta}\Big]^{n-2}\frac{1}{\theta^2} = \frac{n(n-1)}{\theta^n}\,(t-s)^{n-2}, \qquad 0 < s < t < \theta. \]

\[ E(X_{1:n}X_{n:n}) = \int_0^\theta\!\!\int_0^t st\,g_{1,n}(s,t)\,ds\,dt = \int_0^\theta\!\!\int_0^t st\,\frac{n(n-1)}{\theta^n}\,(t-s)^{n-2}\,ds\,dt = \int_0^\theta t\,\frac{n(n-1)}{\theta^n}\underbrace{\int_0^t [t-(t-s)](t-s)^{n-2}\,ds}_{=\star}\,dt. \]
Now
\[ \star = \int_0^t \big[t(t-s)^{n-2} - (t-s)^{n-1}\big]\,ds = \Big(-\frac{t(t-s)^{n-1}}{n-1} + \frac{(t-s)^n}{n}\Big)\Big|_0^t = \frac{t^n}{n-1} - \frac{t^n}{n} = t^n\Big(\frac{1}{n-1} - \frac{1}{n}\Big) = \frac{t^n}{n(n-1)}, \]
so
\[ E(X_{1:n}X_{n:n}) = \int_0^\theta t\,\frac{n(n-1)}{\theta^n}\cdot\frac{t^n}{n(n-1)}\,dt = \frac{1}{\theta^n}\int_0^\theta t^{n+1}\,dt = \frac{\theta^{n+2}}{(n+2)\theta^n} = \frac{\theta^2}{n+2}, \]
and finally,
\[ \mathrm{Cov}(X_{1:n}, X_{n:n}) = \frac{\theta^2}{n+2} - \frac{\theta}{n+1}\cdot\frac{n\theta}{n+1} = \theta^2\,\frac{(n+1)^2 - n(n+2)}{(n+1)^2(n+2)} = \frac{\theta^2}{(n+1)^2(n+2)}. \]

Distribution of the sample range $R = X_{n:n} - X_{1:n}$

$R = X_{n:n} - X_{1:n} = T - S$; let the companion variable be $W = T$. The solutions are $t = w$ and $s = t - r = w - r$. Since
\[ J = \frac{\partial(s,t)}{\partial(r,w)} = \begin{vmatrix} -1 & 1 \\ 0 & 1 \end{vmatrix} = -1, \qquad |J| = 1. \]
Now $g(w,r) = g_{1,n}(s(w,r), t(w,r)) = n(n-1)[F(w) - F(w-r)]^{n-2}f(w-r)f(w)$, and the density of $R$ can be obtained as $h_R(r) = \int g(w,r)\,dw$.

Example. $X_1, \dots, X_n \sim \mathrm{EXP}(1)$: $f(x) = e^{-x}$ for $x > 0$ and $0$ otherwise, $F(x) = 1 - e^{-x}$ for $x > 0$ and $0$ otherwise. Then
\[ h_R(r) = \int_r^\infty n(n-1)\big[e^{-(w-r)} - e^{-w}\big]^{n-2}\,e^{-(w-r)}\,e^{-w}\,dw = e^{r}(e^{r}-1)^{n-2}\,n(n-1)\underbrace{\int_r^\infty e^{-w(n-2)-2w}\,dw}_{=\star}, \]
\[ \star = \int_r^\infty e^{-nw}\,dw = \frac{1}{n}\,e^{-nr}, \]
and finally
\[ h_R(r) = (n-1)\,e^{r}(e^{r}-1)^{n-2}\,e^{-nr} = (n-1)\,e^{-r(n-1)}(e^{r}-1)^{n-2} = (n-1)\,e^{-r}\big(e^{-r}(e^{r}-1)\big)^{n-2} = (n-1)\,e^{-r}(1-e^{-r})^{n-2}, \quad r > 0. \]
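Integrating $h_R$ gives the cdf $H_R(r) = (1 - e^{-r})^{n-1}$, which makes a convenient simulation check; a sketch (the values of $n$, $r$, and the seed are arbitrary choices):

```python
import numpy as np

# Compare the empirical cdf of the range of n Exp(1) variables
# with H_R(r) = (1 - e^{-r})^{n-1}, obtained by integrating h_R.
rng = np.random.default_rng(4)
n, reps, r = 6, 200_000, 1.5
x = rng.exponential(size=(reps, n))
R = x.max(axis=1) - x.min(axis=1)

print((R <= r).mean())            # empirical cdf at r
print((1 - np.exp(-r))**(n - 1))  # analytic value, should agree
```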
Example. Let $X_1, \dots, X_n$ be a random sample selected from a $U[0,\theta]$ distribution. Determine the sample size $n$ needed for the expected sample range $E(X_{n:n} - X_{1:n})$ to be at least $0.75\theta$.
\[ E(R) = E(X_{n:n} - X_{1:n}) = \frac{n\theta}{n+1} - \frac{\theta}{n+1} = \frac{(n-1)\theta}{n+1}. \]
If now $\frac{n-1}{n+1} \ge 0.75$, then $4(n-1) \ge 3(n+1)$ and $n \ge 7$.
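The same bound can be found by direct search:

```python
# Smallest n with (n-1)/(n+1) >= 0.75:
n = 2
while (n - 1) / (n + 1) < 0.75:
    n += 1
print(n)  # 7
```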

\[ \mathrm{Var}(R) = \mathrm{Var}(X_{n:n} - X_{1:n}) = \mathrm{Var}(X_{n:n}) + \mathrm{Var}(X_{1:n}) - 2\,\mathrm{Cov}(X_{1:n}, X_{n:n}) = \frac{2n\theta^2}{(n+1)^2(n+2)} - \frac{2\theta^2}{(n+1)^2(n+2)} = \frac{2\theta^2(n-1)}{(n+1)^2(n+2)}. \]

Determine $\mathrm{Var}(X_{l:n} - X_{k:n})$, $l > k$, in a sample from $U[0,\theta]$.

$\mathrm{Var}(X_{l:n} - X_{k:n}) = \mathrm{Var}(X_{k:n}) + \mathrm{Var}(X_{l:n}) - 2\,\mathrm{Cov}(X_{k:n}, X_{l:n})$.

Needed: (i) $E(X_{k:n})$, (ii) $E(X_{k:n}^2)$, (iii) $\mathrm{Var}(X_{k:n})$, and (iv) $\mathrm{Cov}(X_{k:n}, X_{l:n})$.
\[ g_k(t) = \frac{n!}{(k-1)!\,(n-k)!}\,[F(t)]^{k-1}[1-F(t)]^{n-k}f(t) = \frac{n!}{(k-1)!\,(n-k)!}\,\Big(\frac{t}{\theta}\Big)^{k-1}\Big[1-\frac{t}{\theta}\Big]^{n-k}\frac{1}{\theta}. \]

(i)
\[ E(X_{k:n}) = \int_0^\theta t\,g_k(t)\,dt = \int_0^\theta t\,\frac{n!}{(k-1)!(n-k)!}\Big(\frac{t}{\theta}\Big)^{k-1}\Big[1-\frac{t}{\theta}\Big]^{n-k}\frac{1}{\theta}\,dt \]
\[ = \frac{n!}{(k-1)!(n-k)!}\int_0^1 \theta w\cdot w^{k-1}(1-w)^{n-k}\,dw = \theta\,\frac{n!}{(k-1)!(n-k)!}\underbrace{\int_0^1 w^{k}(1-w)^{n-k}\,dw}_{B(k+1,\,n-k+1)\,=\,\frac{k!\,(n-k)!}{(n+1)!}} = \frac{\theta k}{n+1}, \]
and $E(X_{l:n}) = \frac{\theta l}{n+1}$.

(ii) Similarly,
\[ E(X_{k:n}^2) = \int_0^\theta t^2 g_k(t)\,dt = \int_0^\theta t^2\,\frac{n!}{(k-1)!(n-k)!}\Big(\frac{t}{\theta}\Big)^{k-1}\Big[1-\frac{t}{\theta}\Big]^{n-k}\frac{1}{\theta}\,dt \]
\[ = \frac{\theta^2\,n!}{(k-1)!(n-k)!}\underbrace{\int_0^1 w^{k+1}(1-w)^{n-k}\,dw}_{B(k+2,\,n-k+1)\,=\,\frac{(k+1)!\,(n-k)!}{(n+2)!}} = \frac{\theta^2\,k(k+1)}{(n+1)(n+2)}. \]

(iii)
\[ \mathrm{Var}(X_{k:n}) = E(X_{k:n}^2) - [E(X_{k:n})]^2 = \theta^2\,\frac{k(k+1)}{(n+1)(n+2)} - \Big(\frac{\theta k}{n+1}\Big)^2 = \theta^2\,\frac{k(k+1)(n+1) - k^2(n+2)}{(n+1)^2(n+2)} = \theta^2\,\frac{k(n+1-k)}{(n+1)^2(n+2)}. \]

(iv)
\[ g_{k,l}(s,t) = \frac{n!}{(k-1)!(l-k-1)!(n-l)!}\,[F(s)]^{k-1}[F(t)-F(s)]^{l-k-1}[1-F(t)]^{n-l}\,f(s)f(t). \]
Now
\[ E(X_{k:n}X_{l:n}) = \int_0^\theta\!\!\int_0^t st\,g_{k,l}(s,t)\,ds\,dt = \frac{n!}{(k-1)!(l-k-1)!(n-l)!}\int_0^\theta\!\!\int_0^t st\Big(\frac{s}{\theta}\Big)^{k-1}\Big[\frac{t-s}{\theta}\Big]^{l-k-1}\Big[1-\frac{t}{\theta}\Big]^{n-l}\frac{1}{\theta^2}\,ds\,dt = \;??? \]
Easier way: let $Y_1, \dots, Y_n$ be a random sample from $U[0,1]$. Then $X_i = \theta Y_i$, $X_{i:n} = \theta Y_{i:n}$, $E(X_{i:n}) = \theta E(Y_{i:n})$, and so on. Also $E(X_{k:n}X_{l:n}) = \theta^2 E(Y_{k:n}Y_{l:n})$.

Write $\gamma = \frac{n!}{(k-1)!(l-k-1)!(n-l)!}$. Then
\[ E(Y_{k:n}Y_{l:n}) = \int_0^1\!\!\int_0^t st\,g_{k,l}(s,t)\,ds\,dt = \gamma\int_0^1\!\!\int_0^t st\cdot s^{k-1}(t-s)^{l-k-1}(1-t)^{n-l}\,ds\,dt \]
\[ = \gamma\int_0^1\!\!\int_0^t [1-(1-t)]\,s^{k}(t-s)^{l-k-1}(1-t)^{n-l}\,ds\,dt = A - B, \]
where
\[ A = \gamma\int_0^1\!\!\int_0^t s^{k}(t-s)^{l-k-1}(1-t)^{n-l}\,ds\,dt, \qquad B = \gamma\int_0^1\!\!\int_0^t s^{k}(t-s)^{l-k-1}(1-t)^{n-l+1}\,ds\,dt. \]
Since $\gamma_1 = \frac{(n+1)!}{k!(l-k-1)!(n-l)!}$ is the constant of the joint density $g_{k+1,l+1}(s,t)$ in a sample of size $n+1$,
\[ A = \frac{\gamma}{\gamma_1}\underbrace{\int_0^1\!\!\int_0^t \gamma_1\,s^{k}(t-s)^{l-k-1}(1-t)^{n-l}\,ds\,dt}_{=1} = \gamma\,\frac{k!(l-k-1)!(n-l)!}{(n+1)!}. \]
Similarly, since $\gamma_2 = \frac{(n+2)!}{k!(l-k-1)!(n-l+1)!}$ is the constant of $g_{k+1,l+1}(s,t)$ in a sample of size $n+2$,
\[ B = \frac{\gamma}{\gamma_2}\underbrace{\int_0^1\!\!\int_0^t \gamma_2\,s^{k}(t-s)^{l-k-1}(1-t)^{n-l+1}\,ds\,dt}_{=1} = \gamma\,\frac{k!(l-k-1)!(n-l+1)!}{(n+2)!}. \]
Next,
\[ E(Y_{k:n}Y_{l:n}) = A - B = \gamma\Big(\frac{k!(l-k-1)!(n-l)!}{(n+1)!} - \frac{k!(l-k-1)!(n-l+1)!}{(n+2)!}\Big) \]
\[ = \frac{n!}{(k-1)!(l-k-1)!(n-l)!}\cdot\frac{k!(l-k-1)!(n-l)!}{(n+2)!}\,\big[(n+2) - (n-l+1)\big] = \frac{k(l+1)}{(n+1)(n+2)}. \]
Consequently, $E(X_{k:n}X_{l:n}) = \theta^2\,\frac{k(l+1)}{(n+1)(n+2)}$, and
\[ \mathrm{Cov}(X_{k:n}, X_{l:n}) = E(X_{k:n}X_{l:n}) - E(X_{k:n})E(X_{l:n}) = \theta^2\,\frac{k(l+1)}{(n+1)(n+2)} - \frac{\theta k}{n+1}\cdot\frac{\theta l}{n+1} = \theta^2\,\frac{k(l+1)(n+1) - kl(n+2)}{(n+1)^2(n+2)} = \theta^2\,\frac{k(n+1-l)}{(n+1)^2(n+2)}. \]

Finally,
\[ \mathrm{Var}(X_{l:n} - X_{k:n}) = \mathrm{Var}(X_{k:n}) + \mathrm{Var}(X_{l:n}) - 2\,\mathrm{Cov}(X_{k:n}, X_{l:n}) \]
\[ = \theta^2\,\frac{k(n+1-k)}{(n+1)^2(n+2)} + \theta^2\,\frac{l(n+1-l)}{(n+1)^2(n+2)} - 2\theta^2\,\frac{k(n+1-l)}{(n+1)^2(n+2)} = \theta^2\,\frac{(l-k)(n+1-l+k)}{(n+1)^2(n+2)}. \]
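This variance formula can be checked by simulation; a sketch (the values of theta, n, k, l and the seed are arbitrary choices):

```python
import numpy as np

# Var(X_{l:n} - X_{k:n}) = theta^2 (l-k)(n+1-l+k) / ((n+1)^2 (n+2))
# for a U[0, theta] sample.
rng = np.random.default_rng(5)
theta, n, k, l, reps = 3.0, 8, 2, 6, 300_000
x = np.sort(rng.uniform(0, theta, size=(reps, n)), axis=1)
diff = x[:, l - 1] - x[:, k - 1]   # X_{l:n} - X_{k:n}

exact = theta**2 * (l - k) * (n + 1 - l + k) / ((n + 1)**2 * (n + 2))
print(diff.var())  # empirical
print(exact)       # 0.2222...
```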

Joint distribution of $X_{1:n}, \dots, X_{n:n}$: the density of the joint distribution is $g(y_1, \dots, y_n) = n!\,f(y_1)\cdots f(y_n)$ for $y_1 < \dots < y_n$ and $0$ otherwise.

HW 10.3.2 p. 325


Problem 10.3.6, p. 325. The density of $X_i \sim \mathrm{BETA}(2,1)$ is
\[ f(x) = \frac{\Gamma(3)}{\Gamma(2)\Gamma(1)}\,x^{2-1}(1-x)^{1-1} = 2x \quad \text{for } 0 < x < 1 \text{ and } 0 \text{ otherwise.} \]
The joint density (for $0 < y_1 < y_2 < y_3 < y_4 < y_5 < 1$) is
\[ g(y_1, y_2, y_3, y_4, y_5) = 5!\,2y_1\,2y_2\,2y_3\,2y_4\,2y_5 = 5!\,2^5\,y_1y_2y_3y_4y_5. \]

(i)
\[ g_{1,2,4}(y_1, y_2, y_4) = 5!\,2^5\int_{y_4}^1\!\!\int_{y_2}^{y_4} y_1y_2y_3y_4y_5\,dy_3\,dy_5 = \star. \]
First integrate over $y_3$:
\[ \int_{y_2}^{y_4} y_1y_2y_3y_4y_5\,dy_3 = y_1y_2y_4y_5\underbrace{\int_{y_2}^{y_4} y_3\,dy_3}_{=\,0.5(y_4^2-y_2^2)} = 0.5\,y_1y_2y_4y_5\,(y_4^2 - y_2^2). \]
Now
\[ \star = 5!\,2^4\int_{y_4}^1 y_1y_2y_4y_5\,(y_4^2-y_2^2)\,dy_5 = 5!\,2^4\,y_1y_2y_4\,(y_4^2-y_2^2)\underbrace{\int_{y_4}^1 y_5\,dy_5}_{=\,0.5(1-y_4^2)} = 5!\,2^3\,y_1y_2y_4\,(y_4^2-y_2^2)(1-y_4^2). \]

(ii) $E(X_{2:5}\,|\,X_{4:5} = t) = E(S|T = t) = \int_0^t s\,f(s|t)\,ds = \int_0^t s\,\frac{f(s,t)}{f_T(t)}\,ds$. Since $f(x) = 2x$ and $F(x) = x^2$, the joint density is ($n = 5$, $k = 2$, $l = 4$)
\[ f_{2,4}(s,t) = \frac{5!}{(2-1)!\,(4-2-1)!\,(5-4)!}\,(s^2)^{2-1}(t^2-s^2)^{4-2-1}(1-t^2)^{5-4}\,2s\,2t = 4\cdot 5!\,s^3 t\,(t^2-s^2)(1-t^2) \]
for $0 < s < t < 1$, and
\[ f_4(t) = \frac{5!}{(4-1)!\,(5-4)!}\,(t^2)^{4-1}(1-t^2)^{5-4}\,2t = 40\,t^7(1-t^2). \]
Now,
\[ f(s|t) = \frac{4\cdot 5!\,s^3 t\,(t^2-s^2)(1-t^2)}{40\,t^7(1-t^2)} = \frac{12\,s^3(t^2-s^2)}{t^6}, \]
and
\[ E(X_{2:5}\,|\,X_{4:5} = t) = \int_0^t s\,\frac{12\,s^3(t^2-s^2)}{t^6}\,ds = \int_0^t \frac{12\,s^4}{t^4}\,ds - \int_0^t \frac{12\,s^6}{t^6}\,ds = \frac{12}{5}\,\frac{s^5}{t^4}\Big|_0^t - \frac{12}{7}\,\frac{s^7}{t^6}\Big|_0^t = 12\Big(\frac{1}{5} - \frac{1}{7}\Big)t = \frac{24}{35}\,t. \]

HW: 10.3.6 p. 325 - find $E(X_{3:5}\,|\,X_{4:5})$.

(iii) $Y = \frac{X_{2:5}}{X_{1:5}} = \frac{T}{S}$. Let $W = S$ be a companion variable, so that $t = yw$ and $s = w$. Since $0 < s < t < 1$, we have $0 < w < yw < 1$, which means $w > 0$, $y > 1$, and $w < \frac{1}{y}$. The Jacobian is
\[ J = \begin{vmatrix} 1 & 0 \\ y & w \end{vmatrix} = w, \qquad |J| = w. \]
\[ f_{1,2}(s,t) = \frac{5!}{(1-1)!\,(2-1-1)!\,(5-2)!}\,[F(s)]^{0}[F(t)-F(s)]^{0}[1-F(t)]^{3}\,f(s)f(t) = 20\,(1-t^2)^3\,2s\,2t = 80\,st\,(1-t^2)^3. \]
Now $g(w,y) = 80\,w(wy)(1-y^2w^2)^3\,w = 80\,w^3 y\,(1-y^2w^2)^3$, and
\[ g_Y(y) = \int_0^{1/y} g(w,y)\,dw = 80y\underbrace{\int_0^{1/y} w^3(1-y^2w^2)^3\,dw}_{=\star}. \]
Let $1 - y^2w^2 = z$. Then $w^2 = \frac{1-z}{y^2}$, $-2wy^2\,dw = dz$, and
\[ \star = \int_1^0 w^2 z^3\,\frac{dz}{-2y^2} = \frac{1}{2y^4}\int_0^1 (1-z)z^3\,dz = \frac{1}{2y^4}\Big(\frac{z^4}{4} - \frac{z^5}{5}\Big)\Big|_0^1 = \frac{1}{40\,y^4}, \]
and $g_Y(y) = 80y\cdot\frac{1}{40y^4} = 2y^{-3}$ for $y > 1$.

Generating Random Samples

When quantitative problems are too complex to be studied theoretically, one can try to use simulations to obtain approximate solutions.

Generating the $U[0,1]$ distribution allows one to obtain other discrete distributions such as, e.g., the Bernoulli, binomial, geometric, negative binomial, and Poisson.

Example 1: $P(X = 1) = p = 1 - P(X = 0)$. Let $p = 0.3$. Select any subset of $[0,1]$ of length $0.3$ ($= p$). For example: $[0.2, 0.5]$, or $[0.7, 1]$, or $[0, 0.1] \cup [0.8, 1]$.

Let $[0, 0.3]$ and $(0.3, 1]$ represent a success (S) and a failure (F), respectively. Five values are generated from a $U[0,1]$ distribution:

0.2117, 0.1385, 0.7009, 0.6990, 0.6903
S, S, F, F, F - a random sample from the BIN(1, 0.3) distribution.
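The coding of uniform draws as successes and failures can be written directly; a sketch reproducing the five values from the text:

```python
import numpy as np

# u in [0, 0.3] codes a success (1), u in (0.3, 1] a failure (0).
u = np.array([0.2117, 0.1385, 0.7009, 0.6990, 0.6903])
sample = (u <= 0.3).astype(int)  # BIN(1, 0.3) sample
print(sample)  # [1 1 0 0 0], i.e. S S F F F
```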

Example 2: BIN(n, p) = BIN(6, 0.4).

Let $[0, 0.6] \to$ F and $(0.6, 1] \to$ S (one of the possible choices).
One observation requires $n$ generations from $U[0,1]$; $k$ observations require $n \cdot k$ generations from $U[0,1]$. For two observations from the BIN(6, 0.4) distribution one needs 12 generations from $U[0,1]$:

0.4972 F    0.5957 F
0.8125 S    0.4801 F
0.3133 F    0.2223 F
0.2025 F    0.1718 F
0.9335 S    0.2292 F
0.0114 F    0.9815 S
X1 = 2      X2 = 1

Random sample of size 2 generated from the BIN(6, 0.4) distribution: $x_1 = 2$, $x_2 = 1$.

Example 3: Generate a random sample of size 6 from the POI(2) distribution.

x    P(X = x)    F_X(x) = P(X <= x)
0    0.1353      0.1353
1    0.2707      0.4060
2    0.2707      0.6767
3    0.1804      0.8571
4    0.0902      0.9473
5    0.0361      0.9834

$X_k = i$ if for the $k$th observation $U_k$: $F_X(i-1) \le U_k < F_X(i)$.

U_k:  0.0909   0.1850   0.1243   0.2991   0.4290   0.9272
X_k:  0        1        0        1        2        4

The random sample of size 6 selected from the POI(2) distribution is: 0, 1, 0, 1, 2, 4.
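The table lookup $F_X(i-1) \le U_k < F_X(i)$ can be sketched in code (truncating the Poisson support at 20 is an arbitrary choice, harmless here since $F_X(19)$ is essentially 1):

```python
import numpy as np
from math import exp, factorial

# Discrete inverse-cdf generation for POI(2): X_k = i when
# F_X(i-1) <= U_k < F_X(i); np.searchsorted performs the lookup.
lam = 2.0
pmf = [exp(-lam) * lam**i / factorial(i) for i in range(20)]
cdf = np.cumsum(pmf)

u = np.array([0.0909, 0.1850, 0.1243, 0.2991, 0.4290, 0.9272])
x = np.searchsorted(cdf, u, side='right')
print(x)  # [0 1 0 1 2 4], matching the slide
```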

Theorem. If a random variable $X$ has a continuous and strictly increasing cdf $F_X$, then $F_X(X)$ has the $U[0,1]$ distribution.

Therefore if $Y \sim U[0,1]$ then $F_X^{-1}(Y)$ has the same distribution as the random variable $X$. Therefore, to generate the distribution of $X$ one generates the $U[0,1]$ distribution first, and then transforms the obtained observations by $F_X^{-1}$.

Problem 10.4.3, p. 329. The density and the cdf are
\[ f(x) = \begin{cases} e^{2x} & \text{for } x < 0 \\ e^{-2x} & \text{for } x \ge 0 \end{cases} \qquad\text{and}\qquad F(x) = \begin{cases} \frac{1}{2}e^{2x} & \text{for } x < 0 \\ 1 - \frac{1}{2}e^{-2x} & \text{for } x \ge 0. \end{cases} \]
Since $F(0) = \frac{1}{2}$,
\[ F^{-1}(y) = \begin{cases} \frac{1}{2}\log 2y & \text{for } y \le \frac{1}{2} \\ -\frac{1}{2}\log 2(1-y) & \text{for } y > \frac{1}{2}. \end{cases} \]
$y_1 = 0.744921 \Rightarrow x_1 = -\frac{1}{2}\log 2(1 - 0.744921) = 0.336517$
$y_2 = 0.464001 \Rightarrow x_2 = \frac{1}{2}\log(2\cdot 0.464001) = -0.03736$

HW 10.4.2 p.329
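The inverse-cdf transform of Problem 10.4.3 can be coded directly; a sketch reproducing the two values from the text:

```python
from math import log

# F^{-1}(y) =  (1/2) log(2y)       for y <= 1/2,
#             -(1/2) log(2(1-y))   for y >  1/2.
def F_inv(y):
    return 0.5 * log(2 * y) if y <= 0.5 else -0.5 * log(2 * (1 - y))

print(F_inv(0.744921))  # x1, about 0.3365
print(F_inv(0.464001))  # x2, about -0.0374
```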

Accept/Reject Algorithm

When the distribution of a variable $X$ is such that the cdf $F$ and/or $F^{-1}$ do not have a closed form, one of the possible methods of generating a random sample from the distribution of $X$ is the so-called accept/reject algorithm:

Let $U \sim U[0,1]$, and let the variable $Y$ with density $g$ have some distribution that is easy to generate. Variables $U$ and $Y$ are independent.

Additionally, let $c$ be a constant such that $f(y) \le c\,g(y)$ for any value $y$ of $Y$; in other words, $c = \sup_y \frac{f(y)}{g(y)}$.

Finally, $X = Y$ if $U < \frac{f(Y)}{c\,g(Y)}$.

Justification: it will be shown that $F_X(y) = F_Y\big(y\,\big|\,U \le \frac{f(Y)}{cg(Y)}\big)$.
\[ F_Y\Big(y\,\Big|\,U \le \frac{f(Y)}{cg(Y)}\Big) = \frac{P\big(Y \le y,\; U \le f(Y)/[cg(Y)]\big)}{P\big(U \le f(Y)/[cg(Y)]\big)}. \]
The denominator is
\[ P\big(U \le f(Y)/[cg(Y)]\big) = \int_{-\infty}^{\infty} g(t)\Big(\int_0^{f(t)/[cg(t)]} du\Big)dt = \int_{-\infty}^{\infty} g(t)\,\frac{f(t)}{cg(t)}\,dt = \frac{1}{c}\int_{-\infty}^{\infty} f(t)\,dt = \frac{1}{c}, \]
and the numerator is
\[ \int_{-\infty}^{y} g(t)\Big(\int_0^{f(t)/[cg(t)]} du\Big)dt = \int_{-\infty}^{y} g(t)\,\frac{f(t)}{cg(t)}\,dt = \frac{1}{c}\int_{-\infty}^{y} f(t)\,dt, \]
so the ratio equals
\[ c\cdot\frac{1}{c}\int_{-\infty}^{y} f(t)\,dt = \int_{-\infty}^{y} f(t)\,dt = F_X(y). \]

Problem 10.4.6, p. 329. Use the accept/reject algorithm to generate a sample from the $N(0,1)$ distribution. $f_X$ - the density of $N(0,1)$; $F_X$ does not have a closed form. $Y$ has the double exponential (Laplace) distribution with density $g(y) = 1.5\,e^{-3|y|}$.

\[ c = \sup_y \frac{f(y)}{g(y)}, \qquad \frac{f(y)}{g(y)} = \frac{(2\pi)^{-1/2}\,e^{-y^2/2}}{1.5\,e^{-3|y|}} = \frac{1}{1.5\sqrt{2\pi}}\,e^{-y^2/2 + 3|y|}. \]
The function is even, so it is enough to consider $y > 0$. For $y > 0$, maximize $e^{-y^2/2+3y}$:
\[ \frac{d}{dy}\,e^{-y^2/2+3y} = e^{-y^2/2+3y}\,(-y + 3) \]
is equal to $0$ if $y = 3$. Hence
\[ \sup_y \frac{f(y)}{g(y)} = \frac{f(3)}{g(3)} = \frac{e^{-4.5+9}}{1.5\sqrt{2\pi}} = \frac{e^{4.5}}{1.5\sqrt{2\pi}} = 23.941 = c. \]
$X = Y$ if $U < \frac{f(Y)}{cg(Y)}$.

U1         U2         Y          f(y)/[cg(y)]   X
0.222950   0.516174    0.01096   0.1148         none
0.847152   0.466449   -0.02315   0.0119         none
0.614370   0.001058   -2.05270   0.6385         -2.0527

$x_1 = -2.0527$.
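The whole scheme of Problem 10.4.6 can be sketched in code (the seed and sample size are arbitrary choices; the Laplace proposal is itself drawn by its inverse cdf applied to $U_2$, as in the table above):

```python
import numpy as np

rng = np.random.default_rng(6)
c = np.exp(4.5) / (1.5 * np.sqrt(2 * np.pi))  # = 23.941

def f(y):  # N(0,1) target density
    return np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)

def g(y):  # Laplace proposal density 1.5 e^{-3|y|}
    return 1.5 * np.exp(-3 * abs(y))

def draw():
    while True:
        u1, u2 = rng.uniform(size=2)
        # inverse cdf of the Laplace proposal applied to u2
        y = np.log(2 * u2) / 3 if u2 < 0.5 else -np.log(2 * (1 - u2)) / 3
        if u1 < f(y) / (c * g(y)):  # accept
            return y

sample = np.array([draw() for _ in range(5_000)])
print(sample.mean(), sample.std())  # close to 0 and 1
```

With $c \approx 23.9$, only about one proposal in 24 is accepted; a Laplace proposal with rate closer to 1 would give a far smaller $c$ and a far more efficient sampler.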

Example. Another accept/reject-type algorithm will be used to generate the $\mathrm{BETA}(\alpha, \beta+1)$ distribution.

Let $U_1, U_2 \sim U[0,1]$ be independent, and let $\alpha > 0$, $\beta > 0$,
\[ V_1 = U_1^{1/\alpha}, \qquad V_2 = U_2^{1/\beta}. \qquad X = V_1 \ \text{if}\ V_1 + V_2 \le 1. \]

Determine the distribution of $X$:
\[ F_X(a) = P(V_1 \le a \mid V_1 + V_2 \le 1) = \frac{P(V_1 \le a,\; V_1 + V_2 \le 1)}{P(V_1 + V_2 \le 1)} = \frac{N}{D}. \]
\[ F_{V_1}(v) = P(V_1 \le v) = P(U_1^{1/\alpha} \le v) = P(U_1 \le v^\alpha) = v^\alpha, \]
so $f_{V_1}(v) = \alpha v^{\alpha-1}$ and $f(v_1, v_2) = \alpha v_1^{\alpha-1}\,\beta v_2^{\beta-1}$ (variables $V_1$ and $V_2$ are independent since $U_1$ and $U_2$ are assumed independent).

\[ D = P(V_1 + V_2 \le 1) = \int_0^1\!\!\int_0^{1-v_1} \alpha v_1^{\alpha-1}\,\beta v_2^{\beta-1}\,dv_2\,dv_1 = \int_0^1 \alpha v_1^{\alpha-1}\underbrace{\int_0^{1-v_1} \beta v_2^{\beta-1}\,dv_2}_{=\,(1-v_1)^\beta}\,dv_1 = \alpha\int_0^1 v_1^{\alpha-1}(1-v_1)^\beta\,dv_1 \]
\[ = \alpha\,\frac{\Gamma(\alpha)\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\underbrace{\int_0^1 \frac{\Gamma(\alpha+\beta+1)}{\Gamma(\alpha)\Gamma(\beta+1)}\,v_1^{\alpha-1}(1-v_1)^\beta\,dv_1}_{=1} = \frac{\Gamma(\alpha+1)\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}, \]
and
\[ N = \int_0^a\!\!\int_0^{1-v_1} \alpha v_1^{\alpha-1}\,\beta v_2^{\beta-1}\,dv_2\,dv_1 = \alpha\int_0^a (1-v_1)^\beta\,v_1^{\alpha-1}\,dv_1 \]
\[ = \alpha\,\frac{\Gamma(\alpha)\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\int_0^a \frac{\Gamma(\alpha+\beta+1)}{\Gamma(\alpha)\Gamma(\beta+1)}\,(1-v_1)^\beta\,v_1^{\alpha-1}\,dv_1 = \frac{\Gamma(\alpha+1)\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\,F_{\mathrm{BETA}(\alpha,\beta+1)}(a). \]

Now
\[ F_X(a) = \frac{N}{D} = F_{\mathrm{BETA}(\alpha,\beta+1)}(a), \qquad\text{i.e. } X \sim \mathrm{BETA}(\alpha, \beta+1). \]

Generate one observation from the BETA(0.738, 1.449) distribution.
$X \sim \mathrm{BETA}(0.738, 1.449)$, so $\alpha = 0.738$, $\beta = 0.449$.
Generate $u_1, u_2$: 0.996484, 0.066042.
$v_1 = 0.996484^{1/0.738} = 0.99523$, $\quad v_2 = 0.066042^{1/0.449} = 0.002352$.
$v_1 + v_2 = 0.99758 \le 1$ and therefore $x = v_1 = 0.99523$.
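The algorithm from this example can be sketched as a reusable generator (the seed and replication count are arbitrary choices; the mean of BETA(a, b), which is a/(a+b), serves as a sanity check):

```python
import numpy as np

# V1 = U1^{1/alpha}, V2 = U2^{1/beta}; accept X = V1 when
# V1 + V2 <= 1, which yields X ~ BETA(alpha, beta + 1).
rng = np.random.default_rng(7)
alpha, beta = 0.738, 0.449   # target BETA(0.738, 1.449)

def draw():
    while True:
        u1, u2 = rng.uniform(size=2)
        v1, v2 = u1**(1 / alpha), u2**(1 / beta)
        if v1 + v2 <= 1:
            return v1

x = np.array([draw() for _ in range(20_000)])
print(x.mean())  # close to 0.738 / (0.738 + 1.449) = 0.3375
```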

