
STAT 410
Fall 2016

Maximum Likelihood Asymptotics, Information, Cramér-Rao Lower Bound

Definition: Maximum Likelihood Estimator (MLE)

Let $f(x; \theta)$ be a p.m.f. or p.d.f., with $\theta \in \Omega$, the parameter space.

The likelihood function for a sample of i.i.d. $X_1, \dots, X_n$ is

$$L(\theta; x) = L(\theta; x_1, \dots, x_n) = f(x_1, \dots, x_n; \theta) = \prod_{i=1}^{n} f(x_i; \theta),$$

where $x = (x_1, \dots, x_n)$ is a vector of sample observations. The log-likelihood is

$$\ell(\theta; x) = \ln[L(\theta; x_1, \dots, x_n)] = \sum_{i=1}^{n} \ln[f(x_i; \theta)].$$
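The product/sum identity above is easy to check numerically. A minimal sketch (the exponential density and the sample values are illustrative assumptions, not from the notes):

```python
import math

def pdf_exp(x, theta):
    # Exponential density f(x; theta) = (1/theta) exp(-x/theta), x > 0
    return math.exp(-x / theta) / theta

def likelihood(theta, xs):
    # L(theta; x) = product of f(x_i; theta) over the sample
    L = 1.0
    for x in xs:
        L *= pdf_exp(x, theta)
    return L

def log_likelihood(theta, xs):
    # l(theta; x) = sum of ln f(x_i; theta) over the sample
    return sum(math.log(pdf_exp(x, theta)) for x in xs)

xs = [0.5, 1.2, 2.0, 0.8]   # hypothetical observations
theta = 1.1
log_of_product = math.log(likelihood(theta, xs))
sum_of_logs = log_likelihood(theta, xs)
```

In practice one always works with the sum of logs, since the raw product underflows quickly as $n$ grows.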

Assumptions (Regularity Conditions):

(R0) The pdfs are distinct; i.e., $\theta \neq \theta'$ implies $f(x_i; \theta) \neq f(x_i; \theta')$.

(R1) The pdfs have common support for all $\theta$.

(R2) The true unknown point $\theta_0$ is an interior point of $\Omega$.

(R3) $f(x; \theta)$ is a twice differentiable function of $\theta$.

(R4) $\int f(x; \theta)\,dx$ can be differentiated twice under the integral sign as a function of $\theta$.

(R5) There is a function $M(x)$ such that $\left|\dfrac{\partial^3}{\partial\theta^3} \ln f(x; \theta)\right| \le M(x)$, with $E_{\theta_0}[M(X)] < \infty$.

Theorem 6.1.3. Assume that $X_1, \dots, X_n$ satisfy regularity conditions (R0) to (R2), where $\theta_0$ is the true parameter, and further that $f(x; \theta)$ is differentiable with respect to $\theta$. Then the likelihood equation

$$\frac{\partial}{\partial\theta} L(\theta; x) = 0, \qquad \text{equivalently} \qquad \frac{\partial}{\partial\theta} \ell(\theta; x) = 0,$$

has a solution $\hat\theta_n$ such that $\hat\theta_n \xrightarrow{P} \theta_0$.
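The consistency conclusion can be illustrated by simulation. A sketch (assuming, as in Example 3 below, an Exponential($\theta$) model in the mean parameterization, where the mle is the sample mean; the seed and $\theta_0 = 2$ are arbitrary choices):

```python
import random
import statistics

random.seed(42)
theta0 = 2.0   # true mean of the Exponential(theta) distribution

def mle_exponential_mean(n):
    # random.expovariate takes the rate 1/theta, so draws have mean theta0;
    # for f(x; theta) = (1/theta) e^{-x/theta}, the MLE of theta is x-bar.
    sample = [random.expovariate(1.0 / theta0) for _ in range(n)]
    return statistics.fmean(sample)

# The estimation error should shrink as n grows (theta_hat -> theta0 in prob.)
errors = {n: abs(mle_exponential_mean(n) - theta0) for n in (100, 10_000)}
```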


Definition: The score function is $\dfrac{\partial}{\partial\theta} \ln[f(x; \theta)]$.

Note that

$$E\!\left[\frac{\partial}{\partial\theta} \ln f(X; \theta)\right] = 0.$$

Proof:

$$1 = \int f(x; \theta)\,dx.$$

Taking a derivative of both sides with respect to $\theta$ yields

$$0 = \int \frac{\partial}{\partial\theta} f(x;\theta)\,dx = \int \frac{\partial f(x;\theta)/\partial\theta}{f(x;\theta)}\, f(x;\theta)\,dx = \int \frac{\partial}{\partial\theta} \ln[f(x;\theta)]\, f(x;\theta)\,dx = E\!\left[\frac{\partial}{\partial\theta} \ln f(X;\theta)\right].$$

Definition: Fisher information, $I(\theta)$, is the variance of the score function,

$$I(\theta) = \mathrm{Var}\!\left(\frac{\partial}{\partial\theta} \ln f(X;\theta)\right) = E\!\left[\left(\frac{\partial}{\partial\theta} \ln f(X;\theta)\right)^{2}\right] = -E\!\left[\frac{\partial^2}{\partial\theta^2} \ln f(X;\theta)\right].$$

Proof. Since the score has mean zero, its variance equals $E\!\left[\left(\frac{\partial}{\partial\theta} \ln f(X;\theta)\right)^{2}\right]$.

We showed above that

$$\int \frac{\partial}{\partial\theta} \ln[f(x;\theta)]\, f(x;\theta)\,dx = 0.$$

Let's take another derivative of both sides with respect to $\theta$:

$$\frac{\partial}{\partial\theta} \int \frac{\partial}{\partial\theta} \ln[f(x;\theta)]\, f(x;\theta)\,dx = 0.$$



$$\int \frac{\partial}{\partial\theta}\left\{\frac{\partial}{\partial\theta} \ln[f(x;\theta)]\, f(x;\theta)\right\} dx = 0.$$

Using the product rule yields

$$\int \left\{\frac{\partial^2}{\partial\theta^2} \ln[f(x;\theta)]\, f(x;\theta) + \frac{\partial}{\partial\theta} \ln[f(x;\theta)]\, \frac{\partial}{\partial\theta} f(x;\theta)\right\} dx = 0,$$

$$\int \left\{\frac{\partial^2}{\partial\theta^2} \ln[f(x;\theta)] + \left(\frac{\partial}{\partial\theta} \ln[f(x;\theta)]\right)\frac{\partial f(x;\theta)/\partial\theta}{f(x;\theta)}\right\} f(x;\theta)\, dx = 0,$$

$$E\!\left[\frac{\partial^2}{\partial\theta^2} \ln f(X;\theta)\right] + E\!\left[\left(\frac{\partial}{\partial\theta} \ln f(X;\theta)\right)^{2}\right] = 0.$$
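The identity just derived can be verified exactly in a discrete model, where both expectations are finite sums. A sketch for a Bernoulli($\theta$) pmf (the value $\theta = 0.3$ is an arbitrary illustration):

```python
# Exact check of E[(d/dθ ln f)²] = -E[d²/dθ² ln f] for Bernoulli(θ):
# both expectations are sums over x in {0, 1}.
theta = 0.3

def pmf(x):
    # f(x; θ) = θ^x (1-θ)^(1-x)
    return theta if x == 1 else 1 - theta

def score(x):
    # d/dθ ln f(x; θ) = x/θ - (1-x)/(1-θ)
    return x / theta - (1 - x) / (1 - theta)

def d2_loglik(x):
    # d²/dθ² ln f(x; θ) = -x/θ² - (1-x)/(1-θ)²
    return -x / theta ** 2 - (1 - x) / (1 - theta) ** 2

e_score = sum(score(x) * pmf(x) for x in (0, 1))          # should be 0
e_score_sq = sum(score(x) ** 2 * pmf(x) for x in (0, 1))  # E[score²]
neg_e_d2 = -sum(d2_loglik(x) * pmf(x) for x in (0, 1))    # -E[second derivative]
```

Both routes give $1/[\theta(1-\theta)]$, the Fisher information computed later in Example 4.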

Rao-Cramér Lower Bound: Consider $X_1, \dots, X_n$ iid with pdf $f(x;\theta)$ and a statistic $Y = u(X_1, \dots, X_n)$ such that $E(Y) = k(\theta)$. Then

$$\mathrm{Var}(Y) \ge \frac{[k'(\theta)]^{2}}{n\, I(\theta)}.$$

Note that if $Y$ is unbiased, then $k(\theta) = \theta$, $k'(\theta) = 1$, and

$$\mathrm{Var}(Y) \ge \frac{1}{n\, I(\theta)}.$$

$Y$ is an efficient estimator of $\theta$ if and only if the variance of $Y$ attains the Rao-Cramér lower bound.
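For a concrete case where the bound is attained: for the Exponential($\theta$) density treated in Example 3, $I(\theta) = 1/\theta^2$ and $\mathrm{Var}(\bar X) = \theta^2/n$, so $\bar X$ meets the bound exactly. A numerical sketch ($\theta$ and $n$ are arbitrary illustrative values):

```python
# For Exponential(θ) in the mean parameterization (see Example 3):
# I(θ) = 1/θ², so the Rao-Cramér bound for an unbiased estimator is θ²/n,
# and Var(X̄) = Var(X)/n = θ²/n attains it exactly.
theta, n = 2.5, 40
fisher_info = 1.0 / theta ** 2       # I(θ)
crlb = 1.0 / (n * fisher_info)       # 1/(n I(θ))
var_xbar = theta ** 2 / n            # Var(X̄)
```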


Theorem 6.2.2. Assume $X_1, \dots, X_n$ are iid with pdf $f(x; \theta_0)$ for $\theta_0 \in \Omega$ and (R0) to (R5) are satisfied. Suppose the Fisher information satisfies $0 < I(\theta_0) < \infty$. Then any consistent sequence of solutions of the mle equations satisfies

$$\sqrt{n}\,(\hat\theta_n - \theta_0) \xrightarrow{D} N\!\left(0, \frac{1}{I(\theta_0)}\right).$$

It follows that

$$\hat\theta_n \pm z_{\alpha/2}\, \frac{1}{\sqrt{n\, I(\theta_0)}}$$

has an approximate $100(1-\alpha)\%$ confidence level for large $n$.

Intuition: A second-order Taylor expansion of $\ell(\theta; x)$ about $\theta_0$ is

$$\ell(\theta; x) = \ell(\theta_0; x) + (\theta - \theta_0)\, \ell'(\theta_0; x) + \frac{1}{2}(\theta - \theta_0)^{2}\, \ell''(\theta_n^{*}; x)$$

for $\theta_n^{*}$ between $\theta$ and $\theta_0$.

Note:

$$\ell'(\hat\theta_n; x) = 0,$$

$$\frac{1}{\sqrt{n}}\, \ell'(\theta_0; x) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \frac{\partial}{\partial\theta} \ln[f(X_i; \theta_0)] \xrightarrow{D} N(0, I(\theta_0)),$$

$$-\frac{1}{n}\, \ell''(\theta_0; x) = -\frac{1}{n} \sum_{i=1}^{n} \frac{\partial^2}{\partial\theta^2} \ln[f(X_i; \theta_0)] \xrightarrow{P} I(\theta_0),$$

and, since $\theta_n^{*} \xrightarrow{P} \theta_0$,

$$\frac{1}{n}\, \ell''(\theta_n^{*}; x) + I(\theta_0) \xrightarrow{P} 0.$$

Combining these, solving $0 = \ell'(\hat\theta_n; x) \approx \ell'(\theta_0; x) + (\hat\theta_n - \theta_0)\, \ell''(\theta_n^{*}; x)$ gives

$$\sqrt{n}\,(\hat\theta_n - \theta_0) = \frac{n^{-1/2}\, \ell'(\theta_0; x)}{-\,n^{-1}\, \ell''(\theta_n^{*}; x)} \xrightarrow{D} N\!\left(0, \frac{I(\theta_0)}{I(\theta_0)^{2}}\right) = N\!\left(0, \frac{1}{I(\theta_0)}\right).$$
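The limiting variance $1/I(\theta_0)$ can be checked by Monte Carlo. A sketch for Bernoulli($\theta_0$), where $\hat\theta = \bar X$ and $1/I(\theta_0) = \theta_0(1-\theta_0)$ (the seed and constants are illustrative choices):

```python
import random
import statistics

random.seed(7)
theta0, n, reps = 0.4, 200, 2000

def scaled_error():
    # One draw of sqrt(n)(theta_hat - theta0) with theta_hat = x-bar
    xbar = sum(1 if random.random() < theta0 else 0 for _ in range(n)) / n
    return (n ** 0.5) * (xbar - theta0)

draws = [scaled_error() for _ in range(reps)]
sim_var = statistics.pvariance(draws)   # should be near 1/I(theta0)
target = theta0 * (1 - theta0)          # 1/I(θ₀) for Bernoulli (see Example 4)
```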


Example 1. Consider the $N(\theta, \sigma^2)$ distribution with $\sigma^2$ known.

a) Find $I(\theta)$.

$$f(x; \theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left[-\frac{(x-\theta)^2}{2\sigma^2}\right]$$

$$\ln[f(x;\theta)] = -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{(x-\theta)^2}{2\sigma^2}$$

$$\frac{\partial}{\partial\theta} \ln[f(x;\theta)] = \frac{x-\theta}{\sigma^2}, \qquad \frac{\partial^2}{\partial\theta^2} \ln[f(x;\theta)] = -\frac{1}{\sigma^2}$$

We have two options for computing $I(\theta)$:

$$I(\theta) = E\!\left[\left(\frac{\partial}{\partial\theta}\ln f(X;\theta)\right)^{2}\right] = E\!\left[\frac{(X-\theta)^2}{\sigma^4}\right] = \frac{\sigma^2}{\sigma^4} = \frac{1}{\sigma^2},$$

$$I(\theta) = -E\!\left[\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right] = \frac{1}{\sigma^2}.$$

b) Is $\bar X$ an efficient estimator of $\theta$?

The mle of $\theta$ is $\hat\theta = \bar X$.

$$E(\bar X) = \theta, \qquad \mathrm{Var}(\bar X) = \frac{\sigma^2}{n} = \frac{1}{n\, I(\theta)},$$

so $\bar X$ is an efficient estimator of $\theta$.

c) What is the large sample distribution of $\hat\theta$?

$$\hat\theta = \bar X \sim N\!\left(\theta, \frac{\sigma^2}{n}\right).$$
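A quick numerical sketch of the first-order condition in this example: with $\sigma^2$ known, the normal log-likelihood has zero slope at the mle $\hat\theta = \bar x$ (checked by a central finite difference; the data and $\sigma^2$ are hypothetical):

```python
import math

sigma2 = 4.0                       # assumed known variance
xs = [1.0, 2.5, 0.5, 3.0]          # hypothetical observations
xbar = sum(xs) / len(xs)           # mle of theta

def loglik(theta):
    # Normal log-likelihood with known sigma^2
    return sum(-0.5 * math.log(2 * math.pi * sigma2)
               - (x - theta) ** 2 / (2 * sigma2) for x in xs)

# Central finite-difference slope at the mle: should be ~0
h = 1e-6
slope_at_mle = (loglik(xbar + h) - loglik(xbar - h)) / (2 * h)
```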


Example 2. Consider the $N(\mu, \sigma^2)$ distribution with $\mu$ known.

a) Find $I(\sigma)$.

$$f(x; \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$

$$\ln[f(x;\sigma)] = -\frac{1}{2}\ln(2\pi) - \ln(\sigma) - \frac{(x-\mu)^2}{2\sigma^2}$$

$$\frac{\partial}{\partial\sigma} \ln[f(x;\sigma)] = -\frac{1}{\sigma} + \frac{(x-\mu)^2}{\sigma^3}, \qquad \frac{\partial^2}{\partial\sigma^2} \ln[f(x;\sigma)] = \frac{1}{\sigma^2} - \frac{3(x-\mu)^2}{\sigma^4}$$

$$I(\sigma) = -E\!\left[\frac{\partial^2}{\partial\sigma^2}\ln f(X;\sigma)\right] = -\frac{1}{\sigma^2} + \frac{3\sigma^2}{\sigma^4} = \frac{2}{\sigma^2}.$$

b) Find $I(\sigma^2)$.

$$f(x; \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$

$$\ln[f(x;\sigma^2)] = -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2}$$

$$\frac{\partial}{\partial(\sigma^2)} \ln[f(x;\sigma^2)] = -\frac{1}{2\sigma^2} + \frac{(x-\mu)^2}{2(\sigma^2)^2}, \qquad \frac{\partial^2}{\partial(\sigma^2)^2} \ln[f(x;\sigma^2)] = \frac{1}{2(\sigma^2)^2} - \frac{(x-\mu)^2}{(\sigma^2)^3}$$

$$I(\sigma^2) = -E\!\left[\frac{\partial^2}{\partial(\sigma^2)^2}\ln f(X;\sigma^2)\right] = -\frac{1}{2(\sigma^2)^2} + \frac{\sigma^2}{(\sigma^2)^3} = \frac{1}{2\sigma^4}.$$
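The value $I(\sigma^2) = 1/(2\sigma^4)$ in part b) can be checked numerically by computing $-E[\partial^2 \ln f(X;\sigma^2)/\partial(\sigma^2)^2]$ with a midpoint Riemann sum (the choices of $\mu$, $\sigma^2$, grid, and range are illustrative):

```python
import math

mu, sigma2 = 0.0, 1.5
sigma = math.sqrt(sigma2)

def pdf(x):
    # N(mu, sigma2) density
    return math.exp(-((x - mu) ** 2) / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def d2_loglik(x):
    # d²/d(σ²)² ln f = 1/(2(σ²)²) - (x-μ)²/(σ²)³
    return 1 / (2 * sigma2 ** 2) - (x - mu) ** 2 / sigma2 ** 3

# Midpoint Riemann sum over ±8σ (the tail contribution is negligible)
N = 100_000
lo, hi = mu - 8 * sigma, mu + 8 * sigma
dx = (hi - lo) / N
info = -sum(d2_loglik(lo + (i + 0.5) * dx) * pdf(lo + (i + 0.5) * dx)
            for i in range(N)) * dx
expected = 1 / (2 * sigma2 ** 2)   # 1/(2σ⁴)
```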

c) What is the asymptotic distribution of $S^2 = \dfrac{1}{n-1}\displaystyle\sum_{i=1}^{n}(X_i - \bar X)^2$?

The mle is $\hat\sigma^2 = \dfrac{1}{n}\displaystyle\sum_{i=1}^{n}(X_i - \mu)^2$, and we know that $\sqrt{n}\,(\hat\sigma^2 - \sigma^2) \xrightarrow{D} N(0, 2\sigma^4)$, since $1/I(\sigma^2) = 2\sigma^4$.

For $S^2$,

$$E(S^2) = \sigma^2, \qquad \mathrm{Var}(S^2) = \frac{2\sigma^4}{n-1},$$

so

$$\mathrm{Var}\!\left(\sqrt{n}\,S^2\right) = \frac{2n\sigma^4}{n-1} \to 2\sigma^4, \qquad \text{and} \qquad \sqrt{n}\,(S^2 - \sigma^2) \xrightarrow{D} N(0, 2\sigma^4),$$

the same limit as for the mle.
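The finite-sample variance formula for $S^2$ under normal data can be checked by Monte Carlo. A sketch (seed, $\sigma^2$, $n$, and the number of replications are illustrative choices):

```python
import random
import statistics

random.seed(11)
mu, sigma2, n, reps = 0.0, 2.0, 50, 4000
sigma = sigma2 ** 0.5

# Each replication: draw a normal sample of size n and compute S²
s2_draws = [statistics.variance([random.gauss(mu, sigma) for _ in range(n)])
            for _ in range(reps)]

sim_mean = statistics.fmean(s2_draws)   # should be near sigma2 (unbiased)
sim_var = statistics.pvariance(s2_draws)
target = 2 * sigma2 ** 2 / (n - 1)      # Var(S²) = 2σ⁴/(n-1) for normal data
```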

Example 3. Let $X \sim \text{Exponential}(\theta)$. Find $I(\theta)$.

$$f(x;\theta) = \frac{1}{\theta}\, e^{-x/\theta}, \quad x > 0, \; \theta > 0$$

$$\ln[f(x;\theta)] = -\ln(\theta) - \frac{x}{\theta}$$

$$\frac{\partial}{\partial\theta} \ln[f(x;\theta)] = -\frac{1}{\theta} + \frac{x}{\theta^2}, \qquad \frac{\partial^2}{\partial\theta^2} \ln[f(x;\theta)] = \frac{1}{\theta^2} - \frac{2x}{\theta^3}$$

We have two options for computing $I(\theta)$:

$$I(\theta) = E\!\left[\left(-\frac{1}{\theta} + \frac{X}{\theta^2}\right)^{2}\right] = \frac{\mathrm{Var}(X)}{\theta^4} = \frac{\theta^2}{\theta^4} = \frac{1}{\theta^2},$$

$$I(\theta) = -E\!\left[\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right] = -\frac{1}{\theta^2} + \frac{2\theta}{\theta^3} = \frac{1}{\theta^2}.$$
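The first option, $I(\theta) = E[(\partial \ln f/\partial\theta)^2] = 1/\theta^2$, can also be confirmed by direct numerical integration (midpoint Riemann sum; $\theta$, grid, and range are illustrative choices):

```python
import math

theta = 1.7

def pdf(x):
    # Exponential density f(x; θ) = (1/θ) e^{-x/θ}
    return math.exp(-x / theta) / theta

def score(x):
    # ∂/∂θ ln f(x; θ) = -1/θ + x/θ²
    return -1 / theta + x / theta ** 2

# Midpoint Riemann sum over [0, 40θ]; the truncated tail is negligible
N = 100_000
hi = 40 * theta
dx = hi / N
info = sum(score((i + 0.5) * dx) ** 2 * pdf((i + 0.5) * dx) for i in range(N)) * dx
expected = 1 / theta ** 2
```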


Example 4. Let $X \sim \text{Binomial}(1, \theta)$, $0 < \theta < 1$.

a) Find $I(\theta)$.

$$f(x;\theta) = \theta^{x}(1-\theta)^{1-x}, \quad 0 < \theta < 1, \; x = 0, 1$$

$$\ln[f(x;\theta)] = x\ln(\theta) + (1-x)\ln(1-\theta)$$

$$\frac{\partial}{\partial\theta} \ln[f(x;\theta)] = \frac{x}{\theta} - \frac{1-x}{1-\theta} = \frac{x-\theta}{\theta(1-\theta)}, \qquad \frac{\partial^2}{\partial\theta^2} \ln[f(x;\theta)] = -\frac{x}{\theta^2} - \frac{1-x}{(1-\theta)^2}$$

We have two options for computing $I(\theta)$:

$$I(\theta) = E\!\left[\left(\frac{X-\theta}{\theta(1-\theta)}\right)^{2}\right] = \frac{\theta(1-\theta)}{[\theta(1-\theta)]^{2}} = \frac{1}{\theta(1-\theta)},$$

$$I(\theta) = -E\!\left[\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right] = \frac{\theta}{\theta^2} + \frac{1-\theta}{(1-\theta)^2} = \frac{1}{\theta} + \frac{1}{1-\theta} = \frac{1}{\theta(1-\theta)}.$$

b) Is $\bar X$ an efficient estimator of $\theta$?

The mle of $\theta$ is $\hat\theta = \bar X$, so $\mathrm{Var}(\hat\theta) = \dfrac{\theta(1-\theta)}{n} = \dfrac{1}{n\, I(\theta)}$, and $\bar X$ is an efficient estimator of $\theta$.

c) What is the large sample distribution of $\hat\theta$?

$$\hat\theta = \bar X \sim N\!\left(\theta, \frac{\theta(1-\theta)}{n}\right).$$
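The efficiency claim in part b) amounts to an algebraic identity that a short check makes explicit ($\theta$ and $n$ below are arbitrary illustrations):

```python
# Var(X̄) = θ(1-θ)/n equals the Rao-Cramér bound 1/(n I(θ))
# with I(θ) = 1/(θ(1-θ)), so X̄ attains the bound exactly.
theta, n = 0.25, 30
fisher_info = 1 / (theta * (1 - theta))   # I(θ) for Bernoulli
crlb = 1 / (n * fisher_info)              # 1/(n I(θ))
var_xbar = theta * (1 - theta) / n        # Var(X̄)
```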


Example 5. Let $X_1, \dots, X_n$ be a random sample of size $n$ from the distribution with probability density function

$$f(x;\theta) = \begin{cases} \dfrac{1}{\theta}\, x^{(1/\theta) - 1}, & 0 \le x \le 1 \\[4pt] 0, & \text{otherwise.} \end{cases}$$

a) Find $I(\theta)$.

Recall that $E[-\ln(X)] = \theta$ and $E[(\ln(X))^{2}] = 2\theta^{2}$.

$$\ln[f(x;\theta)] = -\ln(\theta) + \left(\frac{1}{\theta} - 1\right)\ln(x)$$

$$\frac{\partial}{\partial\theta} \ln[f(x;\theta)] = -\frac{1}{\theta} - \frac{\ln(x)}{\theta^2}, \qquad \frac{\partial^2}{\partial\theta^2} \ln[f(x;\theta)] = \frac{1}{\theta^2} + \frac{2\ln(x)}{\theta^3}$$

$$I(\theta) = E\!\left[\left(-\frac{1}{\theta} - \frac{\ln(X)}{\theta^2}\right)^{2}\right] = \frac{1}{\theta^2} + \frac{2E[\ln(X)]}{\theta^3} + \frac{E[(\ln(X))^{2}]}{\theta^4} = \frac{1}{\theta^2} - \frac{2}{\theta^2} + \frac{2}{\theta^2} = \frac{1}{\theta^2},$$

or

$$I(\theta) = -E\!\left[\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right] = -\frac{1}{\theta^2} - \frac{2E[\ln(X)]}{\theta^3} = -\frac{1}{\theta^2} + \frac{2}{\theta^2} = \frac{1}{\theta^2}.$$
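The two "Recall" moments come from the fact that $Y = -\ln X$ is Exponential with mean $\theta$ under this density (check the tail: $P(-\ln X > y) = P(X < e^{-y}) = e^{-y/\theta}$). A numerical sketch integrating the density of $Y$ (the value of $\theta$, grid, and range are illustrative):

```python
import math

theta = 0.8
N = 100_000
hi = 40 * theta
dy = hi / N

def g(y):
    # Density of Y = -ln X: g(y; θ) = (1/θ) e^{-y/θ}, y > 0
    return math.exp(-y / theta) / theta

# Midpoint Riemann sums for E[Y] = E[-ln X] and E[Y²] = E[(ln X)²]
e_neg_lnx = sum((i + 0.5) * dy * g((i + 0.5) * dy) for i in range(N)) * dy
e_lnx_sq = sum(((i + 0.5) * dy) ** 2 * g((i + 0.5) * dy) for i in range(N)) * dy
```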


Example 6. Let $\theta > 0$ and let $X_1, \dots, X_n$ be a random sample from the distribution with the probability density function

$$f(x;\theta) = \frac{2\theta^{2}}{x^{3}}, \quad x > \theta.$$

Recall that $E(Y^{2}) = \mathrm{Var}(Y) + [E(Y)]^{2}$.

Since $P(X > x) = (\theta/x)^{2}$ for $x > \theta$, the sample minimum satisfies $P(Y_n > y) = (\theta/y)^{2n}$, i.e.

$$Y_n = \min_{1 \le i \le n} X_i \sim \text{Pareto}(\alpha = 2n, \theta), \qquad E(Y_n) = \frac{2n\theta}{2n-1}, \qquad \mathrm{Var}(Y_n) = \frac{2n\theta^{2}}{(2n-1)^{2}(2n-2)}.$$

Consider the estimator

$$\tilde\theta = \frac{2n-1}{2n}\, Y_n = \frac{2n-1}{2n}\, \min_{1 \le i \le n} X_i.$$

Is $\tilde\theta$ an efficient estimator of $\theta$? If not, find its efficiency.

$$E(\tilde\theta) = \frac{2n-1}{2n} \cdot \frac{2n\theta}{2n-1} = \theta,$$

so $\tilde\theta$ is unbiased.

$$\ln[f(x;\theta)] = \ln(2) + 2\ln(\theta) - 3\ln(x)$$

$$\frac{\partial}{\partial\theta} \ln[f(x;\theta)] = \frac{2}{\theta}, \qquad \frac{\partial^2}{\partial\theta^2} \ln[f(x;\theta)] = -\frac{2}{\theta^2}$$

The two options for computing $I(\theta)$ give

$$E\!\left[\left(\frac{\partial}{\partial\theta}\ln f(X;\theta)\right)^{2}\right] = \left(\frac{2}{\theta}\right)^{2} = \frac{4}{\theta^2}, \qquad -E\!\left[\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right] = \frac{2}{\theta^2}.$$

(The two disagree here because the support of $f(x;\theta)$ depends on $\theta$, so regularity condition (R1) fails and the information identities need not hold.)

Using $I(\theta) = 2/\theta^{2}$, the formal Rao-Cramér lower bound is $\dfrac{\theta^{2}}{2n}$, while

$$\mathrm{Var}(\tilde\theta) = \left(\frac{2n-1}{2n}\right)^{2} \frac{2n\theta^{2}}{(2n-1)^{2}(2n-2)} = \frac{\theta^{2}}{2n(2n-2)},$$

so $\tilde\theta$ is not efficient in the sense of attaining the bound. Its efficiency relative to the formal bound is

$$\frac{\theta^{2}/(2n)}{\theta^{2}/[2n(2n-2)]} = 2n - 2,$$

which exceeds 1; this is possible precisely because the regularity conditions behind the Rao-Cramér bound fail for this family.
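The moment formulas for the sample minimum can be checked by simulation, reading the density as $f(x;\theta) = 2\theta^2/x^3$, $x > \theta$, as above. The inverse-CDF draw is $X = \theta/\sqrt{U}$ with $U \sim \text{Uniform}(0,1)$, since $P(X > x) = (\theta/x)^2$. A sketch (seed and constants are illustrative choices):

```python
import random
import statistics

random.seed(3)
theta0, n, reps = 1.0, 10, 20_000

def sample_min():
    # Inverse-CDF sampling: X = θ/√U has survival function (θ/x)², x > θ
    return min(theta0 / random.random() ** 0.5 for _ in range(n))

mins = [sample_min() for _ in range(reps)]
mean_min = statistics.fmean(mins)
target_mean = 2 * n * theta0 / (2 * n - 1)      # E(Y_n) = 2nθ/(2n-1)
tilde_mean = (2 * n - 1) / (2 * n) * mean_min   # E of the bias-corrected estimator
```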

