
Advanced Statistics, MiQEF, University of St. Gallen
Chapter 2
On p. 9
Exercise 1:
Unbiased estimator of $\theta^2$: Consider $T_n V_n$, where $T_n$ and $V_n$ are independent unbiased estimators of $\theta$. Then
$$E_\theta[T_n V_n] \overset{\text{ind.}}{=} E_\theta[T_n]\, E_\theta[V_n] = \theta \cdot \theta = \theta^2.$$
Unbiased estimator of $\theta(\theta - 1) = \theta^2 - \theta$: Consider $T_n V_n - T_n$. Then
$$E_\theta[T_n V_n - T_n] = E_\theta[T_n V_n] - E_\theta[T_n] = \theta^2 - \theta.$$
This is not the only unbiased estimator. Another one is $T_n V_n - V_n$, for example.
Exercise 2: $X_i$ iid Bernoulli($p$), $T_n = \sum_{i=1}^n X_i \sim B(n, p)$, $s_n^2 = \frac{\sum_{i=1}^n (X_i - \bar X_n)^2}{n-1}$, $\bar X_n = \frac{T_n}{n}$. Recall: $s_n^2$ is an unbiased estimator of $p(1-p) = \mathrm{Var}(X_i)$.
Show that:
$$E_p\left[\frac{T_n (T_n - 1)}{n(n-1)}\right] = p^2.$$
Note that
$$s_n^2 = \frac{1}{n-1}\left[\sum_{i=1}^n X_i^2 - 2\sum_{i=1}^n X_i \bar X_n + \sum_{i=1}^n \bar X_n^2\right] = \frac{1}{n-1}\left[\sum_{i=1}^n X_i^2 - 2n \bar X_n^2 + n \bar X_n^2\right],$$
so
$$\frac{n}{n-1}\,\bar X_n^2 = \frac{1}{n-1}\sum_{i=1}^n X_i^2 - s_n^2.$$
Now
$$\frac{T_n (T_n - 1)}{n(n-1)} = \frac{n\bar X_n (n\bar X_n - 1)}{n(n-1)} = \frac{1}{n-1}\left(n\bar X_n^2 - \bar X_n\right) = \frac{n}{n-1}\,\bar X_n^2 - \frac{1}{n-1}\,\bar X_n = \frac{1}{n-1}\sum_{i=1}^n X_i^2 - s_n^2 - \frac{1}{n-1}\,\bar X_n.$$
Therefore
$$E_p\left[\frac{T_n (T_n - 1)}{n(n-1)}\right] = E_p\left[\frac{1}{n-1}\sum_{i=1}^n X_i^2 - s_n^2 - \frac{1}{n-1}\,\bar X_n\right] = \frac{n}{n-1}\underbrace{\left[p(1-p) + p^2\right]}_{=\mathrm{Var}_p(X_i) + E_p[X_i]^2} - p(1-p) - \frac{1}{n-1}\,p = p^2.$$
Consistency: $\bar X_n \xrightarrow{P} p$, $s_n^2 \xrightarrow{P} p(1-p)$, $\frac{1}{n}\sum_{i=1}^n X_i^2 \xrightarrow{P} E_p[X_i^2] = p$, hence
$$\frac{T_n (T_n - 1)}{n(n-1)} = \frac{1}{n-1}\sum_{i=1}^n X_i^2 - s_n^2 - \frac{1}{n-1}\,\bar X_n \xrightarrow{P} p - p(1-p) = p^2.$$
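A quick Monte Carlo sanity check of this identity (an illustrative sketch, not part of the original solution; it assumes NumPy, and the sample sizes are arbitrary):

```python
# Simulate Bernoulli samples and compare the Monte Carlo mean of
# T_n (T_n - 1) / (n (n - 1)) with p^2.
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 50, 0.3, 200_000
X = rng.binomial(1, p, size=(reps, n))  # reps independent samples of size n
T = X.sum(axis=1)                       # T_n = sum of the X_i
est = T * (T - 1) / (n * (n - 1))
print(est.mean(), p**2)                 # both close to 0.09
```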
Exercise 3: $X_i \sim \mathrm{Pois}(\lambda)$, $E[X_i] = \lambda$, $E[X_i^2] = \lambda^2 + \lambda$, $\mathrm{Var}(X_i) = \lambda$.
$$T_n = \bar X_n^2 - \bar X_n = \left(\frac{1}{n}\sum_{i=1}^n X_i\right)^2 - \left(\frac{1}{n}\sum_{i=1}^n X_i\right)$$
Once again: $s_n^2 = \frac{1}{n-1}\left[\sum_{i=1}^n X_i^2 - n\bar X_n^2\right]$, so that $\bar X_n^2 = -\frac{n-1}{n}\,s_n^2 + \frac{1}{n}\sum_{i=1}^n X_i^2$. Then
$$E_\lambda[T_n] = E_\lambda[\bar X_n^2] - E_\lambda[\bar X_n] = E_\lambda\left[-\frac{n-1}{n}\,s_n^2 + \frac{1}{n}\sum_{i=1}^n X_i^2\right] - E_\lambda[\bar X_n] =$$
$$= -\frac{n-1}{n}\,E_\lambda[s_n^2] + E_\lambda[m_2'] - E_\lambda[m_1'] = -\frac{n-1}{n}\,\mathrm{Var}(X_i) + (\lambda^2 + \lambda) - \lambda = \lambda^2 - \frac{n-1}{n}\,\lambda$$
$$b_n(\lambda^2) = \lambda^2 - \frac{n-1}{n}\,\lambda - \lambda^2 = -\frac{n-1}{n}\,\lambda \xrightarrow{n\to\infty} -\lambda$$
$T_n$ is not even asymptotically unbiased! An unbiased estimator would be
$$U_n = T_n + \frac{n-1}{n}\,s_n^2 = \frac{1}{n}\sum_{i=1}^n X_i^2 - \frac{1}{n}\sum_{i=1}^n X_i = m_2' - m_1',$$
or
$$V_n = T_n + \frac{n-1}{n}\,\bar X_n = \bar X_n^2 - \frac{1}{n}\,\bar X_n.$$
Indeed, $E_\lambda[U_n] = (\lambda^2 + \lambda) - \lambda = \lambda^2$, and $E_\lambda[V_n] = \left(\lambda^2 + \frac{\lambda}{n}\right) - \frac{\lambda}{n} = \lambda^2$, since $\mathrm{Var}(\bar X_n) = \lambda/n$.
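The bias formula can also be checked numerically (a sketch under the same NumPy assumption as above; the constants are arbitrary):

```python
# Bias of T_n = Xbar^2 - Xbar as an estimator of lambda^2, versus the
# corrected estimator U_n = mean(X_i^2) - mean(X_i).
import numpy as np

rng = np.random.default_rng(1)
n, lam, reps = 30, 2.0, 200_000
X = rng.poisson(lam, size=(reps, n))
xbar = X.mean(axis=1)
T = xbar**2 - xbar
U = (X**2).mean(axis=1) - xbar
print(T.mean() - lam**2, -(n - 1) / n * lam)  # simulated vs exact bias
print(U.mean() - lam**2)                      # close to 0: U_n is unbiased
```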
Exercise 4: $X_i$ iid $N(\mu, 1)$.
$$T_n = \bar X_n^2 - \frac{1}{n} = -\frac{n-1}{n}\,s_n^2 + \frac{1}{n}\sum_{i=1}^n X_i^2 - \frac{1}{n}$$
$$E[T_n] = -\frac{n-1}{n}\cdot 1 + \underbrace{(\mu^2 + 1)}_{=E[X_i^2]} - \frac{1}{n} = \mu^2,$$
so $T_n$ is an unbiased estimator of $\mu^2$.
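A numerical check of this unbiasedness (sketch, NumPy assumed):

```python
# Check that E[Xbar_n^2 - 1/n] = mu^2 for X_i iid N(mu, 1).
import numpy as np

rng = np.random.default_rng(2)
n, mu, reps = 25, 1.5, 200_000
X = rng.normal(mu, 1.0, size=(reps, n))
T = X.mean(axis=1)**2 - 1.0 / n
print(T.mean(), mu**2)  # both close to 2.25
```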
On p. 12
Exercise: $\hat p_{MLE} = \hat p_{MME} = \bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$.
$$E[\hat p] = p, \qquad \mathrm{Var}(\hat p) = \frac{\mathrm{Var}(X_i)}{n} = \frac{p(1-p)}{n}$$
We also know that $\hat p \xrightarrow{P} p$.
By the CLT:
$$\frac{\sqrt{n}\,(\hat p - p)}{\sqrt{p(1-p)}} \xrightarrow{d} N(0, 1), \quad\text{therefore}\quad \sqrt{n}\,(\hat p - p) \xrightarrow{d} N(0,\, p(1-p)).$$
$\hat p$ is a CAN estimator with $J^2(p) = p(1-p)$.
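The CAN property can be checked by simulation (a sketch with arbitrary $n$ and $p$, assuming NumPy):

```python
# The standardized estimator sqrt(n) * (phat - p) should have variance
# close to p * (1 - p), in line with the CAN property.
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 500, 0.4, 100_000
phat = rng.binomial(n, p, size=reps) / n   # T_n / n for reps samples
z = np.sqrt(n) * (phat - p)
print(z.var(), p * (1 - p))                # both close to 0.24
```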
On p. 14
Exercise 1:
(i) $\mathrm{Pois}(\lambda)$, MLE: $\hat\lambda = \bar X_n$.
We know: $\hat\lambda \xrightarrow{P} \lambda$, $E[\hat\lambda] = \lambda$, $\mathrm{Var}(\hat\lambda) = \frac{\mathrm{Var}(X_i)}{n} = \frac{\lambda}{n}$.
From the CLT:
$$\frac{\sqrt{n}\,(\hat\lambda - \lambda)}{\sqrt{\lambda}} \xrightarrow{d} N(0, 1), \quad\text{therefore}\quad \sqrt{n}\,(\hat\lambda - \lambda) \xrightarrow{d} N(0, \lambda),$$
so $\hat\lambda$ is a CAN estimator with $I(\lambda) = 1/J^2(\lambda) = 1/\lambda$.
We verify this result by the definition of the Fisher information:
$$\log f(x|\lambda) = x\log\lambda - \lambda - \log x!$$
$$\frac{\partial \log f(x|\lambda)}{\partial\lambda} = \frac{x}{\lambda} - 1, \qquad \left(\frac{\partial \log f(x|\lambda)}{\partial\lambda}\right)^2 = \frac{x^2}{\lambda^2} - \frac{2x}{\lambda} + 1$$
$$E_\lambda\left[\left(\frac{\partial \log f(x|\lambda)}{\partial\lambda}\right)^2\right] = \frac{\lambda^2 + \lambda}{\lambda^2} - \frac{2\lambda}{\lambda} + 1 = \frac{1}{\lambda}$$
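The Fisher information can also be checked by simulating the squared score (sketch, NumPy assumed):

```python
# Verify I(lambda) = E[(X/lambda - 1)^2] = 1/lambda by simulation.
import numpy as np

rng = np.random.default_rng(4)
lam, reps = 3.0, 1_000_000
X = rng.poisson(lam, size=reps)
score_sq = (X / lam - 1.0)**2      # squared score of a single observation
print(score_sq.mean(), 1.0 / lam)  # both close to 1/3
```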

(ii) Geometric with mean $\mu = E[X_i] = \frac{1}{p}$.
Find the MLE of $p$:
$$L(p) = p^n \prod_{i=1}^n (1-p)^{x_i - 1}$$
$$\log L(p) = n\log p + \sum_{i=1}^n (x_i - 1)\log(1-p)$$
$$\frac{\partial \log L(p)}{\partial p} = \frac{n}{p} - \frac{1}{1-p}\sum_{i=1}^n (x_i - 1) \overset{!}{=} 0$$
$$n - np = p\sum_{i=1}^n x_i - np \quad\Longrightarrow\quad \hat p = \frac{n}{\sum_{i=1}^n x_i} = \frac{1}{\bar X_n}$$
Hence the MLE of $\mu = \frac{1}{p}$ is $\hat\mu = \bar X_n$. Note that $E[X_i] = \mu$.
Like before: $\hat\mu \xrightarrow{P} \mu$, $E[\bar X_n] = \mu$, $\mathrm{Var}(\bar X_n) = \frac{\mathrm{Var}(X_i)}{n} = \frac{(1-p)/p^2}{n}$, and
$$\sqrt{n}\,(\hat\mu - \mu) \xrightarrow{d} N\!\left(0, \frac{1-p}{p^2}\right), \qquad I(\mu) = \frac{1}{(1-p)/p^2} = \frac{p^2}{1-p}.$$
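A short sketch of computing this MLE from simulated data (NumPy assumed; note the support convention):

```python
# MLE phat = 1 / Xbar_n for the geometric model with support {1, 2, ...}
# (numpy's geometric sampler uses the same support convention).
import numpy as np

rng = np.random.default_rng(5)
n, p = 10_000, 0.25
X = rng.geometric(p, size=n)   # E[X_i] = 1/p = 4
p_mle = 1.0 / X.mean()         # phat = n / sum(x_i)
print(p_mle)                   # close to 0.25
```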

Exercise 2: $X_i \sim \mathrm{Uniform}([0, \theta])$.
$$f(x_i|\theta) = \begin{cases} \frac{1}{\theta}, & x_i \in [0, \theta] \\ 0, & \text{else} \end{cases}$$
Find the MLE of $\theta$:
$$L(\theta) = \begin{cases} \frac{1}{\theta^n}, & x_i \in [0, \theta],\ i = 1, \ldots, n \\ 0, & \text{else} \end{cases}$$
$L(\theta)$ is maximal when the minimal $\theta$ is chosen such that all $x_i$'s are in the interval $[0, \theta]$:
$$\hat\theta = \max(X_1, \ldots, X_n)$$
Remember the limit laws for maxima that we saw in Chapter 1 (Fisher-Tippett Theorem): the limit laws are extreme value distributions, NOT normal! So $\hat\theta$ is not a CAN estimator.
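A simulation sketch of this non-normal limit (NumPy assumed; that $n(\theta - Y_n)/\theta$ is approximately Exp(1) is a standard fact about the uniform maximum, not derived in the original text):

```python
# For Y_n = max(X_i) under Uniform[0, theta], the rescaled error
# n * (theta - Y_n) / theta is approximately Exp(1), not normal.
import numpy as np

rng = np.random.default_rng(6)
n, theta, reps = 1_000, 2.0, 100_000
Y = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
err = n * (theta - Y) / theta
print(err.mean(), err.var())  # both close to 1, matching Exp(1)
```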


Exercise 3: $X_i \sim N(\mu, \sigma^2)$.
$$\hat\mu_n = \bar X_n \sim N\!\left(\mu, \frac{\sigma^2}{n}\right)$$
is a CAN estimator with $J^2(\mu) = \sigma^2$, based on $n$ observations.
$$\tilde\mu_n = \frac{n}{n+1}\,\bar X_n \sim N\!\left(\frac{n}{n+1}\,\mu,\ \frac{n}{(n+1)^2}\,\sigma^2\right)$$
$\tilde\mu_n$ is asymptotically unbiased for $\mu$: $\mathrm{bias}(\mu) = -\frac{1}{n+1}\,\mu \xrightarrow{n\to\infty} 0$.
It is a CAN estimator with $\tilde J^2(\mu) = \sigma^2$, based on $\frac{(n+1)^2}{n}$ observations. The asymptotic relative efficiency is 1. However, $\frac{(n+1)^2}{n} = n + 2 + \frac{1}{n} > n$, so with $\tilde\mu_n$ we need a bigger number of observations to achieve the same variance.
On p. 17
Exercise: (see also Ex. 1 on p. 6)
For the two estimators $t_1$ and $t_2$ of $\lambda$ considered there:
$$E[t_1(X_1, \ldots, X_n)] = E[\bar X_n] = \lambda$$
$$E[t_2(X_1, \ldots, X_n)] = E\left[\frac{1}{2}\,\frac{\sum_{i=1}^{n-1} X_i}{n-1} + \frac{X_n}{2}\right] = \frac{1}{2}\,\frac{(n-1)\lambda}{n-1} + \frac{\lambda}{2} = \lambda$$
$$\mathrm{Var}(t_1(X_1, \ldots, X_n)) = \frac{\lambda}{n}, \qquad \mathrm{Var}(t_2(X_1, \ldots, X_n)) = \frac{1}{4}\left(\frac{\lambda}{n-1} + \lambda\right) = \frac{1}{4}\,\frac{\lambda n}{n-1}$$
$$I(\lambda) = E\left[\left(\frac{\partial \log f(x|\lambda)}{\partial\lambda}\right)^2\right] = \frac{1}{\lambda}$$
By the Cramer-Rao Inequality, for any unbiased estimator $t(X_1, \ldots, X_n)$ of $\lambda$ we have
$$\mathrm{Var}(t) \geq \frac{1}{n(1/\lambda)} = \frac{\lambda}{n}.$$
Estimator $t_1$ achieves equality; $\mathrm{Var}(t_2) > \frac{\lambda}{n}$. By the Corollary on p. 8, we have
$$k_n(\lambda)(t_1 - \lambda) = \frac{\partial}{\partial\lambda}\log\left(\prod_{i=1}^n f(x_i|\lambda)\right) = \sum_{i=1}^n \frac{\partial}{\partial\lambda}\log f(x_i|\lambda) = \sum_{i=1}^n \left(\frac{x_i}{\lambda} - 1\right) = \frac{n}{\lambda}\,(\bar X_n - \lambda),$$
so we get $k_n(\lambda) = \frac{n}{\lambda}$.
Now consider
$$g(\lambda) = e^{-\lambda}(1 + \lambda) = P[X = 0] + P[X = 1] = P[X \in \{0, 1\}].$$
Define
$$\phi(X_i) = \begin{cases} 1, & \text{if } X_i = 0 \text{ or } X_i = 1 \\ 0, & \text{else} \end{cases}$$
then
$$E[\phi(X_i)] = 1 \cdot P[X \in \{0, 1\}] = e^{-\lambda}(1 + \lambda),$$
hence $t_3(X_1, \ldots, X_n) = \frac{1}{n}\sum_{i=1}^n \phi(X_i)$, the proportion of observations from a sample of size $n$ that are either 0 or 1, is an unbiased estimator of $g(\lambda)$ with
$$\mathrm{Var}(t_3) = \frac{\mathrm{Var}(\phi(X_i))}{n} = \frac{1}{n}\left(E[\phi(X_i)^2] - E[\phi(X_i)]^2\right) = \frac{1}{n}\left(e^{-\lambda}(1 + \lambda) - e^{-2\lambda}(1 + \lambda)^2\right).$$
The Cramer-Rao bound here is
$$\mathrm{Var}(t) \geq \frac{(g'(\lambda))^2}{n\,I(\lambda)} = \frac{\lambda^3 e^{-2\lambda}}{n}.$$
By the same Corollary as mentioned before, equality would be achieved by an estimator $t$ if and only if
$$k_n(\lambda)\left(t - e^{-\lambda}(1 + \lambda)\right) = \frac{n}{\lambda}\,(\bar X_n - \lambda) \quad\Longleftrightarrow\quad t = e^{-\lambda}(1 + \lambda) + \frac{n}{\lambda\, k_n(\lambda)}\,(\bar X_n - \lambda).$$
To get rid of the $e^{-\lambda}(1 + \lambda) - \frac{n}{k_n(\lambda)}$ part, one would have to choose
$$k_n(\lambda) = \frac{n}{e^{-\lambda}(1 + \lambda)},$$
but the estimator $t$ would still be a function of the unknown $\lambda$; hence for the function $g(\lambda)$ chosen there is no estimator that achieves the Cramer-Rao bound.
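The strict gap between $\mathrm{Var}(t_3)$ and the bound can be made concrete with numbers (a sketch; the values of $\lambda$ and $n$ are arbitrary):

```python
# Compare the exact Var(t_3) with the Cramer-Rao bound for g(lambda):
# the variance sits strictly above the bound, consistent with the argument.
import numpy as np

lam, n = 2.0, 100
g = np.exp(-lam) * (1 + lam)             # P[X in {0, 1}]
var_t3 = g * (1 - g) / n                 # exact variance of the proportion
cr_bound = lam**3 * np.exp(-2 * lam) / n
print(var_t3, cr_bound)                  # approx 0.00241 > 0.00147
```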
On p. 18
Exercise: $X_i \sim \mathrm{Exp}\left(\frac{1}{\theta}\right)$. We know: $E[X_i] = \theta$, $\mathrm{Var}(X_i) = \theta^2$, and the MLE of $\theta$ is $\hat\theta = \bar X_n$, with $E[\hat\theta] = \theta$ (unbiased) and $\mathrm{Var}(\hat\theta) = \frac{\theta^2}{n}$.
$$I(\theta) = E\left[\left(\frac{\partial \log f(x|\theta)}{\partial\theta}\right)^2\right] = E\left[\left(\frac{\partial}{\partial\theta}\left(-\log\theta - \frac{x}{\theta}\right)\right)^2\right] = E\left[\left(-\frac{1}{\theta} + \frac{x}{\theta^2}\right)^2\right] =$$
$$= E\left[\frac{1}{\theta^2} - \frac{2x}{\theta^3} + \frac{x^2}{\theta^4}\right] = \frac{1}{\theta^2} - \frac{2\theta}{\theta^3} + \frac{2\theta^2}{\theta^4} = \frac{1}{\theta^2}$$
Cramer-Rao bound for an unbiased estimator:
$$\mathrm{Var}_\theta(t_n) \geq \frac{1}{n\,I(\theta)} = \frac{\theta^2}{n}.$$
Since $\mathrm{Var}(\hat\theta) = \theta^2/n$ attains this bound, the MLE is efficient for $\theta$ and BRUE.
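Numerically (sketch, NumPy assumed; note that numpy's exponential sampler is parametrized by the mean $\theta$):

```python
# The exponential MLE Xbar_n attains the Cramer-Rao bound theta^2 / n.
import numpy as np

rng = np.random.default_rng(7)
n, theta, reps = 40, 3.0, 200_000
thetahat = rng.exponential(theta, size=(reps, n)).mean(axis=1)
print(thetahat.var(), theta**2 / n)  # both close to 0.225
```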
On p. 21
Exercise: $X_i$ iid $\mathrm{Uniform}([0, \theta])$,
$$Y_n = \max(X_1, \ldots, X_n), \qquad Y_1 = \min(X_1, \ldots, X_n).$$
Consider $x_1, \ldots, x_n$ such that $x_i \in (0, \theta)$ for all $i$.
$$F_{Y_n}(y_n) = P[Y_n \leq y_n] = P[X_1 \leq y_n, \ldots, X_n \leq y_n] = \left(\int_0^{y_n} \frac{1}{\theta}\,dy\right)^n = \frac{y_n^n}{\theta^n}$$
$$f_{Y_n}(y_n) = F_{Y_n}'(y_n) = \frac{1}{\theta^n}\,n\,y_n^{n-1}$$
$$P[X_1 = x_1, \ldots, X_n = x_n \,|\, Y_n = y_n] \equiv f_{(X_1, \ldots, X_n)|Y_n = y_n}(x_1, \ldots, x_n) = \frac{f_{(X_1, \ldots, X_n, Y_n)}(x_1, \ldots, x_n, y_n)}{f_{Y_n}(y_n)} =$$
$$= \frac{f_{(X_1, \ldots, X_n)}(x_1, \ldots, x_n)}{f_{Y_n}(y_n)} = \frac{\frac{1}{\theta^n}}{\frac{1}{\theta^n}\,n\,y_n^{n-1}} = \frac{1}{n\,y_n^{n-1}}$$
$P[X_1 = x_1, \ldots, X_n = x_n \,|\, Y_n = y_n]$ is independent of $\theta$. By integrating over $U$ it follows that $Y_n$ is a sufficient statistic for $\theta$.
$$P[Y_1 \leq y_1] = 1 - P[Y_1 > y_1] = 1 - P[X_1 > y_1, \ldots, X_n > y_1] = 1 - \left(\int_{y_1}^{\theta} \frac{1}{\theta}\,dy\right)^n = 1 - \frac{1}{\theta^n}\,(\theta - y_1)^n$$
$$f_{Y_1}(y_1) = F_{Y_1}'(y_1) = \frac{1}{\theta^n}\,n\,(\theta - y_1)^{n-1}$$
$$f_{(X_1, \ldots, X_n)|Y_1 = y_1}(x_1, \ldots, x_n) = \frac{\frac{1}{\theta^n}}{f_{Y_1}(y_1)} = \frac{1}{n\,(\theta - y_1)^{n-1}}$$
$P[X_1 = x_1, \ldots, X_n = x_n \,|\, Y_1 = y_1]$ is dependent on $\theta$! $Y_1$ is not sufficient.
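The $\theta$-freeness of the conditional law given $Y_n$ can be illustrated by simulation (a sketch; checking that the rescaled non-maximal points look Uniform[0, 1] is my own framing, not in the original):

```python
# Given Y_n = max(X_i), the remaining observations are iid Uniform[0, Y_n]
# whatever theta is, so the rescaled points X_i / Y_n (max excluded) look
# Uniform[0, 1] under both theta = 1 and theta = 5.
import numpy as np

rng = np.random.default_rng(8)
n, reps = 10, 20_000
for theta in (1.0, 5.0):
    X = rng.uniform(0, theta, size=(reps, n))
    Y = X.max(axis=1, keepdims=True)
    ratios = np.sort(X, axis=1)[:, :-1] / Y    # drop the max, rescale
    print(theta, ratios.mean(), ratios.var())  # approx 1/2 and 1/12
```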
On p. 22
Exercise 1: $X_i$ iid $G(x|\alpha, \beta, 0)$; let all $x_i \geq 0$.
$$f(x_1, \ldots, x_n|\alpha, \beta) = \left[\frac{1}{\beta^{\alpha+1}\,\Gamma(\alpha+1)}\right]^n x_1^\alpha\, x_2^\alpha \cdots x_n^\alpha\; e^{-\frac{\sum_{i=1}^n x_i}{\beta}} =$$
$$= \left[\frac{1}{\beta^{\alpha+1}\,\Gamma(\alpha+1)}\right]^n \left(\prod_{i=1}^n x_i\right)^{\alpha} e^{-\frac{\sum_{i=1}^n x_i}{\beta}} = \left[\frac{1}{\beta^{\alpha+1}\,\Gamma(\alpha+1)}\right]^n T_n^\alpha\; e^{-\frac{S_n}{\beta}} = h(x_1, \ldots, x_n)\cdot g(T_n, S_n|\alpha, \beta),$$
where
$$h(x_1, \ldots, x_n) = 1, \qquad g(x, y|\alpha, \beta) = \left[\frac{1}{\beta^{\alpha+1}\,\Gamma(\alpha+1)}\right]^n x^\alpha\, e^{-\frac{y}{\beta}}, \qquad T_n = \prod_{i=1}^n x_i, \qquad S_n = \sum_{i=1}^n x_i.$$
Therefore $(T_n, S_n)$ is a jointly sufficient statistic for $(\alpha, \beta)$.
Exercise 2: $X_i$ iid $\mathrm{Uniform}([-\theta, \theta])$,
$$Y_n = \max(X_1, \ldots, X_n), \qquad Y_1 = \min(X_1, \ldots, X_n).$$
$$f(x_1, \ldots, x_n|\theta) = \frac{1}{(2\theta)^n} \prod_{i=1}^n \mathbb{1}_{\{x_i \in [-\theta, \theta]\}} = \frac{1}{(2\theta)^n} \prod_{i=1}^n \mathbb{1}_{\{x_i \in (-\infty, \theta]\}}\, \mathbb{1}_{\{x_i \in [-\theta, \infty)\}} =$$
$$= \frac{1}{(2\theta)^n}\, \mathbb{1}_{\{\max(x_1, \ldots, x_n) \in (-\infty, \theta]\}}\, \mathbb{1}_{\{\min(x_1, \ldots, x_n) \in [-\theta, \infty)\}} = h(x_1, \ldots, x_n)\cdot g(\max(x_1, \ldots, x_n), \min(x_1, \ldots, x_n)|\theta)$$
Therefore $(\max(x_1, \ldots, x_n), \min(x_1, \ldots, x_n))$ is a sufficient statistic for $\theta$.
Attention: Sufficient statistics are in general not best estimators. Rather, functions of the sufficient statistics are candidates for best estimators.
On p. 23
Exercise: $X_i$ iid $N(\mu, \sigma^2)$, with $\mu \in \mathbb{R}$ unknown and $\sigma^2 > 0$ known.
MLE: $\hat\mu = \bar X_n$, unique.
Here we have
$$f(x_1, \ldots, x_n|\mu) = \prod_{i=1}^n f(x_i|\mu) = \prod_{i=1}^n \frac{1}{\sigma\sqrt{2\pi}}\; e^{-\frac{1}{2}\left(\frac{x_i - \mu}{\sigma}\right)^2} = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^n e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2} =$$
$$= \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^n e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n x_i^2}\; e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n (-2\mu x_i + \mu^2)} = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^n e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n x_i^2}\; e^{-\frac{1}{2\sigma^2}\left(-2\mu\sum_{i=1}^n x_i + n\mu^2\right)} =$$
$$= h(x_1, \ldots, x_n) \cdot g\!\left(\sum_{i=1}^n x_i\,\Big|\,\mu\right),$$
where
$$h(x_1, \ldots, x_n) = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^n e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n x_i^2}, \qquad g\!\left(\sum_{i=1}^n x_i\,\Big|\,\mu\right) = e^{-\frac{1}{2\sigma^2}\left(-2\mu\sum_{i=1}^n x_i + n\mu^2\right)}.$$
From the Neyman Factorization Theorem: $t(X_1, \ldots, X_n) = \sum_{i=1}^n X_i$ is a sufficient statistic for $\mu$, and the MLE is $\hat\mu = \bar X_n = \frac{t(X_1, \ldots, X_n)}{n}$.
In the case where both $\mu$ and $\sigma^2$ are unknown, one can show that the MLEs for $\mu$ and $\sigma^2$ are functions of the sufficient statistic $\left(\sum_{i=1}^n X_i, \sum_{i=1}^n X_i^2\right)$. In fact,
$$\hat\mu = \frac{\sum_{i=1}^n X_i}{n}, \qquad \hat\sigma^2 = \frac{\sum_{i=1}^n (X_i - \hat\mu)^2}{n} = \frac{\sum_{i=1}^n X_i^2}{n} - \left(\frac{\sum_{i=1}^n X_i}{n}\right)^2.$$
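As a small illustration (sketch, NumPy assumed): the MLEs can be computed from the two sufficient statistics alone, without revisiting the raw sample.

```python
# The Gaussian MLEs depend on the data only through the sufficient
# statistics (sum X_i, sum X_i^2).
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(2.0, 1.5, size=1_000)
s1, s2, n = X.sum(), (X**2).sum(), X.size
mu_hat = s1 / n
sigma2_hat = s2 / n - (s1 / n)**2    # same as mean((X - mu_hat)^2)
print(mu_hat, sigma2_hat, X.var())   # sigma2_hat agrees with np.var (ddof=0)
```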
On p. 25
Exercise: $X_i$ iid $\mathrm{Pois}(\lambda)$.
Find a sufficient statistic:
$$f(x_1, \ldots, x_n|\lambda) = \prod_{i=1}^n \left[\frac{e^{-\lambda}\lambda^{x_i}}{x_i!}\; \mathbb{1}_{\{x_i \in \{0, 1, 2, \ldots\}\}}\right] = \underbrace{\prod_{i=1}^n \left[\frac{1}{x_i!}\; \mathbb{1}_{\{x_i \in \{0, 1, 2, \ldots\}\}}\right]}_{= h(x_1, \ldots, x_n)} \cdot \underbrace{e^{-n\lambda}\,\lambda^{\sum_{i=1}^n x_i}}_{= g\left(\sum_{i=1}^n x_i \,|\, \lambda\right)}$$
Therefore $t(X_1, \ldots, X_n) = \sum_{i=1}^n X_i$ is a sufficient statistic for $\lambda$.
Investigate completeness of $t(X_1, \ldots, X_n)$:
If $X \sim \mathrm{Pois}(\lambda_1)$, $Y \sim \mathrm{Pois}(\lambda_2)$, $X, Y$ independent, then $X + Y \sim \mathrm{Pois}(\lambda_1 + \lambda_2)$.
So we have $\sum_{i=1}^n X_i \sim \mathrm{Pois}(n\lambda)$. We have to show that the Poisson family is complete.
Let $u(\cdot)$ be any function. Then
$$E_\lambda[u(X)] \equiv 0 \quad\Longleftrightarrow\quad \sum_{k=0}^{\infty} u(k)\,\frac{\lambda^k}{k!}\,e^{-\lambda} \equiv 0 \quad\Longleftrightarrow\quad \sum_{k=0}^{\infty} u(k)\,\frac{\lambda^k}{k!} \equiv 0,$$
since $e^{-\lambda} > 0$. Two power series are equal if and only if all coefficients are equal:
$$u(k)\,\frac{\lambda^k}{k!} \equiv 0\ \ \forall k \quad\Longleftrightarrow\quad u(k) \equiv 0\ \ \forall k.$$
Hence the Poisson family is complete, and $t(X_1, \ldots, X_n) = \sum_{i=1}^n X_i$ is a complete sufficient statistic for $\lambda$.
On p. 27
Exercise 1: $X_i$ iid $N(\mu, \sigma^2)$, with $\mu \in \mathbb{R}$ unknown and $\sigma^2 > 0$ known.
We already showed that $t(X_1, \ldots, X_n) = \sum_{i=1}^n X_i$ is a sufficient statistic for $\mu$. What about completeness? $\sum_{i=1}^n X_i \sim N(n\mu, n\sigma^2)$. We have to verify whether the normal family is complete. Let $u(\cdot)$ be an arbitrary function:
$$E_\mu[u(X)] = \int_{-\infty}^{\infty} u(x)\,\frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2}\,dx \equiv 0 \quad\Longrightarrow\quad u(x) \equiv 0\ \ \forall x \in \mathbb{R}.$$
The normal family is complete. Therefore $t(X_1, \ldots, X_n) = \sum_{i=1}^n X_i$ is a complete sufficient statistic.
However, $E_\mu[t(X_1, \ldots, X_n)] = n\mu$, so
$$\bar X_n = \frac{t(X_1, \ldots, X_n)}{n}$$
is a UMVUE of $\mu$.
Exercise 2: $X_i$ iid $N(\mu, \sigma^2)$, with both $\mu \in \mathbb{R}$ and $\sigma^2 > 0$ unknown.
We already showed that $t(X_1, \ldots, X_n) = \left(\sum_{i=1}^n X_i, \sum_{i=1}^n X_i^2\right)$ is a sufficient statistic for $(\mu, \sigma^2)$. Verify that $t$ is complete (like before).
Exercise 3: $X_i$ iid $\mathrm{Pois}(\lambda)$.
We already saw that $t(X_1, \ldots, X_n) = \sum_{i=1}^n X_i$ is a complete sufficient statistic for $\lambda$. $E_\lambda[t(X_1, \ldots, X_n)] = n\lambda$, so $t$ itself is biased; but $\bar X_n = \frac{t(X_1, \ldots, X_n)}{n}$ is unbiased. Hence $\bar X_n$ is the UMVUE of $\lambda$.