
ELE 530 April 9, 2012

Theory of Detection and Estimation Handout #9


Homework #4 Solutions
1. Rayleigh Samples. Suppose Y_i are i.i.d. samples from p_θ(y), for i = 1, ..., n, where θ is unknown, and the family of distributions p_θ(y) is the Rayleigh family given by

    p_θ(y) = (y/θ) e^{−y²/(2θ)} u(y),

where u(y) is the unit step function.
The distribution of the entire i.i.d. vector of samples p_θ(y^n) can be inferred from p_θ(y).
a. Is p_θ(y^n) an exponential family?
b. Find a complete sufficient statistic for θ.
c. What is the minimum variance unbiased estimator (MVUE) for θ?
d. Can the Cramér-Rao Bound be applied? If so, use it to obtain a lower bound on the variance of unbiased estimators for θ. Is there an efficient estimator? (An unbiased estimator is efficient if it meets the Cramér-Rao bound.)
e. What is the maximum likelihood estimator for θ?
Solution:
a. Yes, p_θ(y^n) is an exponential family. We can write it in the proper form as

    p_θ(y^n) = ∏_{i=1}^n p_θ(y_i) = (1/θ^n) (∏_{i=1}^n y_i u(y_i)) e^{−(1/(2θ)) Σ_{i=1}^n y_i²}.
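As a quick numerical sanity check (my addition, not part of the original handout), the sketch below confirms that this density integrates to one and that its second moment is 2θ, a fact used in part c below. The test value θ = 1.7 is arbitrary.

```python
import numpy as np
from scipy.integrate import quad

theta = 1.7  # arbitrary test value of the unknown parameter

# Rayleigh density from the problem statement: p(y) = (y/theta) exp(-y^2/(2 theta)) for y >= 0
pdf = lambda y: (y / theta) * np.exp(-y**2 / (2 * theta))

total_mass, _ = quad(pdf, 0, np.inf)                         # should be ~1.0
second_moment, _ = quad(lambda y: y**2 * pdf(y), 0, np.inf)  # should be ~2*theta

print(total_mass, second_moment, 2 * theta)
```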
b. The parameter −1/(2θ), in the exponent of the formula for p_θ(y^n) given above, takes on values in the set (−∞, 0), which is a rectangle in ℝ¹. Therefore, the statistic Σ_{i=1}^n y_i² is complete. It is a complete sufficient statistic for θ.
c. The second moment of a Rayleigh distributed random variable is 2θ. Therefore,

    θ̂ = (1/(2n)) Σ_{i=1}^n Y_i²

is the only unbiased function of the complete sufficient statistic found in part b, and is therefore the minimum variance unbiased estimator.
d. First we calculate the score function:

    s(θ, Y^n) = (∂/∂θ) ln p_θ(Y^n) = −n/θ + (1/(2θ²)) Σ_{i=1}^n Y_i².

From the above equation, we can see that the expected value of the score function is zero, and the Cramér-Rao Bound can be applied. However, we will factor the score
function further to show that there is an efficient estimator and identify the Fisher information directly. As a side note, once we've shown that there is an efficient estimator, then we already know that the estimator from part c is the efficient estimator. However, it will be obvious from the score function what the efficient estimator is as well.
    s(θ, Y^n) = (n/θ²) ((1/(2n)) Σ_{i=1}^n Y_i² − θ).

From the above factorization, we know that there is an efficient estimator, and we find that the Fisher information is

    I(θ) = n/θ².

Therefore, the Cramér-Rao Lower Bound for all unbiased estimators is

    Var(θ̂) ≥ θ²/n.
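A short Monte Carlo sketch (again my addition, under arbitrary choices of θ, n, and trial count) that checks both conclusions at once: the estimator from part c is unbiased, and its variance matches the bound θ²/n, as expected for an efficient estimator. Note that numpy parameterizes the Rayleigh distribution by a scale σ with density (y/σ²) e^{−y²/(2σ²)}, so this θ corresponds to σ².

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 2.0, 50, 200_000  # arbitrary test values

# theta here equals sigma^2 in numpy's Rayleigh parameterization, so scale = sqrt(theta)
samples = rng.rayleigh(scale=np.sqrt(theta), size=(trials, n))

theta_hat = (samples**2).sum(axis=1) / (2 * n)  # MVUE from part c

print("mean of estimates:", theta_hat.mean(), "(true theta:", theta, ")")
print("variance of estimates:", theta_hat.var(), "(CRLB theta^2/n:", theta**2 / n, ")")
```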
e. Also, from the score function we see that the maximum likelihood estimator is the same as the MVUE from part c. In Problem 4 we show that this is always the case if an efficient estimator exists.
2. Uniform Samples. Suppose Y_i are i.i.d. samples from Unif[−θ, θ], for i = 1, ..., n, where θ > 0 is unknown.
The distribution of the entire i.i.d. vector of samples p_θ(y^n) can be inferred from p_θ(y).
a. Is p_θ(y^n) an exponential family?
b. Find a scalar sufficient statistic for θ.
c. What is an unbiased estimator based on the sufficient statistic of part b?
d. Can the Cramér-Rao Bound be applied? If so, use it to obtain a lower bound on the variance of unbiased estimators for θ. Is there an efficient estimator? (An unbiased estimator is efficient if it meets the Cramér-Rao bound.)
e. What is the maximum likelihood estimator for θ?
Solution:
a. The family of distributions p_θ(y^n) is not an exponential family. The support of an exponential family must be the same for all θ, and that is not the case with the uniform distribution we are dealing with.
b. We use the Neyman-Fisher Factorization Theorem to find a sufficient statistic:

    p_θ(y^n) = ∏_{i=1}^n p_θ(y_i) = (1/(2θ)^n) ∏_{i=1}^n 1{y_i ∈ [−θ, θ]} = (1/(2θ)^n) 1{max_i |y_i| ≤ θ}.

Therefore, T = max_i |Y_i| is a sufficient statistic.
We can show that T is complete. First we derive the distribution of T. Notice that the |Y_i| are each uniformly distributed on [0, θ] and independent. Thus, the formula for the density of the maximum gives:

    p_{T|θ}(t) = n (F_{|Y||θ}(t))^{n−1} p_{|Y||θ}(t) = n (t/θ)^{n−1} (1/θ) 1{t ∈ [0, θ]} = (n/θ) (t/θ)^{n−1} 1{t ∈ [0, θ]}.
Now take any function v(T) that has zero mean for all θ and notice the following chain of implications:

    ∫_0^θ v(t) (n/θ) (t/θ)^{n−1} dt = 0   ∀ θ > 0
    ⟹ (n/θ^n) ∫_0^θ v(t) t^{n−1} dt = 0   ∀ θ > 0
    ⟹ ∫_0^θ v(t) t^{n−1} dt = 0   ∀ θ > 0
    ⟹ v(t) t^{n−1} = 0   ∀ t ≥ 0   (differentiating with respect to θ)
    ⟹ v(t) = 0   ∀ t > 0.

Therefore, v(T) is zero with probability one. The fourth implication above only needs to hold for almost every t, but that's actually all that we need.
c. First we find the expected value of our sufficient statistic:

    E_θ T = ∫_0^θ t (n/θ) (t/θ)^{n−1} dt = n ∫_0^θ (t/θ)^n dt = (n/(n+1)) θ.

An unbiased estimator is θ̂ = ((n+1)/n) max_i |Y_i|. Since we showed that the statistic is complete, this estimator is the MVUE.
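As a hedged numerical check (not in the original handout), one can integrate t against the density of T derived above and confirm the mean nθ/(n+1); the values of θ and n are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

theta, n = 3.0, 5  # arbitrary test values

# density of T = max_i |Y_i| derived above: (n/theta) (t/theta)^(n-1) on [0, theta]
pdf_T = lambda t: (n / theta) * (t / theta)**(n - 1)

mean_T, _ = quad(lambda t: t * pdf_T(t), 0, theta)
print(mean_T, n * theta / (n + 1))  # the two values should agree
```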
d. First we calculate the score function:

    s(θ, Y^n) = (∂/∂θ) ln p_θ(Y^n) = { −n/θ,  if max_i |Y_i| < θ;  undefined, elsewhere }.

Therefore,

    E_θ s(θ, Y^n) = −n/θ ≠ 0,

and the Cramér-Rao Bound does not apply.
e. We see from the score function that the likelihood is zero for θ < max_i |Y_i| and strictly decreasing for θ > max_i |Y_i|, so

    θ̂_ML = max_i |Y_i|.
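A simulation sketch (my addition) makes the contrast between parts c and e concrete: the ML estimator max_i |Y_i| is biased low, while the (n+1)/n correction removes the bias. A standard calculation not carried out in the handout gives Var(θ̂) = θ²/(n(n+2)) for the MVUE, which decays like 1/n², faster than the usual 1/n rate; this is consistent with the Cramér-Rao Bound not applying here.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 3.0, 20, 200_000  # arbitrary test values

samples = rng.uniform(-theta, theta, size=(trials, n))
T = np.abs(samples).max(axis=1)   # ML estimator: max_i |Y_i|
theta_mvue = (n + 1) / n * T      # bias-corrected MVUE from part c

print("ML mean:  ", T.mean(), "(biased low; E T =", n * theta / (n + 1), ")")
print("MVUE mean:", theta_mvue.mean(), "(true theta:", theta, ")")
print("MVUE var: ", theta_mvue.var(), "vs theta^2/(n(n+2)) =", theta**2 / (n * (n + 2)))
```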
3. MMSE vs. MVU and ML. Consider jointly Gaussian random variables X and Y with zero mean, unit variance, and correlation E XY = ρ.
a. What is the MMSE estimate of X given Y?
b. Now treat X like a parameter (ignore the marginal distribution on X and just consider the conditional distribution p(y|x)). What is the minimum variance unbiased (MVU) estimate of X given Y, and what is the maximum likelihood (ML) estimate of X given Y?
Solution:
a.

    X̂_MMSE = E(X|Y) = ρY.
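As an illustrative check (my addition), the conditional mean of X given Y for this jointly Gaussian pair is linear with slope ρ, which a least-squares fit on simulated data recovers; ρ = 0.6 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, trials = 0.6, 500_000  # arbitrary test values

# construct zero-mean, unit-variance jointly Gaussian (X, Y) with E[XY] = rho
x = rng.standard_normal(trials)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(trials)

# for jointly Gaussian variables E[X|Y] is linear, so the regression slope equals rho
slope = (x * y).mean() / (y * y).mean()
print(slope)  # should be close to rho = 0.6
```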
b. Notice that the family of distributions p(y|x) is an exponential family:

    p(y|x) = (1/√(2π(1−ρ²))) e^{−(y−ρx)²/(2(1−ρ²))}
           = [ (1/√(2π(1−ρ²))) e^{−y²/(2(1−ρ²))} ] [ e^{−ρ²x²/(2(1−ρ²))} ] e^{ρxy/(1−ρ²)}.
Therefore, Y is complete. The expected value of Y is ρX. To make it unbiased we must divide by ρ:

    X̂_MVU = Y/ρ.

It is obvious from p(y|x) that the maximum likelihood estimate for X will be the one that causes the term (y − ρx)² to be zero. Thus,

    X̂_ML = Y/ρ.
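To illustrate the contrast the problem title points at (again a sketch of mine, not from the handout): the unbiased estimate Y/ρ has a larger mean squared error than the MMSE estimate ρY, since the Bayesian estimator trades bias for variance. Reusing the setup from the sketch in part a:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, trials = 0.6, 500_000  # arbitrary test values

x = rng.standard_normal(trials)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(trials)

mse_mmse = np.mean((rho * y - x) ** 2)  # MMSE estimate from part a
mse_mvu = np.mean((y / rho - x) ** 2)   # MVU / ML estimate from part b

print(mse_mmse, 1 - rho**2)             # theoretical MMSE is 1 - rho^2
print(mse_mvu, (1 - rho**2) / rho**2)   # unbiased estimator pays a variance penalty
```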
4. Efficiency of ML. Show that if an efficient unbiased estimator exists then it is the maximum likelihood estimator. Consider the necessary and sufficient condition for an efficient estimator to exist, stated in Theorem 3.1 of Kay Vol. 1.
Solution:
From Theorem 3.1 of Kay Vol. 1 involving the Cramér-Rao lower bound, we know that an efficient estimator exists if and only if the score function can be factored as

    s(θ, X) = (∂/∂θ) ln p_θ(X) = I(θ)(g(X) − θ),

for some functions g(X), which is the efficient estimator, and I(θ), which is the Fisher information.
Choosing θ to maximize the likelihood is equivalent to maximizing the log-likelihood. So to find θ̂_ML we can inspect the derivative of the log-likelihood function with respect to θ, which is precisely the score function.
Notice that the Fisher information I(θ) is non-negative. Therefore, the score function is positive for θ < g(X) and negative for θ > g(X). This means that

    θ̂_ML = g(X) = θ̂_MVU.
The maximum likelihood estimator is also unique except in the degenerate case where I(θ) = 0 for some non-empty open interval of θ. However, this would mean that the family of distributions does not change over that interval, which is not a reasonable model to work with for parameter estimation.
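As a concrete instance of this factorization (a sketch I am adding, using the Rayleigh family from Problem 1): writing S for Σ_{i=1}^n Y_i², the score −n/θ + S/(2θ²) factors exactly into I(θ)(g(X) − θ) with I(θ) = n/θ² and g = S/(2n), which sympy confirms symbolically.

```python
import sympy as sp

theta, n, S = sp.symbols('theta n S', positive=True)  # S stands in for sum_i Y_i^2

# Rayleigh log-likelihood from Problem 1, keeping only the theta-dependent terms
log_lik = -n * sp.log(theta) - S / (2 * theta)
score = sp.diff(log_lik, theta)

# candidate factorization I(theta) * (g(X) - theta) with g = S/(2n), I = n/theta^2
factored = (n / theta**2) * (S / (2 * n) - theta)

print(sp.simplify(score - factored))  # prints 0: the factorization is exact
```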