
MATH F113

Probability and Statistics


BITS Pilani, Pilani Campus
Dr. Shivi Agarwal, Department of Mathematics
Chapter # 7
Estimation
• Point Estimation
• Interval Estimation
• Significance testing
• Hypothesis testing



• The goal is to estimate the numerical
value of a population parameter using a
sample of some size n.
• We must devise an appropriate statistic
(defined on a random sample of size n) so
that we only need to consider its value on
the observed sample.
• It is desirable that the statistic satisfy
certain properties, described below.
Estimator and Estimate
• A statistic (which is a function of a
random sample, and hence a random
variable) used to estimate the
population parameter θ is called a
point estimator for θ and is denoted
by θ̂.
• The value of the point estimator on a
particular sample of that size is called
a point estimate for θ.
• Point Estimator
–Unbiased estimator
–Method of moments estimator
–Maximum likelihood estimator



Unbiased estimator :
Let θ be the parameter of
interest and θ̂ be a statistic. Then
the statistic θ̂ is said to be an
unbiased estimator, or its value
an unbiased estimate, if and
only if the mean of the
sampling distribution of the
estimator equals θ, whatever the
value of θ, viz. E[θ̂] = θ.
Theorem :
The sample mean X̄ of a
random sample of size n from a
population X is an unbiased
estimator of the population
mean µ.

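A minimal simulation sketch of what the theorem asserts (the uniform(0, 10) population, so µ = 5, and the sample sizes are illustrative assumptions, not from the slides): averaging the sample mean over many repeated samples approaches µ.

```python
import random

# Unbiasedness check: the average of X-bar over many repeated samples
# from a uniform(0, 10) population (mu = 5) should approach mu.
random.seed(0)
n = 30          # sample size
trials = 20000  # number of repeated samples

grand_avg = sum(
    sum(random.uniform(0, 10) for _ in range(n)) / n
    for _ in range(trials)
) / trials
print(grand_avg)  # close to 5
```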


More efficient unbiased
estimator :
A statistic θ̂1 is said to be a
more efficient unbiased
estimator of the parameter θ
than the statistic θ̂2 if
1. θ̂1 and θ̂2 are both unbiased
estimators of θ;
2. the variance of the
sampling distribution of the
first estimator is no larger
than that of the second, and is
smaller for at least one value
of θ.



Theorem :
σ²X̄ = Var(X̄) = σ²/n.
•From this theorem, it follows that the
larger the sample size, the closer the
sample mean can be expected to lie to
the population mean.
•Thus choosing a large sample makes
the estimation more reliable.
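A simulation sketch of the theorem (the exponential population with rate 1, so σ² = 1, is an illustrative assumption, not from the slides): the empirical variance of the sample mean matches σ²/n.

```python
import random
import statistics

# Check Var(X-bar) = sigma^2 / n for an exponential(1) population
# (sigma^2 = 1), by simulating many sample means.
random.seed(1)
n = 25
trials = 10000
means = [sum(random.expovariate(1.0) for _ in range(n)) / n
         for _ in range(trials)]
emp_var = statistics.pvariance(means)
print(emp_var)  # close to sigma^2 / n = 1/25 = 0.04
```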
Definition : LetX denote the sample
mean of a (random) sample of size n
from a distribution of standard deviation
. Then
Standard error of mean =

 X   / n.

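The definition above translates directly into a one-line helper (the function name is ours, not from the slides):

```python
import math

# Standard error of the mean: sigma / sqrt(n).
# Quantifies how much X-bar fluctuates around mu for samples of size n.
def standard_error(sigma, n):
    return sigma / math.sqrt(n)

print(standard_error(2.0, 100))  # -> 0.2
```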


Unbiased estimator of variance
Theorem : The sample variance S² of a
random sample of size n from a
population X is an unbiased estimator
for the population variance σ², viz.
E[S²] = σ².

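A simulation sketch of the theorem (the uniform(0, 10) population, with σ² = 100/12 ≈ 8.33, is an illustrative assumption, not from the slides): the (n−1)-divisor sample variance S² averages out to σ².

```python
import random

random.seed(2)
n, trials = 10, 20000

def s2(xs):
    # sample variance S^2 with divisor n - 1
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Average S^2 over many samples; by E[S^2] = sigma^2 this approaches 100/12.
avg_s2 = sum(s2([random.uniform(0, 10) for _ in range(n)])
             for _ in range(trials)) / trials
print(avg_s2)  # close to 8.33
```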


Q.4 : An interactive computer system is
available at a large installation. Let X denote
the number of requests for this system
received per hour. Assume X has a Poisson
distribution with parameter λs. These data are obtained :
25 20 20 30 24 15 10 23 4.
(a) Find an unbiased estimate for λs.
(b) Find an unbiased estimate for the average
number of requests per hour.
(c) Find an unbiased estimate for the average
number of requests per quarter of an hour.
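A sketch of the numerical answers to Q.4: since X is Poisson, the sample mean is an unbiased estimator of its parameter, which here is the mean number of requests per hour; scaling to a quarter hour divides the mean by 4.

```python
# Q.4 data: hourly request counts
data = [25, 20, 20, 30, 24, 15, 10, 23, 4]

# (a), (b): the sample mean is unbiased for the Poisson parameter
# (= average requests per hour): 171 / 9 = 19.0
lam_hat = sum(data) / len(data)

# (c): average requests per quarter hour = 19 / 4 = 4.75
print(lam_hat, lam_hat / 4)
```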
Q.8
Note that S is a statistic, and unless X is constant,
its value varies from sample to sample; thus
Var[S] > 0. Show by contradiction that S is not an
unbiased estimator of σ.

If E[S] = σ, then Var[S] = E[S²] − E[S]² = σ² − σ² = 0,
a contradiction.



7.2. Methods to find estimators
To find ‘good’ estimators for other
population parameters, we describe two
methods :
1. Method of moments
2. Maximum likelihood method.



Methods of Moments
• kth population moment: E[X^k].
• Its estimator based on a random sample
of size n is the kth sample moment
Mk = Σ Xi^k / n  (sum over i = 1, …, n).
• In particular,
M1 = Σ Xi / n = X̄,
M2 = Σ Xi² / n,
M3 = Σ Xi³ / n.
Method of moments
1) Use the estimators Mk = Σ Xi^k / n
(sum over i = 1, …, n) for the moments
E[X^k], k = 1, 2, etc.
2) Express E[X^k] in terms of the parameters
of the distribution.
3) Set up the equations by replacing the parameters in
E[X^k] by their estimators and equating to Mk.
4) Set up as many (suitable) equations as there are
parameters and solve them for the estimators of the
parameters in terms of the Mk's.
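A sketch of the four steps for an exponential population (the distribution and true rate are our illustrative choice, not from the slides): since E[X] = 1/λ for an exponential with rate λ, equating E[X] to M1 and solving gives the method of moments estimator λ̂ = 1/M1.

```python
import random

random.seed(3)
true_rate = 2.0
sample = [random.expovariate(true_rate) for _ in range(50000)]

M1 = sum(sample) / len(sample)  # step 1: first sample moment
rate_hat = 1 / M1               # steps 2-4: solve E[X] = 1/rate = M1
print(rate_hat)  # close to 2.0
```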
Ex. Show that the method of moments estimator for
σ² is (n−1)S²/n, hence not unbiased.
σ² = E[X²] − E[X]², therefore its estimator is

M2 − M1² = Σ Xi²/n − (Σ Xi/n)²  (sums over i = 1, …, n)
         = [n Σ Xi² − (Σ Xi)²] / n²
         = [(n−1)/n] · [n Σ Xi² − (Σ Xi)²] / [n(n−1)]
         = [(n−1)/n] S².
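A quick numerical check of the identity derived above, M2 − M1² = (n−1)S²/n, on an arbitrary sample (the data are our own illustration):

```python
import random

random.seed(4)
xs = [random.gauss(0, 1) for _ in range(13)]
n = len(xs)

M1 = sum(xs) / n                 # first sample moment
M2 = sum(x * x for x in xs) / n  # second sample moment
S2 = sum((x - M1) ** 2 for x in xs) / (n - 1)  # sample variance

# M2 - M1^2 should equal (n-1) S^2 / n up to floating-point rounding
print(abs((M2 - M1 ** 2) - (n - 1) * S2 / n))
```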
Q. 7.2.16
Let X1, …, Xm be a random sample of
size m from a binomial distribution with
parameters n, assumed to be known, and
p (unknown). Show that the method of
moments estimator of p is
p̂ = X̄/n.
Example 7.2.2
Let X1, …, Xn be a random sample from
a gamma distribution with parameters α,
β. Find method of moments estimators
for α, β.
Using E[X] = αβ and E[X²] − (E[X])² = αβ²,
we get
β̂ = (M2 − M1²)/M1,  α̂ = M1²/(M2 − M1²).
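A numerical sketch of Example 7.2.2 (the true parameter values are our illustrative choice): applying α̂ = M1²/(M2 − M1²) and β̂ = (M2 − M1²)/M1 to a large simulated gamma sample recovers the true parameters approximately.

```python
import random

random.seed(5)
alpha, beta = 3.0, 2.0  # shape, scale: E[X] = alpha * beta
xs = [random.gammavariate(alpha, beta) for _ in range(100000)]
n = len(xs)

M1 = sum(xs) / n
M2 = sum(x * x for x in xs) / n
alpha_hat = M1 ** 2 / (M2 - M1 ** 2)
beta_hat = (M2 - M1 ** 2) / M1
print(alpha_hat, beta_hat)  # close to (3.0, 2.0)
```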


Maximum likelihood
estimation :
Consider a random sample of
size n from a population with
density f(x; θ) that depends on a
parameter θ. The joint distribution is
f(x1; θ) f(x2; θ) ⋯ f(xn; θ).



L( )  f ( x1 ; ) f ( x2 ; )...... f ( xn ; )
is called the likelihood
function. The maximum
likelihood estimator of  is
the random variable which
equals the value for  that
maximizes the probability of
the observed sample.
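As a sketch (the sample values, the exponential density f(x; θ) = θe^(−θx), and the grid search are illustrative assumptions, not from the slides), the likelihood function of a small sample can be maximized numerically:

```python
import math

# Illustrative sample from an exponential distribution
xs = [0.3, 1.1, 0.7, 0.5]

def L(theta):
    # likelihood: product of f(x_i; theta) = theta * exp(-theta * x_i)
    return math.prod(theta * math.exp(-theta * x) for x in xs)

# crude grid search for the value of theta that maximizes L(theta)
best = max((k / 100 for k in range(1, 500)), key=L)
print(best)  # near n / sum(xs) = 4 / 2.6
```

Maximizing ln L(θ) analytically (as the slides do next for the normal case) gives the same answer in closed form.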
Method of Maximum Likelihood for Estimating θ
-------------------------------------------------------------
1. Obtain a random sample X1, X2 , X3 ,……., Xn
from the distribution of a random variable X with
the density ‘f ’ and associated parameter θ.
2. Define a function L(θ) by
L(θ) = f(x1)·f(x2)·f(x3) ⋯ f(xn).
This function is called the likelihood function for
the sample.
3. Find the expression for θ that maximizes the
likelihood function. This can be done directly
or by maximizing ln L(θ).
4. Replace θ by θ̂ to obtain an expression for
the maximum likelihood estimator for θ.
5. Find the observed value of this estimator for a
given sample.

Example: Let X1, X2 , X3 ,……., Xn be a random
sample from a normal distribution with mean µ and
variance σ². The density for X is
f(x) = (1/(σ√(2π))) e^(−(1/2)[(x − µ)/σ]²).
The likelihood function for the sample is a function
of both µ and σ. In particular,
L(µ, σ) = ∏ (1/(σ√(2π))) e^(−(1/2)[(xi − µ)/σ]²)  (product over i = 1, …, n)
        = (1/(σ√(2π)))ⁿ e^(−(1/(2σ²)) Σ (xi − µ)²),
so that
ln L(µ, σ) = −n ln(√(2π)) − n ln(σ) − (1/(2σ²)) Σ (xi − µ)²
(sums over i = 1, …, n).

To maximize this function, we take the partial
derivatives with respect to µ and σ, set these
derivatives equal to 0, and solve the equations
simultaneously for µ and σ :
Lµ(µ, σ)/L(µ, σ) = Σ (Xi − µ)/σ² = 0,
Lσ(µ, σ)/L(µ, σ) = −n/σ + Σ (Xi − µ)²/σ³ = 0
(sums over i = 1, …, n).
Thus Lµ(µ̂, σ̂) = 0, Lσ(µ̂, σ̂) = 0 gives
µ̂ = X̄,  σ̂² = Σ (Xi − X̄)²/n.
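A simulation sketch checking the MLEs derived above (the true µ and σ are our illustrative choice): µ̂ = X̄ and σ̂² = Σ(Xi − X̄)²/n, note the n divisor rather than n − 1.

```python
import random

random.seed(6)
mu, sigma = 4.0, 1.5
xs = [random.gauss(mu, sigma) for _ in range(100000)]
n = len(xs)

mu_hat = sum(xs) / n                                  # MLE of mu
sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n   # MLE of sigma^2
print(mu_hat, sigma2_hat)  # close to (4.0, 2.25)
```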
The method of moments estimator for a
parameter and the maximum likelihood
estimator often agree. However, if they do
not, the maximum likelihood estimator is
usually preferred.



Q. 30. Let X1, X2 , X3 ,……., Xm be a
random sample of size m from a
Binomial distribution with
parameters n (known) and p
(unknown). Find the maximum
likelihood estimator for p.



Q. 31. Let W be an exponential
random variable with unknown
parameter. Find the maximum
likelihood estimator for the
parameter based on a sample of size n.

