MODULE II
LECTURE - 8
Case 3: Test of $H_0 : L'\beta = \xi$
Let us consider the test of a hypothesis related to a linear parametric function. Assume that the linear parametric function $L'\beta$ is estimable, where $L = (\ell_1, \ell_2, \ldots, \ell_p)'$ is a $p \times 1$ vector of known constants and $\beta = (\beta_1, \beta_2, \ldots, \beta_p)'$. The null hypothesis of interest is

$$H_0 : L'\beta = \xi,$$

where $\xi$ is some specified constant.

Consider the set-up of the linear model $Y = X\beta + \varepsilon$, where $Y = (Y_1, Y_2, \ldots, Y_n)'$ follows $N(X\beta, \sigma^2 I)$. The maximum likelihood estimators of $\beta$ and $\sigma^2$ are

$$\hat\beta = (X'X)^{-1}X'y \qquad \text{and} \qquad \hat\sigma^2 = \frac{1}{n}(y - X\hat\beta)'(y - X\hat\beta),$$

respectively. Moreover,

$$\frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2(n-p),$$

and $\hat\beta$ and $\hat\sigma^2$ are independently distributed.
Under $H_0$, the statistic

$$t = \frac{(L'\hat\beta - \xi)\sqrt{n-p}}{\sqrt{n\hat\sigma^2 \, L'(X'X)^{-1}L}}$$

follows a $t$-distribution with $(n-p)$ degrees of freedom. So the test for $H_0 : L'\beta = \xi$ against $H_1 : L'\beta \neq \xi$ rejects $H_0$ whenever

$$|t| \geq t_{1-\alpha/2}(n-p),$$

where $t_{1-\alpha}(n_1)$ denotes the upper $100\alpha\%$ point of the $t$-distribution with $n_1$ degrees of freedom.
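As a quick illustration of this $t$-test, here is a minimal Python sketch; the design matrix, the contrast vector $L$ and the value $\xi$ are made-up choices for illustration, not taken from the lecture.

```python
# Sketch of the t-test for H0: L'beta = xi in the model y = X beta + eps (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # illustrative full-rank design
beta_true = np.array([1.0, 2.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

L = np.array([0.0, 1.0, -1.0])    # hypothetical contrast: beta_2 - beta_3
xi = 0.0                          # hypothesised value of L'beta

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                        # MLE of beta
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / n    # MLE of sigma^2 (divisor n)

# t = (L'beta_hat - xi) sqrt(n - p) / sqrt(n sigma^2_hat L'(X'X)^{-1} L)
t_stat = (L @ beta_hat - xi) * np.sqrt(n - p) / np.sqrt(n * sigma2_hat * (L @ XtX_inv @ L))
p_value = 2 * stats.t.sf(abs(t_stat), df=n - p)
print(t_stat, p_value)   # reject H0 when |t| >= t_{1-alpha/2}(n - p)
```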
Next, consider the null hypothesis on $k$ linear parametric functions $\varphi_i = L_i'\beta$, $i = 1, 2, \ldots, k$,

$$H_0 : \varphi_1 = \xi_1,\ \varphi_2 = \xi_2,\ \ldots,\ \varphi_k = \xi_k,$$

where $\xi_1, \xi_2, \ldots, \xi_k$ are known constants. Let $\varphi = (\varphi_1, \varphi_2, \ldots, \varphi_k)'$ and $\xi = (\xi_1, \xi_2, \ldots, \xi_k)'$. Then $H_0$ is expressible as

$$H_0 : \varphi \equiv L'\beta = \xi,$$

where $L'$ is the $k \times p$ matrix of constants whose rows are $L_1', L_2', \ldots, L_k'$. The maximum likelihood estimator of $\varphi_i$ is $\hat\varphi_i = L_i'\hat\beta$, so that $\hat\varphi = (\hat\varphi_1, \hat\varphi_2, \ldots, \hat\varphi_k)' = L'\hat\beta$. Also,

$$E(\hat\varphi) = \varphi \qquad \text{and} \qquad \mathrm{Cov}(\hat\varphi) = \sigma^2 V,$$

where $V = \bigl(\bigl(L_i'(X'X)^{-1}L_j\bigr)\bigr)$ is a $k \times k$ matrix.
Under $H_0$,

$$\frac{(\hat\varphi - \xi)'V^{-1}(\hat\varphi - \xi)}{\sigma^2}$$

follows a $\chi^2$ distribution with $k$ degrees of freedom, and

$$\frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2(n-p).$$

Further, $(\hat\varphi - \xi)'V^{-1}(\hat\varphi - \xi)$ and $n\hat\sigma^2$ are also independently distributed. Thus, under $H_0 : \varphi = \xi$,

$$F = \frac{(\hat\varphi - \xi)'V^{-1}(\hat\varphi - \xi)\,/\,(k\sigma^2)}{n\hat\sigma^2\,/\,\bigl((n-p)\sigma^2\bigr)} = \frac{n-p}{k}\cdot\frac{(\hat\varphi - \xi)'V^{-1}(\hat\varphi - \xi)}{n\hat\sigma^2}$$

follows an $F$-distribution with $k$ and $(n-p)$ degrees of freedom. So the hypothesis $H_0 : \varphi = \xi$ is rejected against $H_1$: at least one $\varphi_i \neq \xi_i$ for $i = 1, 2, \ldots, k$, whenever $F \geq F_{1-\alpha}(k, n-p)$, where $F_{1-\alpha}(k, n-p)$ denotes the upper $100\alpha\%$ point of the $F$-distribution with $k$ and $(n-p)$ degrees of freedom.
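A corresponding sketch for the joint test of $k$ restrictions; again the design matrix, the matrix $L'$ and the vector $\xi$ are hypothetical values chosen only to illustrate the formula.

```python
# Sketch of the F-test for H0: L'beta = xi with k restrictions (illustrative values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, k = 40, 3, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.7, size=n)

Lmat = np.array([[0.0, 1.0, 0.0],     # rows L_1', ..., L_k' of the k x p matrix L'
                 [0.0, 0.0, 1.0]])
xi = np.zeros(k)                      # hypothesised values xi_1, ..., xi_k

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / n
phi_hat = Lmat @ beta_hat             # MLE of phi = L'beta
V = Lmat @ XtX_inv @ Lmat.T           # V = ((L_i'(X'X)^{-1} L_j))

# F = [(n - p)/k] (phi_hat - xi)' V^{-1} (phi_hat - xi) / (n sigma^2_hat)
diff = phi_hat - xi
F_stat = (n - p) / k * (diff @ np.linalg.solve(V, diff)) / (n * sigma2_hat)
p_value = stats.f.sf(F_stat, k, n - p)
print(F_stat, p_value)                # reject H0 when F >= F_{1-alpha}(k, n - p)
```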
Consider now the one-way classification in which $y_{ij}$ denotes the $j$-th observation ($j = 1, 2, \ldots, n_i$) drawn from the $i$-th population ($i = 1, 2, \ldots, p$), with $n = \sum_{i=1}^{p} n_i$ observations in all. The random samples from the different populations are assumed to be independent of each other. These observations follow the set-up of the linear model

$$Y = X\beta + \varepsilon,$$

where

$$Y = (Y_{11}, Y_{12}, \ldots, Y_{1n_1}, Y_{21}, \ldots, Y_{2n_2}, \ldots, Y_{p1}, Y_{p2}, \ldots, Y_{pn_p})',$$
$$y = (y_{11}, y_{12}, \ldots, y_{1n_1}, y_{21}, \ldots, y_{2n_2}, \ldots, y_{p1}, y_{p2}, \ldots, y_{pn_p})',$$

$\beta = (\beta_1, \beta_2, \ldots, \beta_p)'$, and $X$ is the $n \times p$ matrix

$$X = \begin{pmatrix}
1 & 0 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & & \vdots\\
0 & 0 & \cdots & 1\\
\vdots & \vdots & & \vdots\\
0 & 0 & \cdots & 1
\end{pmatrix},$$

whose $i$-th column contains 1 in the $n_i$ rows corresponding to the observations from the $i$-th population ($n_1$ values in the first column, $n_2$ values in the second, $\ldots$, $n_p$ values in the last) and 0 elsewhere.
This completes the representation of a fixed effect linear model of full rank.
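To make the block structure of $X$ concrete, here is a minimal sketch that constructs this design matrix for hypothetical group sizes.

```python
# Sketch: one-way classification design matrix X (group sizes are illustrative).
import numpy as np

n_i = [4, 3, 5]                 # hypothetical group sizes n_1, n_2, n_3
p = len(n_i)
n = sum(n_i)

X = np.zeros((n, p))
row = 0
for i, size in enumerate(n_i):
    X[row:row + size, i] = 1.0  # column i is 1 for the n_i observations of population i
    row += size
print(X)                        # the n x p one-way classification design matrix
```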
The null hypothesis of interest is $H_0 : \beta_1 = \beta_2 = \cdots = \beta_p = \beta$ (say) against $H_1$: at least one $\beta_i \neq \beta_j$ $(i \neq j)$, where $\beta$ and $\sigma^2$ are unknown. We develop here the likelihood ratio test. It may be noted that the same test can also be derived through the least squares method; this will be demonstrated in the next module, so that the reader understands both methods. We have already developed the likelihood ratio test for the hypothesis $H_0 : \beta_1 = \beta_2 = \cdots = \beta_p$ in Case 1.
The whole parametric space $\Omega$ is a $(p+1)$-dimensional space, consisting of the parameters $\beta_1, \beta_2, \ldots, \beta_p$ and $\sigma^2$.
The likelihood function is

$$L(y \mid \beta, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2}\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \beta_i)^2\right],$$

so that

$$\ln L(y \mid \beta, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \beta_i)^2.$$

The normal equations yield the maximum likelihood estimators

$$\frac{\partial \ln L}{\partial \beta_i} = 0 \;\Rightarrow\; \hat\beta_i = \frac{1}{n_i}\sum_{j=1}^{n_i} y_{ij} = \bar y_{io}, \qquad i = 1, 2, \ldots, p,$$

$$\frac{\partial \ln L}{\partial \sigma^2} = 0 \;\Rightarrow\; \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{io})^2.$$
The dot in $\bar y_{io}$ indicates that the average has been taken over the second subscript $j$. The Hessian matrix of second-order partial derivatives of $\ln L$ with respect to $\beta_i$ and $\sigma^2$ is negative definite at $\beta_i = \bar y_{io}$ and $\sigma^2 = \hat\sigma^2$ (its mixed second derivatives vanish there, while the diagonal entries $-n_i/\sigma^2$ and $-n/(2\hat\sigma^4)$ are negative), which ensures that the likelihood function is maximized at these values. Thus the maximum value of $L(y \mid \beta, \sigma^2)$ over $\Omega$ is
$$\max_{\Omega} L(y \mid \beta, \sigma^2) = \left(\frac{1}{2\pi\hat\sigma^2}\right)^{n/2}\exp\left(-\frac{n}{2}\right) = \left[\frac{n}{2\pi\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{io})^2}\right]^{n/2}\exp\left(-\frac{n}{2}\right).$$
Under $H_0 : \beta_1 = \beta_2 = \cdots = \beta_p = \beta$, the parametric space reduces to the sub-space $\omega$, and the likelihood function is

$$L(y \mid \beta, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2}\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \beta)^2\right]$$

and

$$\ln L(y \mid \beta, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \beta)^2.$$
The normal equations and the least squares estimates are obtained as follows:
$$\frac{\partial \ln L(y \mid \beta, \sigma^2)}{\partial \beta} = 0 \;\Rightarrow\; \hat\beta = \frac{1}{n}\sum_{i=1}^{p}\sum_{j=1}^{n_i} y_{ij} = \bar y_{oo},$$

$$\frac{\partial \ln L(y \mid \beta, \sigma^2)}{\partial \sigma^2} = 0 \;\Rightarrow\; \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{oo})^2.$$
The maximum value of the likelihood function over $\omega$ under $H_0$ is

$$\max_{\omega} L(y \mid \beta, \sigma^2) = \left(\frac{1}{2\pi\hat\sigma^2}\right)^{n/2}\exp\left(-\frac{n}{2}\right) = \left[\frac{n}{2\pi\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{oo})^2}\right]^{n/2}\exp\left(-\frac{n}{2}\right).$$
The likelihood ratio test statistic is

$$\lambda = \frac{\max_{\omega} L(y \mid \beta, \sigma^2)}{\max_{\Omega} L(y \mid \beta, \sigma^2)} = \left[\frac{\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{io})^2}{\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{oo})^2}\right]^{n/2}.$$

Thus, using the decomposition $\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{oo})^2 = \sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{io})^2 + \sum_{i=1}^{p} n_i(\bar y_{io} - \bar y_{oo})^2$,

$$\lambda^{-2/n} = \frac{\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{oo})^2}{\sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{io})^2} = 1 + \frac{q_1}{q_2},$$
where

$$q_1 = \sum_{i=1}^{p} n_i(\bar y_{io} - \bar y_{oo})^2$$

is the sum of squares due to deviations from $H_0$, or the between-population sum of squares, and

$$q_2 = \sum_{i=1}^{p}\sum_{j=1}^{n_i}(y_{ij} - \bar y_{io})^2$$

is the sum of squares due to error, or the within-population sum of squares.
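A small numerical sketch of $q_1$ and $q_2$, assuming a made-up data set with three populations; it also checks that the total sum of squares decomposes into $q_1 + q_2$.

```python
# Sketch: between (q1) and within (q2) sums of squares for a one-way layout (illustrative data).
import numpy as np

samples = [np.array([5.1, 4.8, 5.3, 5.0]),        # made-up sample from population 1
           np.array([6.2, 6.0, 6.4]),             # population 2
           np.array([5.6, 5.9, 5.7, 5.8, 6.0])]   # population 3

n_i = np.array([len(s) for s in samples])
n = n_i.sum()
y_bar_io = np.array([s.mean() for s in samples])  # group means  y-bar_io
y_bar_oo = np.concatenate(samples).mean()         # grand mean   y-bar_oo

q1 = np.sum(n_i * (y_bar_io - y_bar_oo) ** 2)                       # between-population SS
q2 = sum(((s - m) ** 2).sum() for s, m in zip(samples, y_bar_io))   # within-population SS

total_ss = ((np.concatenate(samples) - y_bar_oo) ** 2).sum()
print(q1, q2, np.isclose(total_ss, q1 + q2))      # total SS decomposes as q1 + q2
# lambda^{-2/n} = total_ss / q2 = 1 + q1/q2
```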
If these sums of squares are written in terms of the corresponding random variables, i.e.,

$$Q_1 = \sum_{i=1}^{p} n_i(\bar Y_{io} - \bar Y_{oo})^2, \qquad Q_2 = \sum_{i=1}^{p} S_i^2 \quad\text{with}\quad S_i^2 = \sum_{j=1}^{n_i}(Y_{ij} - \bar Y_{io})^2,$$

where

$$\bar Y_{oo} = \frac{1}{n}\sum_{i=1}^{p}\sum_{j=1}^{n_i} Y_{ij}, \qquad \bar Y_{io} = \frac{1}{n_i}\sum_{j=1}^{n_i} Y_{ij},$$
then under $H_0$,

$$\frac{Q_1}{\sigma^2} \sim \chi^2(p-1), \qquad \frac{Q_2}{\sigma^2} \sim \chi^2(n-p),$$

and $Q_1$ and $Q_2$ are independently distributed.
Hence, under $H_0$, the ratio $\dfrac{Q_1/(p-1)}{Q_2/(n-p)}$ follows an $F$-distribution with $(p-1)$ and $(n-p)$ degrees of freedom, and the likelihood ratio test rejects $H_0$ for large values of $q_1/q_2$, i.e., whenever

$$F = \frac{n-p}{p-1}\cdot\frac{q_1}{q_2} \geq C,$$

where the constant $C = F_{1-\alpha}(p-1, n-p)$.
The analysis of variance table for the one-way classification in the fixed effect model is:

Source of variation   | Degrees of freedom | Sum of squares | Mean squares  | F-value
----------------------|--------------------|----------------|---------------|--------------------------------
Between populations   | p - 1              | q_1            | q_1/(p - 1)   | F = [(n - p)/(p - 1)] (q_1/q_2)
Within populations    | n - p              | q_2            | q_2/(n - p)   |
Total                 | n - 1              | q_1 + q_2      |               |

The hypothesis $H_0 : \beta_1 = \beta_2 = \cdots = \beta_p$ is rejected whenever $F \geq C = F_{1-\alpha}(p-1, n-p)$.
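Putting the pieces together, the following sketch carries out the one-way ANOVA $F$-test on a made-up data set and cross-checks the result against scipy.stats.f_oneway, which implements the same test.

```python
# Sketch: one-way fixed-effects ANOVA F-test (illustrative data).
import numpy as np
from scipy import stats

samples = [np.array([5.1, 4.8, 5.3, 5.0]),
           np.array([6.2, 6.0, 6.4]),
           np.array([5.6, 5.9, 5.7, 5.8, 6.0])]
p = len(samples)
n_i = np.array([len(s) for s in samples])
n = n_i.sum()

y_bar_io = np.array([s.mean() for s in samples])
y_bar_oo = np.concatenate(samples).mean()
q1 = np.sum(n_i * (y_bar_io - y_bar_oo) ** 2)             # between populations, p - 1 df
q2 = sum(((s - s.mean()) ** 2).sum() for s in samples)    # within populations,  n - p df

F = (q1 / (p - 1)) / (q2 / (n - p))                       # = [(n - p)/(p - 1)] q1/q2
p_value = stats.f.sf(F, p - 1, n - p)
print(F, p_value)                                         # reject H0 when F >= F_{1-alpha}(p-1, n-p)

print(stats.f_oneway(*samples))                           # cross-check with scipy's one-way ANOVA
```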
Note that

$$E\left(\frac{Q_2}{n-p}\right) = \sigma^2, \qquad E\left(\frac{Q_1}{p-1}\right) = \sigma^2 + \frac{1}{p-1}\sum_{i=1}^{p} n_i(\beta_i - \bar\beta)^2, \quad\text{where } \bar\beta = \frac{1}{n}\sum_{i=1}^{p} n_i\beta_i.$$
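These expectations can be illustrated with a small Monte Carlo sketch; the group sizes, population means and $\sigma$ below are arbitrary illustrative values.

```python
# Sketch: Monte Carlo check of E[Q2/(n-p)] = sigma^2 and the inflation of E[Q1/(p-1)]
# (group sizes, means and sigma are illustrative).
import numpy as np

rng = np.random.default_rng(42)
n_i = np.array([4, 6, 5])                # illustrative group sizes
beta = np.array([5.0, 5.0, 6.0])         # illustrative population means (not all equal)
sigma = 0.8
p, n = len(n_i), n_i.sum()

ms_between, ms_within = [], []
for _ in range(20000):
    samples = [rng.normal(beta[i], sigma, n_i[i]) for i in range(p)]
    means = np.array([s.mean() for s in samples])
    grand = np.concatenate(samples).mean()
    q1 = np.sum(n_i * (means - grand) ** 2)
    q2 = sum(((s - s.mean()) ** 2).sum() for s in samples)
    ms_between.append(q1 / (p - 1))
    ms_within.append(q2 / (n - p))

beta_bar = np.sum(n_i * beta) / n
print(np.mean(ms_within), sigma ** 2)    # close to sigma^2
print(np.mean(ms_between),
      sigma ** 2 + np.sum(n_i * (beta - beta_bar) ** 2) / (p - 1))   # close to the inflated mean square
```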