
Estimation of Error Variance

Common estimate of $\sigma^2$ is
$$
S_e^2 = \frac{1}{n-2}\sum_{i=1}^{n}\left(y_i - (\hat{\beta}_0 + \hat{\beta}_1 x_i)\right)^2
      = \frac{1}{n-2}\sum_{i=1}^{n} e_i^2
      = \frac{\mathrm{SSE}}{\mathrm{DF}_{\mathrm{error}}}
      = \mathrm{MS}_{\mathrm{error}}
$$
Dividing by $(n-2)$ makes $S_e^2$ an unbiased estimator of $\sigma^2$.
$(n-2)$ follows the general d.f. rule:
We estimate 2 parameters ($\beta_0$ and $\beta_1$) in the model.
The residuals satisfy two constraints under the LSE method ($\sum_{i=1}^{n} e_i = 0$ and $\sum_{i=1}^{n} x_i e_i = 0$).
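As a minimal sketch of this computation (the arrays `x` and `y` are illustrative, not from the slides), $S_e^2$ can be obtained directly from the least-squares residuals:

```python
import numpy as np

# Illustrative data, not from the slides
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
n = len(x)

# Least-squares estimates of the intercept and slope
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# Residuals, SSE, and the unbiased error-variance estimate S_e^2 = SSE/(n-2)
e = y - (b0 + b1 * x)
sse = np.sum(e ** 2)
s2_e = sse / (n - 2)  # this is MS_error
print(s2_e)
```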
Inference for $\beta_1$

Discuss $\hat{\beta}_1$ in detail:
$$
\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}
             = \frac{\sum_{i=1}^{n}(x_i - \bar{x})\, y_i}{\sum_{i=1}^{n}(x_i - \bar{x})^2}
$$
$\hat{\beta}_1$ is a linear combination of normal random variables (the $y_i$'s), so $\hat{\beta}_1$ is normally distributed with $E(\hat{\beta}_1) = \beta_1$ and
$$
\sigma^2_{\hat{\beta}_1} = \mathrm{Var}(\hat{\beta}_1)
= \frac{\sigma^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}
= \frac{\sigma^2}{(n-1)S_X^2}
$$
Inference for $\beta_1$

$\sigma^2$ is unknown; plug in the estimate $S_e^2$.
Sample standard error of $\hat{\beta}_1$ is
$$
S_{\hat{\beta}_1} = \frac{S_e}{S_X \sqrt{n-1}}
$$
$(\hat{\beta}_1 - \beta_1)/S_{\hat{\beta}_1}$ follows a t-distribution with $(n-2)$ d.f.
Test $H_0: \beta_1 = 0$ vs. $H_1: \beta_1 \neq 0$ at level $\alpha$:
$$
T = \frac{\hat{\beta}_1 - 0}{S_{\hat{\beta}_1}} \sim t_{n-2},
$$
and reject $H_0$ if $|T| > t_{n-2,\,1-\alpha/2}$.
$100(1-\alpha)\%$ C.I. for $\beta_1$ is $\hat{\beta}_1 \pm t_{n-2,\,1-\alpha/2}\, S_{\hat{\beta}_1}$.
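A minimal sketch of the slope test and C.I. on illustrative data (the arrays and the 95% level are assumptions, not from the slides); `scipy.stats.t` supplies the t quantiles:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # illustrative data
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
n = len(x)

# Fit and residual standard error S_e
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s_e = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))

# S_{beta1} = S_e / (S_X * sqrt(n-1)); note sum (x_i - xbar)^2 = (n-1) S_X^2
sxx = np.sum((x - x.mean()) ** 2)
se_b1 = s_e / np.sqrt(sxx)

# Two-sided t test of H0: beta1 = 0, plus a 95% C.I.
t_stat = (b1 - 0) / se_b1
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
t_crit = stats.t.ppf(0.975, df=n - 2)
print(t_stat, p_value, (b1 - t_crit * se_b1, b1 + t_crit * se_b1))
```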
Inference for $\beta_0$

Point estimate of $\beta_0$ is $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$.
$$
E(\hat{\beta}_0) = \beta_0, \qquad
\sigma^2_{\hat{\beta}_0} = \mathrm{Var}(\hat{\beta}_0)
= \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{(n-1)S_X^2}\right)
$$
$\hat{\beta}_0 \sim N(\beta_0, \sigma^2_{\hat{\beta}_0})$.
$\hat{\beta}_0$ has sample standard error
$$
S_{\hat{\beta}_0} = S_e \sqrt{\frac{1}{n} + \frac{\bar{x}^2}{(n-1)S_X^2}}
$$
Test $H_0: \beta_0 = 0$ vs. $H_1: \beta_0 \neq 0$ at level $\alpha$:
$$
T = \frac{\hat{\beta}_0 - 0}{S_{\hat{\beta}_0}} \sim t_{n-2},
$$
and reject $H_0$ if $|T| > t_{n-2,\,1-\alpha/2}$.
$100(1-\alpha)\%$ C.I. for $\beta_0$ is $\hat{\beta}_0 \pm t_{n-2,\,1-\alpha/2}\, S_{\hat{\beta}_0}$.
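The intercept test follows the same pattern; in this sketch (illustrative data again) only the standard-error formula changes:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # illustrative data
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s_e = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))
sxx = np.sum((x - x.mean()) ** 2)  # equals (n-1) * S_X^2

# S_{beta0} = S_e * sqrt(1/n + xbar^2 / ((n-1) S_X^2))
se_b0 = s_e * np.sqrt(1.0 / n + x.mean() ** 2 / sxx)

t_stat = (b0 - 0) / se_b0
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
t_crit = stats.t.ppf(0.975, df=n - 2)
print(t_stat, p_value, (b0 - t_crit * se_b0, b0 + t_crit * se_b0))
```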
Inference for Regression Line (or Conditional Means)

Inference for $E(Y \mid X = x) = \beta_0 + \beta_1 x$.
For a chosen $x_0$, the estimate is
$$
\hat{y}_0 = \hat{\beta}_0 + \hat{\beta}_1 x_0 = \bar{y} + \hat{\beta}_1 (x_0 - \bar{x}).
$$
$$
E(\hat{y}_0) = \mu_{Y|X=x_0} = \beta_0 + \beta_1 x_0, \qquad
\mathrm{Var}(\hat{y}_0) = \sigma^2\left(\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{(n-1)S_X^2}\right)
$$
Sample standard error is
$$
S_{\hat{y}_0} = S_e \sqrt{\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{(n-1)S_X^2}}.
$$
Test $H_0: \mu_{Y|X=x_0} = \mu_0$ vs. $H_1: \mu_{Y|X=x_0} \neq \mu_0$ at level $\alpha$:
$$
T = \frac{\hat{y}_0 - \mu_0}{S_{\hat{y}_0}} \sim t_{n-2},
$$
and reject $H_0$ if $|T| > t_{n-2,\,1-\alpha/2}$.
$100(1-\alpha)\%$ C.I. for $\mu_{Y|X=x_0}$ (i.e. $\beta_0 + \beta_1 x_0$) is
$(\hat{\beta}_0 + \hat{\beta}_1 x_0) \pm t_{n-2,\,1-\alpha/2}\, S_{\hat{y}_0}$.
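A sketch of the C.I. for the conditional mean at a chosen point (here `x0 = 3.5`, an arbitrary illustrative value):

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # illustrative data
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
n, x0 = len(x), 3.5

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s_e = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))
sxx = np.sum((x - x.mean()) ** 2)

# Estimated mean and its standard error at x0
y0_hat = b0 + b1 * x0
se_y0 = s_e * np.sqrt(1.0 / n + (x0 - x.mean()) ** 2 / sxx)

t_crit = stats.t.ppf(0.975, df=n - 2)
print((y0_hat - t_crit * se_y0, y0_hat + t_crit * se_y0))  # 95% C.I. for E(Y|X=x0)
```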
Prediction

Predict the value of Y at a given $x_0$:
$$
Y_{\mathrm{new}} = \beta_0 + \beta_1 x_0 + \varepsilon
$$
Estimate is still $\hat{y}_{\mathrm{new}} = \hat{\beta}_0 + \hat{\beta}_1 x_0$.
Standard error is
$$
S_{\hat{y},\mathrm{pred}} = \sqrt{S_e^2 + S_{\hat{y}_0}^2}
= S_e \sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{(n-1)S_X^2}}
$$
$100(1-\alpha)\%$ prediction interval:
$(\hat{\beta}_0 + \hat{\beta}_1 x_0) \pm t_{n-2,\,1-\alpha/2}\, S_{\hat{y},\mathrm{pred}}$.
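The prediction interval only widens the standard error by the extra $S_e^2$ term; a sketch under the same illustrative setup as above:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # illustrative data
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
n, x0 = len(x), 3.5

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s_e = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))
sxx = np.sum((x - x.mean()) ** 2)

# Prediction and its standard error: note the extra "1 +" under the root
y_new = b0 + b1 * x0
se_pred = s_e * np.sqrt(1.0 + 1.0 / n + (x0 - x.mean()) ** 2 / sxx)

t_crit = stats.t.ppf(0.975, df=n - 2)
print((y_new - t_crit * se_pred, y_new + t_crit * se_pred))  # 95% P.I. at x0
```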
Example

Forbes Data
James D. Forbes collected data in the Alps and in Scotland in the 1840s and 1850s.
n = 17 locations (at different altitudes)
Objective: Predict barometric pressure (in inches of mercury) from the boiling point of water (X) in °F.
Use Y = log(barometric pressure).
Motivation: Fragile barometers of the 1840s were difficult to transport.
Obs   Boiling Point of Water (°F)   Barometric Pressure (inches Hg)   Natural Log of Barometric Pressure
 1    194.3    20.79    3.034472
 2    194.5    20.79    3.034472
 3    197.9    22.40    3.109061
 4    198.4    22.67    3.121042
 5    199.4    23.15    3.141995
 6    199.9    23.35    3.150597
 7    200.9    23.89    3.173460
 8    201.1    23.89    3.173460
 9    201.3    24.01    3.178470
10    201.4    24.02    3.178887
11    203.6    25.14    3.224460
12    204.6    26.57    3.279783
13    208.6    27.76    3.323596
14    209.5    28.49    3.349553
15    210.7    29.04    3.368674
16    211.9    29.88    3.397189
17    212.2    30.06    3.403195
Forbes Data
[Scatter plot of the 17 observations: Log Pressure vs. Boiling point of water (degrees F); x-axis 190–215, y-axis 3.0–3.5]
Analysis of Forbes Data

Proposed regression model:
$$
y_i = \beta_0 + \beta_1 x_i + e_i, \qquad
e_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2), \quad i = 1, \ldots, 17.
$$
$Y_i$ = log(pressure)
$X_i$ = boiling point (°F)
$\beta_1$ is the increase in mean log(pressure) when the boiling point of water increases by 1°F.
$\beta_0$ is the mean log(pressure) when the boiling point of water is 0°F. (Is this extrapolation realistic?)
Analysis of Forbes Data

Estimated regression model:
$$
\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x = -0.970866 + 0.020622\,x
$$
Residuals: $e_i = y_i - \hat{y}_i$, $i = 1, \ldots, 17$.
Estimated mean log(pressure) at 212°F is
$$
\hat{y}_{212} = \hat{\beta}_0 + \hat{\beta}_1 \cdot 212 = 3.401074.
$$
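As a sketch, these estimates can be reproduced (up to rounding) by refitting the model to the 17 observations transcribed from the table above:

```python
import numpy as np

# Forbes data, transcribed from the table above
bp = np.array([194.3, 194.5, 197.9, 198.4, 199.4, 199.9, 200.9, 201.1, 201.3,
               201.4, 203.6, 204.6, 208.6, 209.5, 210.7, 211.9, 212.2])
logp = np.array([3.034472, 3.034472, 3.109061, 3.121042, 3.141995, 3.150597,
                 3.173460, 3.173460, 3.178470, 3.178887, 3.224460, 3.279783,
                 3.323596, 3.349553, 3.368674, 3.397189, 3.403195])

b1 = np.sum((bp - bp.mean()) * (logp - logp.mean())) / np.sum((bp - bp.mean()) ** 2)
b0 = logp.mean() - b1 * bp.mean()
print(b0, b1)          # expected: roughly -0.970866 and 0.020622
print(b0 + b1 * 212)   # estimated mean log(pressure) at 212 F, roughly 3.401
```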
Analysis of Forbes Data

Inference on $\beta_1$:
Test $H_0: \beta_1 = 0$ ($Y_i = \beta_0 + e_i$) versus $H_1: \beta_1 \neq 0$ ($Y_i = \beta_0 + \beta_1 x_i + e_i$).
Evaluate
$$
T = \frac{\hat{\beta}_1 - 0}{S_{\hat{\beta}_1}} = \frac{0.020622}{0.000379} = 54.42.
$$
p-value $\ll 0.0001$. Reject $H_0$ and conclude that the slope is positive.
A 95% C.I. for the slope indicates that the slope is very well estimated from these data:
$$
\hat{\beta}_1 \pm t_{15,\,0.975}\, S_{\hat{\beta}_1}
= 0.020622 \pm (2.131)(0.00037895)
= (0.0198,\ 0.0214)
$$
Analysis of Forbes Data

Inference on $\beta_0$:
Test $H_0: \beta_0 = 0$ ($Y_i = \beta_1 x_i + e_i$) versus $H_1: \beta_0 \neq 0$ ($Y_i = \beta_0 + \beta_1 x_i + e_i$).
Evaluate
$$
T = \frac{\hat{\beta}_0 - 0}{S_{\hat{\beta}_0}} = \frac{-0.9710}{0.0769} = -12.6.
$$
p-value $\ll 0.0001$. Reject $H_0$ and conclude that the intercept is negative. (Is there a practical motivation to do this test?)
A 95% C.I. for the intercept is
$$
\hat{\beta}_0 \pm t_{15,\,0.975}\, S_{\hat{\beta}_0}
= -0.971 \pm (2.131)(0.0769)
= (-1.135,\ -0.807)
$$
Analysis of Forbes Data

Construct a 95% C.I. for the mean of log-pressure measurements when the boiling point of water is $x = 209$°F.
Estimated mean is
$$
\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x = -0.9710 + (0.0206)(209) = 3.339
$$
Evaluate the sample standard error of this estimate:
$$
S_{\hat{y}} = \sqrt{0.0000762 \left(\frac{1}{17} + \frac{(209 - 202.953)^2}{530.78}\right)} = 0.00312
$$
A 95% C.I. is $\hat{y} \pm t_{15,\,0.975}\, S_{\hat{y}} = (3.333,\ 3.346)$.
Analysis of Forbes Data

Computing a 95% C.I. at every point x gives a C.I. band for the regression line.
[Scatter plot: Log Pressure vs. Boiling point of water (degrees F), with the fitted Regression Line and the 95 percent C.I. band]
Analysis of Forbes Data

Inference for prediction:
Construct a 95% prediction interval for a log-pressure value when the boiling point of water is $x = 209$°F.
The prediction equals the estimated mean (because the estimate of the error term is zero):
$$
\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x + \text{error} = -0.9710 + (0.0206)(209) + 0 = 3.339
$$
Evaluate the standard error of the prediction:
$$
S_{\hat{y},\mathrm{pred}} = \sqrt{0.0000762 \left(1 + \frac{1}{17} + \frac{(209 - 202.953)^2}{530.78}\right)} = 0.00927
$$
Analysis of Forbes Data

A 95% prediction interval is
$$
\hat{y} \pm t_{15,\,0.975}\, S_{\hat{y},\mathrm{pred}} = 3.339 \pm (2.131)(0.00927) = (3.319,\ 3.359)
$$
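As a cross-check, a sketch that reproduces this interval (and the C.I. from two slides back) from the table data, using `scipy.stats.linregress` for the fit:

```python
import numpy as np
from scipy import stats

bp = np.array([194.3, 194.5, 197.9, 198.4, 199.4, 199.9, 200.9, 201.1, 201.3,
               201.4, 203.6, 204.6, 208.6, 209.5, 210.7, 211.9, 212.2])
logp = np.array([3.034472, 3.034472, 3.109061, 3.121042, 3.141995, 3.150597,
                 3.173460, 3.173460, 3.178470, 3.178887, 3.224460, 3.279783,
                 3.323596, 3.349553, 3.368674, 3.397189, 3.403195])
n, x0 = len(bp), 209.0

res = stats.linregress(bp, logp)  # fitted slope and intercept
s_e = np.sqrt(np.sum((logp - (res.intercept + res.slope * bp)) ** 2) / (n - 2))
sxx = np.sum((bp - bp.mean()) ** 2)

y0 = res.intercept + res.slope * x0
t_crit = stats.t.ppf(0.975, df=n - 2)
se_mean = s_e * np.sqrt(1.0 / n + (x0 - bp.mean()) ** 2 / sxx)
se_pred = s_e * np.sqrt(1.0 + 1.0 / n + (x0 - bp.mean()) ** 2 / sxx)
print("95% C.I.:", (y0 - t_crit * se_mean, y0 + t_crit * se_mean))  # ~ (3.333, 3.346)
print("95% P.I.:", (y0 - t_crit * se_pred, y0 + t_crit * se_pred))  # ~ (3.319, 3.359)
```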
The above inferences (estimation, testing, prediction) appear in the output of a SAS program. We will introduce SAS coding after we introduce one more thing: the ANOVA table for simple linear regression. Hold on.