ISyE 7401

Advanced Statistical Modeling

Spring 2013

HW #3 (due in class on February 6, Wednesday)


(There are 4 questions, and please look at both sides)
Problem 1. Suppose that each setting $x_i$ of the independent variable in a simple least squares
problem is duplicated, yielding two independent observations $Y_{i1}, Y_{i2}$. Is it true that the least squares
estimates of the intercept and slope can be found by doing a regression of the mean response of
each pair of duplicates, $\bar Y_i = (Y_{i1} + Y_{i2})/2$, on the $x_i$'s? Why or why not? Explain.


Answer: The answer is a resounding Yes for the point estimates of the intercept and
slope, but the answer for the confidence intervals of the intercept and slope is no (because of the
difference in estimating $\sigma^2$, the error variance).
There are a couple of ways to prove that the point estimates of the intercept and slope are the
same. The first one is to look at the SSEs. Note that for the full data set, we want to find $b_0$ and
$b_1$ to minimize
$$RSS_1 = \sum_{i=1}^{n} \left[ \big( y_{i1} - (b_0 + b_1 x_i) \big)^2 + \big( y_{i2} - (b_0 + b_1 x_i) \big)^2 \right],$$

whereas the second linear regression is to find $b_0$ and $b_1$ that minimize
$RSS_2 = \sum_{i=1}^{n} \big( \bar y_i - (b_0 + b_1 x_i) \big)^2$. Since $\bar y_i = (y_{i1} + y_{i2})/2$, it is straightforward to show that
$$RSS_1 - 2\,RSS_2 = \sum_{i=1}^{n} \left[ \big( y_{i1} - (b_0 + b_1 x_i) \big)^2 + \big( y_{i2} - (b_0 + b_1 x_i) \big)^2 - 2\left( \frac{y_{i1} + y_{i2}}{2} - (b_0 + b_1 x_i) \right)^2 \right]$$
$$= \sum_{i=1}^{n} \left[ y_{i1}^2 + y_{i2}^2 - \frac{(y_{i1} + y_{i2})^2}{2} \right] = \sum_{i=1}^{n} \frac{(y_{i1} - y_{i2})^2}{2},$$
which does not depend on $b_0$ and $b_1$. Hence, the pair $(b_0, b_1)$ that minimizes one of the RSSs also minimizes
the other, and thus these two linear regression methods lead to the same solutions.
The second method is to prove the claim directly. For the original full data set, the least squares
estimates are
$$b_1 = \frac{\sum_{i=1}^{n} (x_i - \bar x)(Y_{i1} - \bar y) + \sum_{i=1}^{n} (x_i - \bar x)(Y_{i2} - \bar y)}{2 \sum_{i=1}^{n} (x_i - \bar x)^2} = \frac{\sum_{i=1}^{n} (x_i - \bar x)\left( \frac{Y_{i1} + Y_{i2}}{2} - \bar y \right)}{\sum_{i=1}^{n} (x_i - \bar x)^2},$$
$$b_0 = \bar y - b_1 \bar x, \quad \text{with } \bar y = \frac{\sum_{i=1}^{n} (Y_{i1} + Y_{i2})}{2n},$$
while the least squares estimates for the new regression model are
$$\tilde b_1 = \frac{\sum_{i=1}^{n} (x_i - \bar x)\left( \frac{Y_{i1} + Y_{i2}}{2} - \bar y \right)}{\sum_{i=1}^{n} (x_i - \bar x)^2}, \qquad \tilde b_0 = \bar y - \tilde b_1 \bar x.$$
Thus $\tilde b_1 = b_1$ and so $\tilde b_0 = b_0$. Therefore, the two methods are equivalent. You can also easily
extend this proof to multiple linear regression.

It is important to note that these two linear regressions lead to different estimators of $\sigma^2$, the error
variance, and thus the confidence intervals of the intercept and slope are different. To see this, the
full data set has $2n$ observations and thus $s_1^2 = \hat\sigma^2_{full} = \frac{RSS_1}{2n-2}$, whereas the second linear regression
method has $n$ observations and thus $s_2^2 = \hat\sigma^2 = \frac{RSS_2}{n-2}$. Plugging in the relationship between $RSS_1$ and
$RSS_2$, we have
$$s_1^2 - s_2^2 = -\frac{RSS_2}{(n-1)(n-2)} + \frac{1}{2(n-1)} \sum_{i=1}^{n} \frac{(y_{i1} - y_{i2})^2}{2},$$
and thus $s_1^2 \neq s_2^2$ in general.
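
A quick numerical check of both claims, as a minimal sketch on simulated data (the sample size, true coefficients, and variable names below are illustrative, not part of the assignment):

set.seed(1)
n  <- 20
x  <- runif(n)
y1 <- 1 + 2 * x + rnorm(n)    # first duplicate at each x_i
y2 <- 1 + 2 * x + rnorm(n)    # second duplicate at each x_i
ybar <- (y1 + y2) / 2         # pair means

full <- lm(c(y1, y2) ~ c(x, x))   # regression on all 2n observations
avg  <- lm(ybar ~ x)              # regression of the n pair means on x

coef(full); coef(avg)                      # identical point estimates
summary(full)$sigma; summary(avg)$sigma    # different error-variance estimates
confint(full); confint(avg)                # hence different confidence intervals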

Problem 2. Suppose that $n$ points are to be placed in the interval $[-1, 1]$ for fitting the model
$$Y_i = \beta_0 + \beta_1 x_i + \varepsilon_i,$$

where the $\varepsilon_i$'s are independent with common mean 0 and common variance $\sigma^2$. How should the
$x_i$'s be chosen in order to minimize $Var(b_1)$?
(Hints: This is one of the earliest examples in the field of optimal design of experiments. Note
that this question essentially asks us to choose the $x_i$'s (in the interval $[-1, 1]$) so as to maximize
$h(x_1, \ldots, x_n) = \sum_{i=1}^{n} (x_i - \bar x)^2 = \sum_{i=1}^{n} x_i^2 - n \bar x^2$. For each $i$, fixing the other $x_j$'s, is $h$ a quadratic
polynomial of $x_i$? When will $x_i$ maximize $h$? Finally, suppose $m$ of the $x_i$'s are equal to 1; then
the remaining $n - m$ of the $x_i$'s should be $-1$. Which $m$ maximizes $h$? Your answers may depend on
whether $n$ is even or odd.)
Answer: Recall that $Var(b_1) = \frac{\sigma^2}{S_{xx}} = \frac{\sigma^2}{\sum_{i=1}^{n} (x_i - \bar x)^2}$. Hence, minimizing $Var(b_1)$ is equivalent
to maximizing $h(x_1, \ldots, x_n) = \sum_{i=1}^{n} (x_i - \bar x)^2 = \sum_{i=1}^{n} x_i^2 - n \bar x^2$. Following the hints, for each
$i$, fixing the other $x_j$'s (note that $\bar x$ also depends on $x_i$), we have
$$h = \sum_{i=1}^{n} x_i^2 - n \left( \frac{\sum_{i=1}^{n} x_i}{n} \right)^2 = \left( 1 - \frac{1}{n} \right) x_i^2 - \frac{2}{n} x_i \sum_{j \neq i} x_j + \sum_{j \neq i} x_j^2 - \frac{1}{n} \Big( \sum_{j \neq i} x_j \Big)^2,$$
which is a quadratic polynomial of $x_i$ with a positive leading coefficient $1 - \frac{1}{n} > 0$ when $n \geq 2$. A
convex quadratic is maximized over an interval at an endpoint, so the optimal choices of the $x_i$'s must be
the endpoints $\pm 1$. Suppose $m$ of the $x_i$'s are equal to 1; then the remaining $n - m$ of the $x_i$'s should be $-1$, and thus
$$h = \frac{4m(n-m)}{n} = n - \frac{(2m - n)^2}{n},$$
which is maximized at $m = n/2$ if $n$ is even, and $m = (n - 1)/2$ or $(n + 1)/2$ if $n$ is odd.
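
As a numerical illustration, one can compare $\sum_{i}(x_i - \bar x)^2$, to which $Var(b_1)$ is inversely proportional, for the endpoint design against an equally spaced design (the choice $n = 10$ below is illustrative):

n <- 10
x_end <- rep(c(-1, 1), each = n / 2)   # optimal design: half the points at each endpoint
x_eq  <- seq(-1, 1, length.out = n)    # equally spaced design, for comparison
sum((x_end - mean(x_end))^2)           # 10: the maximum, giving the smallest Var(b1)
sum((x_eq  - mean(x_eq))^2)            # about 4.07: a substantially larger Var(b1)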

Problem 3. The dataset teengamb (available in the R library faraway; you need to install this
library first) concerns a study of teenage gambling in Britain. Fit a regression model with the
expenditure on gambling as the response and the sex (coded as male=0 and female=1), status,
income, and verbal score as predictors. Present the output and answer the following questions.
(a) What percentage of variation in the response is explained by these predictors?

Answer: 52.67%, which is just the value of $R^2$.

(b) Which observation has the largest (positive) residual? Give the case number.
Answer: The 24th observation.
(c) Compute the mean and median of the residuals.
Answer: The mean of the residuals is 0 and the median is -1.4514.

(d) Compute the correlation of the residuals with the fitted values.
Answer: 0.
(e) Compute the correlation of the residuals with the income.
Answer: 0.
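The answers to (c), (d), and (e) are exact consequences of the normal equations: with an intercept in the model, the residual vector $e = y - X\hat\beta$ satisfies
$$X^T e = X^T (y - X\hat\beta) = 0,$$
so $e$ is orthogonal to every column of $X$ (the intercept column gives $\sum_i e_i = 0$ for (c), the income column gives (e)) and to the fitted values $\hat y = X\hat\beta$, giving (d). The R outputs in the appendix show values on the order of $10^{-16}$, i.e., zero up to floating-point round-off.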
(f) For all other predictors held constant, what would be the difference in predicted expenditure
on gambling for a male compared to a female?


Answer: When the variable sex goes from 0 to 1, the change in the predicted expenditure
on gambling would be $(1 - 0) \times (-22.1183) = -22.1183$, where $-22.1183$ is the regression
coefficient of sex. Equivalently, a male is predicted to gamble 22.1183 more than an otherwise
identical female.
(g) Which variables are statistically significant?

Answer: sex and income are significant at the 5% level.

(h) Predict the amount that a male with average (given these data) status, income and verbal
score would gamble along with an appropriate 95% CI. Repeat the prediction for a male with
maximal values (for this data) of status, income, and verbal score. Which CI is wider and
why is this expected?
Answer: Note that in the multiple linear regression $Y_{n \times 1} = X_{n \times p}\, \beta_{p \times 1} + \varepsilon$, for a given new
data point $x_{new}$, the point prediction is $\hat y = x_{new}^T \hat\beta$. There are two kinds of confidence intervals
involving $\hat y$. The first one is the so-called $100(1-\alpha)\%$ prediction interval on the future
observation, which is given by
$$x_{new}^T \hat\beta \pm t_{\alpha/2,\, n-p}\; \hat\sigma \sqrt{1 + x_{new}^T (X^T X)^{-1} x_{new}}.$$
The second one is the so-called $100(1-\alpha)\%$ confidence interval on the mean response $\hat y$:
$$x_{new}^T \hat\beta \pm t_{\alpha/2,\, n-p}\; \hat\sigma \sqrt{x_{new}^T (X^T X)^{-1} x_{new}}.$$
These two intervals differ only in the term 1 (the variance of the future noise) inside the
square root. In this problem, the prediction interval seems more appropriate,
though we also give full credit if you provide the latter intervals (but take points off if
you misunderstood them).
For a male with average values, the predicted amount of gambling is 28.2425 with a 95%
prediction interval $[-18.5154,\ 75.0004]$ (for a male with average values, the 95% confidence
interval on the mean gambling is $[18.7828,\ 37.7023]$).
Likewise, for a male with maximal values, the predicted amount of gambling is 71.3079 with
a 95% prediction interval $[17.0659,\ 125.5500]$ (the corresponding 95% confidence interval on
the mean gambling is $[42.2324,\ 100.3835]$). The interval for a male with maximal values is
wider because that point is far from the center of the observed data, so the term
$x_{new}^T (X^T X)^{-1} x_{new}$, and hence the prediction variance, is larger.
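
As a sketch of how the first prediction interval can be reproduced by hand from the formula above (matching predict(model, m1, interval="prediction") in the appendix):

require(faraway)
model <- lm(gamble ~ ., data = teengamb)
X     <- model.matrix(model)
# new point: male (sex = 0) at the average status, income, and verbal score
xnew  <- c(1, 0, mean(teengamb$status), mean(teengamb$income), mean(teengamb$verbal))
fit   <- sum(xnew * coef(model))     # point prediction, about 28.24
s     <- summary(model)$sigma        # estimate of sigma
se    <- s * sqrt(1 + drop(t(xnew) %*% solve(crossprod(X)) %*% xnew))
fit + c(-1, 1) * qt(0.975, df.residual(model)) * se   # about [-18.52, 75.00]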

[Figure 1: Non-constant variance of the original linear model. Two diagnostic panels from
plot(model, which=c(1,3)) for lm(Lab ~ Field): "Residuals vs Fitted" and "Scale-Location"
(standardized residuals against fitted values).]

(i) Fit a model with just income as a predictor and use an F-test to compare it to the full model.
Answer: The p-value of the F-test is 0.01177, so at the 5% level we reject the reduced model: income alone is not adequate.
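For concreteness, the F statistic can be reconstructed from the RSS values in the anova output in the appendix:
$$F = \frac{(RSS_{reduced} - RSS_{full})/(45 - 42)}{RSS_{full}/42} = \frac{(28009 - 21624)/3}{21624/42} \approx 4.134,$$
with p-value $P(F_{3,42} > 4.134) = 0.01177$.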

Problem 4. Researchers at the National Institute of Standards and Technology (NIST) collected
pipeline data (available in the R library faraway) on ultrasonic measurements of the depths of
defects in the Alaska pipeline in the field. The depths of the defects were then remeasured in the
laboratory. These measurements were performed in six different batches. It turns out that this
batch effect is not significant and so can be ignored in the analysis that follows. The laboratory
measurements are more accurate than the in-field measurements, but more time consuming and
expensive. We want to develop a regression equation for correcting the in-field measurements.
(a) Fit a regression model Lab ~ Field. Check for nonconstant variance.

Answer: See the R output in the appendix as well as Figure 1. Obviously the variance is not constant.
We can also do a non-constant variance score test; the p-value is 5.35e-08, which indicates
significantly non-constant variance.

(b) We wish to use weights to account for the nonconstant variance. Here we split the range of
Field into 12 groups of size nine (except for the last group, which has only eight values). Within
each group, we compute the variance of Lab as varlab and the mean of Field as meanfield.
Suppose pipeline is the name of your data frame; the following R code will make the needed
computations:
> i <- order(pipeline$Field)
> npipe <- pipeline[i,]

> ff <- gl(12, 9)[-108]


> meanfield <- unlist(lapply(split(npipe$Field, ff), mean))
> varlab <- unlist(lapply(split(npipe$Lab, ff), var))

Suppose we guess that the variance in the response is linked to the predictor in the following
way:
$$\operatorname{var}(Lab) = a_0\, Field^{a_1}.$$
Regress log(varlab) on log(meanfield) to estimate $a_0$ and $a_1$. (You might or might not choose
to ignore the last point, as the last group has only eight values.) Use this to determine
appropriate weights in a WLS fit of Lab on Field. Show the regression summary.


Answer: By regressing log(varlab) on log(meanfield), we get the estimates $\log(\hat a_0) = -0.3538$
and $\hat a_1 = 1.1244$. Since WLS weights each observation by the reciprocal of its error variance,
the weights in a WLS fit of Lab on Field should be $w_i = \dfrac{1}{e^{-0.3538}\, Field_i^{1.1244}}$.
See the regression summary in the appendix (model3).
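To see why an ordinary regression of log(varlab) on log(meanfield) recovers $a_0$ and $a_1$, take logs of the assumed variance model:
$$\log \operatorname{var}(Lab) = \log a_0 + a_1 \log(Field),$$
so the log group variances are (approximately) linear in the log group means, with intercept $\log a_0$ and slope $a_1$.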
(c) An alternative to weighting is transformation. Find transformations on Lab and/or Field so
that in the transformed scale the relationship is approximately linear with constant variance.
You may restrict your choice of transformations to square root, log, and inverse.
Answer: We regress log(Lab) on log(Field). See the results in the appendix (model4). The p-value is less than
2.2e-16 and $R^2 = 0.9337$, which is close to 1. Therefore the two transformed variables exhibit
a good linear relationship. Moreover, the non-constant variance score test has a p-value of 0.394,
which is not significant at the 5% level.
Appendix: R Code

## Problem 3
##
> require(faraway)
> model<-lm(gamble~.,data=teengamb)
> summary(model)

Call:
lm(formula = gamble ~ ., data = teengamb)

Residuals:
    Min      1Q  Median      3Q     Max 
-51.082 -11.320  -1.451   9.452  94.252 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 22.55565   17.19680   1.312   0.1968    
sex        -22.11833    8.21111  -2.694   0.0101 *  
status       0.05223    0.28111   0.186   0.8535    
income       4.96198    1.02539   4.839 1.79e-05 ***
verbal      -2.95949    2.17215  -1.362   0.1803    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 22.69 on 42 degrees of freedom
Multiple R-squared:  0.5267,    Adjusted R-squared:  0.4816 
F-statistic: 11.69 on 4 and 42 DF,  p-value: 1.815e-06


>
> round(vcov(model), digits=2)
            (Intercept)    sex status income verbal
(Intercept)      295.73 -72.73  -2.40  -9.89 -15.18
sex              -72.73  67.42   1.27   2.47  -3.54
status            -2.40   1.27   0.08   0.10  -0.32
income            -9.89   2.47   0.10   1.05  -0.05
verbal           -15.18  -3.54  -0.32  -0.05   4.72
>
> which.max(residuals(model))
24
24
> mean(residuals(model))
[1] 1.240143e-16
> median(residuals(model))
[1] -1.451392

> cor(residuals(model),fitted(model))
[1] 6.247412e-17

> cor(residuals(model),teengamb$income)
[1] -3.961603e-17

> m1<-data.frame(sex=0,status=mean(teengamb$status),
income=mean(teengamb$income),verbal=mean(teengamb$verbal))
> predict(model,m1,interval="confidence")
       fit      lwr      upr
1 28.24252 18.78277 37.70227

>
> predict(model,m1,interval="prediction")
       fit       lwr      upr
1 28.24252 -18.51536 75.00039

> m2<-data.frame(sex=0,status=max(teengamb$status),
income=max(teengamb$income),verbal=max(teengamb$verbal))
> predict(model,m2,interval="confidence")
       fit      lwr      upr
1 71.30794 42.23237 100.3835
>
> predict(model,m2,interval="prediction")
       fit      lwr    upr
1 71.30794 17.06588 125.55


> model2<-lm(gamble~income,data=teengamb)
> anova(model,model2)
Analysis of Variance Table

Model 1: gamble ~ sex + status + income + verbal
Model 2: gamble ~ income
  Res.Df   RSS Df Sum of Sq      F  Pr(>F)  
1     42 21624                              
2     45 28009 -3   -6384.8 4.1338 0.01177 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

###################
#### Problem 4:

> model<-lm(Lab~Field,data=pipeline)
> summary(model)

Call:
lm(formula = Lab ~ Field, data = pipeline)

Residuals:
    Min      1Q  Median      3Q     Max 
-21.985  -4.072  -1.431   2.504  24.334 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.96750    1.57479  -1.249    0.214    
Field        1.22297    0.04107  29.778   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 7.865 on 105 degrees of freedom
Multiple R-squared:  0.8941,    Adjusted R-squared:  0.8931 
F-statistic: 886.7 on 1 and 105 DF,  p-value: < 2.2e-16
> plot(model,which=c(1,3))
> require(car)
> ncvTest(model)
Non-constant Variance Score Test 
Variance formula: ~ fitted.values 
Chisquare = 29.58568    Df = 1     p = 5.349868e-08 

> model2<-lm(log(varlab)~log(meanfield))
> summary(model2)


Call:
lm(formula = log(varlab) ~ log(meanfield))

Residuals:
    Min      1Q  Median      3Q     Max 
-2.2038 -0.6729  0.1656  0.7205  1.1891 

Coefficients:
               Estimate Std. Error t value Pr(>|t|)  
(Intercept)     -0.3538     1.5715  -0.225   0.8264  
log(meanfield)   1.1244     0.4617   2.435   0.0351 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.018 on 10 degrees of freedom
Multiple R-squared:  0.3723,    Adjusted R-squared:  0.3095 
F-statistic: 5.931 on 1 and 10 DF,  p-value: 0.03513

> weight<-1/(exp(model2$coefficients[1])*pipeline$Field^model2$coefficients[2])
> model3<-lm(Lab~Field,data=pipeline,weights=weight)
> summary(model3)
Call:
lm(formula = Lab ~ Field, data = pipeline, weights = weight)

Weighted Residuals:
    Min      1Q  Median      3Q     Max 
-2.0826 -0.8102 -0.3189  0.6212  3.4429 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.49436    0.90707  -1.647    0.102    
Field        1.20828    0.03488  34.637   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.169 on 105 degrees of freedom
Multiple R-squared:  0.9195,    Adjusted R-squared:  0.9188 
F-statistic: 1200 on 1 and 105 DF,  p-value: < 2.2e-16
> model4<-lm(log(Lab)~log(Field),data=pipeline)
> summary(model4)


Call:
lm(formula = log(Lab) ~ log(Field), data = pipeline)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.40212 -0.11853 -0.03092  0.13424  0.40209 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.06849    0.09305  -0.736    0.463    
log(Field)   1.05483    0.02743  38.457   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.1837 on 105 degrees of freedom
Multiple R-squared:  0.9337,    Adjusted R-squared:  0.9331 
F-statistic: 1479 on 1 and 105 DF,  p-value: < 2.2e-16

> ncvTest(model4)
Non-constant Variance Score Test 
Variance formula: ~ fitted.values 
Chisquare = 0.7266744    Df = 1     p = 0.3939633 