
Econ 103 Final Review (Ch. 5-6)
Spring 2013: Rojas
TA: Amanda
The least squares estimator
[Figure: scatter of Y against X with the fitted line Ŷ = b1 + b2X. At X = 1, the actual Y lies off the line, the predicted value is Ŷ = b1 + b2*1, and the residual is ê = Y – Ŷ.]
Interpreting the model (coefficients)
• Simple linear model: Ŷ = b1 + b2X
– A 1 unit increase in X is associated on average with a b2 unit change in Y
• Quadratic model: Ŷ = b1 + b2X²
– A 1 unit increase in X² is associated on average with a b2 unit change in Y
• Log-linear model: lnŶ = b1 + b2X
– A 1 unit increase in X is associated on average with a (100*b2)% change in Y
• Linear-log model: Ŷ = b1 + b2lnX
– A 1% increase in X is associated on average with a (b2/100) unit change in Y
• Log-log model: lnŶ = b1 + b2lnX
– A 1% increase in X is associated on average with a b2% change in Y
– In other words, b2 is the elasticity
• Multiple regression model: Ŷ = b1 + b2X2 + b3X3
– Holding X3 constant, a 1 unit increase in X2 is associated on average with a b2 unit increase in Y
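A minimal sketch of how one of these readings plays out numerically, assuming Python with numpy/statsmodels and made-up data (all values and names here are hypothetical, not from the course):

```python
# Fit a log-linear model on hypothetical data and read off the
# "100*b2 percent change in Y" interpretation of the slope.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, 200)                       # hypothetical regressor
lnY = 1.0 + 0.08 * X + rng.normal(0, 0.1, 200)    # true b2 = 0.08

res = sm.OLS(lnY, sm.add_constant(X)).fit()
b1, b2 = res.params
print(f"b2 = {b2:.3f} -> about a {100 * b2:.1f}% change in Y per 1 unit of X")
```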
Interpreting parameters of interest
• Marginal effect or slope
– Write the fitted line equation as a function of b’s and take the first derivative, dŶ/dX
– Often have to evaluate at a given point. Can be at the means (Xbar, Ybar) or for a
given value of X and the associated fitted value, (X, Ŷ)
– Interpret: if the marginal effect = 5, then for a 1 unit increase in X, Y increases by 5 on average
• Elasticity
– Find the marginal effect
– Elasticity = (marginal effect)*(X/Y)
– Simplify the expression for elasticity first (cross-cancel like terms), then plug in for
given X’s and Y (or Ŷ)
– Interpret: if the elasticity = 0.1, then for a 1% increase in X, Y increases by 0.1% on average
• Estimate the average value of Y at a point / predict Y at a point
– Plug in the given X value(s) and b’s into the fitted line equation (estimate E[Y|X]
with Ŷ)
• X-value at which Y is maximized/minimized (same as the X-value after which
Y begins to fall)
– Set the first derivative equal to zero and solve for X (as a function of the b’s and any other X’s or Y)
– You will likely get a nonlinear function of the b’s, so you must use the delta method to get the s.e.
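A minimal sketch of these calculations for an assumed quadratic model Ŷ = b1 + b2X + b3X² with hypothetical data (the slides' quadratic example uses only the squared term, in which case the marginal effect is simply 2*b2*X):

```python
# Marginal effect, elasticity, and turning point for a quadratic model
# Yhat = b1 + b2*X + b3*X^2 (hypothetical data and functional form).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.uniform(0, 20, 300)
Y = 2 + 1.5 * X - 0.05 * X**2 + rng.normal(0, 1, 300)

exog = sm.add_constant(np.column_stack([X, X**2]))
res = sm.OLS(Y, exog).fit()
b1, b2, b3 = res.params

Xbar, Ybar = X.mean(), Y.mean()
marg = b2 + 2 * b3 * Xbar        # dYhat/dX evaluated at the mean of X
elas = marg * Xbar / Ybar        # elasticity = (marginal effect)*(X/Y)
turning = -b2 / (2 * b3)         # set dYhat/dX = 0 and solve for X

print(f"marginal effect at Xbar: {marg:.3f}")
print(f"elasticity at (Xbar, Ybar): {elas:.3f}")
print(f"X at which Yhat is maximized: {turning:.2f}")
```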
Hypothesis testing: t-test
1. Get H0 and H1. Determine the rejection region
based on H1
2. Get point estimate of what you’re testing
3. Get standard error of what you’re testing
4. Calculate t-stat = point estimate / std. error
5. Get critical t-value, tc(1-α,N-K) for one-sided,
tc (1-α/2,N-K) for two-sided
6. If t-stat falls in rejection region, reject null
hypothesis with probability of Type I error α
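A minimal sketch of these six steps, assuming numpy, statsmodels, and scipy, hypothetical data, and the example hypothesis H0: β2 = 0 vs. H1: β2 ≠ 0 at α = 0.05:

```python
# Two-sided t-test on the slope coefficient (hypothetical data).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, 100)
Y = 3 + 0.5 * X + rng.normal(0, 2, 100)

res = sm.OLS(Y, sm.add_constant(X)).fit()
b2, se_b2 = res.params[1], res.bse[1]        # steps 2-3: estimate and std. error

t_stat = (b2 - 0) / se_b2                    # step 4
N, K = len(Y), 2
t_crit = stats.t.ppf(1 - 0.05 / 2, N - K)    # step 5: tc(1-alpha/2, N-K)
reject = abs(t_stat) > t_crit                # step 6
print(f"t = {t_stat:.2f}, tc = {t_crit:.2f}, reject H0: {reject}")
```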
Confidence intervals
1. Get point estimate of what you want an
interval for
2. Get standard error of what you want an
interval for
3. Get critical t-value, tc (1-α/2,N-K)
4. 100(1-α)% CI: point estimate +/- tc * std. error
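A minimal sketch of the CI recipe for a single coefficient, again with hypothetical data and α = 0.05:

```python
# 95% confidence interval for beta2 (hypothetical data).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, 100)
Y = 3 + 0.5 * X + rng.normal(0, 2, 100)
res = sm.OLS(Y, sm.add_constant(X)).fit()

b2, se_b2 = res.params[1], res.bse[1]              # steps 1-2
t_crit = stats.t.ppf(1 - 0.05 / 2, res.df_resid)   # step 3: tc(1-alpha/2, N-K)
ci = (b2 - t_crit * se_b2, b2 + t_crit * se_b2)    # step 4
print(f"95% CI for beta2: [{ci[0]:.3f}, {ci[1]:.3f}]")
```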
Getting the point estimate
• Write out a general expression first, in terms
of b’s, X’s, and Y (or Ŷ)
• Then plug in b’s, X’s, and Y (or Ŷ)
Getting the standard error
• Always write down the expression you are getting the std.
error for as a function of b’s (plug in any other values)
– Ex: if the est. marginal effect is 2b2*SQFT and you want to
construct a CI for the marginal effect at SQFT=2000, rewrite the
expression as 4000b2. You are solving for se(4000b2)
• For a linear function of b’s, use the rules of variance
(simplified delta method):
– se(·) = √var(·)
– var(aX + bY + c) = a²var(X) + b²var(Y) + 2ab*cov(X,Y)
• For a nonlinear function of b’s, use the delta method
– se(·) = √var(·)
– var(g(X,Y)) = (dg/dX)²*var(X) + (dg/dY)²*var(Y) + 2(dg/dX)(dg/dY)*cov(X,Y)
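A minimal sketch of both cases, using hypothetical data for the SQFT example above and a purely illustrative nonlinear function g(b1, b2) = -b1/b2 for the delta method:

```python
# Std. error of a linear and a nonlinear function of the b's
# (hypothetical data and model: PRICE = b1 + b2*SQFT^2 + e).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
SQFT = rng.uniform(800, 3000, 200)
PRICE = 20000 + 0.02 * SQFT**2 + rng.normal(0, 20000, 200)

res = sm.OLS(PRICE, sm.add_constant(SQFT**2)).fit()
b1, b2 = res.params
V = np.asarray(res.cov_params())     # 2x2 var-cov matrix of (b1, b2)

# Linear case: marginal effect at SQFT=2000 is 2*b2*2000 = 4000*b2,
# so var(4000*b2) = 4000^2 * var(b2).
se_lin = np.sqrt(4000**2 * V[1, 1])

# Nonlinear case (illustrative g = -b1/b2): delta method with dg/db1, dg/db2.
d1, d2 = -1 / b2, b1 / b2**2
se_nl = np.sqrt(d1**2 * V[0, 0] + d2**2 * V[1, 1] + 2 * d1 * d2 * V[0, 1])
print(f"se(4000*b2) = {se_lin:.4f}, se(-b1/b2) = {se_nl:.4f}")
```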
Hypothesis testing: F-test
1. Get H0 and H1 (always two-sided)
2. Write down and estimate restricted and unrestricted model:
– Original model = unrestricted
– Model with null hypothesis plugged in = restricted
3. Calculate the F-statistic (J = # of equal signs in H0, K = number of betas in the unrestricted model):
   F = [(SSE_R – SSE_U)/J] / [SSE_U/(N–K)]
   For the overall test (testing all parameters except the intercept, so J = K–1):
   F = [R²/(K–1)] / [(1–R²)/(N–K)]
   (recall SST = SSE + SSR and R² = 1 – SSE/SST)
4. Get the critical F-value, Fc(1-α, J, N-K)
5. Reject H0 if F-stat > critical F-value
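A minimal sketch of this recipe for the example null H0: β2 = β3 = 0 (so J = 2), with hypothetical data:

```python
# F-test comparing restricted and unrestricted models (hypothetical data).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
X2, X3 = rng.normal(size=(2, 120))
Y = 1 + 0.8 * X2 + 0.3 * X3 + rng.normal(0, 1, 120)

unrestricted = sm.OLS(Y, sm.add_constant(np.column_stack([X2, X3]))).fit()
restricted = sm.OLS(Y, np.ones_like(Y)).fit()   # H0 plugged in: intercept only

J, N, K = 2, len(Y), 3
F = ((restricted.ssr - unrestricted.ssr) / J) / (unrestricted.ssr / (N - K))
F_crit = stats.f.ppf(1 - 0.05, J, N - K)        # Fc(1-alpha, J, N-K)
print(f"F = {F:.2f}, Fc = {F_crit:.2f}, reject H0: {F > F_crit}")
```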
Prediction, forecast error, and prediction intervals
[Figure: the fitted line Ŷ = b1 + b2X over the data. Predictions at X values within the observed range (near Xbar, the average of the observed X’s) are in-sample predictions; predictions at a point X0 beyond the observed data are out-of-sample predictions.]
Prediction, forecast error, and prediction intervals
• Predicted value Ŷ0= b1 + b2X0
– If we observe X0, we are making an in-sample prediction (this is
what the predict command does)
– If we don’t observe X0, we are making an out-of-sample
prediction (a forecast for Y)
• Forecast error f= Ŷ0-Y0
• To make a prediction interval, use the std. error of the forecast rather than the ordinary std. error, to account for the fact that our estimate loses precision as we move farther out of sample:
– For the simple regression model, se(f) = √var(f), where var(f) = σ̂²[1 + 1/N + (X0 – Xbar)²/Σ(Xi – Xbar)²]
• Prediction interval: Ŷ0 +/- tc*se(f)


– This is a confidence interval for a prediction (Ŷ0), whereas we estimate confidence intervals for things like the marginal effect or the coefficient on a certain variable
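A minimal sketch of a prediction interval at an out-of-sample point X0, assuming the simple-regression se(f) formula above and hypothetical data:

```python
# 95% prediction interval for Y0 at X0 (hypothetical data).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(6)
X = rng.uniform(0, 10, 80)
Y = 5 + 2 * X + rng.normal(0, 3, 80)
res = sm.OLS(Y, sm.add_constant(X)).fit()
b1, b2 = res.params

X0 = 15                                    # out-of-sample point
Y0_hat = b1 + b2 * X0                      # predicted value Yhat0
N = len(X)
sigma2 = res.ssr / (N - 2)                 # estimated error variance
var_f = sigma2 * (1 + 1 / N + (X0 - X.mean())**2 / np.sum((X - X.mean())**2))
t_crit = stats.t.ppf(0.975, N - 2)
lo, hi = Y0_hat - t_crit * np.sqrt(var_f), Y0_hat + t_crit * np.sqrt(var_f)
print(f"95% prediction interval: [{lo:.1f}, {hi:.1f}]")
```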
Prediction in log-linear model
• Natural predictor: just take exponential of the
fitted lnY values
• Corrected predictor: multiply the natural predictor by exp(σ̂²/2), where σ̂² is the estimated error variance (the regression MSE)
• For prediction interval, use forecast error and
natural predictor
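A minimal sketch of the natural vs. corrected predictor, with hypothetical data:

```python
# Natural and corrected predictors in a log-linear model (hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
X = rng.uniform(0, 10, 150)
lnY = 1 + 0.2 * X + rng.normal(0, 0.5, 150)
res = sm.OLS(lnY, sm.add_constant(X)).fit()

X0 = 8
lnY0_hat = res.params[0] + res.params[1] * X0
sigma2_hat = res.ssr / res.df_resid            # estimated error variance

natural = np.exp(lnY0_hat)                     # exponentiate the fitted lnY
corrected = natural * np.exp(sigma2_hat / 2)   # multiply by exp(sigma2_hat/2)
print(f"natural: {natural:.2f}, corrected: {corrected:.2f}")
```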
Model specification
1. Choose variables and functional form based on economic theory and your prior beliefs
2. Look at the signs and magnitudes of your coefficients. Do any of them contradict your economic intuition? If so, you could have model misspecification
3. Conduct significance tests for individual coefficients (t-test) and a joint test of overall significance (F-test). Are they consistent? If not, you may have model misspecification
4. Check the fit of the model using model selection criteria
5. Check assumptions MR1-MR6
6. Test for misspecification: omitted variables, irrelevant variables, multicollinearity, and functional form
Model selection criteria across models
• R²: want the highest R²
– R² = SSR/SST = 1 – (SSE/SST) = (corr[Y,Ŷ])²
– Interpretation: the proportion of variation in Y explained by the model
– Use when comparing models with the same # of independent variables
• Adjusted R²: want the highest adjusted R²
– Adj R² = 1 – (SSE/SST)*(N–1)/(N–K) = 1 – {(1 – R²)*(N–1)/(N–K)}
– Can no longer be interpreted as the proportion of explained variation in Y
– Use when comparing models with different # of independent variables
– Adding variables can increase R² without actually adding any info
• AIC: want the smallest AIC
– AIC = ln(SSE/N) + 2K/N
• SC: want the smallest SC
– SC = ln(SSE/N) + K*ln(N)/N
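A minimal sketch computing these criteria from the slide formulas for one fitted model, with hypothetical data (note the slide's AIC/SC definitions can differ by an additive constant from statsmodels' built-in aic/bic, so compare only across models computed the same way):

```python
# R2, adjusted R2, AIC, and SC from the slide formulas (hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
X2, X3 = rng.normal(size=(2, 100))
Y = 1 + 0.5 * X2 + 0.2 * X3 + rng.normal(0, 1, 100)

exog = sm.add_constant(np.column_stack([X2, X3]))
res = sm.OLS(Y, exog).fit()

N, K = len(Y), exog.shape[1]
SSE = res.ssr
SST = np.sum((Y - Y.mean())**2)
R2 = 1 - SSE / SST
adj_R2 = 1 - (1 - R2) * (N - 1) / (N - K)
AIC = np.log(SSE / N) + 2 * K / N
SC = np.log(SSE / N) + K * np.log(N) / N
print(f"R2={R2:.3f}, adj R2={adj_R2:.3f}, AIC={AIC:.3f}, SC={SC:.3f}")
```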
Misspecification problems to watch out for:
• Omitted variables
– Test: RESET, see if adding variables changes your estimates (could
be omitted variables)
– Result: coefficients are biased (E[b2]≠β2)
– Fix: control for more X’s, randomize, diff-in-diff if panel
• Irrelevant variables
– Test: individual t-tests and joint F-tests (insignificant means
either irrelevant or poor data)
– Result: inflates standard errors, increases R2 arbitrarily
– Fix: take out irrelevant variables that don’t have a strong theoretical argument for inclusion
• Functional form misspecified
– Test: RESET
– Fix: add in polynomials, interaction terms, transform model
(Note: RESET doesn’t discriminate between different functional 
forms)
RESET test
• Tests for omitted variables and functional form
misspecification

1. Run the original regression and get the fitted values Ŷ
2. Run auxiliary regressions that add Ŷ² (and Ŷ³) as regressors
3. Test the significance of the coefficients on Ŷ² and Ŷ³ – if significant, then the model may be misspecified
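A minimal sketch of a RESET test done by hand (hypothetical data where the true relationship is quadratic but the fitted model is linear, so the test should flag misspecification):

```python
# RESET test: add Yhat^2 and Yhat^3 and F-test their joint significance.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(9)
X = rng.uniform(1, 10, 200)
Y = 2 + 0.5 * X**2 + rng.normal(0, 2, 200)     # true model is quadratic

exog = sm.add_constant(X)                      # misspecified: linear only
base = sm.OLS(Y, exog).fit()
yhat = base.fittedvalues

aug = sm.OLS(Y, np.column_stack([exog, yhat**2, yhat**3])).fit()
J, N, K = 2, len(Y), 4
F = ((base.ssr - aug.ssr) / J) / (aug.ssr / (N - K))
p = 1 - stats.f.cdf(F, J, N - K)
print(f"RESET F = {F:.1f}, p = {p:.4f}")       # small p => possible misspecification
```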
Multicollinearity
• Test:
– Insignificant individual t-tests but significant overall F-test
– Adding/deleting a few observations or deleting insignificant variables
changes the sign and magnitude of coefficient estimates a lot
– Look at pairwise correlations
– Regress each X on the other X's and look at R-squared. Compute
VIF=1/(1-R2) for each auxiliary regression and compare to rule of
thumb (VIF>5 is bad, want close to 1)
• Result:
– Data doesn’t contain enough info to estimate all parameters precisely: 
less precision, bigger standard errors, can’t isolate individual effects
– Large forecast error if collinear relationship changes out of sample
• Fix:
– Get better data set
– Introduce nonsample info to restrict parameters and do restricted
least squares (reduces variance but restrictions have to hold exactly;
otherwise estimates are biased)
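A minimal sketch of the VIF check, using hypothetical data with two deliberately collinear regressors:

```python
# VIF = 1/(1 - R2) from auxiliary regressions of each X on the other X's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
X2 = rng.normal(size=150)
X3 = 0.95 * X2 + 0.1 * rng.normal(size=150)    # highly collinear with X2
X = np.column_stack([X2, X3])

for j in range(X.shape[1]):
    others = np.delete(X, j, axis=1)
    aux = sm.OLS(X[:, j], sm.add_constant(others)).fit()
    vif = 1 / (1 - aux.rsquared)
    print(f"VIF for X{j + 2}: {vif:.1f}")       # rule of thumb: VIF > 5 is bad
```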
Restricted Least Squares
• A quick example:
– Original model: Y = β1+β2X+β3W+e
– If the following restriction holds exactly, restricted
least squares is unbiased and will reduce variance:
β2=β3
– To run the RLS regression, plug in the restriction and
collect like terms (the beta’s):
• Y = β1+β2X+β2W+e → Y = β1+β2(X+W)+e
• Now, we estimate the restricted model with a “new” 
explanatory variable, (X+W)
– This only works if the restriction holds exactly. If not, the RLS estimates are biased. You need good nonsample information to do RLS
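A minimal sketch of this example, with hypothetical data generated so that the restriction β2 = β3 holds exactly:

```python
# Restricted least squares: impose beta2 = beta3 by regressing Y on (X + W).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
X, W = rng.normal(size=(2, 200))
Y = 1 + 0.7 * X + 0.7 * W + rng.normal(0, 1, 200)    # beta2 = beta3 = 0.7

unrestricted = sm.OLS(Y, sm.add_constant(np.column_stack([X, W]))).fit()
restricted = sm.OLS(Y, sm.add_constant(X + W)).fit() # "new" variable (X + W)

print("unrestricted b2, b3:", unrestricted.params[1:])
print("restricted common slope:", restricted.params[1])
print("se of slope, unrestricted vs restricted:",
      unrestricted.bse[1], restricted.bse[1])
```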
Violation of assumptions MR1-MR6
• Heteroskedasticity
– Result: OLS is no longer best (it no longer has the lowest variance among linear unbiased estimators) and the standard errors of the estimates are wrong (so CIs and hypothesis tests are wrong)
– Fix: robust standard errors, GLS
• Exact collinearity
– Result: cannot estimate OLS at all
– Fix: take out exactly collinear variable, be careful of dummy
variable trap
• Error terms are not normal
– Test: Jarque-Bera, plot histogram
– Result: hypothesis tests and CIs may be unreliable, since they rely on normality (especially if the sample size is small)
– Fix: transform explanatory variables, try different functional
form (take logs, etc.)
