
VAR is used when we are dealing with stationary variables where we may have an endogeneity problem.

When we are dealing with non-stationary data we must use an alternative method: cointegration.

Revision of Stationarity

Quick revision of non-stationary series:

Consider the series:

Yt = φYt-1 + ut

This series is stationary if |φ| < 1

If |φ| = 1, we can rewrite this as:

Yt = Yt-1 + ut

Recall this series has a unit root (i.e. is non-stationary). The series is difference stationary though, since ΔYt is stationary:

ΔYt = Yt − Yt-1 = ut

Each period the series changes by some random amount ut.

[note that the average change in Yt is 0, since E(ut)=0]

Example: Yt = Yt-1 + ut
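A quick simulation illustrates this (a minimal numpy sketch; the seed and sample size are arbitrary choices): the level of a random walk wanders, but its first difference is just the white-noise shock series.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=2000)        # white-noise shocks u_t

# Random walk: Y_t = Y_{t-1} + u_t (phi = 1, a unit root)
Y = np.cumsum(u)

# First difference: dY_t = Y_t - Y_{t-1} = u_t, which is stationary
dY = np.diff(Y)

print(np.allclose(dY, u[1:]))    # True: differencing recovers the shocks
print(abs(dY.mean()))            # close to 0, as E(u_t) = 0
```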

Difference Stationary with drift.

If the series includes an intercept (or drift term), α0:

Yt = α0 + Yt-1 + ut

Again this series is non-stationary [φ = 1 again]

Each period the series changes by a certain amount (α0) plus a random amount (ut), i.e. there is a trend in the series!! But if we look at ΔYt:

ΔYt = Yt − Yt-1 = α0 + ut. This series is stationary, but instead of fluctuating around E(ut) = 0, it fluctuates around E(α0 + ut) = E(α0) = α0.

Example: Yt = α0 + Yt-1 + ut
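A minimal sketch of the drift case (numpy; the drift value 0.5 is an arbitrary illustrative choice) showing that the differenced series fluctuates around the drift term rather than around zero:

```python
import numpy as np

rng = np.random.default_rng(1)
a0 = 0.5                         # drift term (arbitrary illustrative value)
u = rng.normal(size=5000)

# Random walk with drift: Y_t = a0 + Y_{t-1} + u_t
Y = np.cumsum(a0 + u)

# Differences: dY_t = a0 + u_t, stationary around a0, not 0
dY = np.diff(Y)
print(dY.mean())                 # close to a0 = 0.5
```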

Trend Stationary

If the series also includes a time trend, α1t:

Yt = α0 + α1t + φYt-1 + ut

This series is non-stationary even if |φ| < 1, due to the trend term (since Yt increases (or decreases) over time, meaning the average isn't constant).

ΔYt = Yt − Yt-1 = α0 + α1t + ut (taking φ = 1). Note the average of this is E(α0 + α1t), which is time dependent => still non-stationary. To make this series stationary, we de-trend the original series!

Yt − α1t = α0 + φYt-1 + ut, which is stationary if |φ| < 1. Or if |φ| = 1, then Δ(Yt − α1t) is stationary.

Example: Yt = α0 + α1t + φYt-1 + ut
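The de-trending step can be sketched as follows (numpy; the intercept and slope values are arbitrary, and φ is set to 0 purely for simplicity): regress Yt on a constant and t, and keep the residuals.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
t = np.arange(n)
a0, a1 = 1.0, 0.05               # arbitrary intercept and trend slope

# Trend-stationary series: stationary fluctuations around a0 + a1*t
Y = a0 + a1 * t + rng.normal(size=n)

# De-trend: OLS of Y on (1, t), then take the residuals
X = np.column_stack([np.ones(n), t])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
detrended = Y - X @ coef

# The de-trended series fluctuates around a constant (zero) mean,
# while the original series does not
print(detrended.std() < Y.std())
```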

Cointegration and Error Correction Models

OLS Regression with non-stationary data: The spurious regression problem

Imagine we have two series (Y and X) which are non-stationary. If both these series display a trend (either deterministic or stochastic), the series will be highly correlated with each other, even if there is no true relationship between them. Thus if we carry out an OLS regression of Y on X, we will find that X seems to explain a good portion of Y.

Silly example

To give a stupid example:

Suppose we run a regression with an index of shares as our Y variable and your height as the X variable. Well, equity indices generally grew over the past 25 years. Your height will also have increased over the last 25 years. So OLS would find a significant relationship between height and equities.

But the key point is that in reality shares don't increase whenever your height increases! => It was a false result due to both series tending to increase over time!
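The spurious regression effect can be seen directly in a small Monte Carlo sketch (numpy; sample size and replication count are arbitrary): regressing one independent random walk on another yields a far larger R² on average than regressing one independent white-noise series on another.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 500, 200

def r2_of_regression(y, x):
    """R-squared from an OLS regression of y on a constant and x."""
    A = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

# Average R^2 when regressing one independent random walk on another,
# versus regressing independent white noise on white noise
r2_walks = np.mean([r2_of_regression(np.cumsum(rng.normal(size=n)),
                                     np.cumsum(rng.normal(size=n)))
                    for _ in range(reps)])
r2_noise = np.mean([r2_of_regression(rng.normal(size=n),
                                     rng.normal(size=n))
                    for _ in range(reps)])

print(r2_walks, r2_noise)        # the "spurious" R^2 is far larger
```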

Explanation of result

Recall that OLS coefficients can be interpreted as how much Y changes on average when X changes by one unit. They are based only on the degree of association, not causation! Since both tend to increase, there is positive association, but there is no causation!

The spurious regression problem

What our regression tells us:

As X increases, Y increases => X and Y are related to each other.

What is really going on:

As t increases, X and Y are both increasing => X and Y are both related to t, but may not really be related to each other.

The spurious regression problem (contd.)

Our model will appear to fit well

A high R2 and high t-ratios indicate that the apparent explanatory power of the regression is very high, suggesting (falsely) a very good result.

In this case the trend in both variables is related, but not explicitly modelled, causing autocorrelation. But as the trends in the two variables are related, the explanatory power is high. Granger and Newbold (1974) proposed the following rule of thumb for detecting spurious regressions: if the R-squared statistic is larger than the DW (Durbin–Watson) statistic, or if R-squared ≈ 1, then the regression is likely spurious.

Note: the DW statistic measures first-order autocorrelation in the residuals.
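A sketch of the DW statistic itself (numpy; the series are hypothetical): for white noise it is near 2, while for a strongly autocorrelated series such as a random walk it is near 0.

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2).
    Near 2 => no first-order autocorrelation;
    near 0 => strong positive autocorrelation."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(4)
white = rng.normal(size=1000)    # no autocorrelation
walk = np.cumsum(white)          # heavily autocorrelated

print(durbin_watson(white))      # near 2
print(durbin_watson(walk))       # near 0
```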

A Possible Solution to Spurious regression problem

Since the problem is caused by stochastic or deterministic trends, the obvious way to solve the problem is to get rid of the trend:

Stochastic trend => difference the data => stationary

Deterministic trend => de-trend the data

Both stochastic and deterministic trends => take first differences and then de-trend

Stationary series don't have trends => problem solved! Or is it?

Problems with this solution:

If we have differenced the series, we are now looking at the relationship between changes in the variables rather than in the levels. The variables in this form may not be in accordance with the original theory. This model could be omitting important long-run information: differenced variables are usually thought of as representing the short run [since a difference is only the change since the last period]. This model may not have the correct functional form.

Question: So, given that differencing may be undesirable, is there a way to estimate regressions involving non-stationary variables but allowing us to keep the variables in levels?
Answer: Yes, if there is an equilibrium relationship between the variables, otherwise No!


The basic idea:

If theory tells us that there is some equilibrium relationship between the variables, then their stochastic trends must cancel out. Why?

Well, if they didn't, the stochastic trend in one of the variables would take us away from the equilibrium and we might never return (since the series is non-stationary, it doesn't have to return to its previous level!).
Ex. Imagine house prices are related only to annual rents. Then the price in a period should on average be a certain number of times the annual rent (say 20 times here!)
If, over time, rents are increasing due to a stochastic trend, then for there to be an equilibrium relationship, house prices must also increase with the stochastic trend! Otherwise the series diverge. We can't have an equilibrium where rents are increasing but house prices remain constant!
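The idea can be sketched numerically (numpy; the 20x multiple and noise scales are invented for illustration): rents follow a random walk, prices track 20 times rents plus a stationary deviation, so prices inherit the stochastic trend and the combination price − 20 × rent stays bounded.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000

# Rents follow a random walk (stochastic trend); numbers are hypothetical
rent = 100 + np.cumsum(rng.normal(scale=0.5, size=n))

# In equilibrium prices are 20x annual rents, plus a stationary deviation
price = 20 * rent + rng.normal(scale=5, size=n)

# Both series wander, but the combination price - 20*rent does not:
# it fluctuates around 0 with bounded spread
deviation = price - 20 * rent
print(deviation.mean(), deviation.std())
```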


Now we will look at it in terms of Yt and some explanatory variables X1t and X2t. Suppose the model for Yt is correctly specified as Yt = β0 + β1X1t + β2X2t + et, where Yt, X1t and X2t are non-stationary series. For there to be an equilibrium relationship in this model, Yt can't diverge indefinitely from the explained part of the equation.

It can diverge for a while, as long as it will eventually return

i.e. Yt can't diverge indefinitely from β0 + β1X1t + β2X2t

Ex. the simple housing model, applied to the recent history in Ireland: house prices had been increasing more quickly than rents, i.e. diverging from their equilibrium value. This was unsustainable according to our simple theory. Now house prices have begun to fall, restoring the equilibrium relationship, although they currently may have further to fall. In reality, there are more variables at play than just rent.

Example of cointegrated series: Time series of consumption and income


Yt can't diverge indefinitely from β0 + β1X1t + β2X2t

Think of β0 + β1X1t + β2X2t as the equilibrium value of Yt.

So what does this mean?

Well, if we look at the difference between Yt and the explained part, Yt − (β0 + β1X1t + β2X2t): for there to be an equilibrium, this must eventually return to 0 (i.e. the equilibrium must be restored), i.e. Yt − (β0 + β1X1t + β2X2t) must be stationary!

Cointegration Linear Combinations of Integrated Variables

So: Yt − (β0 + β1X1t + β2X2t) must be stationary!

But: et = Yt − (β0 + β1X1t + β2X2t) [since Yt = β0 + β1X1t + β2X2t + et]

Thus, et must be stationary if the theory underlying our specification is correct

If et is not stationary => our theory must be incorrect: the variables are not related, and there is no equilibrium relationship between them.

If et has a stochastic trend there will be no tendency for the equilibrium relationship between yt, x1t & x2t to be restored. Remember: if et has a stochastic trend, shocks to et have a permanent, albeit random, effect.
Crucial insight: equilibrium theories involving non-stationary variables require the existence of a combination of the variables that is stationary.
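This is the idea behind residual-based (Engle–Granger style) cointegration testing. A sketch under invented parameter values: construct Y = 1 + 2X + e with X a random walk and e stationary, estimate the levels regression by OLS, then check that the residuals mean-revert via a simple Dickey–Fuller-style regression of Δe on e at t−1. (A real test would compare the statistic against the appropriate critical values, omitted here.)

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000

# Cointegrated by construction: X is I(1), Y = 1 + 2*X + stationary error
X = np.cumsum(rng.normal(size=n))
Y = 1 + 2 * X + rng.normal(size=n)

# Step 1: estimate the equilibrium relationship in levels by OLS
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
e = Y - A @ coef                 # estimated equilibrium error

# Step 2: Dickey-Fuller-style check: regress de_t on e_{t-1};
# a clearly negative slope means e mean-reverts, i.e. is stationary
de, e_lag = np.diff(e), e[:-1]
rho = (e_lag @ de) / (e_lag @ e_lag)
print(coef[1], rho)              # slope near 2, rho well below 0
```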


In the long run equilibrium

yt − β0 − β1x1t − β2x2t = 0

et = yt − β0 − β1x1t − β2x2t

And the equilibrium error et = the deviation from equilibrium, which is stationary.

In this case the variables yt, x1t & x2t are said to be cointegrated of order CI(d,b)

d -> the number of times the variables have to be differenced to make them stationary [i.e. the variables are integrated of order d, I(d)]. b -> the reduction in the order of integration resulting from the cointegration. [This may be a little confusing: in our example d = 1 because the variables were I(1); after cointegration our et is I(0), so b = (1 − 0) => b = 1.]

Usually we deal with variables that are CI(1,1). The vector β = (1, −β0, −β1, −β2) is said to be the cointegrating vector.

Cointegration Linear Combinations of Integrated Variables


In general, if β = (1, −β0, −β1, −β2) is a cointegrating vector, then λβ = (λ, −λβ0, −λβ1, −λβ2) is also a cointegrating vector.

In other words, the cointegrating vector is not unique.

In estimation we need to normalize the cointegrating vector by fixing one of the coefficients at unity [here we fixed the coefficient on Yt to be 1]. All variables must be integrated of the same order.

If variables are integrated of different orders then they cannot be cointegrated.

A series with a unit root and a stationary series have different orders of integration, so there can't be an equilibrium relationship between them: the I(1) series moves randomly while the I(0) series doesn't.

If there are n cointegrated variables, there can be up to n-1 linearly independent cointegrating vectors

To get an idea what we mean by this suppose house prices tend to be 5 times rents

i.e. House price = 5*Rent. Then 2*House price = 10*Rent, and 3*House price = 15*Rent. So, applied to the vector (House price, Rent), [1, −5], [2, −10] and [3, −15] would all be cointegrating vectors!
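This can be checked mechanically (a numpy sketch with an exactly-scaled toy series; note the sign convention: a cointegrating vector must send the variable combination to something stationary, so if House price = 5 × Rent the rent coefficient enters with a minus sign):

```python
import numpy as np

rng = np.random.default_rng(7)
rent = 50 + np.cumsum(rng.normal(size=1000))   # non-stationary rents
house = 5 * rent                               # exact equilibrium: price = 5 * rent

data = np.column_stack([house, rent])
for beta in ([1, -5], [2, -10], [3, -15]):
    # every scalar multiple of the cointegrating vector kills the
    # common stochastic trend
    combo = data @ np.array(beta, dtype=float)
    print(beta, np.allclose(combo, 0))         # True for each
```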