
EC306

Michael P Clements
Handout 1
1 Course Outline & Reading
The module will cover the econometrics of time series and will include:
a brief review of some matrix algebra and maximum likelihood estimation
the rationale for dynamic models, and simple time-series models
time-series models and model selection & forecasting
unit roots and testing
spurious regression versus cointegration
multivariate models and cointegration
modelling second moments: ARCH and GARCH models
non-linearities in mean.
Text Books
The main textbooks are:
Verbeek, M., A Guide to Modern Econometrics, 2nd or 3rd edition, Wiley.
Johnston, J. and DiNardo, J., Econometric Methods, 4th edition, McGraw Hill, 1997.
Additional useful references are:
Harris, R. D. and Sollis, R., Applied Time Series Modelling and Forecasting, Wiley, 2003.
Higher-level texts include:
Banerjee, A., Dolado, J.J., Galbraith, J.W. and Hendry, D.F. (1993) Co-integration, Error Correction and the Econometric Analysis of Non-Stationary Data. Oxford: Oxford University Press.
Davidson, R. and MacKinnon, J.G. (1993) Estimation and Inference in Econometrics. Oxford: Oxford University Press.
2 A selective review of matrix algebra
Verbeek Appendix A.
3 Vector multiplication
3.1 Scalar, dot or inner product
If a and b are $n \times 1$ vectors:
$$
a'b = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}
= \sum_{i=1}^{n} a_i b_i = b'a
$$
Example. If $i' = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}$, then
$$
i'b = \sum_{i=1}^{n} b_i .
$$
3.2 Outer product
$$
ab' = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
\begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix}
= \begin{bmatrix}
a_1 b_1 & a_1 b_2 & \cdots & a_1 b_n \\
a_2 b_1 & a_2 b_2 & \cdots & a_2 b_n \\
\vdots  &         & \ddots & \vdots  \\
a_n b_1 & a_n b_2 & \cdots & a_n b_n
\end{bmatrix}
$$
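These operations are easy to check numerically. A minimal sketch in Python/NumPy (the particular vectors are arbitrary examples):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Inner (dot) product: a'b = sum_i a_i b_i, a scalar, and a'b = b'a
inner = a @ b
print(inner, b @ a)            # 32.0 32.0

# Outer product: a b' is an n x n matrix with (i, j) element a_i b_j
outer = np.outer(a, b)
print(outer.shape)             # (3, 3)

# Summation vector i = (1, 1, ..., 1)': i'b adds up the elements of b
iota = np.ones(3)
print(iota @ b, b.sum())       # 15.0 15.0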
4 Matrices
4.1 Equality
$A = B$ iff $a_{ij} = b_{ij}$ for all $i, j$.
4.2 Transpose
The first row becomes the first column, the second row the second column, etc.
$$
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \qquad
A' = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}
$$
$$
\left(A'\right)' = A
$$
$$
(A + B)' = A' + B'
$$
4.3 Symmetric matrix
A matrix is symmetric if $A' = A$, that is, $a_{ij} = a_{ji}$ for $i \neq j$. This requires that $A$ is square.
4.4 Multiplication
By a scalar.
If $A$ is $m \times n$ and $c$ is $1 \times 1$ (a scalar), then $cA = Ac$ is $m \times n$ with elements $c\,a_{ij}$.

A matrix multiplied by a matrix.
If $B$ is $p \times q$, then $C = AB$ is defined if $n = p$, and $C$ is $m \times q$, with:
$$
c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \qquad i = 1, \ldots, m; \; j = 1, \ldots, q.
$$
Typically $AB \neq BA$.
$$
C' = (AB)' = B'A'
$$
$$
(ABC)' = C'B'A'
$$
$$
(A + B) + C = A + (B + C)
$$
$$
(AB)\,C = A\,(BC)
$$
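A short NumPy check of these rules (the matrices below are arbitrary examples):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # 2 x 3
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])            # 3 x 2

C = A @ B                             # defined because A has 3 columns and B has 3 rows
D = B @ A
print(C.shape, D.shape)               # (2, 2) vs (3, 3): AB != BA (not even the same size here)

# Transpose of a product reverses the order: (AB)' = B'A'
print(np.allclose(C.T, B.T @ A.T))    # True

# Matrix multiplication is associative: (AB)C = A(BC)
E = np.arange(8.0).reshape(2, 4)
print(np.allclose((A @ B) @ E, A @ (B @ E)))   # True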
4.5 Diagonal matrix
Square, with all elements off the leading diagonal equal to zero.
$$
A = \begin{bmatrix}
a_1 & 0 & \cdots & 0 \\
0 & a_2 & & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & a_n
\end{bmatrix}
$$
Sometimes written $A = \mathrm{diag}\{a_1, a_2, \ldots, a_n\}$.
4.6 Identity matrix
Diagonal matrix with ones on leading diagonal.
$I_n$ denotes an $n \times n$ identity matrix.
If $A$ is $m \times n$:
$$
I_m A = A I_n = A.
$$
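In NumPy terms (an illustrative sketch):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # A is 2 x 3

D = np.diag([1.0, 2.0, 3.0])          # diagonal matrix diag{1, 2, 3}
I2, I3 = np.eye(2), np.eye(3)         # identity matrices I_2 and I_3

# For an m x n matrix A: I_m A = A I_n = A
print(np.allclose(I2 @ A, A), np.allclose(A @ I3, A))   # True True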
5 Inverse matrix
For a square matrix $A$ of order $n$ (i.e., $n$ rows and columns), its inverse exists if there is a square matrix $B$ of the same order such that
$$
AB = I_n
$$
and then $B$ is usually written as $A^{-1}$.
The inverse will exist providing that the columns of A are linearly independent (LI). That
is, partitioning A as
$$
A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}
$$
where the a
i
are the columns of A, linear independence holds if the only solution to
$$
\lambda_1 a_1 + \lambda_2 a_2 + \ldots + \lambda_n a_n = 0
$$
is $\lambda_1 = \lambda_2 = \ldots = \lambda_n = 0$. That is, it is impossible to calculate one column by taking linear combinations of the others. (Note: the $\lambda_i$ are scalars.)
If the columns are LI, then so are the rows.
If the inverses exist:
$$
AA^{-1} = A^{-1}A = I
$$
$$
(AB)^{-1} = B^{-1}A^{-1}
$$
which requires that $A$ and $B$ are square.
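A small NumPy check of these properties (the invertible matrices are arbitrary examples):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])

Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(2)))                             # A A^{-1} = I
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv))   # (AB)^{-1} = B^{-1} A^{-1}

# A matrix whose second column is twice its first has linearly dependent columns: no inverse
S = np.array([[1.0, 2.0],
              [3.0, 6.0]])
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("S is singular: its columns are not linearly independent")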
6 Rank
Rank is the number of linearly independent columns.
If $A$ is $m \times n$, $\mathrm{rank}(A) \leq \min(m, n)$.
Suppose $\alpha$ and $\beta$ are both $n \times r$, with $r < n$, and both matrices have rank $r$ (their columns are linearly independent). The rank of the $n \times n$ matrix $\Pi = \alpha\beta'$ is given by $r$.
Example. Suppose r = 2:
$$
\Pi = \alpha\beta' = \begin{bmatrix}
\alpha_{11} & \alpha_{12} \\
\alpha_{21} & \alpha_{22} \\
\vdots & \vdots \\
\alpha_{n1} & \alpha_{n2}
\end{bmatrix}
\begin{bmatrix} \beta_1' \\ \beta_2' \end{bmatrix}
$$
where $\beta = \begin{bmatrix} \beta_1 & \beta_2 \end{bmatrix}$.
Then the first row of $\Pi$ is $\alpha_{11}\beta_1' + \alpha_{12}\beta_2'$, the second row is $\alpha_{21}\beta_1' + \alpha_{22}\beta_2'$, and so on. Each row is a linear combination (with weights $\alpha_{i1}, \alpha_{i2}$) of the same two rows ($\beta_1', \beta_2'$), so there are only 2 independent rows.
7 Determinants
The determinant of a square matrix is zero if the matrix is of less than full rank (rank less than the number of rows or columns). Such matrices are singular.
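The r = 2 example and the link to a zero determinant can be verified numerically (a NumPy sketch; the entries of alpha and beta are arbitrary random numbers):

import numpy as np

n, r = 4, 2
rng = np.random.default_rng(0)
alpha = rng.standard_normal((n, r))   # n x r with rank r (columns almost surely independent)
beta = rng.standard_normal((n, r))    # n x r with rank r

Pi = alpha @ beta.T                   # n x n, but every row is a combination of the two rows of beta'

print(np.linalg.matrix_rank(alpha), np.linalg.matrix_rank(beta))   # 2 2
print(np.linalg.matrix_rank(Pi))                                   # 2, not 4: Pi is less than full rank
print(np.isclose(np.linalg.det(Pi), 0.0))                          # True: Pi is singular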
8 Maximum Likelihood
An estimation method that assumes we know the distribution of the data except for a small number of unknown parameters. We estimate these parameters so as to maximize the probability (likelihood) of observing the values that we actually observed.
Simple illustration. We observe a sample of $n$ values of $X_i$, $i = 1, \ldots, n$.
We know these are a random sample from an $N(\mu, 1)$ distribution.
$\mu$ is the only unknown parameter.
To estimate $\mu$ by ML, write down the probability (density) of observing $X_i$:
$$
f(X_i \mid \mu) = \frac{1}{\sqrt{2\pi}} \exp\left\{ -\frac{1}{2}(X_i - \mu)^2 \right\}
$$
Because the $X_i$ are assumed independent, the joint density of the sample of $n$ observations is given by:
$$
f(X_1, \ldots, X_n \mid \mu) = \prod_{i=1}^{n} f(X_i \mid \mu)
= \left( \frac{1}{\sqrt{2\pi}} \right)^{n} \prod_{i=1}^{n} \exp\left\{ -\frac{1}{2}(X_i - \mu)^2 \right\}
$$
The log likelihood is:
$$
\log L(\mu) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\sum_{i=1}^{n}(X_i - \mu)^2 .
$$
Maximizing this with respect to $\mu$:
$$
\frac{\partial \log L(\mu)}{\partial \mu} = 0 \;\Rightarrow\; \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} X_i
$$
because the first term of the log likelihood does not depend on $\mu$.
In this case the MLE is therefore equivalent to minimizing the residual sum of squares, which defines the OLS estimator.
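A quick simulation illustrates this (a NumPy sketch; the data are generated here purely for illustration, with true mu = 2):

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=200)   # a sample from N(mu, 1) with mu = 2

def log_lik(mu, x):
    # log L(mu) = -(n/2) log(2 pi) - 0.5 * sum (x_i - mu)^2
    n = x.size
    return -0.5 * n * np.log(2.0 * np.pi) - 0.5 * np.sum((x - mu) ** 2)

# Evaluate the log likelihood on a grid of candidate values for mu
grid = np.linspace(1.0, 3.0, 2001)
values = np.array([log_lik(m, x) for m in grid])

print(grid[values.argmax()], x.mean())   # the grid maximizer coincides (to grid accuracy) with the sample mean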
More generally, $y_i \sim N(\mu, \sigma^2)$, so the variance is now also assumed unknown.
$$
f(y_i) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2\sigma^2}(y_i - \mu)^2 \right\}
$$
$$
\log L(\mu, \sigma^2; y) = \sum_{i=1}^{N} \left\{ -\frac{1}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}(y_i - \mu)^2 \right\}
$$
First order conditions:
$$
\sum_{i=1}^{N} (y_i - \hat{\mu}) = 0 \;\Rightarrow\; \hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} y_i
$$
$$
-N\hat{\sigma}^2 + \sum_{i=1}^{N} (y_i - \hat{\mu})^2 = 0 \;\Rightarrow\; \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{\mu})^2
$$
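Solving the first-order conditions gives the sample mean and the sample variance with divisor N; a brief numerical check (simulated data, true values mu = 1.5 and sigma^2 = 4):

import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=1.5, scale=2.0, size=500)   # y_i ~ N(mu, sigma^2) with mu = 1.5, sigma^2 = 4

mu_hat = y.mean()                              # solves sum(y_i - mu_hat) = 0
sigma2_hat = np.mean((y - mu_hat) ** 2)        # solves -N sigma2_hat + sum (y_i - mu_hat)^2 = 0

print(mu_hat, sigma2_hat)
# Note the ML estimator of sigma^2 divides by N, not N - 1
print(np.isclose(sigma2_hat, np.var(y)))       # True: np.var uses divisor N by default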
The same approach can be used to estimate the parameters on explanatory variables:
$$
y_i = \alpha + \beta x_i + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2)
$$
$$
y_i \sim N(\alpha + \beta x_i, \sigma^2), \qquad i = 1, \ldots, N.
$$
See Verbeek 6.1 for more on this.
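With normal errors, maximizing the likelihood over (alpha, beta, sigma^2) reproduces the OLS estimates of alpha and beta. A sketch using simulated data (the true values alpha = 1, beta = 0.5, sigma = 1.5 are arbitrary):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
N = 200
x = rng.uniform(0.0, 10.0, size=N)
y = 1.0 + 0.5 * x + rng.normal(scale=1.5, size=N)   # y_i = alpha + beta x_i + eps_i

def neg_log_lik(theta, y, x):
    alpha, beta, log_sigma2 = theta          # work with log(sigma^2) so that sigma^2 stays positive
    sigma2 = np.exp(log_sigma2)
    resid = y - alpha - beta * x
    return 0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + resid ** 2 / sigma2)

mle = minimize(neg_log_lik, x0=np.zeros(3), args=(y, x))

# OLS via the normal equations, for comparison
X = np.column_stack([np.ones(N), x])
ols = np.linalg.solve(X.T @ X, X.T @ y)

print(mle.x[:2])   # ML estimates of (alpha, beta)
print(ols)         # OLS estimates: numerically the same, up to optimizer tolerance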