Expected Value:
(2) E(aX + b) = aE(X) + b
(3) E(X + Y) = E(X) + E(Y)
(4) E(XY) = E(X)E(Y) for X, Y independent.
Variance:
(5) V(X) = E(X²) − [E(X)]²
(6) V(aX + b) = a²V(X)
(7) V(X + Y) = V(X) + V(Y) + 2Cov(X, Y)
Covariance:
(8) Cov(X, Y) = E(XY) − E(X)E(Y)
⇒ Cov(X, Y) = Cov(Y, X), Cov(X, X) = V(X),
and for independent X, Y: Cov(X, Y) = 0
(9) Cov(aX + b, cY + d) = ac Cov(X, Y)
(10) Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z)
Stationarity implies that the covariance and correlation of observations taken k periods apart are a function only of the lag k, not of the time point t.
Autocovariance Function
(11) γk = Cov[yt, yt−k] = E[(yt − µ)(yt−k − µ)], k = 0, 1, 2, . . .
Variance: γ0 = Var[yt].
Autocorrelation Function
(12) ρk = γk / γ0.
The Wold Theorem, to which we return later, assures that all covariance stationary processes can be built using white noise as building blocks. White noise is defined as follows.
Definition 3.2:
The time series ut is a white noise process if
E[ut] = µ for all t,
(13) Cov[ut, us] = 0 for all t ≠ s,
Var[ut] = σu² < ∞ for all t.
We denote ut ∼ WN(µ, σu²).
As an example consider the so-called first-order autoregressive model, defined below.
AR(1)-process:
(14) yt = φ0 + φ1yt−1 + ut
with ut ∼ WN(0, σ²) and |φ1| < 1.
[Figure: Realisations from AR(1) Model, rho = 0.5]
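A figure like the one above can be reproduced with a few lines of code. The following sketch (Python with numpy; all parameter values are chosen purely for illustration) simulates one realisation of the AR(1) process (14):

import numpy as np

def simulate_ar1(phi0, phi1, sigma, T, seed=0):
    # simulate y_t = phi0 + phi1 * y_{t-1} + u_t with u_t ~ N(0, sigma^2)
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma, T)
    y = np.empty(T)
    y[0] = phi0 / (1.0 - phi1) + u[0]      # start the recursion at the unconditional mean
    for t in range(1, T):
        y[t] = phi0 + phi1 * y[t - 1] + u[t]
    return y

y = simulate_ar1(phi0=0.0, phi1=0.5, sigma=1.0, T=200)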
The AR(1) process may be generalized to a pth-order autoregressive model by adding further lags as follows.
AR(p)-process:
(17) yt = φ0 + φ1yt−1 + φ2yt−2 + . . . + φpyt−p + ut,
where ut is again white noise with µ = 0. The process is stationary provided that all roots m of the characteristic equation
(18) m^p − φ1m^(p−1) − . . . − φp = 0
are inside the unit circle (have modulus less than one).
The moving average model of order q is
MA(q)-process:
(19) yt = c + ut + θ1ut−1 + . . . + θqut−q,
with ut ∼ WN(0, σ²). Its first two moments are
(20) E(yt) = c
(21) V(yt) = (1 + θ1² + θ2² + . . . + θq²)σ²
and its autocovariances are
(22) γk = (θk + θk+1θ1 + . . . + θqθq−k)σ²
for k ≤ q, and γk = 0 otherwise.
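As a numerical check of (20)-(22), the short sketch below (Python with numpy; the MA coefficients are illustrative) evaluates the theoretical autocovariances of an MA(q) process directly from formula (22), with θ0 = 1:

import numpy as np

def ma_autocovariances(theta, sigma2):
    # theoretical gamma_0, ..., gamma_q of y_t = c + u_t + theta_1 u_{t-1} + ... + theta_q u_{t-q}
    th = np.concatenate(([1.0], np.asarray(theta, dtype=float)))   # theta_0 = 1
    q = len(th) - 1
    gamma = np.zeros(q + 1)
    for k in range(q + 1):
        # gamma_k = sigma2 * sum_{j=0}^{q-k} theta_j * theta_{j+k}; zero for k > q
        gamma[k] = sigma2 * np.sum(th[: q - k + 1] * th[k:])
    return gamma

gamma = ma_autocovariances(theta=[0.6, -0.3], sigma2=1.0)   # illustrative MA(2)
print(gamma)   # gamma_0 = (1 + 0.6**2 + (-0.3)**2) * sigma2, etc.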
Autoregressive Moving Average (ARMA) model
The ARMA(p,q) model combines the autoregressive and moving average parts:
yt = φ0 + φ1yt−1 + . . . + φpyt−p + ut + θ1ut−1 + . . . + θqut−q.
It is stationary provided that the roots m of the characteristic equation
m^p − φ1m^(p−1) − . . . − φp = 0
all lie inside the unit circle.
Example: (Alexander, PFE, Example II.5.1)
Is the ARMA(2,1) process below stationary?
The characteristic equation is
m² − 0.75m + 0.25 = 0
with roots
m = (0.75 ± √(0.75² − 4 · 0.25)) / 2
  = (0.75 ± i√0.4375) / 2
  = 0.375 ± 0.3307i,
where i = √(−1) (the imaginary number). Both roots have modulus √(0.375² + 0.3307²) = 0.5 < 1, so they lie inside the unit circle and the process is stationary.
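The same calculation can be done numerically; a minimal sketch with numpy:

import numpy as np

# roots of the characteristic equation m^2 - 0.75 m + 0.25 = 0
roots = np.roots([1.0, -0.75, 0.25])
print(roots)           # approximately 0.375 +/- 0.3307i
print(np.abs(roots))   # modulus 0.5 < 1 for both roots, hence stationary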
Inversion and the Lag Operator
The lag operator L is defined by Lyt = yt−1, so that L^k yt = yt−k. In terms of lag-polynomials
φ(L) = 1 − φ1L − . . . − φpL^p and θ(L) = 1 + θ1L + . . . + θqL^q,
the ARMA(p,q) model can be written compactly as φ(L)yt = φ0 + θ(L)ut.
It turns out that the stationarity condition for
an ARMA process, that all roots of the char-
acteristic equation be inside the unit circle,
may be equivalently rephrased as the require-
ment that all roots of the polynomial
(30) φ(L) = 0
are outside the unit circle (have modulus larger
than one).
Example: (continued). The process can equivalently be written in moving average form,
(32) yt = µ + a(L)ut.
For example, a stationary AR(p) process
(33) φ(L)yt = φ0 + ut
can be equivalently represented as an infinite-order MA process of the form
(34) yt = φ(L)^(−1)(φ0 + ut),
where φ(L)^(−1) is an infinite series in L.
Similarly, a moving average process
(37) yt = c + θ(L)ut
may be written as an AR process of infinite order,
(38) θ(L)^(−1)yt = θ(L)^(−1)c + ut,
provided that the roots of θ(L) = 0 lie outside the unit circle, which is equivalent to the roots of the characteristic equation
m^q + θ1m^(q−1) + . . . + θq = 0
being inside the unit circle. In that case the MA process is called invertible. The same condition applies for invertibility of arbitrary ARMA(p,q) processes.
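For an invertible MA process the AR(∞) coefficients can be obtained by recursively inverting θ(L). The sketch below (Python with numpy; the coefficient values are illustrative and the function name is this sketch's own) returns the first coefficients of θ(L)^(−1):

import numpy as np

def ma_inverse_coefficients(theta, n):
    # first n coefficients pi_0, pi_1, ... of theta(L)^{-1},
    # where theta(L) = 1 + theta_1 L + ... + theta_q L^q
    th = np.asarray(theta, dtype=float)
    q = len(th)
    pi = np.zeros(n)
    pi[0] = 1.0
    for j in range(1, n):
        # pi_j = - sum_{i=1}^{min(j,q)} theta_i * pi_{j-i}
        m = min(j, q)
        pi[j] = -np.sum(th[:m] * pi[j - 1::-1][:m])   # pi[j-1::-1][:m] = (pi_{j-1}, ..., pi_{j-m})
    return pi

print(ma_inverse_coefficients([0.5], 6))   # for MA(1) with theta = 0.5: pi_j = (-0.5)**j

For an MA(1) with θ = 0.5 the output is 1, −0.5, 0.25, . . ., i.e. (−θ)^j, which dies out because |θ| < 1.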
Second Moments of General ARMA Processes
Example: Autocorrelation function for AR(1)
The first-order autocorrelation of the AR(1) process
yt = φ0 + φ1yt−1 + ut
is ρ1 = φ1. More generally,
(42) ρk = φ1^k,
so the autocorrelation function decays geometrically towards zero for increasing k, since |φ1| < 1.
Example: ACF for ARMA(1,1)
The variance of the ARMA(1,1) process is therefore
(49) γ0 = (1 + 2θφ + θ²)/(1 − φ²) · σ².
Its first-order covariance is
(50) γ1 = (φ + θ²φ + θφ² + θ)/(1 − φ²) · σ²,
and its first-order autocorrelation is
(51) ρ1 = (1 + φθ)(φ + θ)/(1 + 2θφ + θ²).
Higher-order autocorrelations follow the recursion ρk = φρk−1 for k ≥ 2.
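Formulas (49)-(51) and the recursion above translate directly into code; a small sketch (Python; the values of φ and θ are illustrative):

def arma11_acf(phi, theta, sigma2=1.0, nlags=10):
    # theoretical variance and autocorrelations of an ARMA(1,1) process,
    # using (49)-(51) and rho_k = phi * rho_{k-1} for k >= 2
    gamma0 = (1 + 2 * theta * phi + theta**2) / (1 - phi**2) * sigma2
    rho = [1.0, (1 + phi * theta) * (phi + theta) / (1 + 2 * theta * phi + theta**2)]
    for _ in range(2, nlags + 1):
        rho.append(phi * rho[-1])
    return gamma0, rho

gamma0, rho = arma11_acf(phi=0.7, theta=0.3)
print(gamma0, rho[:4])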
Response to Shocks
If the stationary ARMA process is written in MA(∞) form,
(52) yt = µ + ψ(L)ut,
then the coefficients ψk of the lag polynomial ψ(L) measure the response of yt+k to a unit shock in ut, that is, the impulse responses ∂yt+k/∂ut = ψk.
In order to find their values, recall from (28) that ψ(L) = θ(L)/φ(L), such that
(54) 1 + θ1L + . . . + θqL^q = (1 − φ1L − . . . − φpL^p)(1 + ψ1L + ψ2L² + . . .).
Multiplying out the right-hand side,
(55) 1 + θ1L + . . . + θqL^q = 1 + (ψ1 − φ1)L + (ψ2 − φ1ψ1 − φ2)L² + (ψ3 − φ1ψ2 − φ2ψ1 − φ3)L³ + . . .,
and matching the coefficients of equal powers of L gives
(56) ψ1 = θ1 + φ1
     ψ2 = θ2 + φ1ψ1 + φ2
     ψ3 = θ3 + φ1ψ2 + φ2ψ1 + φ3
     . . .
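The recursion (56) is easy to implement. The following sketch (Python with numpy; the φ values are taken from the ARMA(2,1) example above, while the θ value is illustrative) computes the first ψ-weights, i.e. the impulse responses:

import numpy as np

def psi_weights(phi, theta, n):
    # first n+1 MA(infinity) weights psi_0, ..., psi_n of an ARMA(p,q) process,
    # via psi_k = theta_k + phi_1 psi_{k-1} + ... + phi_p psi_{k-p} (theta_k = 0 for k > q)
    phi = np.asarray(phi, dtype=float)
    theta = np.asarray(theta, dtype=float)
    psi = np.zeros(n + 1)
    psi[0] = 1.0
    for k in range(1, n + 1):
        th_k = theta[k - 1] if k <= len(theta) else 0.0
        m = min(k, len(phi))
        psi[k] = th_k + np.sum(phi[:m] * psi[k - 1::-1][:m])   # psi[k-1::-1][:m] = (psi_{k-1}, ..., psi_{k-m})
    return psi

# phi_1 = 0.75, phi_2 = -0.25 as in Example II.5.1; theta_1 = 0.5 chosen for illustration
print(psi_weights(phi=[0.75, -0.25], theta=[0.5], n=6))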
Example: (Alexander, PFE, Example II.5.2)
Forecasting with ARMA models
The s-step-ahead forecast made at time T is
(57) ŷT+s = µ̂ + Σ_{i=1}^{p} φ̂i (ŷT+s−i − µ̂) + Σ_{j=1}^{q} θ̂j uT+s−j,
where ŷT+s−i is the observed value whenever s − i ≤ 0, and uT+s−j is set to zero for all future innovations, that is, whenever s > j.
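A direct implementation of (57) might look as follows (Python with numpy; the function and its handling of the observed series y and estimated residuals u are an illustrative sketch, not the routine of any particular package):

import numpy as np

def arma_forecast(y, u, mu, phi, theta, steps):
    # s-step-ahead forecasts from (57): observed y's and estimated residuals are used
    # where available, future innovations are set to zero (s > j)
    y_ext = list(y)                      # y_1, ..., y_T, extended by forecasts
    u_ext = list(u) + [0.0] * steps      # future innovations set to zero
    T = len(y)
    for s in range(1, steps + 1):
        ar = sum(phi[i] * (y_ext[T + s - 2 - i] - mu) for i in range(len(phi)))
        ma = sum(theta[j] * u_ext[T + s - 2 - j] for j in range(len(theta)))
        y_ext.append(mu + ar + ma)
    return np.array(y_ext[T:])

# e.g. arma_forecast(y, u, mu=y.mean(), phi=[0.7], theta=[0.3], steps=5)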
Example.
Partial Autocorrelation
Similarly, the sample autocorrelations are
(64) rk = ρ̂k = γ̂k / γ̂0.
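In practice the sample ACF and PACF are computed by standard routines. A minimal sketch using statsmodels (the series y is only a placeholder here; in an application it would be the data of interest):

import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(1)
y = rng.normal(size=500)           # placeholder series

r = acf(y, nlags=24, fft=False)    # sample autocorrelations r_0, ..., r_24
phi_kk = pacf(y, nlags=24)         # sample partial autocorrelations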
Statistical inference
Thus, the hypothesis
(67) H0 : ρk = 0
can be tested with the test statistic
(68) z = √T rk,
which is asymptotically N(0, 1) distributed under the null hypothesis (67).
The ’Portmanteau’ statistic for testing the hypothesis
(69) H0 : ρ1 = ρ2 = · · · = ρm = 0
is due to Box and Pierce (1970):
(70) Q*(m) = T Σ_{k=1}^{m} rk²,
m = 1, 2, . . ., which is asymptotically χ²-distributed with m degrees of freedom under the null hypothesis that all autocorrelations up to order m are zero.
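The statistic (70) is straightforward to compute from the sample autocorrelations; a sketch with numpy, scipy and statsmodels (again with a placeholder series):

import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import acf

def box_pierce(y, m):
    # Box-Pierce Q*(m) = T * sum_{k=1}^{m} r_k^2 and its chi^2_m p-value
    T = len(y)
    r = acf(y, nlags=m, fft=False)[1:]    # r_1, ..., r_m (drop lag 0)
    Q = T * np.sum(r**2)
    pvalue = stats.chi2.sf(Q, df=m)
    return Q, pvalue

rng = np.random.default_rng(2)
Q, p = box_pierce(rng.normal(size=500), m=10)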
Theoretically:
=======================================================
              acf                 pacf
-------------------------------------------------------
AR(p)         Tails off           Cut off after p
MA(q)         Cut off after q     Tails off
ARMA(p,q)     Tails off           Tails off
=======================================================
acf = autocorrelation function
pacf = partial autocorrelation function
Example: continued.
Neither a simple AR(1) model nor a simple MA(1) model is appropriate, because both still leave significant autocorrelation in the residuals (not shown).
[Figure: actual vs. theoretical autocorrelation and partial autocorrelation functions, lags up to 24]
Example: continued.
[Figure: actual vs. theoretical autocorrelation and partial autocorrelation functions, lags up to 24]
Example: continued.
Other popular tools for detecting the order of the model are Akaike's (1974) information criterion (AIC) and the (Schwarz) Bayesian information criterion (BIC).
============================
p q AIC BIC
----------------------------
0 0 10.87975 10.88972
1 0 10.70094 10.72093
0 1 10.75745 10.77740
2 0 10.67668 10.70673
1 1 10.65332 10.68332*
0 2 10.71069 10.74062
1 2 10.65248 10.69247
2 1 10.65096 10.69102
2 2 10.64279* 10.69287
============================
* = minimum
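A table like the one above is typically produced by fitting every candidate model and collecting its information criteria. A sketch using statsmodels (the series y is a placeholder, and the exact criterion values depend on the estimation options, so the numbers will not reproduce the table):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
y = rng.normal(size=500)                 # placeholder series

results = []
for p in range(3):
    for q in range(3):
        fit = ARIMA(y, order=(p, 0, q)).fit()
        results.append((p, q, fit.aic, fit.bic))

for p, q, aic, bic in sorted(results, key=lambda r: r[2]):   # sorted by AIC
    print(p, q, round(aic, 3), round(bic, 3))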
A stochastic process yt is called a martingale if the best forecast of future realizations yt+s is the realization at time t, that is, Et(yt+s) = yt for all s > 0. Its increments ut = yt − yt−1 then form a martingale difference sequence:
Et(ut+k) = E(ut+k) = 0.
The Law of Iterated Expectations
Example: Rational Expectation Hypothesis
Under the rational expectation hypothesis,
St = Et(St+1),
which is just the defining property of a martingale. The forecasting errors St+1 − Et(St+1) are then unpredictable given the information available at time t.
3.3 Integrated Processes and Stochastic Trends
A symptom of integrated processes is that
the autocorrelations do not tend to die out.
Deterministic Trends
Testing for unit roots
The ordinary OLS approach does not work!
Dickey-Fuller regression (in its simplest form):
∆yt = α + γyt−1 + ut,
with the unit-root null hypothesis
(90) H0 : γ = 0.
This is tested with the usual t-ratio
(91) t = γ̂ / s.e.(γ̂).
However, under the null hypothesis (90) the distribution of (91) is not the standard t-distribution. Critical values (Dickey-Fuller distribution, regression with constant):
n      α = 1%   α = 5%   α = 10%
25     -3.75    -3.00    -2.62
50     -3.58    -2.93    -2.60
100    -3.51    -2.89    -2.58
250    -3.46    -2.88    -2.57
500    -3.44    -2.87    -2.57
∞      -3.43    -2.86    -2.57
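In practice the test statistic, its p-value and the appropriate critical values are provided by standard routines. A minimal sketch using the (augmented) Dickey-Fuller test in statsmodels, restricted here to the plain regression with a constant (the series y is a placeholder random walk):

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=500))     # placeholder: a random walk has a unit root

res = adfuller(y, maxlag=0, autolag=None, regression='c')   # plain DF regression with constant
print(res[0], res[1])   # t-ratio (91) and its p-value under the Dickey-Fuller distribution
print(res[4])           # critical values, cf. the table above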
[Figure: time series plot of the example series, 2004-2010]
However, naively applying the unit-root test with default settings in EViews rejects the unit root in favour of a stationary series:
[EViews unit-root test output (t-statistic and p-value) omitted]