
Published on STAT 510 (https://onlinecourses.science.psu.edu/stat510)



1.1 Overview of Time Series Characteristics


In this lesson, we'll describe some important features that we must consider when describing and modeling a time series. This is meant to be
an introductory overview, illustrated by example, and not a complete look at how we model a univariate time series. Here, we'll only consider
univariate time series. We'll examine relationships between two or more time series later on.

Definition:

A univariate time series is a sequence of measurements of the same variable collected over time. Most often, the measurements are made
at regular time intervals.

One difference from standard linear regression is that the data are not necessarily independent and not necessarily identically distributed. One
defining characteristic of a time series is that it is a list of observations where the ordering matters. Ordering is very important because there is
dependency, and changing the order could change the meaning of the data.

Basic Objectives of the Analysis

The basic objective usually is to determine a model that describes the pattern of the time series. Uses for such a model are:

1. To describe the important features of the time series pattern.


2. To explain how the past affects the future or how two time series can interact.
3. To forecast future values of the series.
4. To possibly serve as a control standard for a variable that measures the quality of product in some manufacturing situations.

Types of Models

There are two basic types of time domain models.


1. Models that relate the present value of a series to past values and past prediction errors; these are called ARIMA models (for
Autoregressive Integrated Moving Average). We'll spend substantial time on these.
2. Ordinary regression models that use time indices as x-variables. These can be helpful for an initial description of the data and form the
basis of several simple forecasting methods.

Important Characteristics to Consider First

Some important questions to consider when first looking at a time series are:

Is there a trend, meaning that, on average, the measurements tend to increase (or decrease) over time?
Is there seasonality, meaning that there is a regularly repeating pattern of highs and lows related to calendar time such as seasons,
quarters, months, days of the week, and so on?
Are there outliers? In regression, outliers are far away from your line. With time series data, your outliers are far away from your other
data.
Is there a long-run cycle or period unrelated to seasonality factors?
Is there constant variance over time, or is the variance non-constant?
Are there any abrupt changes to either the level of the series or the variance?

Example 1

The following plot is a time series plot of the annual number of earthquakes in the world with seismic magnitude over 7.0, for 99 consecutive
years. By a time series plot, we simply mean that the variable is plotted against time.


Some features of the plot:

There is no consistent trend (upward or downward) over the entire time span. The series appears to slowly wander up and down. The
horizontal line drawn at quakes = 20.2 indicates the mean of the series. Notice that the series tends to stay on the same side of the
mean (above or below) for a while and then wanders to the other side.
Almost by definition, there is no seasonality as the data are annual data.
There are no obvious outliers.
It's difficult to judge whether the variance is constant or not.

One of the simplest ARIMA type models is a model in which we use a linear model to predict the value at the present time using the value at
the previous time. This is called an AR(1) model, standing for autoregressive model of order 1. The order of the model indicates how many
previous times we use to predict the present time.

A start in evaluating whether an AR(1) might work is to plot values of the series against lag 1 values of the series. Let $x_t$ denote the value of
the series at any particular time $t$, so $x_{t-1}$ denotes the value of the series one time before time $t$. That is, $x_{t-1}$ is the lag 1 value of $x_t$. As a short
example, here are the first five values in the earthquake series along with their lag 1 values:

t   x_t   x_{t-1} (lag 1 value)
1   13    *
2   14    13
3   8     14
4   10    8
5   16    10
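
As a quick illustration, the lag 1 column can be built directly in R. This is just a sketch using the five values from the table above:

x <- c(13, 14, 8, 10, 16)
xlag1 <- c(NA, head(x, -1))  # shift down one place; the first value has no lag 1 partner
cbind(t = 1:5, x, xlag1)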

For the complete earthquake data set, here's a plot of $x_t$ versus $x_{t-1}$:


Although it's only a moderately strong relationship, there is a positive linear association, so an AR(1) model might be a useful model.

The AR(1) model

Theoretically, the AR(1) model is written

$$x_t = \delta + \phi_1 x_{t-1} + w_t$$

Assumptions:

$w_t \overset{iid}{\sim} N(0, \sigma_w^2)$, meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
The errors $w_t$ are independent of $x_t$.

This is essentially the ordinary simple linear regression equation, but there is one difference. Although it's not usually true, in ordinary least
squares regression we assume that the x-variable is not random but instead is something we can control. That's not the case here, but in our
first encounter with time series we'll overlook that and use ordinary regression methods. We'll do things the right way later in the course.

Following is Minitab output for the AR(1) regression in this example:

quakes = 9.19 + 0.543 lag1

98 cases used, 1 cases contain missing values


Predictor   Coef      SE Coef   T      P
Constant    9.191     1.819     5.05   0.000
lag1        0.54339   0.08528   6.37   0.000

S = 6.12239   R-Sq = 29.7%   R-Sq(adj) = 29.0%

We see that the slope coefficient is significantly different from 0, so the lag 1 variable is a helpful predictor. The R² value is relatively weak at
29.7%, though, so the model won't give us great predictions.

Residual Analysis

In traditional regression, a plot of residuals versus fits is a useful diagnostic tool. The ideal for this plot is a horizontal band of points. Following
is a plot of residuals versus predicted values for our estimated model. It doesn't show any serious problems. There might be one possible
outlier at a fitted value of about 28.

Example 2

The following plot shows a time series of quarterly production of beer in Australia for 18 years.

Some important features are:

There is an upward trend, possibly a curved one.



There is seasonality: a regularly repeating pattern of highs and lows related to quarters of the year.
There are no obvious outliers.
There might be increasing variation as we move across time, although that's uncertain.

There are ARIMA methods for dealing with series that exhibit both trend and seasonality, but for this example we'll use ordinary regression
methods.

Classical regression methods for trend and seasonal effects

To use traditional regression methods, we might model the pattern in the beer production data as a combination of trend over time and
quarterly effect variables.

Suppose that the observed series is $x_t$, for $t = 1, 2, \ldots, n$.

For a linear trend, use t (the time index) as a predictor variable in a regression.
For a quadratic trend, we might consider using both $t$ and $t^2$.
For quarterly data, with possible seasonal (quarterly) effects, we can define indicator variables such as $S_j = 1$ if the observation is in quarter $j$
of a year and 0 otherwise. There are 4 such indicators.

Let $\epsilon_t \overset{iid}{\sim} N(0, \sigma^2)$. A model with additive components for linear trend and seasonal (quarterly) effects might be written

$$x_t = \beta_1 t + \alpha_1 S_1 + \alpha_2 S_2 + \alpha_3 S_3 + \alpha_4 S_4 + \epsilon_t$$


To add a quadratic trend, which may be the case in our example, the model is

$$x_t = \beta_1 t + \beta_2 t^2 + \alpha_1 S_1 + \alpha_2 S_2 + \alpha_3 S_3 + \alpha_4 S_4 + \epsilon_t$$

Note that we've deleted the intercept from the model. This isn't necessary, but if we include it we'll have to drop one of the seasonal effect
variables from the model to avoid collinearity issues.
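
In R, this no-intercept model can be fit with lm. The following is a minimal sketch; the name beer is an assumed placeholder for the quarterly production series, which is not part of the course code:

# Hedged sketch: quadratic trend plus quarterly indicators, no intercept.
# 'beer' is an assumed name for the quarterly production series.
n <- length(beer)
t <- 1:n
tsqrd <- t^2
quarter <- factor(rep(1:4, length.out = n))  # S1..S4 indicators via a factor
fit <- lm(beer ~ 0 + t + tsqrd + quarter)    # '0 +' drops the intercept so all 4 quarters stay
summary(fit)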

Back to Example 2: Following is the Minitab output for a model with a quadratic trend and seasonal effects. All factors are statistically
significant.

Predictor   Coef       SE Coef    T       P
Noconstant
Time        0.5881     0.2193     2.68    0.009
tsqrd       0.031214   0.002911   10.72   0.000
quarter_1   261.930    3.937      66.52   0.000
quarter_2   212.165    3.968      53.48   0.000
quarter_3   228.415    3.994      57.18   0.000
quarter_4   310.880    4.018      77.37   0.000

Residual Analysis

For this example, the plot of residuals versus fits doesn't look too bad, although we might be concerned by the string of positive residuals at the
far right.


When data are gathered over time, we typically are concerned with whether a value at the present time can be predicted from values at past
times. We saw this in the earthquake data of example 1 when we used an AR(1) structure to model the data. For residuals, however, the
desirable result is that the correlation is 0 between residuals separated by any given time span. In other words, residuals should be unrelated
to each other.

Sample Autocorrelation Function (ACF)

The sample autocorrelation function (ACF) for a series gives correlations between the series $x_t$ and lagged values of the series for lags of 1, 2,
3, and so on. The lagged values can be written as $x_{t-1}$, $x_{t-2}$, $x_{t-3}$, and so on. The ACF gives correlations between $x_t$ and $x_{t-1}$, between $x_t$ and $x_{t-2}$, and so
on.

The ACF can be used to identify the possible structure of time series data. That can be tricky going, as there often isn't a single clear-cut
interpretation of a sample autocorrelation function. We'll get started on that in Lesson 1.2 this week. The ACF of the residuals for a model is
also useful. The ideal for a sample ACF of residuals is that there aren't any significant correlations for any lag.

Following is the ACF of the residuals for Example 1, the earthquake example, where we used an AR(1) model. The lag (time span
between observations) is shown along the horizontal axis, and the autocorrelation is on the vertical. The red lines indicate bounds for statistical
significance. This is a good ACF for residuals: nothing is significant, which is what we want for residuals.


The ACF of the residuals for the quadratic trend plus seasonality model we used for Example 2 looks good too. Again, there appears to be no
significant autocorrelation in the residuals. The ACF of the residuals follows:

Lesson 1.2 will give more details about the ACF. Lesson 1.3 will give some R code for examples in Lessons 1.1 and 1.2.

Source URL: https://onlinecourses.science.psu.edu/stat510/node/47


1.2 Sample ACF and Properties of AR(1) Model


This lesson defines the sample autocorrelation function (ACF) in general and derives the pattern of the ACF for an AR(1) model. Recall from
Lesson 1.1 for this week that an AR(1) model is a linear model that predicts the present value of a time series using the immediately prior value
in time.

Stationary Series

As a preliminary, we define an important concept, that of a stationary series. For an ACF to make sense, the series must be a weakly
stationary series. This means that the autocorrelation for any particular lag is the same regardless of where we are in time.

Definition: A series $x_t$ is said to be (weakly) stationary if it satisfies the following properties:

The mean $E(x_t)$ is the same for all $t$.
The variance of $x_t$ is the same for all $t$.
The covariance (and also correlation) between $x_t$ and $x_{t-h}$ is the same for all $t$.

Definition: Let $x_t$ denote the value of a time series at time $t$. The ACF of the series gives correlations between $x_t$ and $x_{t-h}$ for $h = 1, 2, 3$, etc.
Theoretically, the autocorrelation between $x_t$ and $x_{t-h}$ equals

$$\rho_h = \frac{\text{Covariance}(x_t, x_{t-h})}{\text{Std.Dev.}(x_t)\,\text{Std.Dev.}(x_{t-h})} = \frac{\text{Covariance}(x_t, x_{t-h})}{\text{Variance}(x_t)}$$

The denominator in the second formula occurs because the standard deviation of a stationary series is the same at all times.

The last property of a weakly stationary series says that the theoretical value of an autocorrelation at a particular lag is the same across the
whole series. An interesting property of a stationary series is that, theoretically, it has the same structure forwards as it does backwards.


Many stationary series have recognizable ACF patterns. Most series that we encounter in practice, however, are not stationary. A continual
upward trend, for example, is a violation of the requirement that the mean is the same for all t. Distinct seasonal patterns also violate that
requirement. The strategies for dealing with nonstationary series will unfold during the first three weeks of the semester.

The First-order Autoregression Model

We'll now look at theoretical properties of the AR(1) model. Recall from Lesson 1.1 that the 1st order autoregression model is denoted as
AR(1). In this model, the value of $x$ at time $t$ is a linear function of the value of $x$ at time $t-1$. The algebraic expression of the model is as
follows:

$$x_t = \delta + \phi_1 x_{t-1} + w_t$$

Assumptions:

$w_t \overset{iid}{\sim} N(0, \sigma_w^2)$, meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
The errors $w_t$ are independent of $x_t$.
The series $x_1, x_2, \ldots$ is (weakly) stationary. A requirement for a stationary AR(1) is that $|\phi_1| < 1$. We'll see why below.

Properties of the AR(1):

Formulas for the mean, variance, and ACF for a time series process with an AR(1) model follow.

The (theoretical) mean of $x_t$ is

$$E(x_t) = \mu = \frac{\delta}{1-\phi_1}$$

The variance of $x_t$ is

$$\text{Var}(x_t) = \frac{\sigma_w^2}{1-\phi_1^2}$$

The correlation between observations $h$ time periods apart is

$$\rho_h = \phi_1^h$$

This defines the theoretical ACF for a time series variable with an AR(1) model. (Note: $\phi_1$ is the slope in the AR(1) model, and we now see that
it also is the lag 1 autocorrelation.)
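
As a quick numerical check of the $\rho_h = \phi_1^h$ property, base R's ARMAacf function returns the theoretical autocorrelations. This is just a sketch, with $\phi_1 = 0.6$ chosen arbitrarily:

# Theoretical AR(1) autocorrelations should equal phi1^h.
phi1 <- 0.6
ARMAacf(ar = phi1, lag.max = 5)  # lags 0 to 5: 1, 0.6, 0.36, 0.216, ...
phi1^(0:5)                       # same values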

Details of the derivations of these properties are in the Appendix to this lesson for interested students.

Pattern of ACF for AR(1) Model

The ACF property defines a distinct pattern for the autocorrelations. For a positive value of $\phi_1$, the ACF exponentially decreases to 0 as the lag
$h$ increases. For negative $\phi_1$, the ACF also exponentially decays to 0 as the lag increases, but the algebraic signs for the autocorrelations
alternate between positive and negative.

Following is the ACF of an AR(1) with $\phi_1 = 0.6$, for the first 12 lags. Note the tapering pattern.

The ACF of an AR(1) with $\phi_1 = -0.7$ follows. Note the alternating and tapering pattern.


Example 1: In Example 1 of Lesson 1.1, we used an AR(1) model for annual numbers of earthquakes in the world with seismic magnitude greater than 7.
Here's the sample ACF of the series:

Lag   ACF
1     0.541733
2     0.418884
3     0.397955
4     0.324047
5     0.237164
6     0.171794
7     0.190228
8     0.061202
9     -0.048505
10    -0.106730
11    -0.043271
12    -0.072305

The sample autocorrelations taper, although not as fast as they should for an AR(1). For instance, theoretically the lag 2 autocorrelation for an
AR(1) equals the squared value of the lag 1 autocorrelation. Here, the observed lag 2 autocorrelation is 0.418884. That's somewhat greater than the
squared value of the first lag autocorrelation ($0.541733^2 = 0.293$). But we managed to do okay (in Lesson 1.1) with an AR(1) model for the
data. For instance, the residuals looked okay. This brings up an important point: the sample ACF will rarely fit a perfect theoretical pattern. A
lot of the time you just have to try a few models to see what fits.

We'll study the ACF patterns of other ARIMA models during the next three weeks. Each model has a different pattern for its ACF, but in
practice the interpretation of a sample ACF is not always so clear-cut.

A reminder: Residuals usually are theoretically assumed to have an ACF that has correlation = 0 for all lags.

Example 2:

Here's a time series of the weekly cardiovascular mortality rate in Los Angeles County, 1970-1979:


There is a slight downward trend, so the series may not be stationary. To create a (possibly) stationary series, we'll examine the first
differences $y_t = x_t - x_{t-1}$. This is a common time series method for creating a de-trended series and thus potentially a stationary series. Think
about a straight line: there are constant differences in average $y$ for each 1-unit change in $x$.

The time series plot of the first differences is the following:


The following is the sample estimate of the autocorrelation function of the first differences:

Lag   ACF
1     -0.506029
2     0.205100
3     -0.126110
4     0.062476
5     -0.015190

This looks like the pattern of an AR(1) with a negative lag 1 autocorrelation. Note that the lag 2 correlation is roughly equal to the squared
value of the lag 1 correlation. The lag 3 correlation is nearly exactly equal to the cubed value of the lag 1 correlation, and the lag 4 correlation
nearly equals the fourth power of the lag 1 correlation. Thus an AR(1) model may be a suitable model for the first differences $y_t = x_t - x_{t-1}$. A small numeric sketch of these comparisons follows.
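
# Compare powers of the lag 1 correlation with the observed ACF values from the table.
r1 <- -0.506029
c(r1^2, r1^3, r1^4)              # 0.2561, -0.1296, 0.0656
c(0.205100, -0.126110, 0.062476) # observed lags 2, 3, 4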

Let $y_t$ denote the first differences, so that $y_t = x_t - x_{t-1}$ and $y_{t-1} = x_{t-1} - x_{t-2}$. We can write this AR(1) model as

$$y_t = \delta + \phi_1 y_{t-1} + w_t$$

Using R, we found that the estimated model for the first differences is

$$\hat{y}_t = -0.04627 - 0.50636\, y_{t-1}$$


Some R code for this example will be given in Lesson 1.3 for this week.

Appendix: Derivations of Properties of AR(1)

Generally you won't be responsible for reproducing theoretical derivations, but interested students may want to see the derivations for the
theoretical properties of an AR(1).

The algebraic expression of the model is as follows:

$$x_t = \delta + \phi_1 x_{t-1} + w_t$$

Assumptions:

$w_t \overset{iid}{\sim} N(0, \sigma_w^2)$, meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
The errors $w_t$ are independent of $x_t$.
The series $x_1, x_2, \ldots$ is (weakly) stationary. A requirement for a stationary AR(1) is that $|\phi_1| < 1$. We'll see why below.

Mean:

$$E(x_t) = E(\delta + \phi_1 x_{t-1} + w_t) = E(\delta) + E(\phi_1 x_{t-1}) + E(w_t) = \delta + \phi_1 E(x_{t-1}) + 0$$

With the stationary assumption, $E(x_t) = E(x_{t-1})$. Let $\mu$ denote this common mean. Thus $\mu = \delta + \phi_1 \mu$. Solve for $\mu$ to get

$$\mu = \frac{\delta}{1-\phi_1}$$

Variance:

By independence of errors and values of $x$,

$$\text{Var}(x_t) = \text{Var}(\delta) + \text{Var}(\phi_1 x_{t-1}) + \text{Var}(w_t) = \phi_1^2\,\text{Var}(x_{t-1}) + \sigma_w^2$$

By the stationary assumption, $\text{Var}(x_t) = \text{Var}(x_{t-1})$. Substitute $\text{Var}(x_t)$ for $\text{Var}(x_{t-1})$ and then solve for $\text{Var}(x_t)$:

$$\text{Var}(x_t) = \frac{\sigma_w^2}{1-\phi_1^2}$$

Because $\text{Var}(x_t) > 0$, it follows that $(1-\phi_1^2) > 0$ and therefore $|\phi_1| < 1$.


Autocorrelation Function (ACF)

To start, assume the data have mean 0, which happens when $\delta = 0$, so that $x_t = \phi_1 x_{t-1} + w_t$. In practice this isn't necessary, but it simplifies
matters. Values of variances, covariances, and correlations are not affected by the specific value of the mean.

Let $\gamma_h = E(x_t x_{t+h}) = E(x_t x_{t-h})$, the covariance between observations $h$ time periods apart (when the mean = 0). Let $\rho_h$ = the correlation between
observations that are $h$ time periods apart.

Covariance and correlation between observations one time period apart:

$$\gamma_1 = E(x_t x_{t+1}) = E(x_t(\phi_1 x_t + w_{t+1})) = E(\phi_1 x_t^2 + x_t w_{t+1}) = \phi_1\,\text{Var}(x_t)$$

$$\rho_1 = \frac{\text{Cov}(x_t, x_{t+1})}{\text{Var}(x_t)} = \frac{\phi_1\,\text{Var}(x_t)}{\text{Var}(x_t)} = \phi_1$$

Covariance and correlation between observations $h$ time periods apart:

To find the covariance $\gamma_h$, multiply each side of the model for $x_t$ by $x_{t-h}$, then take expectations.

$$x_t = \phi_1 x_{t-1} + w_t$$

$$x_{t-h} x_t = \phi_1 x_{t-h} x_{t-1} + x_{t-h} w_t$$

$$E(x_{t-h} x_t) = E(\phi_1 x_{t-h} x_{t-1}) + E(x_{t-h} w_t)$$

$$\gamma_h = \phi_1 \gamma_{h-1}$$

If we start at $\gamma_1$ and move recursively forward, we get $\gamma_h = \phi_1^h \gamma_0$. By definition, $\gamma_0 = \text{Var}(x_t)$, so this is $\gamma_h = \phi_1^h\,\text{Var}(x_t)$. The correlation is

$$\rho_h = \frac{\gamma_h}{\text{Var}(x_t)} = \frac{\phi_1^h\,\text{Var}(x_t)}{\text{Var}(x_t)} = \phi_1^h$$
Source URL: https://onlinecourses.science.psu.edu/stat510/node/60


1.3 R Code for Two Examples in Lessons 1.1 and 1.2


One example in Lesson 1.1 and Lesson 1.2 concerned the annual number of earthquakes worldwide with a magnitude greater than 7.0 on the
seismic scale. We identified an AR(1) model (autoregressive model of order 1), estimated the model, and assessed the residuals. Below is R
code that will accomplish these tasks. The first line reads the data from a file named quakes.dat (posted in the Week 1 folder on the course
website). The data are listed in time order from left to right in the lines of the file. If you were to download the file, you should download it into a
folder that you create for storing course data. Then in R, change the working directory to be this folder.

The commands below include explanatory comments, following the #. Those comments do not have to be entered for the command to work.

x=scan("quakes.dat")
x=ts(x) #this makes sure R knows that x is a time series
plot(x, type="b") #time series plot of x with points marked as o
install.packages("astsa")
library(astsa) # See note 1 below
lag1.plot(x,1) # Plots x versus lag 1 of x.
acf(x, xlim=c(1,19)) # Plots the ACF of x for lags 1 to 19
xlag1=lag(x,-1) # Creates a lag 1 of x variable. See note 2
y=cbind(x,xlag1) # See note 3 below
ar1fit=lm(y[,1]~y[,2]) # does regression, stores results in an object named ar1fit
summary(ar1fit) # This lists the regression results
plot(ar1fit$fit,ar1fit$residuals) #plot of residuals versus fits
acf(ar1fit$residuals, xlim=c(1,18)) # ACF of the residuals for lags 1 to 18

Note 1: The astsa library accesses R script(s) written by one of the authors of our textbook (Stoffer). In our program, the lag1.plot command is
part of that script. You may read more about the library on the website for our text: http://www.stat.pitt.edu/stoffer/tsa3/xChanges.htm [1]. You
must install the astsa package in R before loading the commands in the library statement. Not all available packages are included when you
install R on your machine (cran.r-project.org/web/packages/). You only need to run install.packages("astsa") once. In subsequent sessions, the
library command alone will bring the commands into your current session.


Note 2: Note the negative value for the lag in xlag1=lag(x,-1). To lag back in time in R, use a negative lag.

Note 3: This is a bit tricky. For whatever reason, R has to bind together a variable with its lags for the lags to be in the proper connection with
the original variable. The cbind and the ts.intersect commands both accomplish this task. In the code above, the lagged variable and the original
variable become the first and second columns of a matrix named y. The regression command (lm) uses these two columns of y as the
response and predictor variables in the regression.

General Note: If a command that includes quotation marks doesn't work when you copy and paste from course notes to R, try typing the
command in R instead.

The results, as given by R, follow.

The time series plot for the quakes series.

Plot of x versus lag 1 of x.


The ACF of the quakes series.


The regression results:

Coefficients:

Estimate Std. Error t value Pr(>|t|)

(Intercept) 9.19070 1.81924 5.052 2.08e-06 ***

y[, 2] 0.54339 0.08528 6.372 6.47e-09 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1


Residual standard error: 6.122 on 96 degrees of freedom
(2 observations deleted due to missingness)
Multiple R-squared: 0.2972, Adjusted R-squared: 0.2899


Plot of residuals versus fits

ACF of residuals


An example in Lesson 1.2 for this week concerned the weekly cardiovascular mortality rate in Los Angeles County. We used a first difference
to account for a linear trend and determined that the first differences may have an AR(1) model.

The data are in the cmort.dat file in the Week 1 folder of the course website.

Following are R commands for the analysis. Again, the commands are commented using #comment.

mort=scan("cmort.dat")
plot(mort, type="o") # plot of mortality rate
mort=ts(mort)
mortdiff=diff(mort,1) # creates a variable = x(t) - x(t-1)
plot(mortdiff,type="o") # plot of first differences
acf(mortdiff,xlim=c(1,24)) # ACF of first differences, for 24 lags
mortdifflag1=lag(mortdiff,-1)
y=cbind(mortdiff,mortdifflag1) # bind first differences and lagged first differences
mortdiffar1=lm(y[,1]~y[,2]) # AR(1) regression for first differences


summary(mortdiffar1) # regression results


acf(mortdiffar1$residuals, xlim = c(1,24)) # ACF of residuals for 24 lags.

Well leave it to you to try the code and see the output, if you wish.

Source URL: https://onlinecourses.science.psu.edu/stat510/node/61

Links:
[1] http://www.stat.pitt.edu/stoffer/tsa3/xChanges.htm


2.1 Moving Average Models (MA models)


Time series models known as ARIMA models may include autoregressive terms and/or moving average terms. In Week 1, we learned that an
autoregressive term in a time series model for the variable $x_t$ is a lagged value of $x_t$. For instance, a lag 1 autoregressive term is $x_{t-1}$
(multiplied by a coefficient). This lesson defines moving average terms.

A moving average term in a time series model is a past error (multiplied by a coefficient).

Let $w_t \overset{iid}{\sim} N(0, \sigma_w^2)$, meaning that the $w_t$ are identically, independently distributed, each with a normal distribution having mean 0 and the
same variance.

The 1st order moving average model, denoted by MA(1), is

$$x_t = \mu + w_t + \theta_1 w_{t-1}$$

The 2nd order moving average model, denoted by MA(2), is

$$x_t = \mu + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2}$$

The qth order moving average model, denoted by MA(q), is

$$x_t = \mu + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \cdots + \theta_q w_{t-q}$$

Note: Many textbooks and software programs define the model with negative signs before the $\theta$ terms. This doesn't change the general
theoretical properties of the model, although it does flip the algebraic signs of estimated coefficient values and (unsquared) $\theta$ terms in formulas
for ACFs and variances. You need to check your software to verify whether negative or positive signs have been used in order to correctly
write the estimated model. R uses positive signs in its underlying model, as we do here.

Theoretical Properties of a Time Series with an MA(1) Model


Mean is $E(x_t) = \mu$
Variance is $\text{Var}(x_t) = \sigma_w^2(1 + \theta_1^2)$
Autocorrelation function (ACF) is

$$\rho_1 = \frac{\theta_1}{1+\theta_1^2}, \quad \text{and } \rho_h = 0 \text{ for } h \ge 2$$

Note that the only nonzero value in the theoretical ACF is for lag 1. All other autocorrelations are 0. Thus a sample ACF with a significant
autocorrelation only at lag 1 is an indicator of a possible MA(1) model.

For interested students, proofs of these properties are an appendix to this handout.

iid
Example 1: Suppose that an MA(1) model is $x_t = 10 + w_t + 0.7 w_{t-1}$, where $w_t \overset{iid}{\sim} N(0, 1)$. Thus the coefficient $\theta_1 = 0.7$. The theoretical ACF is
given by

$$\rho_1 = \frac{0.7}{1+0.7^2} = 0.4698, \quad \text{and } \rho_h = 0 \text{ for all lags } h \ge 2$$
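
This value can be verified in R; a small sketch using base R's ARMAacf:

# Lag 1 autocorrelation for an MA(1) with theta1 = 0.7; higher lags are 0.
theta1 <- 0.7
theta1 / (1 + theta1^2)            # 0.4698
ARMAacf(ma = theta1, lag.max = 3)  # lag 1 matches; lags 2 and 3 are 0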

A plot of this ACF follows.


The plot just shown is the theoretical ACF for an MA(1) with $\theta_1 = 0.7$. In practice, a sample won't usually provide such a clear pattern. Using R,
we simulated n = 100 sample values using the model $x_t = 10 + w_t + 0.7 w_{t-1}$, where $w_t \overset{iid}{\sim} N(0, 1)$. For this simulation, a time series plot of the
sample data follows. We can't tell much from this plot.

The sample ACF for the simulated data follows. We see a spike at lag 1 followed by generally non-significant values for lags past 1. Note
that the sample ACF does not match the theoretical pattern of the underlying MA(1), which is that all autocorrelations for lags past 1 will be 0.
A different sample would have a slightly different sample ACF than the one shown below, but would likely have the same broad features.


Theoretical Properties of a Time Series with an MA(2) Model

For the MA(2) model, theoretical properties are the following:

Mean is $E(x_t) = \mu$
Variance is $\text{Var}(x_t) = \sigma_w^2(1 + \theta_1^2 + \theta_2^2)$
Autocorrelation function (ACF) is

$$\rho_1 = \frac{\theta_1 + \theta_1\theta_2}{1+\theta_1^2+\theta_2^2}, \quad \rho_2 = \frac{\theta_2}{1+\theta_1^2+\theta_2^2}, \quad \text{and } \rho_h = 0 \text{ for } h \ge 3$$

Note that the only nonzero values in the theoretical ACF are for lags 1 and 2. Autocorrelations for higher lags are 0. So, a sample ACF with
significant autocorrelations at lags 1 and 2, but non-significant autocorrelations for higher lags indicates a possible MA(2) model.

Example 2: Consider the MA(2) model $x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}$, where $w_t \overset{iid}{\sim} N(0, 1)$. The coefficients are $\theta_1 = 0.5$ and $\theta_2 = 0.3$. Because
this is an MA(2), the theoretical ACF will have nonzero values only at lags 1 and 2.

Values of the two nonzero autocorrelations are

$$\rho_1 = \frac{0.5 + 0.5 \times 0.3}{1 + 0.5^2 + 0.3^2} = 0.4851 \quad \text{and} \quad \rho_2 = \frac{0.3}{1 + 0.5^2 + 0.3^2} = 0.2239$$
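
Again, these can be checked with ARMAacf (a sketch):

# Theoretical MA(2) autocorrelations: nonzero at lags 1 and 2 only.
ARMAacf(ma = c(0.5, 0.3), lag.max = 4)  # lags 1, 2 are 0.4851, 0.2239; lags 3, 4 are 0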

A plot of the theoretical ACF follows.

As is nearly always the case, sample data won't behave quite so perfectly as theory. We simulated n = 150 sample values for the model $x_t =
10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}$, where $w_t \overset{iid}{\sim} N(0, 1)$. The time series plot of the data follows. As with the time series plot for the MA(1) sample data,
you can't tell much from it.


The sample ACF for the simulated data follows. The pattern is typical for situations where an MA(2) model may be useful. There are two
statistically significant spikes at lags 1 and 2 followed by non-significant values for other lags. Note that due to sampling error, the sample
ACF did not match the theoretical pattern exactly.


ACF for General MA(q) Models

A property of MA(q) models in general is that there are nonzero autocorrelations for the first q lags and autocorrelations = 0 for all lags > q.

Non-uniqueness of connection between values of $\theta_1$ and $\rho_1$ in the MA(1) Model

In the MA(1) model, for any value of $\theta_1$, the reciprocal $1/\theta_1$ gives the same value for

$$\rho_1 = \frac{\theta_1}{1+\theta_1^2}$$

As an example, use +0.5 for $\theta_1$, and then use $1/0.5 = 2$ for $\theta_1$. You'll get $\rho_1 = 0.4$ in both instances.

To satisfy a theoretical restriction called invertibility, we restrict MA(1) models to have values with absolute value less than 1. In the example
just given, $\theta_1 = 0.5$ will be an allowable parameter value, whereas $\theta_1 = 1/0.5 = 2$ will not.
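
A one-line check of the example (a sketch):

# theta1 = 0.5 and its reciprocal theta1 = 2 imply the same lag 1 autocorrelation.
rho1 <- function(theta1) theta1 / (1 + theta1^2)
rho1(0.5)  # 0.4
rho1(2)    # 0.4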

Invertibility of MA models

An MA model is said to be invertible if it is algebraically equivalent to a converging infinite order AR model. By converging, we mean that the
AR coefficients decrease to 0 as we move back in time.

Invertibility is a restriction programmed into time series software used to estimate the coefficients of models with MA terms. It's not something
that we check for in the data analysis. Additional information about the invertibility restriction for MA(1) models is given in the appendix.

Advanced Theory Note: For an MA(q) model with a specified ACF, there is only one invertible model. The necessary condition for invertibility
is that the coefficients have values such that the equation $1 + \theta_1 y + \cdots + \theta_q y^q = 0$ (with the positive-sign convention used here) has solutions for $y$ that fall outside the unit circle.
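
As an illustration, the root condition can be checked numerically with base R's polyroot. This sketch uses the MA(2) coefficients from Example 2 and the positive-sign convention:

# Roots of 1 + 0.5y + 0.3y^2; moduli > 1 mean this MA(2) is invertible.
Mod(polyroot(c(1, 0.5, 0.3)))  # both roots have modulus about 1.83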

R Code for the Examples

In Example 1, we plotted the theoretical ACF of the model $x_t = 10 + w_t + 0.7 w_{t-1}$, and then simulated n = 150 values from this model and plotted
the sample time series and the sample ACF for the simulated data. The R commands used to plot the theoretical ACF were:

acfma1=ARMAacf(ma=c(0.7), lag.max=10) # 10 lags of ACF for MA(1) with theta1 = 0.7


lags=0:10 #creates a variable named lags that ranges from 0 to 10.


plot(lags,acfma1,xlim=c(1,10), ylab="r",type="h", main = "ACF for MA(1) with theta1 = 0.7")


abline (h=0) #adds a horizontal axis to the plot

The first command determines the ACF and stores it in an object named acfma1 (our choice of name).

The plot command (the 3rd command) plots lags versus the ACF values for lags 1 to 10. The ylab parameter labels the y-axis and the main
parameter puts a title on the plot.

To see the numerical values of the ACF simply use the command acfma1.

The simulation and plots were done with the following commands.

xc=arima.sim(n=150, list(ma=c(0.7))) #Simulates n = 150 values from MA(1)


x=xc+10 # adds 10 to make mean = 10. Simulation defaults to mean = 0.
plot(x,type="b", main="Simulated MA(1) data")
acf(x, xlim=c(1,10), main="ACF for simulated sample data")

In Example 2, we plotted the theoretical ACF of the model $x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}$, and then simulated n = 150 values from this model
and plotted the sample time series and the sample ACF for the simulated data. The R commands used were:

acfma2=ARMAacf(ma=c(0.5,0.3), lag.max=10)
acfma2
lags=0:10
plot(lags,acfma2,xlim=c(1,10), ylab="r", type="h", main="ACF for MA(2) with theta1 = 0.5, theta2 = 0.3")
abline(h=0)
xc=arima.sim(n=150, list(ma=c(0.5, 0.3)))
x=xc+10
plot(x, type="b", main="Simulated MA(2) Series")
acf(x, xlim=c(1,10), main="ACF for simulated MA(2) Data")

Appendix: Proof of Properties of MA(1)

For interested students, here are proofs for theoretical properties of the MA(1) model.


The 1st order moving average model, denoted by MA(1), is $x_t = \mu + w_t + \theta_1 w_{t-1}$, where $w_t \overset{iid}{\sim} N(0, \sigma_w^2)$.

Mean: $E(x_t) = E(\mu + w_t + \theta_1 w_{t-1}) = \mu + 0 + \theta_1(0) = \mu$

Variance: $\text{Var}(x_t) = \text{Var}(\mu + w_t + \theta_1 w_{t-1}) = 0 + \text{Var}(w_t) + \text{Var}(\theta_1 w_{t-1}) = \sigma_w^2 + \theta_1^2\sigma_w^2 = (1 + \theta_1^2)\sigma_w^2$

ACF: Consider the covariance between $x_t$ and $x_{t-h}$. This is $E[(x_t - \mu)(x_{t-h} - \mu)]$, which equals

$$E[(w_t + \theta_1 w_{t-1})(w_{t-h} + \theta_1 w_{t-h-1})] = E[w_t w_{t-h} + \theta_1 w_{t-1} w_{t-h} + \theta_1 w_t w_{t-h-1} + \theta_1^2 w_{t-1} w_{t-h-1}]$$

When $h = 1$, the previous expression $= \theta_1\sigma_w^2$. For any $h \ge 2$, the previous expression $= 0$. The reason is that, by definition of independence
of the $w_t$, $E(w_k w_j) = 0$ for any $k \ne j$. Further, because the $w_t$ have mean 0, $E(w_j w_j) = E(w_j^2) = \sigma_w^2$.

For a time series,

$$\rho_h = \frac{\text{Covariance for lag } h}{\text{Variance}}$$

Apply this result to get the ACF given above.

Invertibility Restriction:

An invertible MA model is one that can be written as an infinite order AR model that converges, so that the AR coefficients converge to 0 as we
move infinitely back in time. We'll demonstrate invertibility for the MA(1) model.

The MA(1) model can be written as $x_t - \mu = w_t + \theta_1 w_{t-1}$.

If we let $z_t = x_t - \mu$, then the MA(1) model is

(1) $z_t = w_t + \theta_1 w_{t-1}$

At time $t-1$, the model is $z_{t-1} = w_{t-1} + \theta_1 w_{t-2}$, which can be reshuffled to

(2) $w_{t-1} = z_{t-1} - \theta_1 w_{t-2}$

We then substitute relationship (2) for $w_{t-1}$ in equation (1):

(3) $z_t = w_t + \theta_1(z_{t-1} - \theta_1 w_{t-2}) = w_t + \theta_1 z_{t-1} - \theta_1^2 w_{t-2}$


At time $t-2$, equation (2) becomes

(4) $w_{t-2} = z_{t-2} - \theta_1 w_{t-3}$

We then substitute relationship (4) for $w_{t-2}$ in equation (3):

$$z_t = w_t + \theta_1 z_{t-1} - \theta_1^2 w_{t-2} = w_t + \theta_1 z_{t-1} - \theta_1^2(z_{t-2} - \theta_1 w_{t-3}) = w_t + \theta_1 z_{t-1} - \theta_1^2 z_{t-2} + \theta_1^3 w_{t-3}$$

If we were to continue (infinitely), we would get the infinite order AR model

$$z_t = w_t + \theta_1 z_{t-1} - \theta_1^2 z_{t-2} + \theta_1^3 z_{t-3} - \theta_1^4 z_{t-4} + \cdots$$

Note, however, that if $|\theta_1| \ge 1$, the coefficients multiplying the lags of $z$ will increase (infinitely) in size as we move back in time. To prevent this,
we need $|\theta_1| < 1$. This is the condition for an invertible MA(1) model.
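
Numerically, the coefficients on the lagged $z$ terms in the infinite AR representation are $\theta_1, -\theta_1^2, \theta_1^3, \ldots$; a small sketch shows they die out when $|\theta_1| < 1$:

# Coefficients on z_{t-j} in the AR(infinity) form: theta1, -theta1^2, theta1^3, ...
theta1 <- 0.7
j <- 1:8
-(-theta1)^j  # 0.7, -0.49, 0.343, -0.2401, ... decaying toward 0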

Infinite Order MA model

In Week 3, we'll see that an AR(1) model can be converted to an infinite order MA model:

$$x_t = w_t + \phi_1 w_{t-1} + \phi_1^2 w_{t-2} + \cdots + \phi_1^k w_{t-k} + \cdots = \sum_{j=0}^{\infty} \phi_1^j w_{t-j}$$

This summation of past white noise terms is known as the causal representation of an AR(1). In other words, $x_t$ is a special type of MA with
an infinite number of terms going back in time. This is called an infinite order MA or MA($\infty$). A finite order MA is an infinite order AR, and any
finite order AR is an infinite order MA.

Recall from Week 1 that a requirement for a stationary AR(1) is that $|\phi_1| < 1$. Let's calculate $\text{Var}(x_t)$ using the causal representation:

$$\text{Var}(x_t) = \text{Var}\left(\sum_{j=0}^{\infty} \phi_1^j w_{t-j}\right) = \sum_{j=0}^{\infty} \text{Var}(\phi_1^j w_{t-j}) = \sum_{j=0}^{\infty} \phi_1^{2j}\sigma_w^2 = \frac{\sigma_w^2}{1-\phi_1^2}$$

This last step uses a basic fact about geometric series that requires $|\phi_1| < 1$; otherwise the series diverges.
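
A quick numeric confirmation of the geometric-series step (a sketch with $\phi_1 = 0.6$ and $\sigma_w^2 = 1$):

# Partial sum of phi1^(2j) approximates the closed form 1/(1 - phi1^2).
phi1 <- 0.6
sum(phi1^(2 * (0:200)))  # 1.5625
1 / (1 - phi1^2)         # 1.5625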

Source URL: https://onlinecourses.science.psu.edu/stat510/node/48


2.2 Partial Autocorrelation Function (PACF)


In general, a partial correlation is a conditional correlation.
It is the correlation between two variables under the assumption that we know and take into account the values of some other set of
variables.
For instance, consider a regression context in which y = response variable and x1, x2, and x3 are predictor variables. The partial
correlation between y and x3 is the correlation between the variables determined taking into account how both y and x3 are related to x1
and x2.
In regression, this partial correlation could be found by correlating the residuals from two different regressions: (1) Regression in which
we predict y from x1 and x2, (2) regression in which we predict x3 from x1 and x2. Basically, we correlate the parts of y and x3 that are
not predicted by x1 and x2.
More formally, we can define the partial correlation just described as

$$\frac{\text{Covariance}(y, x_3 \mid x_1, x_2)}{\sqrt{\text{Variance}(y \mid x_1, x_2)\,\text{Variance}(x_3 \mid x_1, x_2)}}$$

Note that this is also how the parameters of a regression model are interpreted. Think about the difference between interpreting the
regression models:

$$y = \beta_0 + \beta_1 x^2 \quad \text{and} \quad y = \beta_0 + \beta_1 x + \beta_2 x^2$$

In the first model, $\beta_1$ can be interpreted as the linear dependency between $x^2$ and $y$. In the second model, $\beta_2$ would be interpreted as
the linear dependency between $x^2$ and $y$ WITH the dependency between $x$ and $y$ already accounted for.

For a time series, the partial autocorrelation between $x_t$ and $x_{t-h}$ is defined as the conditional correlation between $x_t$ and $x_{t-h}$, conditional
on $x_{t-h+1}, \ldots, x_{t-1}$, the set of observations that come between the time points $t$ and $t-h$.
The 1st order partial autocorrelation will be defined to equal the 1st order autocorrelation.


The 2nd order (lag) partial autocorrelation is

$$\frac{\text{Covariance}(x_t, x_{t-2} \mid x_{t-1})}{\sqrt{\text{Variance}(x_t \mid x_{t-1})\,\text{Variance}(x_{t-2} \mid x_{t-1})}}$$

This is the correlation between values two time periods apart, conditional on knowledge of the value in between. (By the way, the two
variances in the denominator will equal each other in a stationary series.)

The 3rd order (lag) partial autocorrelation is

$$\frac{\text{Covariance}(x_t, x_{t-3} \mid x_{t-1}, x_{t-2})}{\sqrt{\text{Variance}(x_t \mid x_{t-1}, x_{t-2})\,\text{Variance}(x_{t-3} \mid x_{t-1}, x_{t-2})}}$$

And so on, for any lag.

Typically, matrix manipulations having to do with the covariance matrix of a multivariate distribution are used to determine estimates of the
partial autocorrelations.
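
In practice, the sample PACF is obtained directly from software; those matrix computations are done internally. A minimal sketch in base R, assuming the quakes.dat file from Lesson 1.3:

# Sample PACF of the earthquake series for the first 12 lags.
x <- ts(scan("quakes.dat"))
pacf(x, lag.max = 12)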

Some Useful Facts About PACF and ACF Patterns

Identification of an AR model is often best done with the PACF.

For an AR model, the theoretical PACF "shuts off" past the order of the model. The phrase "shuts off" means that in theory the partial
autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the
AR model. By the "order of the model" we mean the most extreme lag of x that is used as a predictor.

Example: In Lesson 1.2, we identified an AR(1) model for a time series of annual numbers of worldwide earthquakes having a seismic
magnitude greater than 7.0. Following is the sample PACF for this series. Note that the first lag value is statistically significant, whereas partial
autocorrelations for all other lags are not statistically significant. This suggests a possible AR(1) model for these data.


Identification of an MA model is often best done with the ACF rather than the PACF.

For an MA model, the theoretical PACF does not shut off, but instead tapers toward 0 in some manner. A clearer pattern for an MA
model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model.

Lesson 2.1 included the following sample ACF for a simulated MA(1) series. Note that the first lag autocorrelation is statistically significant
whereas all subsequent autocorrelations are not. This suggests a possible MA(1) model for the data.

Theory note: The model used for the simulation was $x_t = 10 + w_t + 0.7 w_{t-1}$. In theory, the first lag autocorrelation is $\theta_1/(1+\theta_1^2) = 0.7/(1+0.7^2) =
0.4698$, and autocorrelations for all other lags = 0.


The underlying model used for the MA(1) simulation in Lesson 2.1 was $x_t = 10 + w_t + 0.7 w_{t-1}$. Following is the theoretical PACF (partial
autocorrelation) for that model. Note that the pattern gradually tapers to 0.


R note: The PACF just shown was created in R with these two commands:

ma1pacf = ARMAacf(ma = c(.7),lag.max = 36, pacf=TRUE)


plot(ma1pacf,type="h", main = "Theoretical PACF of MA(1) with theta = 0.7")

Source URL: https://onlinecourses.science.psu.edu/stat510/node/62


2.3 Notational Conventions


Time series models (in the time domain) involve lagged terms and may involve differenced data to account for trend. There are useful
notations used for each.

Backshift Operator

Using B before either a value of the series $x_t$ or an error term $w_t$ means to move that element back one time. For instance,

$$Bx_t = x_{t-1}$$

A power of B means to repeatedly apply the backshift in order to move back a number of time periods that equals the "power." As an
example,

$$B^2 x_t = x_{t-2}$$

$x_{t-2}$ represents $x_t$ two units back in time. $B^k x_t = x_{t-k}$ represents $x_t$ $k$ units back in time. The backshift operator B doesn't operate on
coefficients because they are fixed quantities that do not move in time. For example, $B\theta_1 = \theta_1$.

AR Models and the AR Polynomial

AR models can be written compactly using an "AR polynomial" involving coefficients and backshift operators. Let p = the maximum order (lag)
of the AR terms in the model. The general form for an AR polynomial is

$$\Phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p$$

Using the AR polynomial, one way to write an AR model is

$$\Phi(B)x_t = \delta + w_t$$


Examples

Consider the AR(1) model $x_t = \delta + \phi_1 x_{t-1} + w_t$, where $w_t \overset{iid}{\sim} N(0, \sigma_w^2)$. For an AR(1), the maximum lag = 1, so the AR polynomial is

$$\Phi(B) = 1 - \phi_1 B$$

and the model can be written

$$(1 - \phi_1 B)x_t = \delta + w_t$$

To check that this works, we can multiply out the left side to get

$$x_t - \phi_1 x_{t-1} = \delta + w_t$$

Then, swing the $-\phi_1 x_{t-1}$ over to the right side and we get

$$x_t = \delta + \phi_1 x_{t-1} + w_t$$

An AR(2) model is $x_t = \delta + \phi_1 x_{t-1} + \phi_2 x_{t-2} + w_t$. That is, $x_t$ is a linear function of the values of $x$ at the previous two lags. The AR
polynomial for an AR(2) model is

$$\Phi(B) = 1 - \phi_1 B - \phi_2 B^2$$

The AR(2) model could be written as $(1 - \phi_1 B - \phi_2 B^2)x_t = \delta + w_t$, or as $\Phi(B)x_t = \delta + w_t$ with an additional explanation that
$\Phi(B) = 1 - \phi_1 B - \phi_2 B^2$.

An AR(p) model is $x_t = \delta + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t$, where $\delta, \phi_1, \phi_2, \ldots, \phi_p$ are constants and $p$ may be greater than 1. (Recall that $|\phi_1| < 1$
for an AR(1) model.) Here $x_t$ is a linear function of the values of $x$ at the previous $p$ lags.

A shorthand notation for the AR polynomial is $\Phi(B)$, and a general AR model might be written as $\Phi(B)x_t = \delta + w_t$. Of course, you would
have to specify the order of the model somewhere on the side.

MA Models

An MA(1) model $x_t = \mu + w_t + \theta_1 w_{t-1}$ could be written as $x_t = \mu + (1 + \theta_1 B)w_t$. A factor such as $1 + \theta_1 B$ is called the MA
polynomial, and it is denoted as $\Theta(B)$.

An MA(2) model is defined as $x_t = \mu + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2}$ and could be written as $x_t = \mu + (1 + \theta_1 B + \theta_2 B^2)w_t$. Here, the MA
polynomial is $\Theta(B) = 1 + \theta_1 B + \theta_2 B^2$.

In general, the MA polynomial is $\Theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q$, where q = the maximum order (lag) for MA terms in the model.

In general, we can write an MA model as $x_t - \mu = \Theta(B)w_t$.

Models with Both AR and MA Terms

A model that involves both AR and MA terms might be written $\Phi(B)(x_t - \mu) = \Theta(B)w_t$, or possibly even

$$(x_t - \mu) = \frac{\Theta(B)}{\Phi(B)}\, w_t$$

Note: Many textbooks and software programs define the MA polynomial with negative signs rather than positive signs, as above. This doesn't
change the properties of the model, or, with a sample, the overall fit of the model. It only changes the algebraic signs of the MA coefficients.
Always check to see how your software is defining the MA polynomial. For example, is the MA(1) polynomial $1 + \theta_1 B$ or $1 - \theta_1 B$?

Differencing

Often differencing is used to account for nonstationarity that occurs in the form of trend and/or seasonality.

The difference $x_t - x_{t-1}$ can be expressed as $(1-B)x_t$.

An alternative notation for a difference is

$$\nabla = 1 - B$$

Thus

$$\nabla x_t = (1-B)x_t = x_t - x_{t-1}$$

A subscript defines a difference of a lag equal to the subscript. For instance,

$$\nabla_{12}\, x_t = x_t - x_{t-12}$$

This type of difference is often used with monthly data that exhibits seasonality. The idea is that differences from the previous year
may be, on average, about the same for each month of a year.

A superscript says to repeat the differencing the specified number of times. As an example,

$$\nabla^2 x_t = (1-B)^2 x_t = (1 - 2B + B^2)x_t = x_t - 2x_{t-1} + x_{t-2}$$

In words, this is a first difference of the first differences.
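
These differences map directly onto base R's diff function; a small sketch with a placeholder series:

# The three differences above, written with diff(); x is a placeholder series.
x <- ts(rnorm(48))
d1  <- diff(x)                   # (1 - B)x_t = x_t - x_{t-1}
d12 <- diff(x, lag = 12)         # x_t - x_{t-12}
d2  <- diff(x, differences = 2)  # (1 - B)^2 x_t = x_t - 2x_{t-1} + x_{t-2}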

Source URL: https://onlinecourses.science.psu.edu/stat510/node/63


3.1 Non-seasonal ARIMA Models


ARIMA models, also called Box-Jenkins models, are models that may possibly include autoregressive terms, moving average terms, and
differencing operations. Various abbreviations are used:

When a model only involves autoregressive terms it may be referred to as an AR model. When a model only involves moving average
terms, it may be referred to as an MA model.
When no differencing is involved, the abbreviation ARMA may be used.

Note: This week we're only considering non-seasonal models. We'll expand our toolkit to include seasonal models next week.

Specifying the Elements of the Model

In most software programs, the elements in the model are specified in the order (AR order, differencing, MA order). As examples,

A model with (only) two AR terms would be specified as an ARIMA of order (2,0,0).

A MA(2) model would be specified as an ARIMA of order (0,0,2).

A model with one AR term, a first difference, and one MA term would have order (1,1,1).

For the last model, ARIMA(1,1,1), a model with one AR term and one MA term is being applied to the variable $z_t = x_t - x_{t-1}$. A first difference
might be used to account for a linear trend in the data.

The differencing order refers to successive first differences. For example, for a difference order = 2, the variable analyzed is $z_t = (x_t - x_{t-1}) - (x_{t-1} -
x_{t-2})$, the first difference of first differences. This type of difference might account for a quadratic trend in the data. A sketch of specifying these orders in R follows.
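
These specifications correspond directly to the order argument of base R's arima function; in this sketch, x is assumed to be a ts object:

# order = c(AR order, differencing, MA order)
fit200 <- arima(x, order = c(2, 0, 0))  # two AR terms only
fit002 <- arima(x, order = c(0, 0, 2))  # MA(2)
fit111 <- arima(x, order = c(1, 1, 1))  # one AR term, first difference, one MA term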

Identifying a Possible Model

Three items should be considered to determine a first guess at an ARIMA model: a time series plot of the data, the ACF, and the PACF.


Time series plot of the observed series.

In Lesson 1.1, we discussed what to look for: possible trend, seasonality, outliers, constant variance or nonconstant variance.

You won't be able to spot any particular model by looking at this plot, but you will be able to see the need for various possible actions.
If there's an obvious upward or downward linear trend, a first difference may be needed. A quadratic trend might need a 2nd order
difference (as described above). We rarely want to go much beyond two. In those cases, we might want to think about things like
smoothing, which we will cover later in the course. Over-differencing can cause us to introduce unnecessary levels of dependency
(difference white noise to obtain an MA(1); difference again to obtain an MA(2), etc.).
For data with a curved upward trend accompanied by increasing variance, you should consider transforming the series with either a
logarithm or a square root.

Note: Nonconstant variance in a series with no trend may have to be addressed with something like an ARCH model, which includes a
model for changing variation over time. We'll cover ARCH models later in the course.

ACF and PACF

The ACF and PACF should be considered together. It can sometimes be tricky going, but a few combined patterns do stand out. (These are
listed in Table 3.1 of the book on page 108.)

AR models have theoretical PACFs with non-zero values at the AR terms in the model and zero values elsewhere. The ACF will taper to
zero in some fashion. (Example [1]) An AR(1) model has an ACF with the pattern $\rho_k = \phi_1^k$.

An AR(2) has a sinusoidal ACF that converges to 0. (Example [2])


MA models have theoretical ACFs with non-zero values at the MA terms in the model and zero values elsewhere. (Example [3])
ARMA models (including both AR and MA terms) have ACFs and PACFs that both tail off to 0. These are the trickiest because the order
will not be particularly obvious. Basically you just have to guess that one or two terms of each type may be needed and then see what
happens when you estimate the model. (Example [4])
If the ACF and PACF do not tail off, but instead have values that stay close to 1 over many lags, the series is non-stationary and
differencing will be needed. Try a first difference and then look at the ACF and PACF of the differenced data.
If all autocorrelations are non-significant, then the series is random (white noise; the ordering matters, but the data are independent and
identically distributed). You're done at that point.
If you have taken first differences and all autocorrelations are non-significant, then the series is called a random walk and you are done.
(A possible model for a random walk is $x_t = \delta + x_{t-1} + w_t$. The data are dependent and are not identically distributed; in fact both the
mean and variance are increasing through time.)

Note: You might also consider examining plots of xt versus various lags of xt.


Estimating and Diagnosing a Possible Model

After you've made a guess (or two) at a possible model, use software such as R, Minitab, or SAS to estimate the coefficients. Most software
will use maximum likelihood estimation methods to make the estimates. Once the model has been estimated, do the following:

Look at the significance of the coefficients. In R, p-values arent given. For each coefficient, calculate z = estimated coeff. / std. error of
coeff. If |z| > 1.96, the estimated coefficient is significantly different from 0.
Look at the ACF of the residuals. For a good model, all autocorrelations for the residual series should be non-significant. If this isnt the
case, you need to try a different model.
Look at Box-Pierce (Ljung) tests for possible residual autocorrelation at various lags (see Lesson 3.2 for a description of this test).
If non-constant variance is a concern, look at a plot of residuals versus fits and/or a time series plot of the residuals.
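
As a quick sketch of the z calculation above, using the AR(1) estimates that appear later in this lesson:

z <- 0.6909 / 0.1094 # estimated coefficient divided by its standard error
abs(z) > 1.96 # TRUE (z = 6.315), so the coefficient differs significantly from 0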

If something looks wrong, you'll have to revise your guess at what the model might be. This might involve adding parameters or re-interpreting
the original ACF and PACF to possibly move in a different direction.

What if More Than One Model Looks Okay?

Sometimes more than one model can seem to work for the same dataset. When that's the case, some things you can do to decide between
the models are:

Possibly choose the model with the fewest parameters.


Examine standard errors of forecast values. Pick the model with the generally lowest standard errors for predictions of the future.
Compare models with regard to statistics such as the MSE (the estimate of the variance of the \(w_t\)), AIC, AICc, and SIC (also called BIC).
Lower values of these statistics are desirable.

AIC, AICc, and SIC (or BIC) are defined and discussed on pages 52-53 of our book. The statistics combine the estimate of the
variance with values of the sample size and number of parameters in the model.

One reason that two models may seem to give about the same results is that, with certain coefficient values, two different models can
sometimes be nearly equivalent when they are each converted to an infinite order MA model. [Every ARIMA model can be converted to an
infinite order MA; this is useful for some theoretical work, including the determination of standard errors for forecast errors.] We'll see more
about this in Lesson 3.2.

Example 1: The Lake Erie data from the Week 1 assignment. The series is n = 40 consecutive annual measurements of the level of Lake Erie in
October.

Identifying the model.

A time series plot of the data is the following:

[Time series plot of the Lake Erie data]

There's a possibility of some overall trend, but it might look that way just because there was a big dip around the 15th time point or so.
We'll go ahead without worrying about trend.

The ACF and the PACF of the series are the following. (They start at lag 1).

[ACF and PACF plots of the Lake Erie series]

The PACF shows a single spike at the first lag and the ACF shows a tapering pattern. An AR(1) model is indicated.

Estimating the Model

We used an R script written by one of the authors of our book (Stoffer) to estimate the AR(1) model. Here's part of the output:

Coefficients:

ar1 xmean

Estimate 0.6909 14.6309

Std. Error 0.1094 0.5840


sigma^2 estimated as 1.447: log likelihood = -64.47,


$AIC
[1] 1.469364
$AICc
[1] 1.536030
$BIC
[1] 0.5538077

Where the coefficients are listed, notice the heading xmean. This is the estimated mean of the series based on this model, not the
intercept. The model used by the software is of the form

\[ (x_t - \mu) = \phi_1 (x_{t-1} - \mu) + w_t. \]

The estimated model can be written as

\[ (x_t - 14.6309) = 0.6909(x_{t-1} - 14.6309) + w_t. \]

This is equivalent to

\[ x_t = 14.6309 - (14.6309)(0.6909) + 0.6909x_{t-1} + w_t = 4.522 + 0.6909x_{t-1} + w_t. \]

The AR coefficient is statistically significant (z = 0.6909/0.1094 = 6.315). It's not necessary to test the mean coefficient; we know that it's not
0.

The author's routine also gives residual diagnostics in the form of several graphs. Here's that part of the output:

[Diagnostic plots: standardized residuals, ACF of residuals, normal Q-Q plot, and Ljung-Box p-values]

Interpretations of the Diagnostics

The time series plot of the standardized residuals mostly indicates that there's no trend in the residuals, no outliers, and, in general, no
changing variance across time.

The ACF of the residuals shows no significant autocorrelations, which is a good result.

The Q-Q plot is a normal probability plot. It doesn't look too bad, so the assumption of normally distributed residuals looks okay.

The bottom plot gives p-values for the Ljung-Box-Pierce statistics for each lag up to 20. These statistics consider the accumulated residual
autocorrelation from lag 1 up to and including the lag on the horizontal axis. The dashed blue line is at .05. All p-values are above it. That's a
good result; we want non-significant values for this statistic when looking at residuals. Read Lesson 3.2 of this week for more about the
Ljung-Box-Pierce statistic.

All in all, the fit looks good. There's not much need to continue, but just to show you how things look when incorrect models are used, we will
present another model.

Example 1 Continued: Output for a Wrong Model

Suppose that we had misinterpreted the ACF and PACF of the data and had tried an MA(1) model rather than the AR(1) model.

Here's the output:

Coefficients:

ma1 xmean

Estimate 0.5570 14.5881

Std. Error 0.1251 0.3337

sigma^2 estimated as 1.870: log likelihood = -69.46


$AIC
[1] 1.725741
$AICc
[1] 1.792408
$BIC
[1] 0.8101852

The MA(1) coefficient is significant (you can check it), but overall these statistics look worse than those for the right model. The estimate of the
variance is 1.870, compared to 1.447 for the AR(1) model, and the AIC and BIC statistics are higher for the MA(1) than for the AR(1). That's not
good.

The diagnostic graphs aren't good for the MA(1). The ACF has a significant spike at lag 2, and several of the Ljung-Box-Pierce p-values are
below .05. We don't want them there. So the MA(1) isn't a good model.


Example 1 Continued: A Model with One Too Many Coefficients

Suppose we try a model (still the Lake Erie data) with one AR term and one MA term. Here's some of the output:

ar1 ma1 xmean

Estimate 0.7362 -0.0909 14.6307

Std. Error 0.1362 0.1969 0.6142

sigma^2 estimated as 1.439: log likelihood = -64.36,


$AIC
[1] 1.514079


$AICc
[1] 1.592650
$BIC
[1] 0.640745

Note that the MA(1) coefficient is not significant (z = -0.0909/0.1969 = -0.46, which is less than 1.96 in absolute value). The MA(1) term could be dropped, and
that takes us back to the AR(1). Also, the estimate of the variance is barely better than the estimate for the AR(1) model, and the AIC and BIC
statistics are higher for the ARMA(1,1) than for the AR(1).

A Troublesome Example - Parameter Redundancy

Suppose that the model for your data is white noise. If this is true for every t, then it is true for t − 1 as well. In other words:

\[ x_t = w_t \]

\[ x_{t-1} = w_{t-1} \]

Let's multiply both sides of the second equation by 0.5:

\[ 0.5x_{t-1} = 0.5w_{t-1} \]

Next, we will move both terms over to one side:

\[ 0 = -0.5x_{t-1} + 0.5w_{t-1} \]

Because the data are white noise, \(x_t = w_t\), so we can add \(x_t\) to the left side and \(w_t\) to the right side:

\[ x_t = -0.5x_{t-1} + w_t + 0.5w_{t-1} \]

This is an ARMA(1, 1)! The problem is that we know from the original equations that the series is white noise. If we looked at the ACF, what would
we see? The ACF of white noise: a spike at lag zero and then nothing else. It also means that if we take a white noise process and try to fit
an ARMA(1, 1), R will do it and will come up with coefficients that look something like what we have above (the AR and MA polynomials share a
common factor that cancels). This is one of the reasons why we need to look at the ACF and the PACF plots and other diagnostics. We prefer a model with the fewest
parameters. This example also shows that, for certain parameter values, ARMA models can appear very similar to one another.
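
To see this redundancy in action, here is a minimal sketch using base R's arima function on simulated data (the seed and sample size are arbitrary, and your estimates will vary):

set.seed(42) # arbitrary seed, so the sketch is reproducible
wn <- rnorm(200) # simulated white noise
fit <- arima(wn, order = c(1, 0, 1)) # fit an (unneeded) ARMA(1,1)
fit$coef # the ar1 and ma1 estimates tend to be close in size and opposite in sign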

R Code for Example 1

Here's how we accomplished the work for the example in this lesson.

We first loaded the astsa library discussed in Lesson 1. It's a set of scripts written by Stoffer, one of the textbook's authors. If you installed the
astsa package during Week 1, then you only need the library command.

Use the command library("astsa"). This makes the downloaded routines accessible.

The session for creating Example 1 then proceeds as follows:

xerie = scan("eriedata.dat") # reads the data

xerie = ts(xerie) # makes sure xerie is a time series object
plot(xerie, type = "b") # plots xerie
acf2(xerie) # authors' routine for graphing both the ACF and the PACF
sarima(xerie, 1, 0, 0) # the AR(1) model, estimated with the authors' routine
sarima(xerie, 0, 0, 1) # the incorrect MA(1) model
sarima(xerie, 1, 0, 1) # the over-parameterized ARMA(1,1) model

In Lesson 3.3, we'll discuss the use of ARIMA models for forecasting. Here's how you would forecast for the next 4 times past the end of the
series using the authors' source code and the AR(1) model for the Lake Erie data.

sarima.for(xerie, 4, 1, 0, 0) # four forecasts from an AR(1) model for the erie data

You'll get forecasts for the next four times, the standard errors for these forecasts, and a graph of the time series along with the forecasts.
More details about forecasting will be given in Lesson 3.3.

Some useful information about the authors' scripts is at www.stat.pitt.edu/stoffer/tsa3 [5].

Source URL: https://onlinecourses.science.psu.edu/stat510/node/64

Links:
[1] https://onlinecourses.science.psu.edu/stat510/sites/onlinecourses.science.psu.edu.stat510/files/L03/ar1_example.png
[2] https://onlinecourses.science.psu.edu/stat510/sites/onlinecourses.science.psu.edu.stat510/files/L03/ar2_example.png
[3] https://onlinecourses.science.psu.edu/stat510/sites/onlinecourses.science.psu.edu.stat510/files/L03/ma_example.png
[4] https://onlinecourses.science.psu.edu/stat510/sites/onlinecourses.science.psu.edu.stat510/files/L03/arma11_example.png
[5] http://www.stat.pitt.edu/stoffer/tsa3


Published on STAT 510 (https://onlinecourses.science.psu.edu/stat510)


Home > 3.2 Diagnostics

3.2 Diagnostics
Analyzing possible statistical significance of autocorrelation values

The Ljung-Box statistic, also called the modified Box-Pierce statistic, is a function of the accumulated sample autocorrelations, \(r_j\), up to any
specified time lag m. As a function of m, it is determined as

\[ Q(m) = n(n+2) \sum_{j=1}^{m} \frac{r_j^2}{n-j}, \]

where n = number of usable data points after any differencing operations. (Please visit forvo.com [1] for the proper pronunciation of Ljung.)

As an example,

\[ Q(3) = n(n+2)\left( \frac{r_1^2}{n-1} + \frac{r_2^2}{n-2} + \frac{r_3^2}{n-3} \right). \]

Use of the Statistic

This statistic can be used to examine residuals from a time series model in order to see if all underlying population autocorrelations for the
errors may be 0 (up to a specified point).

For nearly all models that we consider in this course, the residuals are assumed to be white noise, meaning that they are independently,
identically distributed. Thus, as we saw last week, the ideal ACF for residuals is that all autocorrelations are 0, which would make
Q(m) equal to 0 for any lag m. A significant Q(m) for residuals indicates a possible problem with the model.

(Remember Q(m) measures accumulated autocorrelation up to lag m.)
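
In R, one way to carry out this test on a set of residuals is the built-in Box.test function. A minimal sketch, where resids and p are placeholders for your residual series and the number of estimated coefficients:

Box.test(resids, lag = 12, type = "Ljung-Box", fitdf = p) # Q(12), compared to chi-square with df = 12 - p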




Distribution of Q(m)

There are two cases:

1. When the \(r_j\) are sample autocorrelations for residuals from a time series model, the null hypothesis distribution of Q(m) is approximately a \(\chi^2\)
distribution with df = m − p, where p = number of coefficients in the model.

(Note: m is the lag to which we're accumulating, so in essence the statistic is not defined until m > p.)

2. When no model has been used, so that the ACF is for raw data, p = 0 and the null distribution of Q(m) is approximately a \(\chi^2\) distribution with
df = m.

p-Value Determination

In both cases, a p-value is calculated as the probability beyond Q(m) in the relevant \(\chi^2\) distribution. A small p-value (for instance, p-value < .05)
indicates the possibility of non-zero autocorrelation within the first m lags.

Example 1

Below is Minitab output for the Lake Erie level data that was used for homework 1 and in Lesson 3.1. A useful model is an AR(1) with a
constant, so p = 2.

Final Estimates of Parameters

Type Coef SE Coef T P

AR 1 0.7078 0.1161 6.10 0.000

Constant 4.2761 0.1953 21.89 0.000

Mean 14.6349 0.6684

Modified Box-Pierce (Ljung-Box) Chi-Square statistic

Lag 12 24 36 48

Chi-Square 9.4 23.2 30.0 *

DF 10 22 34 *

P-Value 0.493 0.390 0.662 *



Minitab gives p-values for accumulated lags that are multiples of 12. The R sarima command gives a graph that shows p-values of the
Ljung-Box-Pierce tests for each lag (in steps of 1) up to a maximum lag that appears to depend on the sample size.

Interpretation of the Box-Pierce Results

Notice that the p-values for the modified Box-Pierce statistic are all well above .05, indicating non-significance. This is a desirable result. Remember
that there are only 40 data values, so there's not much data contributing to correlations at high lags. Thus, the results for m = 24 and m = 36 may
not be meaningful.

Graphs of ACF values

When you request a graph of the ACF values, significance limits are shown by R and by Minitab. In general, the limits for the autocorrelation
are placed at 0 ± 2 standard errors of \(r_k\). The formula used for the standard error depends upon the situation.

Within the ACF of residuals as part of the ARIMA routine, the standard errors are determined assuming the residuals are white noise.
The approximate formula for any lag is s.e. of \(r_k = 1/\sqrt{n}\).
For the ACF of raw data (the ACF command), the standard error at lag k is found as if the right model were an MA(k − 1). This allows the
possible interpretation that, if all autocorrelations past a certain lag are within the limits, the model might be an MA of order defined by the
last significant autocorrelation.
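
As a small sketch of the white-noise limits, using the n = 40 of the Lake Erie example:

n <- 40
2 / sqrt(n) # approximate significance limit, about 0.316; the plot shows 0 +/- this value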

Appendix: Standardized Residuals

What are standardized residuals in a time series framework? One of the things that we need to look at when we look at the diagnostics from a
regression fit is a graph of the standardized residuals. Let's review what this is for regular regression, where the standard deviation is \(\sigma\). The
standardized residual at observation i,

\[ \frac{y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij}}{\sigma}, \]

should be N(0, 1). We hope to see normality when we look at the diagnostic plots. Another way to think about this is

\[ \frac{y_i - \hat{y}_i}{\sqrt{\mathrm{Var}(y_i - \hat{y}_i)}}. \]

Now, with time series, things are very similar:

\[ \frac{x_t - \tilde{x}_t}{\sqrt{P_t^{t-1}}}, \]

where

\[ \tilde{x}_t = E(x_t \mid x_{t-1}, x_{t-2}, \ldots) \quad \text{and} \quad P_t^{t-1} = E\left[ (x_t - \tilde{x}_t)^2 \right]. \]

This is where the standardized residuals come from. This is also essentially how a time series is fit using R: we want to minimize the sum of
these squared values,

\[ \sum_{t=1}^{n} \left( \frac{x_t - \tilde{x}_t}{\sqrt{P_t^{t-1}}} \right)^2. \]

(In reality, it is slightly more complicated: the log-likelihood function is maximized, and this sum is one term of that function.)

Source URL: https://onlinecourses.science.psu.edu/stat510/node/65

Links:
[1] http://www.forvo.com/search/Ljung


Published on STAT 510 (https://onlinecourses.science.psu.edu/stat510)


Home > 3.3 Forecasting with ARIMA Models

3.3 Forecasting with ARIMA Models


Section 3.5 in the textbook gives a theoretical look at forecasting with ARIMA models. That presentation is a bit tough, but in practice it's easy
to understand how forecasts are created.

In an ARIMA model, we express \(x_t\) as a function of past values of x and/or past errors (as well as a present-time error). When we forecast a
value past the end of the series, on the right side of the equation we might need values from the observed series, or we might, in theory, need
values that aren't yet observed.

Example: Consider the AR(2) model \(x_t = \delta + \phi_1 x_{t-1} + \phi_2 x_{t-2} + w_t\). In this model, \(x_t\) is a linear function of the values of x at the previous two
times. Suppose that we have observed n data values and wish to use the observed data and estimated AR(2) model to forecast the values of
\(x_{n+1}\) and \(x_{n+2}\), the values of the series at the next two times past the end of the series. The equations for these two values are

\[ x_{n+1} = \delta + \phi_1 x_n + \phi_2 x_{n-1} + w_{n+1} \]

\[ x_{n+2} = \delta + \phi_1 x_{n+1} + \phi_2 x_n + w_{n+2} \]

To use the first of these equations, we simply use the observed values of \(x_n\) and \(x_{n-1}\) and replace \(w_{n+1}\) by its expected value of 0 (the assumed
mean for the errors).

The second equation, for forecasting the value at time n + 2, presents a problem: it requires the unobserved value of \(x_{n+1}\) (one time past the
end of the series). The solution is to use the forecasted value of \(x_{n+1}\) (the result of the first equation).

In general, the forecasting procedure, assuming a sample size of n, is as follows (a small R sketch of the recursion follows this list):

For any \(w_j\) with 1 ≤ j ≤ n, use the sample residual for time point j.
For any \(w_j\) with j > n, use 0 as the value of \(w_j\).
For any \(x_j\) with 1 ≤ j ≤ n, use the observed value of \(x_j\).
For any \(x_j\) with j > n, use the forecasted value of \(x_j\).
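
Here is a minimal sketch of that recursion for an AR(2); the function name and arguments are hypothetical (x is the observed series, and delta, phi1, phi2 are the estimated coefficients):

ar2_forecast <- function(x, delta, phi1, phi2, m) {
  n <- length(x)
  xx <- c(x, numeric(m)) # room for the m forecasts
  for (h in 1:m) {
    # future w's are replaced by 0; x's past time n use the earlier forecasts
    xx[n + h] <- delta + phi1 * xx[n + h - 1] + phi2 * xx[n + h - 2]
  }
  xx[(n + 1):(n + m)] # the m forecasts
}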

Notation: Our authors use the notation \(x_{n+m}^{n}\) to represent a forecast m times past the end of the observed series. The superscript is to be
read as "given data up to time n." Other authors use the notation \(x_n(m)\) to denote a forecast m times past time n.

To understand the formula for the standard error of the forecast error, we first need to define the concept of psi-weights.

Psi-weight representation of an ARIMA model

Any ARIMA model can be converted to an infinite order MA model:

\[ x_t = w_t + \psi_1 w_{t-1} + \psi_2 w_{t-2} + \cdots + \psi_k w_{t-k} + \cdots = \sum_{j=0}^{\infty} \psi_j w_{t-j}, \quad \text{where } \psi_0 = 1. \]

An important constraint, so that the model doesn't explode as we go back in time, is

\[ \sum_{j=0}^{\infty} |\psi_j| < \infty. \]

[On page 95 of our book, the authors define a causal model as one for which this constraint is in place, along with the additional restriction that
we can't express the value of the present x as a function of future values.]

The process of finding the psi-weight representation can involve a few algebraic tricks. Fortunately, R has a routine, ARMAtoMA, that will do it
for us. To illustrate how psi-weights may be determined algebraically, we'll consider a simple example.

Example: Suppose that an AR(1) model is \(x_t = 40 + 0.6x_{t-1} + w_t\).

For an AR(1) model, the mean \(\mu = \delta/(1 - \phi_1)\), so in this case \(\mu = 40/(1 - 0.6) = 100\). We'll define \(z_t = x_t - 100\) and rewrite the model as \(z_t = 0.6z_{t-1} + w_t\). (You can do the algebra to check that things match between the two expressions of the model.)

To find the psi-weight expression, we'll repeatedly substitute for the z on the right side in order to make the expression become one that only
involves w values.

\(z_t = 0.6z_{t-1} + w_t\), so \(z_{t-1} = 0.6z_{t-2} + w_{t-1}\).

Substitute the right side of the second expression for \(z_{t-1}\) in the first expression.


This gives \(z_t = 0.6(0.6z_{t-2} + w_{t-1}) + w_t = 0.36z_{t-2} + 0.6w_{t-1} + w_t\).

Next, note that \(z_{t-2} = 0.6z_{t-3} + w_{t-2}\).

Substituting this into the equation gives \(z_t = 0.216z_{t-3} + 0.36w_{t-2} + 0.6w_{t-1} + w_t\).

If you keep going, you'll soon see that the pattern leads to

\[ z_t = x_t - 100 = \sum_{j=0}^{\infty} (0.6)^j w_{t-j}. \]

Thus the psi-weights for this model are given by \(\psi_j = (0.6)^j\) for \(j = 0, 1, 2, \ldots\).

In R, the command ARMAtoMA(ar = .6, ma = 0, 12) gives the first 12 psi-weights, \(\psi_1\) through \(\psi_{12}\), in scientific notation.

For the AR(1) with AR coefficient 0.6 they are:

[1] 0.600000000 0.360000000 0.216000000 0.129600000 0.077760000 0.046656000

[7] 0.027993600 0.016796160 0.010077696 0.006046618 0.003627971 0.002176782

Remember that \(\psi_0 = 1\). R doesn't give this value. Its listing starts with \(\psi_1\), which equals 0.6 in this case.
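
As a quick check that the routine matches the closed form derived above:

all.equal(ARMAtoMA(ar = 0.6, ma = 0, 12), 0.6^(1:12)) # TRUE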

MA Models: The psi-weights are easy for an MA model because the model already is written in terms of the errors. The psi-weights are 0 for
lags past the order of the MA model and equal the coefficient values for the lags of the errors that are in the model. Remember that we always
have \(\psi_0 = 1\).

Standard error of the forecast error for a forecast using an ARIMA model

Without proof, we'll state a result:

The variance of the difference between the forecasted value at time n + m and the (unobserved) value at time n + m is

\[ \mathrm{Var}(x_{n+m}^{n} - x_{n+m}) = \sigma_w^2 \sum_{j=0}^{m-1} \psi_j^2. \]

Thus the estimated standard deviation of the forecast error at time n + m is

\[ \text{Standard error of } (x_{n+m}^{n} - x_{n+m}) = \sqrt{\hat{\sigma}_w^2 \sum_{j=0}^{m-1} \psi_j^2}. \]
Note that the summation of squared psi-weights begins with \(\psi_0^2 = 1\) and runs to m − 1, one less than the number of times
ahead for which we're forecasting.

When forecasting m = 1 time past the end of the series, the standard error of the forecast error is

\[ \text{Standard error of } (x_{n+1}^{n} - x_{n+1}) = \sqrt{\hat{\sigma}_w^2 (1)}. \]

When forecasting the value m = 2 times past the end of the series, the standard error of the forecast error is

\[ \text{Standard error of } (x_{n+2}^{n} - x_{n+2}) = \sqrt{\hat{\sigma}_w^2 (1 + \psi_1^2)}. \]

Notice that the variance will not be too big when m = 1. But as you predict farther out into the future, the variance will increase. When m is very
large, we get the total variance of the series. In other words, if you are trying to predict very far out, the forecast variance is the variance of the
entire time series, as if you hadn't even looked at what was going on previously.

95% Prediction Interval for \(x_{n+m}\)

With the assumption of normally distributed errors, a 95% prediction interval for \(x_{n+m}\), the future value of the series at time n + m, is

\[ x_{n+m}^{n} \pm 1.96 \sqrt{\hat{\sigma}_w^2 \sum_{j=0}^{m-1} \psi_j^2}. \]

Example: Suppose that an AR(1) model is estimated to be \(x_t = 40 + 0.6x_{t-1} + w_t\). This is the same model used earlier in this handout, so the
psi-weights we got there apply.

Suppose that we have n = 100 observations, \(\hat{\sigma}_w^2 = 4\), and \(x_{100} = 80\). We wish to forecast the values at both times 101 and 102, and create
prediction intervals for both forecasts.

First we forecast time 101.

\[ x_{101} = 40 + 0.6x_{100} + w_{101} \]

\[ x_{101}^{100} = 40 + 0.6(80) + 0 = 88 \]

The standard error of the forecast error at time 101 is

\[ \sqrt{\hat{\sigma}_w^2 \sum_{j=0}^{0} \psi_j^2} = \sqrt{4(1)} = 2. \]

The 95% prediction interval for the value at time 101 is 88 ± 1.96(2), which is 84.08 to 91.92. We are therefore 95% confident that the
observation at time 101 will be between 84.08 and 91.92. If we repeated this exact process, then 95% of the computed prediction intervals
would contain the true value of x at time 101.

The forecast for time 102 is

\[ x_{102}^{100} = 40 + 0.6(88) + 0 = 92.8 \]

Note that we used the forecasted value for time 101 in the AR(1) equation.

The relevant standard error is

\[ \sqrt{\hat{\sigma}_w^2 \sum_{j=0}^{1} \psi_j^2} = \sqrt{4(1 + 0.6^2)} \approx 2.332. \]

A 95% prediction interval for the value at time 102 is 92.8 ± (1.96)(2.332).
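
A minimal sketch that reproduces these hand calculations in R:

psi <- ARMAtoMA(ar = 0.6, ma = 0, 1) # psi_1 for this AR(1)
se <- sqrt(4 * cumsum(c(1, psi^2))) # SEs for m = 1 and m = 2: 2 and 2.332
fc <- c(88, 92.8) # the two forecasts from above
cbind(lower = fc - 1.96 * se, upper = fc + 1.96 * se) # the 95% prediction intervals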

To forecast using an ARIMA model in R, we recommend our textbook author's script called sarima.for. (It is part of the astsa library
recommended previously.)

Example: In the homework for Week 2, problem 5 asked you to suggest a model for a time series of stride lengths measured every 30 seconds
for a runner on a treadmill.

From R, the estimated coefficients for an AR(2) model and the estimated variance are as follows for a similar data set with n = 90 observations:

Coefficients:

ar1 ar2 xmean

1.1480 -0.3359 48.7476

s.e. 0.1009 0.1087 1.8855

sigma^2 estimated as 11.47

The command

sarima.for(stridelength, 6, 2, 0, 0) # 6 forecasts with an AR(2) model for stridelength



will give forecasts and standard errors of the prediction errors for the next six times past the end of the series. Here's the output (slightly edited to
fit here):

$pred
Time Series:
Start = 91
End = 96
[1] 69.78674 64.75441 60.05661 56.35385 53.68102 51.85633

$se
Time Series:
Start = 91
End = 96
[1] 3.386615 5.155988 6.135493 6.629810 6.861170 6.962654

The forecasts are given in the first batch of values under $pred and the standard errors of the forecast errors are given in the last line in the
batch of results under $se.

The procedure also gave this graph, which shows the series followed by the forecasts as a red line and the upper and lower prediction limits as
blue dashed lines:

Psi-Weights for the Estimated AR(2) for the Stride Length Data

If we wanted to verify the standard error calculations for the six forecasts past the end of the series, we would need to know the psi-weights. To
get them, we need to supply the estimated AR coefficients for the AR(2) model to the ARMAtoMA command.

The R command in this case is ARMAtoMA(ar = c(1.148, -0.3359), ma = 0, 5)

This will give the psi-weights \(\psi_1\) through \(\psi_5\) in scientific notation. The answer provided by R is:

[1] 1.148000e+00 9.820040e-01 7.417274e-01 5.216479e-01 3.497056e-01

(Remember that \(\psi_0 = 1\) in all cases.)

The output for estimating the AR(2) included this estimate of the error variance:

sigma^2 estimated as 11.47

As an example, the standard error of the forecast error for 3 times past the end of the series is

\[ \sqrt{\hat{\sigma}_w^2 \sum_{j=0}^{2} \psi_j^2} = \sqrt{11.47(1 + 1.148^2 + 0.982^2)} \approx 6.1357, \]

which, except for round-off error, matches the value of 6.135493 given as the third standard error in the sarima.for output above.
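
The same check can be done for all six standard errors at once, using the estimates above:

psi <- ARMAtoMA(ar = c(1.148, -0.3359), ma = 0, 5) # psi_1 through psi_5
sqrt(11.47 * cumsum(c(1, psi^2))) # matches the $se values above, up to rounding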

Where Will the Forecasts End Up?

For a stationary series and model, the forecasts of future values will eventually converge to the mean and then stay there. Note below what
happened with the stride length forecasts when we asked for 30 forecasts past the end of the series. [The command was sarima.for(stridelength, 30, 2, 0, 0).]
The forecasts level off at about 48.7476, the estimated mean, and stay there.

$pred
Time Series:
Start = 91
End = 120
[1] 69.78674 64.75441 60.05661 56.35385 53.68102 51.85633 50.65935 49.89811
[9] 49.42626 49.14026 48.97043 48.87153 48.81503 48.78339 48.76604 48.75676
[17] 48.75192 48.74949 48.74833 48.74780 48.74760 48.74753 48.74753 48.74755
[25] 48.74757 48.74759 48.74760 48.74761 48.74762 48.74762

The graph showing the series and the six prediction intervals is the following


Source URL: https://onlinecourses.science.psu.edu/stat510/node/66

