Contact:
Bernhard Lamark lamark@gmail.com
Philipp Jan Siegert siegert@pp-services.de
Jon Walle jon.walle@gmail.com
www.globalfinance.org
Volatility Modeling – From ARMA to ARCH
TABLE OF CONTENTS
I. HETEROSKEDASTICITY IN ECONOMETRICS
1. AUTOREGRESSIVE MODELS
2. THE ARCH MODEL
3. THE GARCH MODEL
4. ARCH-M
I. Heteroskedasticity in Econometrics
2. Heteroskedasticity
The effect of a violation of this assumption is that the OLS estimator remains unbiased and consistent, but it is no longer BLUE, and as a result the regression output in terms of test statistics can no longer be relied upon. This is because the variance, which lies at the heart of these statistics, is no longer constant and will hence be misleading.
have the freedom to choose a level of consumption according to preference, while those with a lower income are forced to consume more or less everything. The consequence is that the error term, and hence the variance, will vary with the explanatory variable X.
b. Time series
Heteroskedasticity in time series takes a graphically different appearance, as the issue here is that the variance, or volatility, varies over time. Engle (1982) and others have examined the properties of the volatility of financial markets, and there is wide recognition of the presence of heteroskedasticity in the distribution of returns. Mandelbrot (2002) describes this phenomenon as a clustering of volatility, where a period of high volatility is likely to be followed by another period of high volatility, and vice versa. By using Microsoft Excel to generate a graphical output of the volatility of a homoskedastic and a heteroskedastic time series process, we obtained the results depicted in Figures 3 and 4 respectively. Intuitively, the properties of the heteroskedastic output resemble the sometimes calm, sometimes turbulent volatilities observed in financial markets.
Figure 3: Volatility of a homoskedastic process (vertical axis 0% to 25%).
Figure 4: Volatility of a heteroskedastic process (vertical axis 0% to 25%).
Historical volatility
Simply using all information on past price movements does in fact utilise the heteroskedastic properties of financial markets to some extent. By using the formula

σ = √( 1/(N−1) · Σ_{t=1}^{N} (R_t − R̄)² )    (1)

we apply the assumption that all past prices are equally relevant in shaping future volatility. Intuitively this assumption is too crude, as more recent volatility is likely to be more relevant than that of several years ago and should
hence be given a relatively higher weight in the calculation. A simple way to counter this problem is to use only the last 30 days to calculate the historical volatility, and this model is in fact widely used by actors in the financial markets. The model gives volatility older than 30 days a weight of zero and puts equal weight on the volatility of the last 30 days.
This model is, however, still crude, and more sophisticated models are frequently used, moving into the area covered by models such as the exponentially weighted volatility models.
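As a sketch of the idea behind such models (the recursion below is the standard exponentially weighted scheme, not taken from this text; the decay factor 0.94 is the conventional RiskMetrics choice), recent squared returns receive geometrically higher weights than old ones:

```python
import numpy as np

def ewma_volatility(returns, decay=0.94):
    """Exponentially weighted volatility: var_t = decay * var_{t-1}
    + (1 - decay) * r_{t-1}^2, so recent returns dominate the estimate."""
    var = np.var(returns[:30])  # seed with an equal-weight estimate
    for r in returns[30:]:
        var = decay * var + (1 - decay) * r ** 2
    return np.sqrt(var)

# Illustrative data: a long calm spell followed by a recent turbulent spell.
rng = np.random.default_rng(0)
calm = rng.normal(0, 0.01, 400)
turbulent = rng.normal(0, 0.03, 100)
returns = np.concatenate([calm, turbulent])

# Because recent (turbulent) data carries the highest weight, the EWMA
# estimate exceeds the equal-weight full-sample standard deviation.
print(ewma_volatility(returns) > np.std(returns))
```

The contrast with equation 1 is the weighting only: the full-sample formula treats a return from years ago the same as yesterday's.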
1. Autoregressive Models
To fully comprehend the GARCH model introduced by Bollerslev (1986) there should
be a clear understanding of the underlying assumptions and models from which
GARCH is derived. In its simplest form, an autoregressive model is a model in which the statistical properties of the past behaviour of a variable y_t are used to predict its behaviour in the future. In other words, we can predict the value of the variable y_{t+1} by looking at the sum of the weighted values that y_t took in previous periods plus an error term

y_t = µ + Σ_{i=1}^{p} φ_i y_{t−i} + ε_t    (2)

where µ is the mean, φ_i are the weights and y_{t−i} is the value y took t − i periods ago. We may even want to forecast based upon only the immediately preceding value of y_t. Such a model is often referred to as an AR(1), or first-order autoregressive, process.
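A short simulation (illustrative, not from the text) makes the AR(1) idea concrete: with φ close to one the series is persistent, while φ = 0 gives pure noise:

```python
import numpy as np

def simulate_ar1(mu, phi, n, seed=0):
    """Simulate y_t = mu + phi * y_{t-1} + eps_t, an AR(1) process."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = mu + phi * y[t - 1] + rng.normal()
    return y

def lag1_autocorr(y):
    """Lag-1 autocorrelation: close to phi for an AR(1) series."""
    return np.corrcoef(y[:-1], y[1:])[0, 1]

persistent = simulate_ar1(mu=0.0, phi=0.9, n=2000)
noise = simulate_ar1(mu=0.0, phi=0.0, n=2000)

print(round(lag1_autocorr(persistent), 2), round(lag1_autocorr(noise), 2))
```

The first printed autocorrelation sits near the chosen φ of 0.9, while the second hovers near zero, which is exactly the distinction between an autoregressive series and white noise.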
y_t = µ + Σ_{i=1}^{q} θ_i ε_{t−i} + ε_t    (3)

where µ is the mean, θ_i are the weights, and ε_{t−i} and ε_t are the previous white noise disturbance terms and the current disturbance term respectively. This is referred to as a moving average model, or MA(q). By combining equations 2 and 3 we get the ARMA(p, q) model, a tool for predicting future values of a variable y_t

y_t = µ + Σ_{i=1}^{p} φ_i y_{t−i} + Σ_{i=1}^{q} θ_i ε_{t−i} + ε_t    (4)
2. The ARCH Model

An ARCH(1) model, where the conditional variance depends on only one lagged squared error, is given by

σ_t² = α₀ + α₁ε²_{t−1}    (5)
We can capture more of the dependence in the conditional variance by increasing the number of lags, p, giving us an ARCH(p) model

σ_t² = α₀ + Σ_{i=1}^{p} α_i ε²_{t−i}    (6)

We can illustrate the effect on the conditional variance of putting more emphasis on the squared error term, as shown in Figure 5.
As the figure indicates, when we give little or no weight to the squared error term, the series is completely random, i.e. not autoregressive. As α₁ approaches one, however, we clearly see how the time series becomes autoregressive, with a period of large movements often followed by another period of movements of similar size. The bottom chart, where α₁ is set to 0.8, can be considered to represent an actual stock return series most closely.
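The effect of α₁ can be reproduced in a few lines (a simulation sketch, not the authors' Excel exercise): returns are drawn as ε_t = σ_t·z_t with σ_t² = α₀ + α₁ε²_{t−1}, and the autocorrelation of the squared returns, a simple clustering measure, rises with α₁:

```python
import numpy as np

def simulate_arch1(alpha0, alpha1, n, seed=1):
    """Simulate ARCH(1): sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2,
    eps_t = sigma_t * z_t with z_t standard normal."""
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    var = alpha0  # starting variance for the first draw
    for t in range(n):
        eps[t] = np.sqrt(var) * rng.normal()
        var = alpha0 + alpha1 * eps[t] ** 2  # variance for the next period
    return eps

def squared_autocorr(eps):
    """Lag-1 autocorrelation of squared returns: near zero for white noise,
    clearly positive when volatility clusters."""
    s = eps ** 2
    return np.corrcoef(s[:-1], s[1:])[0, 1]

random_series = simulate_arch1(alpha0=1.0, alpha1=0.0, n=5000)
clustered_series = simulate_arch1(alpha0=1.0, alpha1=0.8, n=5000)

print(squared_autocorr(random_series) < squared_autocorr(clustered_series))
```

With α₁ = 0 the squared returns are uncorrelated; with α₁ = 0.8 a large movement raises next period's variance, reproducing the clustering visible in the bottom chart of Figure 5.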
ARCH Shortcomings
Even though the ARCH model was useful at the time, it has its shortcomings. For instance, we do not know how many lags, p, we should apply for the best results, and the potential number of lags required to capture all of the dependence in the conditional variance could be very large, making the model not very parsimonious.
Intuitively, the more parameters we have in the model, the more likely it will be that
one of them will have a negative estimated value.
3. The GARCH Model

In effect we can forecast the next period's variance, examples of which will be shown below (see also Engle, 2001), as a weighted average of:
• the long-run average variance (the mean),
• the variance predicted for this period (the GARCH term) and,
• information about volatility during the previous period, i.e. the most recent squared residual (the ARCH term).
For the GARCH(1,1) model this gives

σ_t² = α₀ + α₁ε²_{t−1} + β₁σ²_{t−1}    (7)

where α₀ is the constant, ε²_{t−1} is the ARCH term and σ²_{t−1} is the GARCH term.
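The GARCH(1,1) recursion can be made concrete in a few lines (the coefficient values below are illustrative, not estimates from this text):

```python
def garch_variance(returns, alpha0, alpha1, beta1):
    """Filter the GARCH(1,1) conditional variance:
    var_t = alpha0 + alpha1 * eps_{t-1}^2 + beta1 * var_{t-1}."""
    long_run = alpha0 / (1 - alpha1 - beta1)  # unconditional (long-run) variance
    var = long_run                            # start at the long-run level
    history = []
    for r in returns:
        history.append(var)
        var = alpha0 + alpha1 * r ** 2 + beta1 * var
    return history, long_run

# A single large return (5%) raises next period's variance, which then
# decays back toward the long-run level at rate alpha1 + beta1.
history, long_run = garch_variance([0.0, 0.05, 0.0, 0.0], alpha0=0.00001,
                                   alpha1=0.1, beta1=0.85)
```

Here `history[2]` jumps above `history[1]` after the shock and `history[3]` falls back toward `long_run`, which is the weighted-average behaviour described in the bullet points above.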
GARCH Shortcomings
The model only considers the magnitude of the returns, but not the direction. Investors
act differently depending on whether a share moves up or down which is why volatility
is not symmetric in relation to directional movements. Market declines forecast higher
volatility than comparable market increases. This is referred to as the leverage effect
(see for example Gourieroux and Jasiak, 2002). Both ARCH and GARCH fail to
capture this fact and as such may not produce accurate forecasts. Recent models
building on ARCH and GARCH such as the Threshold ARCH (TARCH) model have
tried to overcome this problem.
4. ARCH-M
Most models used in finance suppose that investors should be rewarded for taking
additional risk by obtaining a higher return. The ARCH-M model is often used in
financial applications where the expected return on an asset is related to the expected
asset risk which itself is time-variant. The estimated coefficient on the expected risk is a
measure of the risk-return trade-off.
ARCH-M is given by

y_t = µ + λσ_t² + ε_t    (8)

If λ is positive and significant, then increased risk, given by an increase in the conditional variance, leads to a rise in the mean return. In other words, we may interpret λ as a risk premium.
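The risk-return link can be sketched in simulation (parameter values are illustrative and deliberately exaggerated to make the effect visible, not estimates from any data in this text): an ARCH(1) conditional variance feeds into the mean equation, so with λ > 0 high-variance periods carry higher expected returns:

```python
import numpy as np

def simulate_arch_m(mu, lam, alpha0, alpha1, n, seed=2):
    """Simulate ARCH-M: y_t = mu + lam * var_t + eps_t, where
    var_t = alpha0 + alpha1 * eps_{t-1}^2 is an ARCH(1) variance."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    variances = np.zeros(n)
    eps_prev = 0.0
    for t in range(n):
        var = alpha0 + alpha1 * eps_prev ** 2
        eps = np.sqrt(var) * rng.normal()
        y[t] = mu + lam * var + eps  # risk premium: lam times conditional variance
        variances[t] = var
        eps_prev = eps
    return y, variances

y, variances = simulate_arch_m(mu=0.0, lam=10.0, alpha0=0.0001, alpha1=0.5,
                               n=50000)

# With lam > 0, average returns are higher in high-variance periods.
high = variances > np.median(variances)
print(y[high].mean() > y[~high].mean())
```

Splitting the sample at the median conditional variance shows the compensation for risk directly: the high-variance half earns the higher mean return.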
• First, we could have taken the daily unadjusted close prices and computed the day-to-day changes, either as logarithmic returns or as percentage returns. While the choice of logarithmic or percentage returns is of minor importance for the discussion here, the problem that the unadjusted close prices did not adjust for stock splits and dividend payments was more severe. On the day of a 2:1 stock split, the time series showed a percentage return change of more than 100%, so we quickly concluded that unadjusted close prices are inappropriate for our needs.
• Second, to circumvent the dividend and stock split issues we could have taken the adjusted close prices as calculated by Yahoo Finance. In fact, we first used this approach but later recognised that the resulting daily returns showed too many large movements, as can be seen in Figure 6. Two problems with Yahoo Finance's adjusted close prices could be identified. First, Yahoo Finance takes dividend payments into account on the day the dividend is paid, which results in large positive returns on dividend payment dates. Second, as we went back in time to 1990 the adjusted prices became very small, about $0.50. Since Yahoo Finance only reports two decimal places for each adjusted price, the calculated daily changes were not very accurate.
• Third, the difference between the highest and the lowest price of one day, expressed in percentage terms, can be used as a measure of volatility. For the purpose of our project we believe this method is the most appropriate because the issues described in the two previous methods are circumvented. Consequently, all our models and forecasts are based on daily High-Low differences.
Figure 6: Daily returns (RETURN) calculated from adjusted close prices, observations 500 to 3500 (vertical axis −0.15 to 0.10).
σ = √( 1/(N−1) · Σ_{t=1}^{N} (C_t − C̄)² × 252 )    (9)

where σ is the annualised thirty-day volatility, N is the number of days, set to N = 30, C_t is the daily volatility, measured as the High-Low price change and expressed in percentage terms, and C̄ is the mean daily volatility over the past thirty days.
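Equation 9 can be implemented directly (a sketch on synthetic High-Low data, since the General Electric series itself is not reproduced here):

```python
import numpy as np

def rolling_annual_vol(daily_vol, window=30):
    """Annualised rolling volatility as in equation 9: the sample variance of
    the last `window` daily High-Low changes, scaled by 252, then square-rooted."""
    out = []
    for t in range(window, len(daily_vol) + 1):
        c = daily_vol[t - window:t]
        var = np.sum((c - c.mean()) ** 2) / (window - 1) * 252
        out.append(np.sqrt(var))
    return np.array(out)

rng = np.random.default_rng(3)
daily_vol = np.abs(rng.normal(0.01, 0.005, 300))  # synthetic daily High-Low changes
daily_vol[150] = 0.10                             # one extreme day
vol = rolling_annual_vol(daily_vol)

# Every 30-day window containing day 150 is lifted, so a single large
# movement produces a roughly 30-day plateau in the rolling series.
```

The injected extreme day illustrates why the 30-day measure forms plateaus: the observation stays in the averaging window for exactly 30 days before dropping out.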
Figures 7 and 8 show the movement of the 30-day volatility, once for the period 1990 to 2005 and once for the period 2004 to 2005. Both figures reveal strong up and down movements of the volatility, and it can be concluded that any model that assumes a constant volatility for the shares of General Electric will necessarily be based on a wrong assumption. One further interesting observation is that the 30-day volatility
often forms plateaus where one day with a large movement has a persistent effect over
some time.
Figure 7: 30-day volatility, 1990 to 2005 (vertical axis 0% to 60%). Source: Yahoo Finance, own calculations.
Figure 8: 30-day volatility, 01/2004 to 01/2005 (vertical axis 0% to 14%).
simple GARCH(1,1) process. All calculations were done with the computer program
EViews.
The process is divided into two steps. First, we estimate a model with an ARCH term
and a GARCH term based on the past data available. After fitting the model and
estimating the parameters we produce two forecasts. The first one forecasts the
volatility for 2005 based on our data set from 1990 to 2004. We compare this forecast to
actual volatility and implied volatility to assess how accurate the GARCH forecast was.
The second forecast is based on the whole data set available, i.e. 1990 to March 2005, and forecasts the volatility one month ahead, for April 2005.
Figure 9, which shows a simplified output of the GARCH model, reveals the following findings. First, all estimated parameters are highly significant, as can be seen in the last column (Prob). Second, the ARCH(1) and GARCH(1) coefficients sum to a value very close to one. This means that the time series has a strong autoregressive component and that the GARCH model might be appropriate.
Figure 9: GARCH estimation output for GE, sample (adjusted): 2 to 3828, 01/01/1990 to 07/03/2005.
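The near-unit persistence is what shapes the forecasts discussed next: with α₁ + β₁ close to one, the h-step variance forecast reverts only slowly toward the long-run level. A sketch of the standard GARCH(1,1) forecast recursion (the coefficient and starting values below are illustrative, not the EViews estimates):

```python
def garch_forecast(alpha0, alpha1, beta1, last_var, last_eps, horizon):
    """Multi-step GARCH(1,1) variance forecast. One step ahead applies the
    recursion directly; beyond that, eps^2 is replaced by its expectation,
    so the forecast moves toward the long-run variance at rate alpha1 + beta1."""
    long_run = alpha0 / (1 - alpha1 - beta1)
    var = alpha0 + alpha1 * last_eps ** 2 + beta1 * last_var
    path = [var]
    for _ in range(horizon - 1):
        var = alpha0 + (alpha1 + beta1) * var
        path.append(var)
    return path, long_run

# Persistent process (alpha1 + beta1 = 0.97): starting from a variance below
# the long-run level, the forecast path rises slowly toward it.
path, long_run = garch_forecast(alpha0=0.000006, alpha1=0.05, beta1=0.92,
                                last_var=0.0001, last_eps=0.008, horizon=20)
```

Each forecast step closes only 3 per cent of the remaining gap to the long-run variance, which is why a model fitted on the higher-volatility 1990 to 2004 sample keeps pulling forecasts upward.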
represents March 7th 2005. Clearly, volatility is expected to increase over the forecast
horizon from approximately 14.5 per cent to over 17 per cent.
Chart: forecast volatility (VOL), observations 3785 to 3825 (vertical axis 0.14 to 0.18).
Figure 11 shows that volatility for March 7th is predicted to be 17.74 per cent.
Compared to other predictions such as exponentially weighted volatility, the figure is
rather high. When the GARCH forecast is compared to implied volatility that is derived
from options the estimate is still too high, although to a smaller extent when compared
to exponentially weighted volatility. However, the actual volatility for March 7th that was available as of the writing of this report was only 11.53 per cent, much lower than any of the predictions. This can be explained by the historically very low volatility levels in 2005. The GARCH model, which took the volatility for 1990 to 2004 into account, overestimated the true volatility because the higher past volatility was still carried through the autoregressive process.
To see how the GARCH model would perform if the low volatility levels in 2005 were
taken into account we produced a second forecast for April 2005 based on the data
available up to March 7th 2005. From Figure 12 it can be seen that the GARCH model
took lower volatility levels into account and that the predicted volatility of 14.92 per
cent is closer to the actual volatility of 15.46 per cent. Clearly, the GARCH model was
able to capture the low volatility levels in 2005 and although the forecast is still
approximately half a percentage point off, the GARCH model produced a more accurate forecast for April 2005 than the exponentially weighted model.
References
Engle, Robert F. (2001), GARCH 101: The Use of ARCH/GARCH Models in Applied
Econometrics, Journal of Economic Perspectives, 15, pp. 157-168.