
EFFECTS OF FREQUENCY COMPONENTS OF RETURNS ON DAILY VALUE-AT-RISK OF SHARES IN EMERGING AND DEVELOPED MARKETS

Milton Biage and André Pereira de Souza Duarte

Abstract

In this study, Value-at-Risk was estimated using the technique of wavelet decomposition to separate the impacts of frequency levels on the volatility of financial
assets, and to analyze the impacts of these short-, medium- and long-term frequency components on the variances of daily stock returns and on VaR forecasts.
Daily returns of twenty-one shares of the Ibovespa and of twenty-two shares of the DJIA were used. The FIGARCH(1,d,1) model was applied to
the reconstructed returns to model and forecast the conditional variance, applying the rolling-window technique. The Value-at-Risk was then
estimated, and the results showed that the DJIA shares exhibited more efficient market behavior than those of the Ibovespa. The differences in behavior identified
between these two markets lead us to affirm that VaRs to be used in the analysis of financial assets from different markets, with different governance premises,
should be estimated from return series reconstructed by aggregations of components of different frequencies. Based on the premise that models with smaller
samples may better represent the true models, a total of 890 samples was used, considerably smaller than in similar studies. A set of backtests was
applied to confront the estimated VaRs, which showed a strong likelihood of being consistent with exact VaR models.

Keywords: FIGARCH, Value-at-Risk, wavelets, structural change, volatility models.

1. Introduction

The internationalization of the financial markets in the 1980s increased the demand for financial asset analysis instruments. Value-at-Risk
(VaR) calculations appeared during this period. Garbade (1986) created the VaR analysis, but this tool only gained notoriety in the mid-1990s through the
RiskMetrics1 tool, which became fundamental for asset risk analysis. In 1988, mechanisms for measuring credit risk were defined with the purpose of
strengthening the solidity and stability of the international banking system and minimizing the competitive inequalities between banks' assets internationally. But
only in 1996 did the Basel Committee2 define the use of VaR as the basic approach for risk measurement. To use it, however, banks should meet some quantitative
requirements: the VaR should be calculated daily at a 99% confidence level, using a historical series of not less than one year (250 samples), with a
time horizon of ten working days. Since then, numerous statistical procedures have been introduced to improve VaR forecasts, given their
importance in the banking supervision process.
VaR refers to the worst outcome expected for a portfolio, over a predetermined period and with a certain level of confidence. In general, the
VaR is an empirical estimate of the distribution of returns, assuming that the returns of the assets are normally distributed. However, returns exhibit skewness
and excess kurtosis, resulting in an underestimation or overestimation of the true VaR. Different types of distributions, such as the Skewed
Student-t used by Giot and Laurent (2003), have been employed as a resource to capture and correct these effects. However, most researchers prefer to calculate the VaR
using historical simulations to estimate the conditional variance for each sample over time using GARCH-type models, assuming specific distributions for the
white noise, such as the Student-t or Normal distribution. The general understanding is that VaR models estimated on the basis of the conditional variances
predicted by calibrated GARCH models work better than other classical methods estimated on the basis of the distribution tail of the assets.
VaR models should be contrasted with backtesting measures to confirm their capacity to forecast failures, so that risk managers can make confident
decisions against the uncertainties of volatile markets. A first backtest was the traffic-light approach, proposed by the Basle Committee on Banking
Supervision (1996), based on the construction of a binomial distribution. This test results in a three-zone specification that characterizes confidence intervals: the red
zone (range of failure probabilities in which the VaR is likely to be incorrect), the yellow zone (range in which the VaR is not impaired), and the green zone
(range in which the model is probably correct). Kupiec (1995) introduced a likelihood ratio test of whether the failure probability is
synchronized with the probability implied by the confidence level of the VaR model. Kupiec also proposed a likelihood ratio test (TUFF) that examines the time
until the first failure: if the first failure occurs too early, the VaR model fails the test. Christoffersen (1998) proposed a likelihood ratio test to measure the
dependence of failures between consecutive days. This test covers not only the failure rate, but also the independence of failures: if the model is exact, then a
failure today should not depend on whether or not a failure occurred the previous day. Christoffersen (1998) also combined the independence test statistic with the
Kupiec Proportion of Failures test to establish a conditional coverage test that measures whether the estimated failure rate is correct, close to the confidence level
of the VaR model, and, at the same time, whether failures are independent. In summary, these tests make it possible to verify whether a VaR model underestimates or overestimates the
risk, due essentially to the effects of concentration and dependence of failures. Therefore, the application of such tests can better situate the quality of a model and
its ability to predict failures.
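The Kupiec proportion-of-failures test and the Christoffersen independence test described above can be sketched numerically. The code below is a minimal illustration, not the implementation used in this study; it assumes a 0/1 violation sequence and relies on the standard chi-squared asymptotics with one degree of freedom:

```python
import numpy as np
from scipy.stats import chi2

def _binom_loglik(n, x, p):
    # Bernoulli log-likelihood of x failures in n trials, convention 0*log(0) = 0
    ll = 0.0
    if n - x > 0:
        ll += (n - x) * np.log(1.0 - p)
    if x > 0:
        ll += x * np.log(p)
    return ll

def kupiec_pof(violations, alpha):
    """Kupiec (1995) proportion-of-failures LR test: is the observed failure
    rate consistent with the nominal rate alpha?  Returns (LR, p-value)."""
    v = np.asarray(violations, dtype=int)
    n, x = len(v), int(v.sum())
    lr = -2.0 * (_binom_loglik(n, x, alpha) - _binom_loglik(n, x, x / n))
    return lr, chi2.sf(lr, df=1)

def christoffersen_independence(violations):
    """Christoffersen (1998) LR test of failure independence between
    consecutive days, based on first-order transition counts."""
    v = np.asarray(violations, dtype=int)
    pairs = list(zip(v[:-1], v[1:]))
    n00 = pairs.count((0, 0)); n01 = pairs.count((0, 1))
    n10 = pairs.count((1, 0)); n11 = pairs.count((1, 1))
    pi01 = n01 / (n00 + n01) if (n00 + n01) > 0 else 0.0
    pi11 = n11 / (n10 + n11) if (n10 + n11) > 0 else 0.0
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)

    def ll(p, ones, zeros):  # ones*log(p) + zeros*log(1-p), 0*log(0) = 0
        out = 0.0
        if ones > 0:
            out += ones * np.log(p)
        if zeros > 0:
            out += zeros * np.log(1.0 - p)
        return out

    lr = -2.0 * (ll(pi, n01 + n11, n00 + n10)
                 - ll(pi01, n01, n00) - ll(pi11, n11, n10))
    return lr, chi2.sf(lr, df=1)
```

The conditional coverage statistic of Christoffersen (1998) is then simply the sum of the two LR statistics, compared against a chi-squared distribution with two degrees of freedom.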
More recently, the Fractionally Integrated GARCH (FIGARCH) models have been used intensively for the simulation of conditional variance. This new
class of generalized autoregressive conditional heteroskedastic models, FIGARCH, was introduced by Baillie et al. (1996). The conditional variance of this
process is characterized by a slow hyperbolic rate of decay of the influence of lagged squared innovations. In that study, a model was implemented to
analyze the behavior of a series of daily percentage returns for the Deutsche Mark exchange rate against the US dollar, from 3/14/1979 to 12/31/1992, whose
results highlight the efficiency of the FIGARCH specification in describing this process. Jin and Frechette (2004) studied the effect of long memory on the daily
volatilities of various futures returns of agricultural commodities. To do so, they constructed volatility series for fourteen series of agricultural futures prices, using
approximately 5,300 observations per series. They found that the volatility series exhibit strong long-term dependence, which is an indicator of fractional
integration. The fractional integration model, FIGARCH(1,d,1), performed significantly better than other traditional models of volatility. Belkhouja
and Boutahary (2011) analyzed a new class of FIGARCH (or TV-FIGARCH) processes to model volatility. This model can explain both the
effects of long memory and the structural changes in the conditional variance process. Structural change is modeled by a logistic function that allows the
intercept to vary over time. The performance of the TV-FIGARCH model was examined by a Monte Carlo study based on the price of crude oil and the S&P 500
index. The main finding was that the long-memory behavior of the absolute returns is explained not only by the existence of long memory in volatility, but also by
deterministic changes in the unconditional variance. Tayefi and Ramanathan (2012) present a review of the theory of fractionally integrated conditional
heteroskedastic (FIGARCH) models. These authors state that these models are among the best conditional heteroscedastic models to describe the
volatility of stock market return rates. The authors analyzed the performance of the FIGARCH models using time series data of the Rupee exchange rate
against the US dollar for the period from 1/3/2000 to 1/11/2011. The results demonstrated the ability of the FIGARCH model to model series
with long memory. Klein and Walther (2016), in order to accelerate the conditional variance estimation process, introduced the Fast Fourier Transform (FFT) to

1
The RiskMetrics tool arose from JP Morgan's need to calculate the daily risks of its financial assets. Later, however, the tool's creators set up a company with
the same name.
2
Banking Regulation and Supervisory Practices Committee, headquartered at the Bank for International Settlements (BIS) in Basel, Switzerland; hence the name
Basel Committee. In this Committee, issues related to the banking industry are discussed, aiming to improve the quality of banking supervision and strengthen
the security of the international banking system (Basle Committee on Banking Supervision, 1996).
model the long-memory conditional variance. The implemented analyses showed that the FFT approach offers a computational advantage for the representation
of long-memory models, specifically ARCH($\infty$), when using the large data sets that are common in high-frequency analyses. To obtain robust results, different sample
lengths were analyzed, with $n \in \{5000, 7500, 10000, 15000\}$.
The use of wavelet decomposition has gained many adherents in the analysis of economic and financial time series (Gençay et al., 2002). The reason is
that the functions generated by the wavelet transform allow us to analyze the data at specific time intervals, using different time frequencies. Analyzing
data at different time-frequency levels enables different levels of understanding to be obtained and important interpretations to be established, since low
frequencies are characterized by long-term effects, while high frequencies correspond to short-term effects or stochastic noise in a given variable. Thus, if we want to
analyze, for example, return series of financial assets, aiming to understand the effects of certain cycles on the risk of loss in the asset, then the analysis can be
conducted with precision by establishing the time-frequency decomposition and verifying how the frequency impacts (in the long, medium or short term)
behave with respect to some statistical risk metric. This procedure makes it possible to identify which frequency effects most affect the
performance and the levels of returns of the financial asset.
Other authors have also used wavelet decomposition in research on the dynamic behavior of financial assets. Andries et al. (2014) analyzed the
dependencies between interest rates, stock prices and the exchange rate in India, between July 1997 and December 2010. That study identified the patterns
of co-movement of the specified variables using the cross-wavelet power spectrum, the coherence function for cross-wavelet components, and phase-difference
methodologies. The empirical results suggest that stock prices, exchange rates and interest rates are interdependent. The cross-wavelet results showed that stock
price movements lag behind both exchange rate and interest rate fluctuations. The interrelationship between interest rates and stock
price movements is quite clear, and suggests that the stock market follows the signals of interest rates. Chakrabarty et al. (2015) studied the nature and direction
of the transmission of shocks and volatility among nine non-overlapping sector indices of the Bombay Stock Exchange (BSE), using data from 4/5/2004 to
4/2/2012, across eight different scales (from 2-4 days to 1-2 years), using wavelet-based multiresolution analysis together with the dynamic
conditional correlation of MRA-EDCC GARCH models. The study reveals that volatility interaction is scale-dependent; a significant variation is observed
in the magnitude and direction of spillover incidences, which allowed the authors to conclude that the magnitude and direction of the interaction in volatility
change with the investment horizon. Thus, it may be concluded that a strategy calibrated for short-term investors may not be ideal for long-term
investors, and vice versa. Berger (2016) examined the relevance of long-term seasonality in establishing adequate Value-at-Risk (VaR) forecasts. The author
evaluated the behavior of a set of return series of stocks listed in the Dow Jones Industrial Average (DJIA) via wavelet decomposition, which allows the
separation of short-term cycles, noise and long-term memory components in the underlying time series. The evaluation of daily market prices revealed the
relevance of short-term fluctuations in the underlying time series for establishing VaR forecasts. In particular, it was found that the frequencies describing the long-
term trends of the original series do not affect the accuracy of the VaR prediction statistics. The results also showed that the long-term components of the stock
time series can be discarded when establishing negative VaR forecasts.
In general, the articles cited above show that, for Value-at-Risk to be accurate as a risk measure, the essential steps are to identify the
volatility properties of a given series of returns and to develop adequate methodological procedures. As also suggested throughout the review, one way is to
separate the different impact levels on volatility by wavelet decomposition, and to analyze how strong their impacts are on the VaR models. Accordingly, the
research conducted in this study uses the classical wavelet decomposition technique to separate the impacts of different frequency levels on the volatility of
financial assets, and analyzes the effects of these short-, medium- and long-term frequency impacts on the variances of daily stock returns and on VaR forecasts. To that
end, this study selected twenty-one BM&FBOVESPA3 Ibovespa shares and, based on the series of closing prices of these shares, obtained daily returns.
The same was done for twenty-two DJIA shares of the NYSE4, with the purpose of comparing the characteristics of the Value-at-Risk estimates
for shares of these two stock exchanges. Then, the wavelet technique was applied, with Haar filters, to obtain the wavelet series for each return series of each
share, also called wavelet decompositions. Series of reconstructed returns were generated from the frequency decompositions by adding them up, starting with
the sum of the first and second components, then adding the first, second and third, and so on. The FIGARCH(1,d,1)
model was applied to the reconstructed returns to model and forecast the conditional variance for a given future period, applying the rolling-window
technique. The prediction of the Value-at-Risk was then estimated from the future conditional variances, by the Monte-Carlo and Parametric methodologies. The
accuracy of the predicted VaRs was then verified by counting the number of failures, based on an out-of-sample return series for each share (logically,
those values of each share beyond the period used in the estimation of the FIGARCH model), thus obtaining the percentages of failures below the negative VaR
and above the positive VaR, respectively. Finally, the VaRs were contrasted with the backtesting measures previously outlined to confirm their
failure-prediction capabilities.
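The failure-counting step just described can be illustrated with a short sketch. This is not the study's code; it assumes a vector of one-day-ahead conditional-volatility forecasts (such as would come from the FIGARCH(1,d,1) fits) aligned with an out-of-sample return series, and standard-normal critical values:

```python
import numpy as np
from scipy.stats import norm

def count_var_failures(returns_oos, sigma_forecast, alpha=0.05):
    """Share of out-of-sample returns breaching the negative and positive
    one-day VaR limits built from conditional-volatility forecasts."""
    r = np.asarray(returns_oos, dtype=float)
    s = np.asarray(sigma_forecast, dtype=float)
    var_down = s * norm.ppf(alpha)        # negative VaR (lower limit)
    var_up = s * norm.ppf(1.0 - alpha)    # positive VaR (upper limit)
    fail_down = float(np.mean(r < var_down))
    fail_up = float(np.mean(r > var_up))
    return fail_down, fail_up
```

With exact volatility forecasts and correctly specified innovations, both failure rates should stay close to the nominal rate alpha; systematic deviations are what the backtests then flag.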
Thus, based on the elements highlighted in the previously referenced bibliography, this study presents a VaR estimation methodology that improves
its forecast. As its primary objective, the research seeks to understand the impacts that the different frequencies have on the
performance of the VaR, considering two sets of shares from supposedly different financial markets: first, stocks originating from a developed-country market
(DJI-NYSE), on the assumption that they are subject to lower effects of crises (domestic or international), with the possibility of developing their volatility in short-term
cycles and the ability to overcome the effects of crises quickly; and second, shares from the financial market of a developing country (BM&FBOVESPA
Ibovespa), on the assumption that they are more subject to the effects of domestic and international crises, may present difficulties in overcoming the effects
of crises, and develop their volatilities in longer cycles. Therefore, it is expected in the study that, in general, DJI-NYSE shares and BM&FBOVESPA Ibovespa
shares respond to effects on different frequency scales.
A second objective of the study is to explore the effect of a relatively short sample, compared with all the studies cited above, in order to investigate
how well a VaR model estimated under such a condition can predict failures. A large sample may model a VaR better, but be less realistic in
view of the composition of the observable and unobservable effects that interact in the process over time. For example, effects of failure
concentration, due to crises, and of independence between failures can be better absorbed by larger samples, since they can better average the probability of failure.
However, a VaR model constructed over long periods that incorporate concentration effects may underestimate the proportion of failures when confronted,
for failure forecasting, with a short out-of-sample return series. Given this framework, it is believed that developing VaR models whose conditional variances are
estimated by conditional heteroscedastic models calibrated with smaller samples is desirable. It is worth emphasizing that the conditional heteroscedastic models
reviewed throughout this study use samples larger than ten years, during which many different observable and unobservable effects with different scales interacted
in the series' volatility process. The series used for return estimation and conditional volatility analysis in this study comprised 890 samples (approximately three and a half
years): 470 in the calibration process, 210 in the rolling window and 210 in the forecasting process, much smaller than those used in similar studies.
Hoppe (1998) also examined the question of sample size and argued that the use of smaller samples would lead to more accurate VaR estimates than larger
ones. Likewise, Frey and Michaud (1997) supported the use of short sample sizes to capture structural changes over time due to changes in
commercial behavior. In short, the choice of an appropriate historical sample size, as well as of a suitable model to predict volatility, should be considered far
from settled.
The remainder of the article is structured as follows: Section 2 presents the mathematical theoretical framework used in the study; Section 3 presents the
methodological procedure applied in the study; Section 4 shows the results and analysis; and Section 5 presents the conclusion.
2. Mathematical theoretical framework

3
BM&FBOVESPA is the official stock exchange of Brazil, the third largest in the world in terms of market value. The Ibovespa is a theoretical investment
portfolio that currently comprises 55 stocks, representing the movement of the main securities traded on the Bovespa and 80% of the volume traded in the
twelve months prior to the formation of the portfolio.
4
The New York Stock Exchange, located in the US, is the world's largest stock exchange by market value of its listed companies. The Dow Jones Industrial Average
(DJIA) is not calculated by the New York Stock Exchange; its components are chosen by the editors of the American financial newspaper The Wall Street Journal.
Most of the shares included in the DJIA are traded on the NYSE, and a few on the NASDAQ, the National Association of Securities Dealers Automated
Quotations.
This section presents the plan and notation of the Value-at-Risk model used in the study, as well as the concepts of wavelet
filtering, the concepts of FIGARCH(p, d, q) models, and the backtesting plans applied in the evaluation of the VaR estimates.

2.1 Value-at-Risk
The Value-at-Risk, VaR, is a statistical procedure that estimates limits of losses or maximum gains of a series of returns, the limits being given
by the desired confidence levels. Confidence limits of 95% or 99% are commonly used. To set up the VaR model, let $r_{t+h}$ denote an
out-of-sample return at time $t+h$, where $h$ represents the holding period (in this study, $h = 1$ day) of an asset or a portfolio of assets held at time $t$. The ex-ante
VaR for time $t+h$, at coverage rate $\alpha$, is denoted by $VaR_{t+h|t}(\alpha)$, conditioned on all information available at time $t$ (e.g. past returns and macroeconomic
indicators).
As this study estimates VaR models for both long and short5 operation strategies, two equations are defined for the calculations,
$VaR^{down}$ (short strategy) and $VaR^{up}$ (long strategy), as follows:

$$\alpha = \Pr\!\left(r_{t+h} < VaR^{down}_{t+h|t}\right) \qquad (1)$$

and

$$\alpha = \Pr\!\left(r_{t+h} > VaR^{up}_{t+h|t}\right) \qquad (2)$$

where the values $VaR^{down}_{t+h|t}$ and $VaR^{up}_{t+h|t}$ are calculated with $(1-\alpha)\,100\%$ confidence. Thus defined, Equation (1) indicates
that there is a $(1-\alpha)\,100\%$ chance that $r_{t+h}$ does not fall below the limit defined by $VaR^{down}_{t+h|t}$, and Equation (2) that there is a $(1-\alpha)\,100\%$ chance that $r_{t+h}$ does not exceed the limit
defined by $VaR^{up}_{t+h|t}$ (that is, if $1-\alpha = 0.95$, then $\alpha = 0.05$).
In this study, the return series $r_t$ is interpreted in a location-scale structure, as a function of $VaR^{down}$ and $VaR^{up}$,
as follows:

$$r_t = \mu_t + \underbrace{\sigma_{t+1}\, z_{t,\alpha}}_{VaR^{down}_t} \qquad \text{and} \qquad r_t = \mu_t + \underbrace{\sigma_{t+1}\, z_{t,1-\alpha}}_{VaR^{up}_t}, \qquad z \sim N(0,1) \qquad (3 \text{ and } 4)$$

where $\mu_t$ is the average return at time $t$, $\sigma_{t+1}$ its conditional volatility at time $t+1$, and $z_{t,\alpha}$ and $z_{t,1-\alpha}$ the critical limits of the marginal distribution of the return
series.

Considering that we are dealing with daily data, with the characteristics of a distribution with zero location (zero mean), equations (3) and (4)
reduce to the following relations:

$$VaR^{down}_t = \sigma_{t+1}\, z_{t,\alpha} \qquad \text{and} \qquad VaR^{up}_t = \sigma_{t+1}\, z_{t,1-\alpha} \qquad (5 \text{ and } 6)$$
The distribution assumed for the VaR calculations depends on the properties of the return series used in the VaR estimation process; however, it is
usually assumed that $z_{t,\alpha}$ and $z_{t,1-\alpha}$ follow a Gaussian distribution, $N(0,1)$, with zero mean and standard deviation equal to 1. As the VaRs are
generated for a holding period of 1 day, $h = 1$, the index $h$ was dropped from the equations.
Therefore, if the return follows equations (5) and (6), the VaR calculations yield values that limit, within an established confidence level, respectively, the
maximum possible losses and gains of this return.
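As a numerical illustration of equations (5) and (6), under the zero-mean Gaussian assumptions above (a sketch, not the study's code):

```python
from scipy.stats import norm

def parametric_var(sigma, alpha=0.05):
    """One-day parametric VaR limits of equations (5) and (6):
    VaR_down = sigma * z_alpha (a negative number) and
    VaR_up   = sigma * z_{1-alpha} (a positive number)."""
    return sigma * norm.ppf(alpha), sigma * norm.ppf(1.0 - alpha)
```

For example, with a forecast daily conditional volatility of 2% and alpha = 0.05, the limits are roughly -3.29% and +3.29%, since $z_{0.05} \approx -1.645$.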
2.1.1 Parametric and Monte Carlo VaR
Two methods to estimate the VaR will be applied: the Parametric and the Monte Carlo (MC). The reason for applying two different methods is because
if it is desired to verify if any of these methods can present more precise results in the estimates of some financial assets used in the analysis of the study. As the
return series of various financial assets that will be tested in the study may be governed by different frequency effects, they may consequently, differently
influence the behavior of the daily VaR. Therefore, applying calculation methodologies that can positively affect the estimation process of the VaR, depending on
the characteristics of the financial asset, is a factor to be verified.
The parametric method uses the prediction of conditional volatility in its estimate, provided in this study by the FIGARCH(1,d,1) model.
Therefore, given the estimated conditional variances, and assuming that the series of returns follows a normal distribution $N(0,1)$, equations
(5) and (6) above can be applied to calculate the parametric VaRs. Given these estimates, following Pajhede (2015), one can apply Equations (1) and (2)
and obtain the probabilities of failure by means of violation sequences $\{I_t\}_{t=1}^{n}$, for a VaR forecast sequence $\{VaR_{t|t-1}\}_{t=1}^{n}$, through the following
relationships:

$$I_t^{down} = \begin{cases} 1, & \text{if } r_t < VaR^{down}_{t|t-1} \\ 0, & \text{if } r_t \geq VaR^{down}_{t|t-1} \end{cases} \qquad t = 1, 2, \ldots, n \qquad (7)$$

and

$$I_t^{up} = \begin{cases} 1, & \text{if } r_t > VaR^{up}_{t|t-1} \\ 0, & \text{if } r_t \leq VaR^{up}_{t|t-1} \end{cases} \qquad t = 1, 2, \ldots, n \qquad (8)$$

in which $r_t$ is an out-of-sample return series, $n$ the number of its elements, and $I_t^{down}$ and $I_t^{up}$ are, respectively, sequences of zeros and ones that allow the
estimation of the failure probabilities $\pi^{down}$ and $\pi^{up}$, respectively, by the following relationships:

$$\pi^{down} = \frac{\sum_{t=1}^{n} I_t^{down}}{n} \qquad \text{and} \qquad \pi^{up} = \frac{\sum_{t=1}^{n} I_t^{up}}{n} \qquad (9 \text{ and } 10)$$

where the numerators are the sums of the elements, respectively, of the indicators $I_t^{down}$ and $I_t^{up}$.

The MC method generates Monte-Carlo realizations (stochastic series) following a standard distribution, which can be the normal or any other
distribution, with mean, standard deviation and degrees of freedom matching the series $VaR^{down}$ or $VaR^{up}$ estimated by Equations (5) and (6). The
advantage of the MC method is that it allows the generation of several stochastic series of realizations, making it possible to estimate the failure probabilities
$\pi^{down}$ and $\pi^{up}$ for each one, using Equations (9) and (10). Average values are then established for each of the estimated failure probabilities, thus
improving the precision of the estimates of these probabilities. The limitation of the MC method is that the generation of the stochastic series requires assuming
the behavior of a specific statistical distribution; in this study the normal distribution was used, after conducting tests with other types of
distributions. As can be seen, the MC method uses the same procedures as the parametric VaR, but makes it possible to obtain more precise final estimates of the
violation probabilities, mitigating the effects of asymmetry in the out-of-sample return series on these estimates.
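A minimal sketch of the MC procedure just described (assuming standard-normal innovations; the path counts and function names are illustrative, not the study's):

```python
import numpy as np

def mc_failure_probabilities(var_down, var_up, n_paths=200, n_obs=890, seed=42):
    """Simulate standard-normal return paths, compute the failure
    probabilities of Equations (9) and (10) on each path, and average
    across paths to sharpen the estimate."""
    rng = np.random.default_rng(seed)
    sims = rng.standard_normal((n_paths, n_obs))
    # mean over all paths of the per-path breach rates
    p_down = float(np.mean(sims < var_down))
    p_up = float(np.mean(sims > var_up))
    return p_down, p_up
```

With the 5% normal critical values of equations (5) and (6), the averaged breach rates should converge to 0.05 as the number of paths grows, which is exactly the precision gain of the MC method over a single out-of-sample series.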

2.2 Wavelets decomposition

5
Long/short equity strategies are investment strategies of taking long positions in stocks that are expected to appreciate and short positions in stocks that are
expected to depreciate.
A series of daily returns of financial assets can be seen as composed of a sum of components with different frequency levels that
characterize different levels of volatility. These different frequency levels can be separated by wavelet transformations (Gençay et al., 2002). Each
decomposed frequency level retains specific information, intrinsic to effects exogenous or endogenous to the financial market, or to the
fundamentals that determine the behavior of each financial asset in the market.
The components obtained by the wavelet decomposition can be summed, and the result, in the case of a return series, is the original return series.
This decomposition process makes it possible to establish interpretations for the behavior of each of the different components obtained from the original return
series. For example, low-frequency oscillation components, referred to as long-wave components, can be interpreted as long-term cycles or seasonalities,
while high-frequency, short-wave components derive from short-term effects on returns due to shocks from financial market innovations.
2.2.1 Maximum overlap discrete wavelet transform
The wavelet decomposition method used in this work is the maximum overlap discrete wavelet transform (MODWT). An important property of
this method is that it preserves the variance of the original series, as shown by Percival and Mofjeld (2000) and Gençay et al. (2002). This property is essential
for the estimated VaRs to be accurate. Another important feature is that the MODWT presents shift invariance, so that, when an observation moves in the
rolling window, all the decomposed scales move together (Berger, 2016). Since the variance of the original return series is preserved and there is shift
invariance, we can estimate the volatility using series reconstructed from the decompositions.
To generate the financial wavelet series, the filters must be applied to the return database. The MODWT wavelet filters $\tilde{h}_{j,l}$ and scaling filters $\tilde{g}_{j,l}$ are
obtained, respectively, from the DWT wavelet filters $h_{j,l}$ and scaling filters $g_{j,l}$, as follows:

$$\tilde{h}_{j,l} = h_{j,l}/2^{j/2} \qquad \text{and} \qquad \tilde{g}_{j,l} = g_{j,l}/2^{j/2} \qquad (11 \text{ and } 12)$$

where $j = 1, 2, \ldots, J$ represents the decomposition levels, with $J$ the highest level of decomposition. The higher the level, the lower the frequency of
the wavelet function obtained (Gençay et al., 2002). The index $l$ runs over the filter coefficients and is intrinsically connected to the chosen level $j$, with $L$ the
largest filter length.

The values obtained for $\tilde{g}_{j,l}$ and $\tilde{h}_{j,l}$ through equations (11) and (12) are the MODWT filters generated by the transformations of the
original DWT filters.
The wavelet coefficients, which will later generate the wavelet series, are obtained by the convolution of the returns $r = \{r_t,\; t = 1, 2, \ldots, N\}$ with
the MODWT filters:

$$\tilde{W}_{j,t} = \sum_{l=0}^{L_j-1} \tilde{h}_{j,l}\, r_{t-l \bmod N} \qquad \text{and} \qquad \tilde{V}_{j,t} = \sum_{l=0}^{L_j-1} \tilde{g}_{j,l}\, r_{t-l \bmod N}, \qquad \text{with } t = 1, 2, \ldots, N \qquad (13 \text{ and } 14)$$

where $L_j = (2^j - 1)(L - 1) + 1$ is the length of the wavelet filter associated with the scale $s_j$, and $\bmod N$ is the modulus operator, which indicates that it is
necessary to deal with the boundary of the vector $r$, of finite length $N$ (Percival and Mofjeld, 2000, p. 121). Thus, we are implicitly assuming
that $r$ can be treated as periodic.

Since for each value of $t$ the coefficients $\tilde{W}_{j,t}$ and $\tilde{V}_{j,t}$ are in fact different sums, the convolutions given by (13) and (14) can be
represented in matrix form as follows:

$$\tilde{W}_j = \tilde{w}_j\, r \qquad \text{and} \qquad \tilde{V}_j = \tilde{v}_j\, r \qquad (15 \text{ and } 16)$$

where $\tilde{w}_j$ and $\tilde{v}_j$ are $N \times N$ circulant matrices, built from the filters periodized to length $N$, of the following forms:

$$\tilde{w}_j = \begin{pmatrix}
\tilde{h}_{j,0} & \tilde{h}_{j,N-1} & \tilde{h}_{j,N-2} & \cdots & \tilde{h}_{j,2} & \tilde{h}_{j,1} \\
\tilde{h}_{j,1} & \tilde{h}_{j,0} & \tilde{h}_{j,N-1} & \cdots & \tilde{h}_{j,3} & \tilde{h}_{j,2} \\
\vdots & & & \ddots & & \vdots \\
\tilde{h}_{j,N-1} & \tilde{h}_{j,N-2} & \tilde{h}_{j,N-3} & \cdots & \tilde{h}_{j,1} & \tilde{h}_{j,0}
\end{pmatrix} \qquad (17)$$

and

$$\tilde{v}_j = \begin{pmatrix}
\tilde{g}_{j,0} & \tilde{g}_{j,N-1} & \tilde{g}_{j,N-2} & \cdots & \tilde{g}_{j,2} & \tilde{g}_{j,1} \\
\tilde{g}_{j,1} & \tilde{g}_{j,0} & \tilde{g}_{j,N-1} & \cdots & \tilde{g}_{j,3} & \tilde{g}_{j,2} \\
\vdots & & & \ddots & & \vdots \\
\tilde{g}_{j,N-1} & \tilde{g}_{j,N-2} & \tilde{g}_{j,N-3} & \cdots & \tilde{g}_{j,1} & \tilde{g}_{j,0}
\end{pmatrix} \qquad (18)$$

Finally, we can reconstruct the original series of returns by adding the wavelet coefficients obtained in Equations (15) and (16), multiplied by the
transposes of the matrices $\tilde{w}_j$ and $\tilde{v}_j$ given in Equations (17) and (18), as follows:

$$r = \sum_{j=1}^{J} \tilde{w}_j^{T} \tilde{W}_j + \tilde{v}_J^{T} \tilde{V}_J = \sum_{j=1}^{J} D_j + S_J \qquad (19)$$

The reconstruction of the return series $r$, according to Equation (19), becomes more and more precise, approaching the returns of the original
series, as $J$ increases. As described in Berger (2016), the detail components $D_j$ describe the local details at level $j$, in which
each $j$ represents a different frequency level. The smoothed component $S_J$ represents the trend of the series. This component is useful in analyzing series that
show growth or decline trends. However, since series of daily returns are, in general, stationary, this component of Equation (19) can be removed in the
reconstruction of the series.
What Equation (19) shows is that, after applying the wavelet transformations to a series of returns and obtaining the decompositions of the
series, one can carry out the inverse operation, summing these decompositions over the different levels $j$, and again obtaining the original series of returns. In short, each
detail component $D_j$ represents a frequency level different from the original series, without any loss of information.
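The pyramid algorithm behind equations (13) to (19) can be sketched for the Haar filter used in this study. The code below is an illustrative implementation (not the authors'), with circular (mod N) filtering; it recovers the detail and smooth components and lets us verify that they add back to the original series and that energy is preserved:

```python
import numpy as np

# Haar MODWT filters: the DWT Haar filters (1/sqrt(2), -1/sqrt(2)) divided by
# sqrt(2), as in equations (11) and (12) at level 1
H = np.array([0.5, -0.5])  # wavelet filter h~
G = np.array([0.5, 0.5])   # scaling filter g~

def modwt(x, J):
    """Haar MODWT by the pyramid algorithm with circular (mod N) filtering."""
    N = len(x)
    v = np.asarray(x, dtype=float).copy()
    W = []
    for j in range(1, J + 1):
        s = 2 ** (j - 1)                      # filter upsampling at level j
        w = np.zeros(N)
        v_next = np.zeros(N)
        for t in range(N):
            for l in range(2):
                idx = (t - s * l) % N
                w[t] += H[l] * v[idx]
                v_next[t] += G[l] * v[idx]
        W.append(w)
        v = v_next
    return W, v                               # (W_1..W_J, V_J)

def imodwt(W, vJ):
    """Inverse pyramid: rebuild the series from (W_1..W_J, V_J)."""
    v = np.asarray(vJ, dtype=float).copy()
    N = len(v)
    for j in range(len(W), 0, -1):
        s = 2 ** (j - 1)
        v_prev = np.zeros(N)
        for t in range(N):
            for l in range(2):
                idx = (t + s * l) % N
                v_prev[t] += H[l] * W[j - 1][idx] + G[l] * v[idx]
        v = v_prev
    return v

def mra_components(x, J):
    """Details D_j and smooth S_J of equation (19): invert keeping one
    level's coefficients at a time (valid because imodwt is linear)."""
    W, vJ = modwt(x, J)
    D = []
    for j in range(J):
        Wj = [w if i == j else np.zeros_like(w) for i, w in enumerate(W)]
        D.append(imodwt(Wj, np.zeros_like(vJ)))
    S = imodwt([np.zeros_like(w) for w in W], vJ)
    return D, S
```

Summing D_1 + ... + D_J + S_J recovers the original returns exactly; dropping S_J (the trend), or aggregating only some of the D_j, yields the kind of reconstructed return series used in this study for the FIGARCH fits.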

2.3 GARCH and FIGARCH models

VaR estimates depend on how future volatility is predicted, since financial returns do not have stable volatilities over time (Bollerslev, 1986).
Generally, daily financial returns show periods of high volatility, followed by quieter periods. This heteroscedastic character of the returns makes it difficult to
measure and predict volatility. To get around this, GARCH processes are applied, which generate the parameters that make it possible to establish future
predictions of conditional volatilities.
A GARCH(p, q) process, with p autoregressive components and q moving-average components, as introduced by Bollerslev (1986) and Taylor
(1986), defines the conditional variance equation as follows:

$$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2 = \omega + \alpha(L)\epsilon_t^2 + \beta(L)\sigma_t^2 \qquad (20)$$

where $\omega > 0$, $\alpha_i \ge 0$ for $i = 1, 2, \ldots, q$, and $\beta_j \ge 0$ for $j = 1, 2, \ldots, p$; $\alpha(L)$ and $\beta(L)$ are lag polynomials such that $\alpha(L) = \alpha_1 L + \alpha_2 L^2 + \cdots + \alpha_q L^q$ and $\beta(L) = \beta_1 L + \beta_2 L^2 + \cdots + \beta_p L^p$; $\epsilon_t = z_t \sigma_t$, where $z_t$ is a white noise process; and $\sigma_t^2$ is the conditional variance to be estimated.
The GARCH process defined in (20) is weakly stationary, with $E[\epsilon_t] = 0$ and $\operatorname{Var}(\epsilon_t) = \omega\,[1 - \alpha(1) - \beta(1)]^{-1}$. According to Tayefi and
Ramanathan (2012), an equivalent representation of the GARCH(p, q) process in (20) is given by:

$$\epsilon_t^2 = \omega + \sum_{i=1}^{m} (\alpha_i + \beta_i)\,\epsilon_{t-i}^2 - \sum_{j=1}^{p} \beta_j \nu_{t-j} + \nu_t \qquad (21)$$

where $\nu_t = \epsilon_t^2 - \sigma_t^2 = \sigma_t^2(z_t^2 - 1)$ is an uncorrelated process with $E[\nu_t] = 0$.
It is observed from (21) that a GARCH(p, q) process can also be expressed as an ARMA(m, p) process in $\epsilon_t^2$, which allows us to write:

$$[1 - \alpha(L) - \beta(L)]\,\epsilon_t^2 = \omega + [1 - \beta(L)]\,\nu_t \qquad (22)$$

where $m = \max\{p, q\}$ and $\nu_t = \epsilon_t^2 - \sigma_t^2$.
There is a very wide range of models in the GARCH family. A peculiar characteristic of these models is that the
information of past shocks decays exponentially in the conditional volatility, which characterizes them as short-memory models, a characteristic that makes them unable to model
series that present shocks with long memory. To solve this, Engle and Bollerslev (1986) developed the integrated GARCH model class (IGARCH), whose
unconditional variance does not exist. This occurs when $\sum_{i=1}^{q}\alpha_i + \sum_{j=1}^{p}\beta_j = 1$ in a GARCH(p, q) model. The basic feature of IGARCH models is that the
impacts of past squared shocks $\epsilon_{t-k}^2$, for $k > 0$, on $\sigma_t^2$ are persistent. Given that the process $\nu_t$ in Equation (22) can be interpreted as the innovations of the
conditional variance, since it is a zero-mean martingale difference (TAYEFI and Ramanathan, 2012), the IGARCH(p, q) model can be written as:

$$\epsilon_t = z_t \sigma_t, \qquad \Phi(L)(1 - L)\,\epsilon_t^2 = \omega + [1 - \beta(L)]\,\nu_t \qquad (23 \text{ and } 24)$$

where $\{z_t\}$ is defined as before and $\Phi(L) = [1 - \alpha(L) - \beta(L)](1 - L)^{-1}$ is a polynomial of order $m - 1$.
As observed, the IGARCH model is strictly but not weakly stationary, which implies an infinite persistence of the conditional variance. Therefore, the IGARCH
model is very restrictive. Because of this characteristic, even after periods of high volatility have passed, the IGARCH continues to predict high
volatility even when volatility has stabilized, thus generating over-conservative estimates (Baillie et al., 1996). This characteristic of the IGARCH model makes it
unsuitable for modeling data series with less persistent memory. To fill this gap, the FIGARCH model was developed, with
the flexibility to model data series that have both short- and long-memory characteristics (Baillie et al., 1996). The FIGARCH(p,d,q) model addresses this problem by
fractionalizing the integration of the innovations, thus making the IGARCH model flexible. Its formula is written as follows:
$$\Phi(L)(1 - L)^{d}\,\epsilon_t^2 = \omega + [1 - \beta(L)]\,\nu_t \qquad (25)$$
It is observed that the class of fractionally integrated GARCH models (FIGARCH) is obtained simply by substituting the operator $(1 - L)$ in Equation
(24) by the fractional difference operator $(1 - L)^{d}$, where d is a fraction such that $0 < d < 1$.
What characterizes the FIGARCH(p,d,q) model is the fraction d, which fractionalizes the integration of the innovations. If d is close to one, the FIGARCH model
given by (25) resembles the IGARCH model given by Equation (24), a long-memory model; that is, the innovations decay slowly in the prediction of the
conditional variance. If d is close to zero, model (25) approaches the GARCH model given by Eq. (22), in which the impacts of the shocks decay rapidly as
the lag from the shocks increases, characterizing a short-memory model. Therefore, FIGARCH works as a model that transits between the
GARCH and IGARCH models, depending on the estimated value of d.
Equation (25) can be reorganized by substituting $\nu_t = \epsilon_t^2 - \sigma_t^2$, as previously defined, which permits obtaining:

$$[1 - \beta(L)]\,\sigma_t^2 = \omega + \left[1 - \beta(L) - \Phi(L)(1 - L)^{d}\right]\epsilon_t^2$$

$$\sigma_t^2 = \omega\,[1 - \beta(L)]^{-1} + \left\{1 - [1 - \beta(L)]^{-1}\,\Phi(L)(1 - L)^{d}\right\}\epsilon_t^2$$

$$\sigma_t^2 = \omega\,[1 - \beta(L)]^{-1} + \lambda(L)\,\epsilon_t^2 \qquad (26)$$

where $0 < d < 1$.
In the solution of (26), all the roots of $[1 - \beta(L)]$ lie outside the unit circle, and $\lambda(L) = 1 - [1 - \beta(L)]^{-1}\,\Phi(L)(1 - L)^{d}$. For a FIGARCH
(p,d,q), $\lambda(L)$ has infinitely many elements, that is, $\lambda(L) = \lambda_1 L + \lambda_2 L^2 + \lambda_3 L^3 + \cdots$. The lambda operator measures the impact that past
innovations will have on predictions of future volatility; its values are determined by the parameters estimated by the FIGARCH method.
In order for Equation (26) to be well defined, the conditional variance in its ARCH($\infty$) representation requires
that the elements of $\lambda(L)$ be non-negative, that is, $\lambda_k \ge 0$ for $k = 1, 2, \ldots$
For a FIGARCH(1,d,1) model, $\Phi(L) = 1 - \phi_1 L$ and $\beta(L) = \beta_1 L$, and Equation (26) becomes:

$$\sigma_t^2 = \omega\,(1 - \beta_1)^{-1} + \lambda(L)\,\epsilon_t^2 \qquad (27)$$

To estimate the elements of the operator $\lambda(L)$, we first calculate the parameters $\omega$, $\phi_1$, $\beta_1$ and d of the FIGARCH(1,d,1) model to be applied in this
study, and then calculate the elements $\lambda_k$ by the following recurrence formula:

$$\lambda_k = \beta_1 \lambda_{k-1} + \left(\frac{k - 1 - d}{k} - \phi_1\right)\delta_{d,k-1}, \quad \text{with } k = 2, 3, \ldots \qquad (28)$$

To initialize the process, when k = 1, the following formulas are used:

$$\lambda_1 = \phi_1 - \beta_1 + d, \qquad \delta_{d,1} = d, \qquad \delta_{d,0} = 1 \qquad (29, 30, 31)$$

In Eq. (28), above, $\delta_{d,k}$ refers to the coefficients of the series expansion of $(1 - L)^{d}$, defined as follows:

$$\delta_{d,k} = \delta_{d,k-1}\,\frac{k - 1 - d}{k}, \quad \text{for } k = 2, 3, \ldots \qquad (31)$$

Equation (31) leads to the following expansion:

$$(1 - L)^{d} = 1 - \sum_{k=1}^{\infty} \delta_{d,k} L^{k} \qquad (30)$$
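The recurrences (28) through (31) can be sketched as follows. The parameter values below are hypothetical, chosen to satisfy the sufficient non-negativity conditions of Tayefi and Ramanathan (2012), and are not estimates from the study's data.

```python
def figarch_lambda(phi1, beta1, d, M=470):
    """ARCH(inf) weights for FIGARCH(1,d,1), per the recurrences (28)-(31)."""
    lam = [phi1 - beta1 + d]               # lambda_1
    delta = d                              # delta_{d,1} in the (1-L)^d expansion
    for k in range(2, M + 1):
        lam.append(beta1 * lam[-1] + ((k - 1 - d) / k - phi1) * delta)
        delta *= (k - 1 - d) / k           # delta_{d,k} = delta_{d,k-1}(k-1-d)/k
    return lam

# hypothetical parameters: beta1 - d <= phi1 <= (2-d)/3 holds here
lam = figarch_lambda(phi1=0.2, beta1=0.4, d=0.45, M=470)
print(all(w >= 0 for w in lam))     # True: weights are non-negative
print(lam[0] > lam[-1])             # True: impact of shocks decays with the lag
```

The slowly decaying weights are what give the FIGARCH conditional variance its long-memory character.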
According to Tayefi and Ramanathan (2012), the non-negativity of the $\lambda_k$ makes it possible to derive inequalities that are sufficient for
the conditional variance $\sigma_t^2$ not to be negative. That is:

$$\beta_1 - d \le \phi_1 \le \frac{2 - d}{3} \qquad \text{and} \qquad d\left(\phi_1 - \frac{1 - d}{2}\right) \le \beta_1\left(\phi_1 - \beta_1 + d\right) \qquad (31 \text{ and } 32)$$
Finally, also according to Tayefi and Ramanathan (2012, p. 186), Equation (27) makes it possible to estimate the conditional variance, by an
infinite ARCH representation, one step ahead, as follows:

$$\sigma_{t+1}^2 = \omega\,(1 - \beta_1)^{-1} + \sum_{i=0}^{\infty} \lambda_{i+1}\,\epsilon_{t-i}^2 \qquad (33)$$

Although Eq. (33) should theoretically expand to $\infty$, in practice this expansion is truncated at a large number of lags, specified
here by M. Therefore, the predictions of conditional variance by Eq. (33) are made by combining M lambdas with past innovations, without loss of
generality, since for very large M the subsequent lambdas decrease, becoming less and less significant at larger lags.
Also according to Tayefi and Ramanathan (2012, p. 186), the evolution of Equation (33) to l steps ahead is as follows:

$$\sigma_{t+l}^2 = \omega\,(1 - \beta_1)^{-1} + \sum_{i=1}^{l-1} \lambda_i\,\sigma_{t+l-i}^2 + \sum_{i=0}^{M} \lambda_{l+i}\,\epsilon_{t-i}^2 \qquad (34)$$

2.3.1 Rolling Window

After the VaRs are estimated, they need to be tested to verify their reliability before being used on a day-to-day basis. The problem is
that, so far, future conditional volatility has been estimated for only one day ahead, and since the VaRs estimated in this work depend on these
predictions of conditional variance, the VaR is limited to a one-day forecast. But for VaRs to be tested correctly, a much larger sample of VaR
forecasts is needed. To overcome this problem, a rolling window, RW (Zivot, 2006), is applied, as this procedure allows a significant increase in the
number of future VaR forecasts, constituting a sample with which to test the performance of these VaRs against an out-of-sample series of returns.
In this study, several predictions of future conditional volatility need to be established, and the RW technique is applied to
establish them. First, the size of the rolling window was chosen, which in this study was M = 470 days. Then the forecast horizon of the VaRs was chosen: daily,
with a one-day forecast horizon. With this structure, the rolling window uses the 470 past days to generate the prediction for one
future day, through Equation (33). Therefore, the impacts of the estimates $\lambda_i$, with i = 1, 2, ..., M and M = 470, induce the intrinsic effects of the calibrated return
series on the estimate of $\sigma_{t+1}^2$, one day ahead. With the conditional volatility of day t + 1 predicted, the rolling window takes a step forward, now including
day t + 1 but aiming to estimate the conditional volatility at time t + 2. This process is repeated until the conditional variance has been estimated for the
established forecast period. In this study the process is repeated 210 times, generating 210 days of one-day-ahead predictions of conditional volatility. These
forecasts are used in the estimates of future VaRs.
For a better understanding of the RW process: when estimating the volatility one day ahead of a day t, located l steps ahead of the end of the
return series used in the calibration process, the rolling window uses all data available up to 470 days before t, introducing the impacts of the most recent
innovations and of the current term of $\sigma_t^2$, as can be seen in Equation (34).
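As a sketch, the one-day-ahead forecast of Eq. (33), truncated at M = 470 lags and rolled forward 210 times, can be written as follows. The residual series and the lambda weights here are toy stand-ins (a simple geometric decay), not the FIGARCH(1,d,1) estimates of the study.

```python
import numpy as np

def one_step_var(eps_window, omega, beta1, lam):
    """Eq. (33) truncated at M lags: omega/(1-beta1) + sum_i lam[i] * eps[t-i]^2."""
    M = len(lam)
    past_sq = eps_window[::-1][:M] ** 2      # most recent squared shocks first
    return omega / (1.0 - beta1) + np.dot(lam, past_sq)

rng = np.random.default_rng(7)
eps = rng.normal(size=890)                   # 890 observations, as in the study
lam = np.array([0.25 * 0.9 ** k for k in range(470)])  # stand-in ARCH(inf) weights

forecasts = []
for step in range(210):                      # 210 one-day-ahead forecasts
    window = eps[210 + step : 680 + step]    # the 470 most recent calibration days
    forecasts.append(one_step_var(window, omega=0.05, beta1=0.4, lam=lam))

print(len(forecasts), min(forecasts) > 0)
```

Each pass of the loop slides the 470-day window one day forward, exactly as described above for the RW procedure.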
2.4 Back Testing
A set of backtests will be applied to evaluate the performance of the VaRs estimated in this study; their concepts and test procedures are
described in this subsection.
The VaR is estimated on the basis of past data, with the objective of predicting the intensity of failure for unknown future returns. Therefore, it is
considered that the actual profits and losses of the shares (or portfolios) are known on the current day and can be compared with the VaR estimated on the
previous day. Thus, backtesting aims to evaluate the VaR performance using daily data. In practice, many metrics and statistical tests are used to assess the
performance of the VaR in terms of estimated failure probabilities. Normally, all backtests present, to varying degrees, robustness and fragility in the
estimates of their statistics, which suggests the use of more than one criterion to test the performance of an estimated VaR.
Typically, a VaR is estimated for one day, with a certain confidence level. However, the estimated VaR may present levels of error that make it
inconsistent, essentially due to the size of the sample used and to inaccuracies in the techniques for estimating the parameters that describe the model. Thus, the VaR
failure probabilities may be underestimated or overestimated, and one must check whether these differences are statistically significant because,
due to such errors, two types of mistaken conclusion can occur: (i) an accurate risk model may be erroneously classified as inaccurate and rejected; (ii)
an inaccurate model may be erroneously classified as accurate and accepted. Due to these possibilities of inconsistency, backtesting is
applied to verify whether the estimates of the probabilities of failures (or gains) are significant, at a certain level of significance.
The limits produced by existing VaR models are given in monetary units, or as a proportion or percentage (in this study they are given as a
proportion). To perform the backtesting analysis, the estimated VaR limits are compared with the returns which, in this study, are a sample of returns from an out-
of-sample period.
A first test to be applied is the semaphore (traffic-light) test, proposed by the Basle Committee on Banking Supervision (1996), based on the
cumulative distribution function (cdf) of a binomial distribution, which gives the probability of obtaining up to x successes in n
independent observations (in this study, the number of out-of-sample days used in the VaR evaluation), with probability p of success. The values of
n must be positive integers, the values of x must be in the interval [0, n], and the values of p must be in the interval [0, 1]. The cdf of the binomial distribution,
involving the highlighted variables and parameters, is given by the following equation:

$$y = F(x \mid n, p) = \sum_{i=0}^{x} \binom{n}{i} p^{i} (1 - p)^{n-i}\, I_{\{0,1,\ldots,n\}}(x) \qquad (35)$$

The result y is the probability of observing up to x successes in n independent trials, where the probability of success in a given trial is p. The indicator
function, $I_{\{0,1,\ldots,n\}}$, ensures that x only adopts values in the set {0, 1, ..., n}. For a given number of failures x produced by the predictions of the VaR model, we
can estimate the probability of observing up to x failures (that is, the cumulative probability up to x), using the binomial distribution given by Equation (35).
From this result, in the semaphore test, three zones are defined as follows: (i) the "red" zone begins at the number of failures corresponding to a
cumulative probability equal to or above 99.99% in (35); it is unlikely that so many failures would come from a correct VaR model; (ii) the "yellow" zone covers the
numbers of failures whose cumulative probability is equal to or greater than 95%, but less than 99.99%; even though the number of violations is high, the
violation count is not excessively high; and (iii) everything below the yellow zone (cumulative probability below 95%) is the "green"
zone. Thus, if the VaR model produces few failures, it falls into the green zone; only many failures lead to rejection of the model, in the red zone.
In the green zone, the test results are consistent with an exact VaR model, and the probability of erroneously accepting an imprecise model is low.
At the other extreme, in the red zone, the test results are extremely unlikely to have come from an accurate model, and the probability of erroneously rejecting an
exact model is remote. Between these two cases lies the yellow zone, where the backtesting results can be consistent with either accurate or inaccurate
models.
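A minimal implementation of the traffic-light zoning, using the binomial cdf of Equation (35). The out-of-sample size n = 210 and p = 0.01 (a 99% VaR) mirror the study's setup, while the failure counts passed in are hypothetical.

```python
from math import comb

def binom_cdf(x, n, p):
    """Equation (35): probability of observing up to x failures in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(x + 1))

def traffic_light_zone(x, n, p):
    """Basel traffic-light zone from the cumulative probability of x failures."""
    y = binom_cdf(x, n, p)
    if y >= 0.9999:
        return "red"
    if y >= 0.95:
        return "yellow"
    return "green"

print(traffic_light_zone(2, 210, 0.01))    # "green": close to the expected ~2.1 failures
print(traffic_light_zone(12, 210, 0.01))   # "red": far too many failures
```

With p = 0.01 and n = 210, about 2.1 failures are expected, so a count near two lands in the green zone while a dozen failures is essentially impossible under a correct model.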
Kupiec (1995) introduced the proportion-of-failures (POF) test, defined by a likelihood-ratio function, to test whether the probability of failure is
consistent with the probability p implied by the VaR confidence level. If the data suggest that the probability of failure differs from p, the VaR model is
rejected. Thus, the null hypothesis assumes a failure rate equal to the expected rate, as follows:

$$H_0: \hat{p} = p \qquad (36)$$

where $\hat{p} = x/n$, with x being the number of failures and n the number of observations.
As mentioned in Nieppola (2009), if the number of days with excessive losses is very high compared with the chosen significance level, then the VaR
underestimates the risk, whereas a reduced proportion of failures, also compared with the significance level, suggests an overestimation of risk.
The statistic of the POF test is defined by:

$$LR_{POF} = -2\ln\!\left[\frac{(1 - p)^{\,n-x}\,p^{\,x}}{\left(1 - \frac{x}{n}\right)^{n-x}\left(\frac{x}{n}\right)^{x}}\right] \qquad (37)$$

where x is the number of failures, n the number of observations, and p = 1 minus the VaR confidence level.


The statistic given by Equation (37) is asymptotically distributed as a chi-square variable with one degree of freedom, $\chi^2(1)$. The VaR model fails the
test if the likelihood ratio given in (37) exceeds a critical value, which depends on the confidence level of the test.
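The POF statistic of Equation (37) can be sketched directly. The failure count below is hypothetical, and 3.841 is the 5% critical value of the chi-square distribution with one degree of freedom.

```python
from math import log

def kupiec_pof(x, n, p):
    """Eq. (37): LR_POF = -2 ln[ (1-p)^(n-x) p^x / ((1-x/n)^(n-x) (x/n)^x) ]."""
    phat = x / n
    log_num = (n - x) * log(1 - p) + x * log(p)
    log_den = (n - x) * log(1 - phat) + x * log(phat)
    return -2.0 * (log_num - log_den)

# 10 failures in 210 days against a 99% VaR (expected ~2.1 failures)
lr = kupiec_pof(10, 210, 0.01)
print(lr > 3.841)    # True: exceeds chi-square(1) critical value, reject the model
```

Here the observed failure rate (about 4.8%) is far above the 1% implied by the VaR confidence level, so the statistic rejects the null hypothesis comfortably.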
Kupiec also proposed a second test, called time until first failure (TUFF), which examines when the first rejection occurred. If it happens
too soon, the VaR model fails the test. Thus, if we consider that the failures follow a binomial distribution, the probability of a failure is the inverse
of the number of days until the first failure. The null hypothesis for this test is as follows:

$$H_0: p = \hat{p} = \frac{1}{v} \qquad (38)$$

where v is the number of days until the first rejection.


The TUFF test is also based on a likelihood-ratio function, but the underlying distribution is a geometric distribution. Thus, the test statistic is given
by:

$$LR_{TUFF} = -2\ln\!\left[\frac{p\,(1 - p)^{\,v-1}}{\frac{1}{v}\left(1 - \frac{1}{v}\right)^{v-1}}\right] \qquad (39)$$

This statistic is also asymptotically distributed as a chi-square variable with one degree of freedom, $\chi^2(1)$.
Therefore, according to Kupiec (1995), if $LR_{POF}$ or $LR_{TUFF}$ exceeds the distribution's critical value, for a given confidence level, the null hypothesis is
rejected. These tests measure only the number of failures and ignore the temporal dynamics of failures, related to the independence and conditional
coverage properties.
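Analogously, a sketch of the TUFF statistic of Equation (39), with a hypothetical first-failure day:

```python
from math import log

def kupiec_tuff(v, p):
    """Eq. (39): LR_TUFF = -2 ln[ p(1-p)^(v-1) / ((1/v)(1-1/v)^(v-1)) ]."""
    log_num = log(p) + (v - 1) * log(1 - p)
    log_den = log(1 / v) + (v - 1) * log(1 - 1 / v)
    return -2.0 * (log_num - log_den)

# first failure already on day 5 against a 99% VaR: suspiciously early
lr = kupiec_tuff(5, 0.01)
print(lr > 3.841)    # True: reject at the 5% level
```

A first failure on day 5 implies an empirical failure rate of 1/5, far from the 1% implied by the confidence level, so the test rejects.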
Christoffersen (1998) proposed a test to measure whether the probability of observing a failure on a specific day depends on whether a failure occurred
on the previous day. In contrast with the unconditional probability of observing a failure (as in $LR_{POF}$), the Christoffersen test measures the dependence of failures between
consecutive days. This test covers not only the violation rate but also the independence6 of failures. If the model is exact, then a failure today should not depend
on whether or not a failure occurred the previous day.
The test statistic for failure independence is a likelihood-ratio function, written as follows:

$$LR_{ind} = -2\ln\!\left[\frac{(1 - \pi)^{\,n_{00}+n_{10}}\,\pi^{\,n_{01}+n_{11}}}{(1 - \pi_0)^{\,n_{00}}\,\pi_0^{\,n_{01}}\,(1 - \pi_1)^{\,n_{10}}\,\pi_1^{\,n_{11}}}\right] \qquad (40)$$

where $n_{00}$ is the number of periods without a failure followed by a period without a failure; $n_{10}$ the number of periods with a failure followed by a period without
a failure; $n_{01}$ the number of periods without a failure followed by a period with a failure; and $n_{11}$ the number of periods with a failure followed by a period with a
failure.
The parameters $\pi_0$, $\pi_1$ and $\pi$ are defined, respectively, as follows:

$$\pi_0 = \frac{n_{01}}{n_{00} + n_{01}}, \qquad \pi_1 = \frac{n_{11}}{n_{10} + n_{11}}, \qquad \pi = \frac{n_{01} + n_{11}}{n_{00} + n_{01} + n_{10} + n_{11}} \qquad (41, 42 \text{ and } 43)$$

where $\pi_0$ is the probability of having a failure in period t given that there was no failure in period t − 1; $\pi_1$ the probability of having a failure in period t given that a
failure occurred in period t − 1; and $\pi$ the unconditional probability of having a failure in period t.
The statistic $LR_{ind}$, given in (40), is asymptotically distributed as a chi-square with one degree of freedom, $\chi^2(1)$. This failure-independence test statistic
establishes, in the null hypothesis, that the conditional probability of failure must remain constant; that is:

$$H_0: P(I_t = 1 \mid I_{t-1}) = P(I_t = 1) \qquad (44)$$
By combining the independence test statistic with the Kupiec proportion-of-failures test, $LR_{POF}$, a combined test is obtained that measures not only the
correct failure rate but also the independence of failures, that is, conditional coverage (CC), as follows:

$$LR_{CC} = LR_{POF} + LR_{ind} \qquad (45)$$

The conditional coverage test is asymptotically distributed as a chi-square variable with two degrees of freedom, $\chi^2(2)$, stating that the probability of
failure must be constant and equal to the coverage rate p; that is:

$$H_0: P(I_t = 1 \mid I_{t-1}) = P(I_t = 1) = p \qquad (46)$$
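The independence statistic (40)-(43) and the conditional-coverage combination (45) can be sketched as follows; the transition counts are hypothetical, chosen to mimic clustered failures over 210 out-of-sample days.

```python
from math import log

def christoffersen_ind(n00, n01, n10, n11):
    """Eq. (40): likelihood-ratio test of failure independence between days."""
    pi0 = n01 / (n00 + n01)                        # Eq. (41)
    pi1 = n11 / (n10 + n11)                        # Eq. (42)
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)     # Eq. (43)
    log_num = (n00 + n10) * log(1 - pi) + (n01 + n11) * log(pi)
    log_den = (n00 * log(1 - pi0) + n01 * log(pi0)
               + n10 * log(1 - pi1) + n11 * log(pi1))
    return -2.0 * (log_num - log_den)

def christoffersen_cc(lr_pof, lr_ind):
    """Eq. (45): conditional coverage = coverage + independence."""
    return lr_pof + lr_ind

# hypothetical transition counts: failures tend to follow failures (n11 large)
lr_ind = christoffersen_ind(n00=195, n01=5, n10=5, n11=4)
print(lr_ind > 3.841)    # True: clustered failures reject independence
```

Here 4 of the 9 failures immediately follow another failure, so the conditional failure probability $\pi_1$ is far above $\pi_0$ and the independence hypothesis is rejected; the CC statistic would then be compared with the $\chi^2(2)$ critical value 5.991.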
In this section we characterized a set of backtests with distinct properties that will be applied together to highlight the performance of the estimated
VaR models. An individual backtest can leave doubts about the risk of classifying a VaR model as accurate or
inaccurate; by using a set of backtests in the analysis of VaR models, however, more robust and conclusive statements can be made about their performance.
3. Methodological Procedure
Twenty-one shares of the BM&F-BOVESPA Ibovespa and twenty-two shares of the DJIA were selected for the database. The selected shares compose,
respectively, the Ibovespa and DJIA indices. We sought to select shares that had sufficient data (in particular among the BM&F-BOVESPA shares) and
that were not very similar, excluding, for example, shares differing only in being common or preferred. The closing-price7 series for the selected shares were
taken, totaling 890 days per share. The selected periods for the Ibovespa and DJIA stocks presented a few differences, depending on public holidays, but
both involve dates between 4/1/2013 and 9/7/2016.
The series of compound returns were estimated based on the closing prices of the selected shares. The wavelet decomposition
process was then applied to the return series, using the methodology presented in Subsection 2.2.1. Eight levels of decomposition were chosen; there are therefore, in total,
eight wavelet series, at eight different frequency levels, for each composite return series. Figures 1 and 2 illustrate the series generated by the wavelet method.

6
The independence test verifies the degree of concentration of the failures, considering that the failures do not always distribute in a homogeneous way over the
period of time of the observations. Usually, in practice, the failures are often concentrated, demonstrating a dependency character, a factor that can change the number
of violations, according to the size of the sample of observations (Godlewski, C. and Merli, M., 2012).
7
Closing prices were obtained from the database at https://finance.yahoo.com.
Figure 1: Decomposition series of the daily returns of the BBSD3 share, a BM&F-BOVESPA stock, in eight levels (with the chart at the top left
corresponding to level j = 1, the chart at the top right corresponding to level j = 2, and so on).

Figure 1 shows the decompositions obtained from the series of returns of the BBSD3 stock, traded on the BM&F-BOVESPA. Figure 2
illustrates the decompositions obtained for the MMM stock of the company 3M, on the NYSE. Each graph in the figures is a series of returns at a given level of
decomposition, with j representing these levels over the interval j = [1, ..., 8]. It is clear from Figures 1 and 2 that, as the level of decomposition increases, the
series obtained have longer wavelengths, but with smaller oscillation amplitudes than the previous ones. From this information, we conclude that the
impacts of the higher-level decompositions (lower frequencies) on the variance of the original return series are smaller than the impacts of the lower-level
decompositions (higher frequencies). This has a significant implication for this work: if the oscillation amplitudes of the low-frequency series
are small, then their impacts on the VaR should also be small. And, as seen in Subsection 2.2.1, the low-frequency series are the long-term trends. A
preliminary conclusion, then, is that for most shares the long-term oscillations are likely to have little effect on the VaRs.
In general, the wavelet decompositions of the series of returns have an economic interpretation. Each level of detail can be seen as a cycle whose
period varies from level to level: the higher the level of decomposition, the longer the cycle's time interval. Berger (2016), in his article on the impact of long-term
seasonalities on VaR, generated a table that shows an interpretation for each level of decomposition. Because it is intuitive and enlightening, Table 1 below is
similar to the one presented by Berger (2016). The difference is that Berger (2016) estimated the contribution of each decomposition to the variance for
simulated returns, whereas Table 1 in this study uses real data, corresponding to the BBSD3 share of BM&F-BOVESPA and the MMM share of the NYSE.

Figure 2: Series of the decompositions of the daily returns of the MMM stock of the NYSE, in eight levels (with the chart at the top left
corresponding to level j = 1, the chart at the top right corresponding to level j = 2, and so on).

In its last column, Table 1 presents the levels of oscillation, according to the wavelet decomposition, which are referenced by Dj (with j
comprising the integers in the interval [1, 2, ..., 8]). The frequency column (second column) shows the specific time8 intervals of the cycles, while the
memory column (first column) gives the economic interpretation of the period in terms of cycle sustainability. From these interpretations, it can be inferred that
the decomposition D2, for example, incorporates weekly cycles, with frequencies of 4 to 8 days. This frequency band captures the stock market "Monday
Effects". The detail level D4 incorporates the effects of monthly cycles and can capture the "Turn-of-the-Month Effects" (BERGER, 2016). Thus, if a researcher
wants to remove or highlight any of these effects, he may remove from, or incorporate into, his analysis the decompositions that capture those effects.
Table 1 also shows, in the columns of contributions to variance (third and fourth columns), the percentage
contributions of each component to the variance of the total returns of the BBSD3 share of BM&F-BOVESPA and the MMM share of the NYSE, respectively.
What becomes apparent when comparing the contribution percentages of these two shares is that the lower-frequency cycles do not contribute
significantly to the variance of the respective total returns. In fact, it is the decompositions of the short- and medium-term oscillations, and the stochastic noise, that
generate the major part of the variance of the return series. To demonstrate this effect clearly, four graphs were generated in Figure 3,
for four shares: BBAS and BRML of BM&F-BOVESPA, and MMM and JNJ of the NYSE.

Table 1: Impact of the variances of the decomposition components at each level on the variance of the respective returns, with an economic interpretation of these
decompositions.

Memory        | Frequency      | Contribution to variance, BBSD3 share | Contribution to variance, MMM share | Detail level
Noise         | 2-4 days       | 53%                                   | 52%                                 | D1
Short term    | 4-8 days       | 24%                                   | 26%                                 | D2
Medium term   | 8-16 days      | 12%                                   | 12%                                 | D3
Medium term   | 16-32 days     | 6%                                    | 5%                                  | D4
Medium term   | 32-64 days     | 3%                                    | 2%                                  | D5
Long term     | 64-128 days    | 1%                                    | 1%                                  | D6
Long term     | 128-256 days   | 1%                                    | 1%                                  | D7
Trend         | 256-512 days   | 0%                                    | 1%                                  | D8

8
The values correspond to the interval between the filtering components, $2^{j}$ and $2^{j+1}$ days, for j = 1, 2, ..., 8.

Each graph in Figure 3 shows the percentage participation of each decomposition level j in the variance of the respective
returns. The results are very similar for all the selected shares, presenting little difference from one share to another. It is also
noted in Figure 3 that the interpretations presented above, in the analysis of Table 1, hold for these four shares and, in general, it can be stated that
the participation of the components in the variance is approximately equal for all of them; that is, the higher-frequency decompositions have more impact on the
variance than the lower-frequency decompositions.
After obtaining the decomposed series, the series of reconstructed returns were generated. These series were obtained by adding the decomposed
components in sequential order, from the lowest-level components up to those of higher levels, in different series. The mathematical
representation of this process is given by the following relation:

$$r_t^{D_{1\text{-}J}} = \sum_{j=1}^{J} D_{j,t} \qquad (47)$$

Equation (47) is Equation (19), keeping the detail components $D_j$ and removing the smooth component $S_J$. This is done because, as already
emphasized, the impact of the smooth component on series of daily returns is practically nil. In Equation (47), the number of detail levels for each share, given by J, is
limited to eight, since the wavelet decomposition process was performed with eight levels. For example, to generate a series considering only the sums of the
wavelet series from levels one through five, Equation (47) would become $r_t^{D_{1\text{-}5}} = \sum_{j=1}^{5} D_{j,t}$. Following this method, eight
series of reconstructed returns were generated for each share, with J in Equation (47) ranging from one to eight. This totaled 168 rebuilt series for the BM&F-BOVESPA shares and 176 for the
NYSE shares.
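The reconstruction logic of Equations (19) and (47) can be illustrated with a simplified additive decomposition. The sketch below uses an à-trous, Haar-style smoothing cascade as a stand-in for the MODWT filters actually used in the study; what matters here is the additive structure: the details D_1..D_J and the smooth S_J sum exactly back to the series, and partial sums such as D1-5 drop the low-frequency components.

```python
import numpy as np

def haar_mra(x, J):
    """Additive a-trous multiresolution analysis (simplified MODWT stand-in):
    returns details D_1..D_J and smooth S_J with x = sum(D_j) + S_J exactly."""
    s_prev = np.asarray(x, dtype=float)
    details = []
    for j in range(1, J + 1):
        lag = 2 ** (j - 1)
        # circular smoothing with the averaging filter upsampled by 2^(j-1)
        s_j = 0.5 * (s_prev + np.roll(s_prev, lag))
        details.append(s_prev - s_j)       # detail component D_j
        s_prev = s_j
    return details, s_prev

rng = np.random.default_rng(0)
r = rng.normal(size=512)                   # toy daily return series
D, S = haar_mra(r, J=8)
r_d18 = sum(D) + S                         # full reconstruction, Eq. (19)
r_d15 = sum(D[:5])                         # partial reconstruction D1-5, Eq. (47)
print(np.allclose(r_d18, r))               # True: the decomposition is exact
```

Because each step splits the previous smooth into a new smooth plus a detail, the sum telescopes back to the original series with no loss of information, mirroring the energy-preservation property of the MODWT.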

Figure 3: Participation in the variance of each decomposition level, for four shares (BBAS and BRML of BM&F-BOVESPA, and MMM and JNJ of NYSE).
Figure 4 illustrates the impact that each rebuilt component causes on the variance of the original return series, for the BBAS and BRML3 shares of
BM&F-BOVESPA and for the NYSE's MMM and JNJ shares. On the abscissa axis are the levels of each reconstructed series, while on the
ordinate axis are the respective variances. It is observed in this figure that the reconstructions at lower levels (higher frequencies) hold the most significant parts of
the fluctuation energy (variance), and that this contribution decreases as the decomposition level increases (as the oscillation frequencies decrease), becoming
practically insignificant from level 6 and vanishing completely at level 8. As shown in Figure 4, below, the evolution is different for each share, since each share
has a particular level of volatility. In general, this result reinforces the conclusion presented by Berger (2016), that long-run cycles have little effect on the
variability of series of daily returns. It is also observed in Figure 4 that the reconstruction involving the wavelet decompositions from levels one to eight,
D1-8, is exactly the representation of the original return series, as can be seen in the figure by comparing the D1-8 series with the original return
series r. The two series are exactly the same, so the D1-8 series will be used as a proxy for the VaR estimation of the original return series. Finally, it can be
pointed out that the variance of a series of daily returns is preserved under its decomposition via the MODWT algorithm, as expected, due to the energy-
preservation characteristic of this wavelet function.

Figure 4: The variance contained in each rebuilt series, for the BBAS and BRML3 shares of BM&F-BOVESPA and the NYSE's MMM and JNJ shares.
In the study, the estimates and predictions of the conditional volatilities present in the series of returns were made using the reconstructed series
obtained from the wavelet decompositions, as previously highlighted. The FIGARCH(1,d,1) process was chosen to generate these estimates and predictions
of conditional variance. As already emphasized, this process offers great flexibility to deal both with long-memory variables, related to shocks in
innovations that evolve into stochastic trends, and with short-memory variables, related to shocks in innovations that evolve into economic
cycles (Baillie et al., 1996). In total, four9 variations of the FIGARCH model were tested in the study, with FIGARCH(1,d,1) showing the best results. This
model was used to estimate the parameters of each series of reconstructed returns, in a total of eight series for each share. For this purpose, the MFE10 library was
used to estimate these parameters.

Figure 5 shows the conditional variances estimated by the FIGARCH(1,d,1) process for the BBAS3 stock of BM&F-BOVESPA and for the MMM
stock of the NYSE, for the reconstituted components D1-2 and D1-5. The figure shows two colored lines: the blue line characterizes the conditional volatilities
estimated from the reconstructed return series D1-8, using data for 680 days, applied in the calibration of the parameters by FIGARCH(1,d,1); the red
line is the one-day-ahead forecast, using a 210-day Rolling Window11, for the same series. It is observed in Figure 5 that the predictability of the conditional variance is
satisfactory for both stocks, demonstrating an evolutionary process very similar to that of the conditional variances estimated over the calibration period of the
model, thus preserving the oscillation energy of the database used in the calibration. This behavior was observed for all the reconstructed series and for all the
shares, which characterizes the robustness of the process of predicting the conditional volatilities of the respective series.

9
The other three were FIGARCH(1,d,0), FIGARCH(0,d,1) and FIGARCH(0,d,0).
10
The library was made available by Kevin Sheppard for MatLab.

Figure 5: Conditional volatilities estimated and predicted by FIGARCH(1,d,1) for the BBAS stock of BM&F-BOVESPA and for the MMM stock of the NYSE,
for the reconstituted components D1-2 and D1-5.

It should be emphasized that the predictions of the conditional variances were performed following the rolling-window technique developed by Klein
and Walther (2016). This computational routine uses Eqs. (33) and (34) to predict the variance one day ahead, applying the formula with a number of lags for
the integration process, which in this study was 470 days. The process is then repeated, rolling the integration window forward to
forecast each subsequent day, until covering the 210-day forecast interval for the conditional variance.

From the predictions of the conditional variances obtained by the procedure presented in the previous paragraph, it was possible to calculate the VaRs for each series of reconstructed returns. Two methods, Parametric and Monte Carlo (MC), were used, as presented in Subsection 2.1. The rolling window generated 210 predictions of conditional variance for each reconstructed series of returns; these values were used to estimate the standard deviations employed in the VaR calculations, according to Equations (5) and (6), which generated 210 VaRs for each series of reconstructed returns. These estimated VaRs were then tested for their quality in forecasting losses and gains over an out-of-sample series of returns, according to the procedure given by Relations (7) to (10). Finally, a set of backtests was applied to evaluate the performance of the VaRs estimated in this study, whose concepts and test procedures were described in Subsection 2.4.
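Given a one-day-ahead standard deviation, the two VaR constructions can be sketched as below, assuming a zero-mean Gaussian return (the paper's Equations (5) and (6) are not reproduced here, and the function names are illustrative). The 'up' VaR would use the upper quantile, 1 − α, symmetrically.

```python
import numpy as np
from statistics import NormalDist

def var_parametric(sigma, alpha=0.01):
    """Parametric (Gaussian) daily 'down' VaR: the alpha-quantile of a
    N(0, sigma^2) return."""
    z = NormalDist().inv_cdf(alpha)        # about -2.326 for alpha = 1%
    return z * sigma

def var_monte_carlo(sigma, alpha=0.01, n_draws=10_000, rng=None):
    """Monte Carlo daily 'down' VaR: empirical alpha-quantile of
    simulated Gaussian returns."""
    rng = rng or np.random.default_rng()
    return np.quantile(rng.normal(0.0, sigma, size=n_draws), alpha)

sigma = 0.02                               # forecast daily std. deviation
vp = var_parametric(sigma)
vmc = var_monte_carlo(sigma, rng=np.random.default_rng(1))
# both are negative numbers close to -0.0465 (a 4.65% daily loss)
```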
4 Results and analysis

Following the methodological procedure presented in Section 3, twenty-one Brazilian shares were selected from BM&FBOVESPA's Ibovespa and twenty-two shares from the NYSE's DJIA. Based on the composite returns, we applied the wavelet decompositions to eight levels and then grouped them, creating new series: for example, agglutinating levels one and two into a new series called D1-2, levels one, two and three into the series D1-3, and so on, generating in total eight new reconstructed series for each share. Finally, from these series of reconstructed returns, the conditional volatilities and the 'down' and 'up' Value-at-Risk were estimated, with confidence levels of 95% and 99%, using the Monte Carlo and Parametric methods, whose methodological procedures were presented in Subsection 2.1.1.
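The reconstruction scheme can be illustrated with a minimal Haar DWT in place of the paper's actual wavelet filter (the boundary handling and filter choice may differ): decompose the returns into eight detail levels, zero the detail levels above k (and, in this sketch, the final smooth), and invert to obtain D1-k. The function names are illustrative.

```python
import numpy as np

def haar_analysis(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (smooth)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
    return a, d

def haar_synthesis(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def reconstruct_d1_to_k(x, J, k):
    """Series D1-k: keep detail levels 1..k, zero levels k+1..J and
    the level-J smooth, then invert the transform."""
    details, a = [], x
    for _ in range(J):
        a, d = haar_analysis(a)
        details.append(d)
    a = np.zeros_like(a)                     # drop the coarse smooth
    for j in reversed(range(J)):
        if j + 1 > k:
            details[j] = np.zeros_like(details[j])
        a = haar_synthesis(a, details[j])
    return a

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.02, size=256)          # length 2^8 allows 8 levels
d1_2 = reconstruct_d1_to_k(r, J=8, k=2)      # short-term components only
d1_8 = reconstruct_d1_to_k(r, J=8, k=8)      # all eight detail levels
```

Zeroing coefficients of an orthogonal transform is an orthogonal projection, so each D1-k carries at most the energy of the original series; in the paper D1-8 is treated as the original series, whereas this sketch also removes the level-8 smooth.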
In particular, for the Monte Carlo VaR estimates, thirty-two Monte Carlo stochastic realizations were generated for the estimated VaRs, and the averages of these realizations were then calculated for each time point. This procedure made it possible to improve the accuracy of the prediction tests. As already explained, the results were obtained from a model calibrated using a sample of 680 observations for each share, with a rolling window procedure of 470 points, and using a series of 210 out-of-sample returns for each share to carry out the verification tests of the forecasting capacity of the estimated VaRs.
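The averaging of the thirty-two stochastic realizations can be sketched as follows (Gaussian returns and a 1% 'down' VaR assumed; the per-realization draw count is illustrative). Averaging independent realizations reduces the simulation noise of the quantile estimate by roughly a factor of √32.

```python
import numpy as np

def mc_var_once(sigma, alpha, n_draws, rng):
    """One Monte Carlo realization of the daily 'down' VaR."""
    return np.quantile(rng.normal(0.0, sigma, size=n_draws), alpha)

rng = np.random.default_rng(42)
sigma, alpha = 0.02, 0.01
realizations = [mc_var_once(sigma, alpha, 2_000, rng) for _ in range(32)]
var_avg = float(np.mean(realizations))      # value used at each time point
noise_single = float(np.std(realizations))  # noise of one realization
```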
The predictive capacity of all VaRs was tested by estimating the failure percentages and comparing them with the established significance level for each rebuilt return component. Then the back-testing was performed, as presented in Subsection 2.4, in order to verify the statistical significance and the characteristics of these estimated failure probabilities. Therefore, this subsection will present the results of the estimated failure percentages and the back-testing12 performed for VaRdown and VaRup, with 95% and 99% confidence, for all assets included in the study, from the Ibovespa of BM&FBOVESPA and the DJIA of the NYSE. The results corresponding to VaRdown estimated by the Monte Carlo method are presented in Subsection 3.1.1, while the results for VaRup estimated by the Monte Carlo method, and those estimated by the Parametric method, are presented in Appendices A and B.
Throughout Subsection 3.1.1, in addition to the estimated results, a discussion and analysis of the results obtained, as well as of the behavior of the back-testing applied, will be presented.
3.1 Results related to the Ibovespa shares of BM&FBOVESPA
Tables 2 to 5, below, present the values, in percentages of failures, obtained in the tests applied to VaRdown for the MC method, with 99% and 95% confidence, involving twenty-two stocks. The values in these tables were obtained for the shares selected from the BM&F-BOVESPA Ibovespa. In each column, between the second and the next to last, are the percentage failure values of the reconstructed series, denominated, in sequence, D1-8, D1-7, ..., D1-2, D1. In the three lines below the failure probabilities, π̂, of each share, are the results of the back-testing: respectively, the traffic light (TL), the independence of failures (LR_ind), and the conditional coverage (LR_cc). As already emphasized, the reconstruction D1-8 corresponds to the estimates for the original series, and the other columns present the VaR failure percentages as the higher-level decompositions are progressively discarded.
The expectation is that, since the VaRs were estimated with 99% and 95% confidence, the failure percentages should ideally be close to 1% and 5%, respectively. But since the tests were carried out on real assets, with values occurring during a very small forecast period (of only 210 days), one can expect the possibility that the results are not perfect, considering that the shorter the out-of-sample return series used in the forecasting process, the smaller the number of failures that participate in the π̂ average estimates. To get an idea: for a VaR estimated at 1% significance, on average 2.1 failures are expected (for a VaR of 5% significance, 10.5 failures). However, if we considered an out-of-sample return series of 1000 samples for the forecasting process, practically five times greater than that used in this study, then for a VaR estimated at 1% significance, on average 10 failures would be expected (for a VaR of 5% significance, 50 failures). It is observed from this example that the larger the number of out-of-sample observations used in the calibration (and forecast) process, the better the estimates that can be obtained for the mean π̂ (average failure probability), because larger out-of-sample return series better dilute the effects of dependence between failures and the effects of concentration13 of failures. These effects occur essentially as a result of atypical innovations, possibly stemming from economic and political crises that affect the financial markets, domestically and/or internationally, or stochastic innovations related to the fundamentals of stocks.
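The back-of-envelope counts above follow directly from the binomial model of VaR violations; the standard deviation shows why a 210-day window yields noisy failure rates.

```python
import math

def expected_failures(alpha, n_days):
    """Mean and standard deviation of the number of VaR violations in
    n_days, if violations are independent Bernoulli(alpha) events."""
    mean = alpha * n_days
    std = math.sqrt(n_days * alpha * (1.0 - alpha))
    return mean, std

print(expected_failures(0.01, 210))    # 2.1 expected failures, sd ~1.44
print(expected_failures(0.05, 210))    # 10.5 expected failures, sd ~3.16
print(expected_failures(0.01, 1000))   # 10 expected failures, sd ~3.15
```

With only 2.1 expected failures and a standard deviation of about 1.44, observing zero or four failures in 210 days is unremarkable, which is why the back-testing statistics, rather than the raw failure counts, carry the inference.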

11
The rolling window was implemented following the routine presented by Klein and Walther (2016).
12
Only the back-testing results corresponding to the traffic light test, the independence of failures (LR_ind), and the conditional coverage (LR_cc) are presented, due to the lack of space to show the others, and because they are less important.
13
The concentrations have a significant influence on the estimation of the probability of failure: if the daily variations of the financial assets are independent, the failures should be distributed homogeneously over the period used in the forecasts and back-testings and converge to the expected probability. In practice, however, failures are often concentrated, and the risk of loss can then no longer be independent of time (Godlewski and Merli, 2012).
One could ask the following question: why not use a larger out-of-sample return series in the forecasting process? The answer is simple: in a FIGARCH process with a rolling window, the larger the out-of-sample return series used in the forecasting process, the longer the return series needed in the calibration of the FIGARCH model parameters, and the larger the rolling window must be. It is for this reason that many researchers, for example Dufour (2006) and Berger (2016), used long return series both in the calibration process and in the forecasting process. However, one of the objectives of this research is to verify the efficiency of VaR models in the forecasting process in the face of relatively small series, both in the calibration of the model and in the prediction-efficiency evaluation of the constructed VaR model, using return series whose sizes approximate those recommended by the Basel Committee on Banking Supervision14.
It can be inferred from Tables (2) to (5)15, referring to the VaRdown estimates obtained by the MC methodology, that there is a behavior pattern in the estimates of the failure probabilities π̂: as decompositions are removed from the rebuilt returns, the number of failures increases, both in the estimated VaRdown (Tables (2) to (5)) and VaRup (Tables (A.1) to (A.4)) calculations. This growth trend is more gradual between the components D1-8 and D1-3, and more pronounced between the rebuilt components D1-3 and D1. This effect is clearly due to the increase of the variance in the lower-level components (higher frequencies of oscillation), as shown in Figures (4) and (5) for the BM&F-BOVESPA shares. This pattern can be observed throughout all the tables presented in this section and in the appendices (see, for example, the BBAS3 share, the first share of Table (2)).
However, this fact observed in the results of Tables (2) to (5) is important, since the tendency of the increase in the number of estimated failures does not occur in a linear way as more lower-frequency decompositions (of medium and long term) are removed. For example, by removing the eighth and seventh decompositions from the reconstructed series, the number of failures remains practically stable in relation to the original return (D1-8). In fact, for the majority of shares, the number of failures only begins to grow significantly when the lower-level decompositions, with high frequencies, are removed. The intermediate decompositions, between levels four and six, also practically do not affect the quality of the VaRs: when these are removed from the series of reconstructed returns, the VaR failure numbers remain very little affected, not differing much from those obtained with the original returns.
In summary, the number of failures only begins to change significantly, for most shares, when the third and/or second decompositions are removed, due to the withdrawal of the high-frequency components, whose oscillations approach cyclic behavior, or stochastic noise, due to stochastic innovations exogenous to the financial system. This behavior can be clearly verified for most of the shares by observing the last three columns of Tables (2) to (5), referring to the reconstructed components D1-3, D1-2 and D1. This effect occurs regardless of the significance level used in the VaR estimation, be it 1% or 5%.
To better analyze these results, a last column was introduced in Tables (2) to (5), which shows the reconstructed series that caused the smallest deviations of the failure estimates with respect to the significance level. As several series presented the same deviations with respect to the significance level, we selected those that presented higher levels of independence and lower levels of concentration (factors measured by the probabilities estimated by the statistics LR_ind and LR_cc, as will be explained later). It is clearly seen from these results that, for most stocks and for all VaR models, there is a set of BM&F-BOVESPA shares, the majority, whose behaviors are due to the interactions of medium- and long-term frequency components (possibly characterized as return series with long-wavelength cyclical characteristics). This behavior for BM&F-BOVESPA shares is understandable, given that the financial markets of developing countries are more susceptible to the effects of economic and political crises and, of course, have more difficulty recovering from these effects. Therefore, it is conjectured that shares that are more influenced by medium- and low-frequency components, the majority of BM&F-BOVESPA shares, exhibit less efficient behavior in the sense of Fama (1970), at least with respect to the semi-strong form of the market efficiency hypothesis16.
However, there are a few shares, as can be seen in Tables (2) to (5), that are more influenced by the short-term, higher-frequency components, D1-3, D1-2 and D1. These shares are dominated by stochastic noise effects, following the "random walk" hypothesis, according to Fama (1970); the return data of these series consequently presented behavior of greater independence, with lower concentrations of failures that would otherwise determine correlation effects between closely grouped samples. This can be verified by observing that practically all the series of reconstructed stock returns that can be classified as D1-3, D1-2 and D1 presented higher independence and lower concentrations, as indicated by the probabilities associated with the statistics LR_ind and LR_cc, as can be seen in Tables (2) to (5).
It is also noted, comparing Tables (2) to (5) with Tables (A.1) to (A.4) of Appendix A, that the results obtained by the VaRdown and VaRup models under the MC method were very similar, which demonstrates that the VaR estimated by the Monte Carlo method, using point averages over the stochastic realizations, adequately addresses the asymmetry of the return series. Also, based on estimates made but not presented in the article, we conclude that the MC model presented more robust estimates than those obtained by the Parametric method.
Finally, we analyze the results of the back-testing presented in Tables (2) to (5). We begin with the traffic-light back-testing, which is characterized by three zones: (i) the "red" zone, in which the number of estimated failures classifies the VaR model as incorrect; (ii) the "yellow" zone, which states that even if there is a high number of violations, the violation count is not too high to reject the estimated VaR model; (iii) the "green" zone, which indicates that the VaR model produces very few failures and can be considered correct. According to this procedure, it is observed in Tables (2) to (5) that most of the estimated VaRs are characterized as models that estimate with few failures and are likely to be consistent with an exact VaR model, the possibility of mistakenly accepting such models as imprecise being low. Therefore, it can be conjectured that, at least, the VaRs estimated for the reconstructed series that presented the smallest deviations from the significance levels can be accepted as correct.
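The zone boundaries of the traffic-light test can be sketched from the cumulative binomial probability of the observed violation count, following the Basel convention of a green zone below 95% cumulative probability and a yellow zone up to 99.99% (the paper's exact implementation for a 210-day window may differ).

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1.0 - p)**(n - i)
               for i in range(k + 1))

def traffic_light(n_failures, n_days, alpha):
    """Basel-style zone from the cumulative probability of observing
    at most n_failures violations."""
    cum = binom_cdf(n_failures, n_days, alpha)
    if cum < 0.95:
        return "Green"
    if cum < 0.9999:
        return "Yellow"
    return "Red"

print(traffic_light(2, 210, 0.01))    # near the expected count: Green
print(traffic_light(30, 210, 0.05))   # far above the 10.5 expected: Red
```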
The back-testing of independence between failures supports the null hypothesis that the time between failures remains approximately constant throughout the out-of-sample return series during the forecasting process. Therefore, when the statistic LR_ind produces a probability level (1 − π̂) close to the confidence level established in the VaR estimates, the failure probability demonstrates homogeneous behavior, which allows the null hypothesis to be accepted. The same reasoning applies to the conditional coverage back-testing, given by the statistic LR_cc, which states in its null hypothesis that the VaR failure probability should be approximately equal to the significance level used in the estimation of the VaR model. In this case, when (1 − π̂) approaches the confidence level established in the VaR estimation, the failure probability proves to be homogeneous (independent), and the failure probability must be constant and equal to the estimated coverage rate π̂. These arguments can be verified by analyzing formulas (37) and (41) to (44). It is observed that LR_cc allows the estimated π̂ to converge to the significance-level probability if LR_cc converges to zero. Likewise, if LR_ind converges to zero, π̂ converges to the significance-level probability. However, for this to happen, it is necessary that the numbers of periods without failure become equal across all the intervals between failures. Finally, the assumptions emphasized above induce the same conclusion for LR_cc, according to Equation (44).
In Tables (2) to (5), in the lines specified by LR_ind and LR_cc, the first column under the indication of each reconstructed series (for example, D1-8) presents the value of the statistic, and the column beside it the probability level (1 − π̂). Therefore, the conclusions drawn from the results of these statistics are that when (1 − π̂) obtained by LR_ind approaches the confidence level established in the VaR estimation, the failure probability shows itself to be homogeneous (independent). The same reasoning is valid in analyzing the estimate (1 − π̂) obtained by LR_cc; that is, when (1 − π̂) approximates the confidence level applied in the VaR estimation, the failure probability can be considered constant and equal to the coverage rate π̂. It can be seen from the results presented in Tables (2) to (5) that the best estimates of failure probability occur when the (1 − π̂) values are closest to (1 − α). This type of behavior also holds for the VaRup estimates (Tables (A.1) to (A.4), Appendix A) and for the estimates obtained by the Parametric method, not presented in the article.
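The likelihood-ratio backtests discussed above can be sketched from a 0/1 hit sequence (1 = VaR violation); the standard Kupiec and Christoffersen forms are assumed here to correspond to the paper's Equations (37) and (41) to (44).

```python
import math

def _ll(x, n, p):
    """Bernoulli log-likelihood with the 0*log(0) = 0 convention."""
    out = 0.0
    if x > 0:
        out += x * math.log(p)
    if n - x > 0:
        out += (n - x) * math.log(1.0 - p)
    return out

def lr_uc(hits, alpha):
    """Kupiec unconditional coverage statistic (chi-square, 1 df)."""
    n, x = len(hits), sum(hits)
    return -2.0 * (_ll(x, n, alpha) - _ll(x, n, x / n))

def lr_ind(hits):
    """Christoffersen independence statistic (chi-square, 1 df),
    based on first-order transition counts of the hit sequence."""
    n00 = n01 = n10 = n11 = 0
    for prev, cur in zip(hits[:-1], hits[1:]):
        if prev == 0 and cur == 0:   n00 += 1
        elif prev == 0 and cur == 1: n01 += 1
        elif prev == 1 and cur == 0: n10 += 1
        else:                        n11 += 1
    pi01 = n01 / (n00 + n01) if n00 + n01 else 0.0
    pi11 = n11 / (n10 + n11) if n10 + n11 else 0.0
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll_alt = _ll(n01, n00 + n01, pi01) + _ll(n11, n10 + n11, pi11)
    ll_null = _ll(n01 + n11, n00 + n01 + n10 + n11, pi)
    return -2.0 * (ll_null - ll_alt)

def lr_cc(hits, alpha):
    """Conditional coverage: LR_cc = LR_uc + LR_ind (chi-square, 2 df)."""
    return lr_uc(hits, alpha) + lr_ind(hits)

hits = [1] * 10 + [0] * 190            # 5% violation rate, but clustered
print(round(lr_uc(hits, 0.05), 4))     # ~0: coverage matches 5% exactly
print(round(lr_ind(hits), 2))          # large: failures are concentrated
```

The example shows why both tests are needed: a sequence can have exactly the right failure rate (LR_uc near zero) yet violate independence badly when the failures cluster, which LR_ind, and hence LR_cc, flags.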
3.2 Results related to the shares of the DJIA index
Tables (6) to (9), below, present the values, in percentages of failures, obtained in the tests applied to VaRdown for the MC method, with 99% and 95% confidence. The values contained in these tables were obtained for the shares comprising the DJIA Index. These tables follow the same presentation scheme as the tables shown above for the BM&F-BOVESPA Ibovespa shares, involving the twenty-two shares chosen.

14
The Basel Committee on Banking Supervision recommends the use of a database of the last twelve months, approximately 250 observations, to analyze the effects of back-testing.
15
Table (3) is a continuation of Table (2), and Table (5) is the continuation of Table (4).
16
The weak form of the efficient market hypothesis claims that prices reflect implicit information in the past price sequence. The semi-strong hypothesis states that prices reflect all relevant information that is publicly available, while the strong form of market efficiency states that information known to any participant is reflected in market prices; Fama (1970).
It can be seen in Tables (6) to (9)17, below, that the results obtained are similar to those obtained for the Ibovespa assets: the estimates of the failure probabilities π̂ follow the same pattern of behavior, in that as the lower-frequency decompositions are removed from the reconstructed returns, the number of failures increases, both in the VaRdown calculations (Tables (6) to (9)) and VaRup (Tables (B.1) to (B.4), Appendix B). As before, this tendency of increase in the number of estimated failures does not occur in a linear way: for most of the shares, the number of failures only begins to grow significantly when the decompositions of higher levels, with lower frequencies, are removed. The intermediate-level decompositions, between levels four and six, also practically do not affect the quality of the VaRs; when they are removed from the series of reconstructed returns, the VaR failure numbers remain very little affected, practically unchanged from the VaR of the original return series, D1-8. However, the growth trend is well established among the reconstructed components D1-3 and D1, whose oscillations approach cyclical behavior, or stochastic noise, due to stochastic innovations exogenous to the financial system. These effects cause higher volatilities in these reconstructed components, since the wavelet decomposition procedure retains, in the components D1, D2 and D3, the frequencies with greater variance effects (as highlighted in Figure (3), above). Observe in Tables (6) to (9) that this pattern is practically systematic throughout all of these tables for most of the shares, and also in Tables (B.1) to (B.4), Appendix B, referring to the VaRup results.
However, there is an important aspect in the results obtained for the shares of the DJIA index, different from those observed in Tables (2) to (5) for the Ibovespa shares, with few exceptions. The difference (as can be seen in Tables (6) to (9) and Tables (B.1) to (B.4), Appendix B) is that the reconstructed series that caused the smallest deviations of the failure estimates with respect to the significance level are mostly found among the reconstructed components D1-3, D1-2 or D1, and with higher levels of independence and lower levels of concentration (factors measured by the probabilities estimated by the statistics LR_ind and LR_cc, when the estimated (1 − π̂) approximates the confidence levels applied in the VaR estimates). These results demonstrate that most of the shares of the DJIA Index, for all VaR models, present a dynamic whose behavior is due to the interactions of components with medium- and short-term frequencies, or stochastic noise. This behavior for shares of the DJIA is different from that observed for the shares of the BM&FBOVESPA Ibovespa. This pattern is best observed in the last column of Tables (6) to (9), which presents, for each share, the reconstructed components with the smallest deviations of the failure percentages relative to the significance probabilities of the VaR estimates.
These effects can be associated with the characteristics of the US financial market, given that the financial markets of developed countries, unlike those of developing markets, are less likely to be affected by economic and political crises and, of course, have less difficulty recovering from the effects of crises. Therefore, it is conjectured that shares more influenced by components of medium and high frequency, the majority of the shares that are part of the DJIA Index, present more efficient behavior, in the Fama (1970) sense, according, at least, to the semi-strong form of the efficient market hypothesis, which states that prices reflect all the relevant information that is publicly available.
Regarding the aspects analyzed, it is concluded that the results showed that the DJIA index shares have a more efficient market behavior than those of the Ibovespa. This is not surprising, since the New York Stock Exchange is generally more stable and robust than the São Paulo Stock Exchange. This is evidenced by the volume of daily trading, which increases the liquidity of the assets present on the exchange, and by the relative stability of the returns that the NYSE presents compared to BM&FBOVESPA. BM&FBOVESPA, being a stock exchange in an emerging country, fluctuates more significantly with every economic or political report, whether positive or negative, national or international. The NYSE, by contrast, a stock exchange of a developed country, with greater solidity in its economic and political foundations, and due to its size, suffers less impact from instabilities and in turn presents relatively smaller oscillations; these fluctuations become significant only in moments of severe crisis.
Next, we analyze the results of the back-testing presented in Tables (6) to (9). The results presented in these tables demonstrate that, under the traffic-light approach, most of the estimated VaRs are characterized as models that estimate with few failures and are highly likely to be consistent with an accurate VaR model, the possibility of erroneously accepting these models as inaccurate being low. Therefore, one can conjecture that, at least, the VaRs estimated for the reconstructed series that presented the smallest deviations from the confidence levels can be accepted as correct.
The back-testing of independence between failures (LR_ind) supports the null hypothesis that the time between failures remains approximately constant throughout the out-of-sample return series during the forecasting process. The same is true for the conditional coverage back-testing, given by the statistic LR_cc, which establishes in its null hypothesis that the probability of estimated VaR failures is homogeneous (independent) and that the failure probability should be constant and equal to the estimated coverage rate π̂. In Tables (6) to (9), in the lines specified by LR_ind and LR_cc, the first column under the indication of each reconstructed series presents the value of the statistic, and the next column the respective probability level (1 − π̂). Therefore, the conclusions drawn from the results of these statistics are that when (1 − π̂) obtained by LR_ind approaches the confidence level established in the VaR estimation, the failure probability shows itself to be homogeneous (independent). Likewise, when the estimate (1 − π̂) obtained by LR_cc approaches the confidence level applied in the VaR estimation, the failure probability can be considered constant and equal to the coverage rate π̂. The results in Tables (6) to (9) show that the best estimates of failure probability occur when the (1 − π̂) values are closest to (1 − α). This type of behavior also holds for the VaRup estimates (Tables (B.1) to (B.4), Appendix B) and for the estimates obtained by the Parametric method, not presented in the article.
Table 4: VaRdown failure test results, with 95% confidence, estimated by the MC method, for the return of each rebuilt series of BM & F-BOVESPA shares.
4.286 5.714 4.286 6.190 6.667 5.238 7.619 12.381

TL (Green) (Green) (Green) (Green) (Green) (Green) (Yellow) (Red)


D1-3

4.00 0.05 1.97 0.16 4.00 0.05 1.49 0.22 1.09 0.30 2.53 0.11 0.50 0.48 0.22 0.64
BBAS3

4.23 0.12 2.18 0.34 4.23 0.12 2.07 0.35 2.21 0.33 2.56 0.28 3.13 0.21 17.61 0.00

2.857 2.857 2.857 2.857 2.857 3.333 3.810 5.714

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


BBDC3

D1

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.14 0.71

2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 1.38 0.50 0.68 0.71 0.36 0.84

0.476 0.952 0.952 0.952 0.952 1.905 2.857 5.238

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


BRAP4

D1

0.00 0.97 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

14.74 0.00 10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 5.49 0.06 2.39 0.30 0.02 0.99

1.905 1.905 1.905 2.381 2.381 3.810 4.762 7.619

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Yellow)


BRFS3

D1-2

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 1.11 0.29 0.29 0.94 0.00 1.00

5.49 0.06 5.49 0.06 5.49 0.06 3.73 0.15 3.73 0.15 1.79 0.41 0.22 0.92 2.63 0.27

4.762 4.762 4.762 4.762 4.762 6.667 7.143 10.476

TL (Green) (Green) (Green) (Green) (Green) (Green) (Yellow) (Yellow)


BRKM5

D1-8

3.20 0.07 3.20 0.07 3.20 0.07 3.20 0.07 3.20 0.07 1.09 0.30 0.76 0.38 3.14 0.08

3.23 0.20 3.23 0.20 3.23 0.20 3.23 0.20 3.23 0.20 2.21 0.33 2.56 0.28 13.36 0.00

17
Table (7) is continuation of Table (6) and Table (9) is continuation of Table (8).
3.810 4.286 4.762 4.762 5.714 6.190 6.190 14.762

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Red)

BRML3

D1-6
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.11 0.74

0.68 0.71 0.24 0.89 0.03 0.99 0.03 0.99 0.22 0.90 0.58 0.75 0.58 0.75 28.42 0.00

3.333 3.333 3.333 3.333 3.810 5.238 4.762 7.619

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Yellow)


BVMF3

D1-3
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.29 0.59 0.00 1.00 0.50 0.48

1.38 0.50 1.38 0.50 1.38 0.50 1.38 0.50 0.68 0.71 0.31 0.86 0.03 0.99 3.13 0.21

5.714 5.714 5.714 6.190 6.190 9.524 10.000 11.905

TL (Green) (Green) (Green) (Green) (Green) (Yellow) (Yellow) (Red)


CCRO3

D1-8
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 2.16 0.14

0.22 0.90 0.22 0.90 0.22 0.90 0.58 0.75 0.58 0.75 7.23 0.03 8.67 0.01 17.62 0.00

5.143 5.143 5.143 5.143 5.143 8.095 11.429 13.810

TL (Green) (Green) (Green) (Green) (Green) (Yellow) (Red) (Red)


CPFE3

D1-8
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.29 0.59 0.38 0.54

0.35 089 0.35 089 0.35 089 0.35 089 0.35 089 3.60 0.17 13.90 0.00 24.07 0.00

1.429 0.952 0.476 0.952 1.905 3.333 4.286 4.286

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


CPLE6

D1
0.00 1.00 0.00 1.00 0.00 0.97 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

7.76 0.02 10.72 0.00 14.74 0.00 10.72 0.00 5.49 0.06 1.38 0.50 0.24 0.89 0.24 0.89

5.238 5.238 5.238 5.238 5.238 5.238 7.619 14.286

TL (Green) (Green) (Green) (Green) (Green) (Green) (Yellow) (Red)


CSAN3

D1-8
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.50 0.48 0.84 0.36

0.02 0.99 0.02 0.99 0.02 0.99 0.02 0.99 0.02 0.99 0.02 0.99 3.13 0.21 26.80 0.00

2.857 2.857 2.857 2.857 2.857 5.714 6.190 9.524

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Yellow)


CYRE3

D1-3
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.05 0.83 0.66 0.41

2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 0.26 0.94 0.63 0.73 7.90 0.02

Table 5 (continuation of Table 4): VaRdown failure test results, with 95% confidence, estimated by the MC method, for the return of each rebuilt series of BM&F-BOVESPA shares. Columns and entries follow the layout of Table 4.

ECOR3 (D1-8)
π̂ (%):  6.190 | 8.095 | 8.095 | 8.095 | 8.571 | 9.524 | 8.571 | 11.905
TL:      Green | Yellow | Yellow | Yellow | Yellow | Yellow | Yellow | Red
LR_ind:  0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (0.99)
LR_cc:   0.38 (0.85) | 3.60 (0.17) | 3.60 (0.17) | 3.60 (0.17) | 4.69 (0.10) | 7.23 (0.03) | 4.69 (0.10) | 15.46 (0.00)

EMBR3 (D1-3)
π̂ (%):  2.381 | 2.381 | 2.381 | 2.381 | 2.857 | 4.286 | 5.238 | 5.238
TL:      Green | Green | Green | Green | Green | Green | Green | Green
LR_ind:  0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.29 (0.59) | 0.29 (0.59)
LR_cc:   3.73 (0.15) | 3.73 (0.15) | 3.73 (0.15) | 3.73 (0.15) | 2.39 (0.30) | 0.24 (0.89) | 0.31 (0.86) | 0.31 (0.86)

ESTC3 (D1-8)
π̂ (%):  6.190 | 6.190 | 6.190 | 6.667 | 6.190 | 6.190 | 7.619 | 3.810
TL:      Green | Green | Green | Green | Green | Green | Yellow | Green
LR_ind:  1.49 (0.22) | 1.49 (0.22) | 1.49 (0.22) | 1.09 (0.30) | 1.49 (0.22) | 1.49 (0.22) | 0.50 (0.48) | 0.00 (1.00)
LR_cc:   2.07 (0.35) | 2.07 (0.35) | 2.07 (0.35) | 2.21 (0.33) | 2.07 (0.35) | 2.07 (0.35) | 3.13 (0.21) | 0.68 (0.71)

FIBR3 (D1)
π̂ (%):  0.952 | 0.952 | 0.952 | 0.952 | 0.952 | 3.810 | 1.905 | 5.238
TL:      Green | Green | Green | Green | Green | Green | Green | Green
LR_ind:  0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00)
LR_cc:   10.72 (0.00) | 10.72 (0.00) | 10.72 (0.00) | 10.72 (0.00) | 10.72 (0.00) | 0.68 (0.71) | 5.49 (0.06) | 0.02 (0.99)

LREN3 (D1-8)
π̂ (%):  5.238 | 5.238 | 5.238 | 5.714 | 5.238 | 6.190 | 6.667 | 10.000
TL:      Green | Green | Green | Green | Green | Green | Green | Yellow
LR_ind:  0.29 (0.59) | 0.29 (0.59) | 0.29 (0.59) | 0.14 (0.71) | 0.29 (0.59) | 0.05 (0.83) | 0.00 (0.95) | 0.01 (0.93)
LR_cc:   0.31 (0.86) | 0.31 (0.86) | 0.31 (0.86) | 0.36 (0.84) | 0.31 (0.86) | 0.63 (0.73) | 1.12 (0.57) | 8.68 (0.01)

MULT3 (D1-8)
π̂ (%):  4.762 | 4.762 | 4.762 | 4.762 | 4.762 | 4.762 | 9.048 | 9.048
TL:      Green | Green | Green | Green | Green | Green | Yellow | Yellow
LR_ind:  0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.42 (0.51) | 0.42 (0.51)
LR_cc:   0.03 (0.99) | 0.03 (0.99) | 0.03 (0.99) | 0.03 (0.99) | 0.03 (0.99) | 0.03 (0.99) | 6.33 (0.04) | 6.33 (0.04)

NATU3 (D1-8)
π̂ (%):  5.190 | 5.190 | 5.190 | 5.667 | 5.667 | 8.571 | 11.429 | 17.619
TL:      Green | Green | Green | Green | Green | Yellow | Red | Red
LR_ind:  0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.26 (0.61) | 0.03 (0.87) | 1.62 (0.20)
LR_cc:   0.02 (0.99) | 0.02 (0.99) | 0.02 (0.99) | 0.02 (0.99) | 0.02 (0.99) | 4.95 (0.08) | 13.64 (0.00) | 45.51 (0.00)

PETR3 (D1-8)
π̂ (%):  5.667 | 5.143 | 6.095 | 6.667 | 6.667 | 7.143 | 7.143 | 10.476
TL:      Green | Green | Green | Green | Green | Yellow | Yellow | Yellow
LR_ind:  1.34 (0.25) | 0.42 (0.52) | 0.24 (0.63) | 0.97 (0.33) | 0.97 (0.33) | 0.42 (0.52) | 0.42 (0.52) | 0.10 (0.76)
LR_cc:   6.03 (0.05) | 9.10 (0.01) | 10.46 (0.01) | 6.87 (0.03) | 6.87 (0.03) | 9.10 (0.01) | 9.10 (0.01) | 19.50 (0.00)

UGPA3 (D1-3)
π̂ (%):  0.952 | 0.952 | 0.952 | 0.952 | 1.429 | 4.286 | 14.286 | 6.667
TL:      Green | Green | Green | Green | Green | Green | Green | Green
LR_ind:  0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00)
LR_cc:   10.72 (0.00) | 10.72 (0.00) | 10.72 (0.00) | 10.72 (0.00) | 7.76 (0.02) | 0.24 (0.89) | 0.24 (0.89) | 1.12 (0.57)

USIM5 (D1-8)
π̂ (%):  5.238 | 3.810 | 3.810 | 5.238 | 5.238 | 5.238 | 7.619 | 11.905
TL:      Green | Green | Green | Green | Green | Green | Yellow | Red
LR_ind:  0.29 (0.59) | 0.00 (1.00) | 0.00 (1.00) | 0.29 (0.59) | 0.29 (0.59) | 0.29 (0.59) | 5.08 (0.02) | 1.52 (0.22)
LR_cc:   0.31 (0.86) | 0.68 (0.71) | 0.68 (0.71) | 0.31 (0.86) | 0.31 (0.86) | 0.31 (0.86) | 7.71 (0.02) | 16.98 (0.00)

VIVT4 (D1)
π̂ (%):  2.857 | 2.857 | 2.857 | 2.857 | 3.333 | 3.810 | 6.190 | 4.762
TL:      Green | Green | Green | Green | Green | Green | Green | Green
LR_ind:  0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00) | 0.00 (1.00)
LR_cc:   2.39 (0.30) | 2.39 (0.30) | 2.39 (0.30) | 2.39 (0.30) | 1.38 (0.50) | 0.68 (0.71) | 0.58 (0.75) | 0.03 (0.99)

Table 8: VaRdown failure test results, with 95% confidence, estimated by the MC method, for the return of each rebuilt series of DJIA shares.
0.476 0.476 0.476 0.476 2.381 2.381 3.381 4.286

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


MMM

D1
0.00 0.97 0.00 0.97 0.00 0.97 0.00 0.97 2.83 0.09 2.83 0.09 0.83 0.79 0.26 0.88

14.74 0.00 14.74 0.00 14.74 0.00 14.74 0.00 6.56 0.04 6.56 0.04 1.56 0.54 10.20 0.91

0.952 0.952 0.952 0.952 0.952 0.952 1.429 4.762

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


AXP

D1
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 7.76 0.02 0.03 0.99

2.857 2.857 2.857 2.857 2.857 3.333 3.810 5.714

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


AAPL

D1
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 1.38 0.50 0.68 0.71 0.22 0.90

1.429 1.429 1.429 1.429 1.429 1.429 2.857 4.333

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


BA

D1

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

7.76 0.02 7.76 0.02 7.76 0.02 7.76 0.02 7.76 0.02 7.76 0.02 2.39 0.30 0.18 0.90

0.952 1.429 1.429 1.429 1.429 1.905 2.381 4.762

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


CAT

D1

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

10.72 0.00 7.76 0.02 7.76 0.02 7.76 0.02 7.76 0.02 5.49 0.06 3.73 0.15 0.03 0.99

1.429 1.429 1.429 1.429 1.429 1.905 2.857 4.286

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


CVX

D1

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 3.77 0.05 2.11 0.15 0.26 0.78

7.76 0.02 7.76 0.02 7.76 0.02 7.76 0.02 7.76 0.02 9.26 0.01 4.50 0.11 0.20 0.81

2.381 2.381 2.381 2.381 2.381 2.381 2.857 5.238

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


CSCO

D1

0.00 0.97 0.00 0.97 0.00 0.97 0.00 0.97 0.00 0.97 0.00 0.97 0.00 0.97 0.00 1.00

14.74 0.00 14.74 0.00 14.74 0.00 14.74 0.00 14.74 0.00 14.74 0.00 14.74 0.00 0.13 0.92

3.333 3.333 3.333 3.333 3.333 3.333 4.286 6.190

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


XOM

D1-2

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

1.38 0.50 1.38 0.50 1.38 0.50 1.38 0.50 1.38 0.50 1.38 0.50 0.24 0.89 0.58 0.75

1.429 1.905 1.905 1.905 2.381 2.381 4.286 5.238


TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)

HD
5.08 0.02 3.77 0.05 3.77 0.05 3.77 0.05 2.83 0.09 2.83 0.09 0.76 0.38 0.13 0.91

D1
12.84 0.00 9.26 0.01 9.26 0.01 9.26 0.01 6.56 0.04 6.56 0.04 1.00 0.61 0.16 0.88

0.476 0.476 0.952 0.476 0.476 0.476 0.952 4.286

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


IBM

D1
0.00 0.97 0.00 0.97 0.00 1.00 0.00 0.97 0.00 0.97 0.00 0.97 0.00 1.00 0.00 1.00

14.74 0.00 14.74 0.00 10.72 0.00 14.74 0.00 14.74 0.00 14.74 0.00 10.72 0.00 0.24 0.89

2.857 2.857 2.857 2.857 3.333 3.333 5.714 6.667

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


INTC

D1-2
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.0 1.00 0.00 1.00

2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 1.38 0.50 1.38 0.50 0.36 0.84 1.12 0.57

Table 9 (Continuation of the Table 8): VaRdown failure test results, with 95% confidence, estimated by the MC method, for the return of
each rebuilt series of DJIA shares.
2.857 2.857 2.857 2.857 4.762 4.762 5.238 6.190

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)

D1-2
JNJ

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.49 0.48 0.49 0.48 0.29 0.89 0.05 0.83

2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 0.52 0.77 0.52 0.77 0.21 0.92 0.63 0.73

2.381 2.857 2.857 3.333 2.381 4.286 4.286 6.619

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Yellow)


MCD

D1-2
2.83 0.09 2.11 0.15 2.11 0.15 1.55 0.21 2.83 0.09 0.00 1.00 0.00 1.00 0.00 1.00

6.56 0.04 4.50 0.11 4.50 0.11 2.94 0.23 6.56 0.04 0.22 0.90 0.22 0.90 0.32 0.84

1.905 1.905 1.905 1.905 1.905 2.381 2.381 5.714

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


MRK

D1
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

5.49 0.06 5.49 0.06 5.49 0.06 5.49 0.06 5.49 0.06 3.73 0.15 3.73 0.15 0.22 0.90

0.952 0.952 0.952 0.952 0.952 0.952 0.952 3.810

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


MSFT

D1
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 0.68 0.71

(Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)

TL 2.857 2.857 2.857 2.857 2.857 4.762 5.238 6.190

D1-3
NKE

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.05 0.83

2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 2.39 0.30 0.03 0.99 0.02 0.99 0.63 0.73

0.952 0.952 0.952 0.952 1.429 2.381 2.381 5.714

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


PFE

D1

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

10.72 0.00 10.72 0.00 10.72 0.00 10.72 0.00 7.76 0.02 3.73 0.15 3.73 0.15 0.22 0.90

1.905 1.905 1.905 2.381 2.381 3.333 3.333 4.286

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


PG

D1

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00

5.49 0.06 5.49 0.06 5.49 0.06 3.73 0.15 3.73 0.15 1.38 0.50 1.38 0.50 0.24 0.89

3.810 4.286 4.286 4.286 4.762 4.762 5.238 6.190

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


D1-2
TRV

1.11 0.29 0.76 0.38 0.76 0.38 0.76 0.38 0.49 0.48 0.49 0.48 0.29 0.59 0.05 0.83

1.79 0.41 1.00 0.61 1.00 0.61 1.00 0.61 0.52 0.77 0.52 0.77 0.31 0.86 0.63 0.73

1.429 1.429 1.429 1.905 1.905 1.905 3.810 5.238

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


UTX

D1

0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.29 0.59

7.76 0.02 7.76 0.02 7.76 0.02 5.49 0.06 5.49 0.06 5.49 0.06 0.68 0.71 0.31 0.86

2.857 2.857 2.857 3.333 3.810 4.286 4.286 4.286

TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)


UNH

D1

2.11 0.15 2.11 0.15 2.11 0.15 2.11 0.15 1.55 0.21 0.00 1.00 0.00 1.00 0.00 1.00

4.50 0.11 4.50 0.11 4.50 0.11 4.50 0.11 2.94 0.23 0.24 0.89 0.24 0.89 0.24 0.89

2.381 2.381 2.381 2.381 2.381 2.857 4.286 6.190


TL (Green) (Green) (Green) (Green) (Green) (Green) (Green) (Green)

D1-2
0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.36 0.88 0.49 0.82

VZ
3.73 0.15 3.73 0.15 3.73 0.15 3.73 0.15 3.73 0.15 2.39 0.30 0.31. 0.91 0.47 0.75

5. Conclusion
The main objective of this study was to measure the impacts of short-, medium-, and long-term cycles on daily Value-at-Risk estimates. To this end, we used the wavelet technique to separate the frequency components intrinsically embedded in the series of share returns analyzed in the study. Bearing in mind that financial assets undergo an extremely dynamic process, subject to several levels of volatility, we applied wavelet analysis to verify how the impacts of different cyclical frequencies in the return series of financial assets affect the dynamics of the evolutionary process of shares listed on the BM&FBOVESPA Ibovespa and of shares of the DJIA Index. To do so, we analyzed the dynamics of these assets by structuring a VaR model from conditional volatility forecasts obtained from parameters calibrated with a FIGARCH (1,d,1) model, applying the rolling window technique which, according to Zivot and Wang (2006), makes it possible to significantly increase the precision of conditional volatility forecasts, to obtain longer series of conditional volatilities and, by extension, to lengthen the series of daily VaR estimates. The structured model made it possible to analyze the VaR model dynamics, from which it was possible to demonstrate that the dynamics of most shares traded on the BM&FBOVESPA are relatively different from those of the DJIA Index. We conclude that these results demonstrate that the DJIA stocks exhibit more efficient market behavior than those of the Ibovespa. This result is not surprising, since the New York Stock Exchange is generally more stable and robust than the São Paulo Stock Exchange. BM&FBOVESPA, as the stock exchange of an emerging country, fluctuates markedly with every economic or political development, whether positive or negative, national or international. The NYSE, by contrast, as the stock exchange of a developed country with greater solidity in its economic and political foundations, suffers smaller impacts from instabilities and, in turn, presents relatively smaller oscillations.
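The rolling-window forecasting step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a plain GARCH(1,1) recursion with fixed parameters stands in for the recalibrated FIGARCH (1,d,1), and the simulated return series, window length, and parameter values are illustrative assumptions.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """One-step conditional variance recursion. GARCH(1,1) is used here as a
    simplified stand-in for the FIGARCH(1,d,1) model of the study."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()  # initialize at the sample variance
    for t in range(len(returns)):
        sigma2[t + 1] = omega + alpha * returns[t] ** 2 + beta * sigma2[t]
    return sigma2              # sigma2[-1] is the one-day-ahead forecast

def rolling_var(returns, window=470, z=1.645, **params):
    """Rolling-window one-day-ahead 95% VaRdown (lower tail). Parameters are
    held fixed here; in the study they are recalibrated on each window."""
    forecasts = []
    for start in range(len(returns) - window):
        sample = returns[start:start + window]
        sigma2 = garch11_variance(sample, **params)
        forecasts.append(-z * np.sqrt(sigma2[-1]))  # VaRdown for day t+1
    return np.array(forecasts)

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(700)  # placeholder i.i.d. "returns"
v = rolling_var(r, window=470, omega=1e-6, alpha=0.05, beta=0.90)
print(len(v))          # 230 one-day-ahead VaR forecasts
print(bool((v < 0).all()))  # VaRdown is always negative
```

With a 470-observation window, each forecast uses only the most recent history, mirroring the paper's choice of a 470-observation calibration sample.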
The results obtained were conclusive and, in some respects, similar to those obtained by Berger (2016), who conducted a conceptually similar study, but used only data on the NYSE's DJIA shares and considerably larger sample sizes, especially in the forecasting process and in the rolling window application.
By broadening the analysis to two different financial markets, this study makes a contribution beyond the article by Berger (2016). In addition to studying the behavior of VaRs for shares in the NYSE's DJIA (the financial market of a developed country), the analysis also included shares of the Ibovespa of BM&FBOVESPA, the financial market of a developing economy. The differences in behavior identified between these two markets imply that VaRs, to be used in the analysis of financial assets coming from different markets with different governance premises, must be estimated from return series reconstructed by aggregating components of different frequencies.
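The idea of reconstructing a return series from selected frequency components can be illustrated with a one-level Haar transform. This is only a minimal stand-in for the multilevel wavelet decomposition used in the study; the series is simulated and the single Haar level is an assumption made for brevity.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: split a series into a smooth (low-frequency)
    approximation and a detail (high-frequency) component."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Invert one Haar level. Zeroing d before inversion reconstructs the
    series from its low-frequency content only (an 'aggregated' rebuild)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
r = rng.standard_normal(512)              # placeholder return series
a, d = haar_dwt(r)
smooth = haar_idwt(a, np.zeros_like(d))   # low-frequency reconstruction
print(bool(np.allclose(haar_idwt(a, d), r)))  # True: perfect reconstruction
```

Dropping or keeping different detail levels before inversion is what produces the "D1", "D1-2", ..., "D1-8" reconstructed series analyzed in the tables above.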
Another important aspect explored in the study was the effect of the sample sizes used: in the calibration process, in the rolling window, in the qualification of failure forecasts and, consequently, in the backtesting procedures. Larger samples, especially in the forecasting and qualification process of VaR models, may distort the forecasting capacity of these models relative to their application in capital management at financial institutions. In this study we worked with relatively small samples compared to similar studies in the related literature (as seen in Section 1): a total of 890 observations, of which 470 were used for the calibration process, 210 for the rolling window, and 210 for the forecast. Most studies use considerably longer time series, which may ease the model calibration process and yield better-performing VaRs. Certainly, larger samples improve the performance of autoregressive heteroskedastic models, absorbing the effects of instabilities through their repetition over time, and in particular improve the assessment of failure forecasts by VaR models, since failure probabilities are measured over larger sample intervals, which dilutes the effects of failure dependence and concentration. These effects stem from the impacts of stochastic innovations, such as those caused by economic or political crises, which can strongly influence the performance of VaR models in small samples. However, the Basel Committee on Banking Supervision (1996) recommends that backtesting be conducted on samples of approximately 250 observations, and that failure numbers based on 250 observations be used in banking supervision. Therefore, structuring VaR models with small (finite) samples that meet the requirements of the Basel Committee is important, since it can provide a more realistic understanding of model performance under the conditions required for application in the daily activities of financial institutions.
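The traffic-light classifications reported in the tables follow the Basel Committee's (1996) three-zone framework, in which an exception count over roughly 250 observations maps to a green, yellow, or red zone. A sketch of that mapping, using the committee's tabulated boundaries for 250 observations (0-4 exceptions green, 5-9 yellow, 10 or more red):

```python
def basel_traffic_light(failures, n_obs=250):
    """Basel (1996) three-zone classification of a backtest over ~250
    observations: green = model likely accurate, yellow = inconclusive,
    red = model almost certainly flawed."""
    if n_obs != 250:
        # The zone boundaries below are tabulated for 250 observations only.
        raise ValueError("zone boundaries are defined for 250 observations")
    if failures <= 4:
        return "green"
    if failures <= 9:
        return "yellow"
    return "red"

print([basel_traffic_light(k) for k in (2, 5, 10)])
# ['green', 'yellow', 'red']
```

The zone a model falls into determines the supervisory capital multiplier applied to its VaR, which is why a small-sample backtest of this exact size is operationally relevant.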
Finally, the study performed a set of backtests, which were presented together with the failure probabilities. The traffic-light backtesting results demonstrated that most of the estimated VaRs are characterized as models that estimate with few failures and are highly likely to be consistent with exact VaR models, and the possibility of mistakenly accepting such models as accurate is low. In addition, the other backtests, of failure independence and conditional concentration, demonstrated that the best failure-probability estimates occurred for those reconstructed return series whose failure probabilities are approximately equal to the significance levels established for the VaR models. This demonstrates that, at least for these series, the estimated VaR models met the requirements to be accepted as correct models (with independence of failure times and without concentration effects) and therefore have the potential to be used for risk analysis in financial markets.
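As one concrete example of such a backtest, the Kupiec (1995) proportion-of-failures statistic compares the observed failure rate with the nominal one; under the null of correct coverage it is chi-square with one degree of freedom (5% critical value about 3.841). The sample numbers below (210 forecasts, 11 failures, matching the study's 210-observation forecast window) are illustrative.

```python
import math

def kupiec_pof(n, x, p=0.05):
    """Kupiec (1995) proportion-of-failures likelihood-ratio statistic for
    n VaR forecasts with x failures against nominal failure probability p."""
    pi = x / n  # observed failure rate

    def loglik(q):
        # Log-likelihood of x failures in n independent Bernoulli trials.
        return (n - x) * math.log(1 - q) + x * math.log(q)

    return -2.0 * (loglik(p) - loglik(pi))

lr = kupiec_pof(n=210, x=11, p=0.05)  # 11 failures in 210 days at 95% VaR
print(round(lr, 3))       # ~0.025, far below 3.841
print(lr < 3.841)         # True: correct coverage is not rejected at 5%
```

A failure rate of 11/210 = 5.24% is so close to the nominal 5% that the test cannot reject the model, which is the pattern observed for the best-performing reconstructed series in the tables.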

References

Andries, A. M., Ihnatov, I., Tiwari, A. K. (2014). Analyzing time-frequency relationship between interest rate, stock price and exchange rate through continuous wavelet. Economic Modelling, v. 41, p. 227-238.
BASEL COMMITTEE ON BANKING SUPERVISION. (1996). Supervisory framework for the use of "backtesting" in conjunction with the internal models approach to market risk capital requirements. Basle Committee on Banking Supervision.
Baillie, R. T., Bollerslev, T., Mikkelsen, H. O. (1996). Fractionally integrated generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, v. 74, n. 1, p. 3-30.
Belkhouja, M., Boutahary, M. (2011). Modeling volatility with time-varying FIGARCH models. Economic Modelling, v. 28, p. 1106-1116.
Berger, T. (2016). On the impact of long-run seasonalities on daily Value-at-Risk forecasts. World Finance Conference, New York, USA, 7/29-7/31/2016. https://www.world-finance-conference.com/papers_wfc2/282.pdf
Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, v. 31, n. 3, p. 307-327.
Christoffersen, P. (1998). Evaluating interval forecasts. International Economic Review, v. 39, p. 841-862.
Chakrabarty, A., De, A., Bandyopadhyay, G. (2015). A wavelet-based MRA-EDCC-GARCH methodology for the detection of news and volatility spillover across sectoral indices: evidence from the Indian financial market. Global Business Review, v. 16, p. 35-49.
Dimson, E., Mussavian, M. (2000). Market efficiency. The Current State of Business Disciplines, v. 3, p. 959-970.
Dufour, J. M. (2006). Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and nonstandard asymptotics. Journal of Econometrics, v. 133, p. 443-477.
Engle, R. F., Bollerslev, T. (1986). Modelling the persistence of conditional variances. Econometric Reviews, v. 5, n. 1, p. 1-50.
Fama, E. (1970). Efficient capital markets: a review of theory and empirical work. Journal of Finance, v. 25, p. 383-417.
Frey, R., Michaud, P. (1997). The effect of GARCH-type volatilities on prices and payoff-distributions of derivative assets: a simulation study. Working Paper. http://statmath.wu.ac.at/~frey/publications/garch-sim.pdf
Garbade, K. (1986). Assessing risk and capital adequacy for Treasury securities. Topics in Money and Securities Markets, v. 22. New York: Bankers Trust.
Gençay, R., Selçuk, F., Whitcher, B. (2002). An Introduction to Wavelets and Other Filtering Methods in Finance and Economics. San Diego, CA: Academic Press.
Gençay, R., Selçuk, F. (2004). Extreme value theory and value-at-risk: relative performance in emerging markets. International Journal of Forecasting, v. 20, n. 2, p. 287-303.
Giot, P., Laurent, S. (2003). Value-at-risk for long and short trading positions. Journal of Applied Econometrics, v. 18, p. 641-664.
Godlewski, C., Merli, M. (2012). Gestion des risques et institutions financières. 3rd ed. Pearson.
Hoppe, R. (1998). VAR and the unreal world. Risk, v. 11, p. 45-50.
Huang, S. C. (2011). Wavelet-based multi-resolution GARCH model for financial spillover effects. Mathematics and Computers in Simulation, v. 87, p. 2529-2539.
Jin, H. J., Frechette, D. L. (2004). Fractional integration in agricultural futures price volatilities. American Journal of Agricultural Economics, v. 86, p. 432-443.
Klein, T., Walther, T. (2016). On the application of fast fractional differencing in modeling long memory of conditional variance: simulation study and rolling window estimations of crude oil. SSRN Working Paper, doi:10.2139/ssrn.2754102. https://www.researchgate.net/publication/299395324
Kupiec, P. (1995). Techniques for verifying the accuracy of risk management models. Journal of Derivatives, v. 3, p. 73-84.
Nieppola, O. (2009). Backtesting value-at-risk models. Master's Thesis, Helsinki School of Economics, Finland.
Martin-Barragan, B., Ramos, S., Veiga, H. (2013). Correlations between oil and stock markets: a wavelet-based approach. Paper presented at the European Economic Association Annual Congress 2013, Gothenburg, Sweden.
Pajhede, T. Backtesting Value-at-Risk: a generalized Markov framework. SSRN Electronic Journal, doi:10.2139/ssrn.2693504.
Percival, D. B., Walden, A. (2000). Wavelet Methods for Time Series Analysis. Cambridge: Cambridge University Press.
Schleicher, C. (2002). An Introduction to Wavelets for Economists. Ottawa: Bank of Canada.
Tayefi, M., Ramanathan, T. V. (2012). An overview of FIGARCH and related time series models. Austrian Journal of Statistics, v. 41, n. 3, p. 175-196.
Taylor, S. (1986). Modelling Financial Time Series. New York: Wiley.
Zivot, E., Wang, G. J. (2006). Modeling Financial Time Series with S-PLUS. 2nd ed. New York: Springer Science+Business Media.
