Applied Soft Computing Journal 82 (2019) 105553

Analysis of temporal pattern, causal interaction and predictive modeling of financial markets using nonlinear dynamics, econometric models and machine learning algorithms

Indranil Ghosh a, Rabin K. Jana b,∗, Manas K. Sanyal c

a Department of Operations Management & IT, Calcutta Business School, WB 743503, India
b Operations & Quantitative Methods Area, Indian Institute of Management Raipur, CG 493661, India
c Department of Business Administration, University of Kalyani, WB 741235, India

Highlights

Predictive analytics models are developed to predict future movements of stock markets.

Machine learning algorithms are used to enhance the predictive accuracy.

Temporal patterns and causal interactions of the stock markets are studied.

Article info

Article history:

Received 19 August 2018; Received in revised form 2 May 2019; Accepted 2 June 2019; Available online 7 June 2019

Keywords:

Predictive modeling

Machine learning

Econometric models

Nonlinear dynamics

Financial market

Abstract

This paper presents a novel predictive modeling framework for forecasting the future returns of financial markets. The task is challenging because the movements of financial markets are volatile, chaotic, and nonlinear in nature. To accomplish it, a three-stage approach is proposed. In the first stage, fractal modeling and recurrence analysis are used, and the efficient market hypothesis is tested to comprehend the temporal behavior and investigate autoregressive properties. In the second stage, Granger causality tests are applied in a vector autoregression environment to explore the causal interaction structures among the indexes and to identify the explanatory variables for predictive analytics. In the final stage, the maximal overlap discrete wavelet transformation is carried out to decompose the stock indexes into linear and nonlinear subcomponents. Seven machine and deep learning algorithms are then applied to the decomposed components to learn the inherent patterns and predict future movements. For numerical testing, the daily closing prices of four major Asian emerging stock indexes, exhibiting non-stationary behavior, during the period January 2012 to June 2017 are considered. Statistical analyses are performed for a comparative performance assessment. The obtained results demonstrate the effectiveness of the proposed framework.

1. Introduction

A closer inspection of the evolutionary dynamics and inherent patterns of stock markets for estimating future movements has garnered tremendous attention due to its importance for the overall economic growth of any nation [1,2]. Stock markets are often characterized as highly complex nonlinear dynamic systems [3]. This makes the evaluation of temporal characteristics and predictive modeling very challenging. The sensitivity of stock markets to chaotic events results in a high degree of volatility and nonlinearity in their movements, which further complicates the decoding of the trend [1].

∗ Corresponding author.
E-mail addresses: (R.K. Jana), manas_sanyal@klyuniv.ac.in (M.K. Sanyal).

As a result, stock market forecasting occasionally bears a certain degree of uncertainty that eventually creeps into the minds of investors. This motivates us to analyze the temporal pattern for extracting meaningful insights pertinent to the dynamics of stock markets, and to propose a robust predictive modeling framework to predict future movements. Many variables, parameters, and sets appear in the paper; before reviewing the literature, the nomenclature used in this paper is summarized below.

There exist various traditional statistical and econometric models for understanding the behavior of financial markets that exhibit a high degree of volatility and for forecasting their future movements [2-7]. The majority of these models assume that the variance remains unchanged while explaining the predictability.


Nomenclature

ADF: Augmented Dickey–Fuller
AIC: Akaike Information Criterion
ANN: Artificial Neural Network
ANFIS: Adaptive Neural Fuzzy Inference System
ARDL: Autoregressive Distributed Lag
BSE: Bombay Stock Exchange
CCF: Cross-Correlation Function
CCI: Commodity Channel Index
D: Fractal Dimensional Index
DCC-GARCH: Dynamic Conditional Correlation–Generalized Autoregressive Conditional Heteroscedasticity
DET: Determinism Rate
DM: Diebold–Mariano
DNGC: Does Not Granger Cause
DNN: Deep Neural Network
EMA: Exponential Moving Average
EN: Elastic Net
EPA: Equal Predictive Ability
ERS: Elliott–Rothenberg–Stock
ERT: Extremely Randomized Trees
FA: Firefly Algorithm
FPE: Final Prediction Error
FFNN: Feed Forward Neural Network
GA: Genetic Algorithm
H: Hurst Exponent
HQ: Hannan–Quinn Information Criterion
IR: Impulse Response
IA: Index of Agreement
JSX: Jakarta Stock Exchange
KOSPI: Korea Composite Stock Price Index
LAM: Laminarity
LR: Likelihood Ratio
LSTMN: Long Short-Term Memory Network
MA: Moving Average
MACD: Moving Average Convergence Divergence
MAD: Mean Absolute Deviation
MAPE: Mean Absolute Percentage Error
MCS: Model Confidence Set
MODWT: Maximal Overlap Discrete Wavelet Transformation
M-Boosting: MODWT–Boosting
M-DNN: MODWT–Deep Neural Network
M-EN: MODWT–Elastic Net
M-ERT: MODWT–Extremely Randomized Trees
M-LSTMN: MODWT–LSTMN
M-RF: MODWT–Random Forest
M-SVR: MODWT–Support Vector Regression
MRS: Markov Regime Switching
NSC: Nash–Sutcliffe Coefficient
PP: Phillips–Perron
PSY: Psychological Line
REC: Recurrence Rate
RF: Random Forest
RMSE: Root Mean Squared Error
RP: Recurrence Plot
RQA: Recurrence Quantification Analysis
RSI: Relative Strength Index
RW: Random Walk
SADE: Stacked Denoising Auto Encoder
SC: Schwarz Information Criterion
SGD: Stochastic Gradient Descent
SPA: Superior Predictive Ability
STOC: Stochastic Oscillator
SVR: Support Vector Regression
TT: Trapping Time
TI: Theil Inequality
TWSE: Taiwan Stock Exchange
VAR: Vector Auto Regression
WR: Williams Overbought/Oversold Index
ZA: Zivot–Andrews

Empirical studies confirm that this is not a comprehensive assumption and it often leads to serious deviations [8-10]. The RP and RQA have been used to identify different types of crashes in the stock market [3] as well as to test the existence of fractional Brownian motion. The findings of these works suggest that the markets are inefficient. Econometric techniques like DCC-GARCH [11], Granger causality [12,13], conditional Granger causality [14], ARDL [15], and Granger causality combined with nonlinear ARDL [16] are used to evaluate the association and interaction among homogeneous and heterogeneous assets. Wavelet-based techniques [17,18] are also used to explore such relationships. These studies are restricted to identifying the direction of volatility spillovers and generating insights for portfolio diversification.

The major drawbacks of conventional statistical and econometric models have spurred a rapid development of artificial intelligence, machine learning, and deep learning techniques for stock market prediction [9,19-24]. Empirical ensemble mode decomposition, least squares SVM optimized through PSO, and GARCH are combined for generating final forecasts [9]. In a similar manner, SVR optimized through a chaotic FA is used for predicting daily closing prices of Intel, National Bank, and Microsoft shares; however, that entire prediction exercise was carried out on the aggregate series itself [22]. A granular framework comprising a wavelet NN and rough sets has been proposed for automatic attribute selection and for obtaining future predictions of five global stock markets [24]; it outperformed traditional forecasting models. A multivariate deep learning approach comprising SADE and bagging produced superior forecasts compared to RW, MRS, FFNN, and SVR [19]. Another deep learning approach is used for predictive modeling of five-minute intraday data of the KOSPI index [20]. Fractal modeling and machine learning algorithms are used to generate statistically significant forecasts of the NIFTY 50, Hang Seng, NIKKEI, and NASDAQ indexes [21]. Another machine learning based study utilized ANFIS in conjunction with GARCH and a Markov regime switching model for successful predictive analysis of emerging economies in Latin America [23].

These models either use a univariate framework comprising historical lagged information or deploy a set of technical indicators for forecasting. However, stock markets tend to be deeply interlinked with each other through cross-country trade, foreign policies, etc. Hence, including other markets having profound causal influence as explanatory variables can augment the quality of forecasts. The present work considers this fact by carefully delving into the associations, apart from using standard technical indicators.


This study explores the temporal dynamics and interrelationships, and proposes a predictive analytics framework for forecasting future movements of four stock indexes: KOSPI, BSE, JSX, and TWSE. The random walk hypothesis is tested first. Then fractal modeling and recurrence analysis are used to detect the presence of Brownian motion in the evolutionary dynamics of the respective time series. The magnitudes of the trend, periodicity, and random components are determined with the help of RP and RQA. The causal nexus among the considered indexes is examined at different lags by using the CCF. Granger causality analysis in a VAR environment is applied to explore the causal interactions and the dependence structure. The MODWT is used to decompose the stock indexes into linear and nonlinear subcomponents. Then the machine learning algorithms SVR, EN, RF, ERT, Boosting, DNN, and LSTMN are applied for recognizing patterns and predicting future movements.

This research contributes to the literature by proposing a novel research framework for exploring the temporal dynamics and interrelationships, and for predicting future movements with enhanced prediction accuracy. The predictive analytics component presents a granular forecasting structure comprising wavelet decomposition and state-of-the-art machine learning and deep learning algorithms. Usually, predictive analytics of stock markets uses technical features in a multivariate setup and wavelet-based time series decomposition in a univariate setup separately. We combine them in an integrated multivariate setup that incorporates other significant independent variables discovered through the causality analysis.

The remainder of this paper is structured as follows. Section 2 presents the research problem studied in this paper. Section 3 presents the data profile and emphasizes the key statistical properties of the dataset. Section 4 elucidates the research methods employed in this study. Section 5 presents and critically analyzes the overall findings in terms of the empirical inspection of the temporal dynamics of the considered markets, the association and causal interplay, and the predictive performance. Finally, Section 6 concludes the article by highlighting the overall contributions, limitations, and future scope of work.

2. Problem studied

Predictive analysis of stock markets can broadly be categorized into two strands. When the objective is to estimate the absolute figures of stock prices or future returns, mostly regression-based forecasting models are deployed to accomplish the task [12,22]. Alternatively, the other strand of predictive modeling attempts to determine the direction of future movements of stock prices; in general, classification algorithms are used to tackle directional predictive modeling [25]. The target variables of the first and second categories are continuous and nominal in nature, respectively. Our work belongs to the first category.

The proposed research framework aims to estimate one-day-ahead forecasts of the actual closing prices of four Asian stock indexes, namely BSE, TWSE, JSX, and KOSPI, in a multivariate setup. The chosen stock proxies represent developing countries in Asia, which makes examining the temporal characteristics of the said indexes, assessing causal interactions, and building predictive modeling frameworks extremely important. Emerging markets are apparently more attractive than their developed counterparts to traders and other market players. Hence, there lies an excellent opportunity for profit making, which eventually enhances the financial health of these nations in the long term. Thus, portfolio management and algorithmic trading can benefit immensely if precise future projections of the said markets can be achieved.

Table 1
Descriptive statistics.

Measures                BSE         TWSE        JSX         KOSPI
Mean                    23772.52    8561.78     4791.51     1983.32
Median                  25313.74    8573.72     4836.03     1983.80
Maximum                 30133.35    9973.12     5726.53     2209.46
Minimum                 15948.10    6894.66     3654.58     1769.31
Std. Dev.               4057.22     740.10      444.10      72.24
Skewness                0.2994      0.1656      0.1242      0.0375
Kurtosis                1.4090      0.9167      0.9757      0.4532
Jarque–Bera test        122.390*    49.769*     53.085*     53.085*
Weisberg–Bingham test   0.90493*    0.97592*    0.97499*    0.99285*
Frosini test            2.5252*     0.8311*     1.1093*     0.5649*
Hegazy–Green test       0.095506*   0.022994*   0.024118*   0.007253*
Shapiro–Wilk test       0.90381*    0.97492*    0.97419*    0.99260*

*Significant at 1% significance level.

 

Table 2
Stationarity check: Unit root tests.

At level
Stationarity test   BSE        TWSE       JSX        KOSPI
ADF                 0.1815#    0.0224#    0.2710#    1.8359#
PP                  3.9397#    4.4078#    5.0133#    5.7931#
ZA                  4.8844#    4.7650#    4.2407#    5.5179#
ERS                 0.1468#    0.3758#    0.1253#    0.4659#

At first order difference
ADF                 32.6301*   34.4738*   22.7578*   34.7960*
PP                  32.5902*   34.4751*   32.8546*   34.9926*
ZA                  32.7404*   34.5996*   32.8307*   35.0533*
ERS                 8.8358*    4.3611*    13.7197*   5.5534*

# Not significant. *Significant at 1% significance level.

On the other hand, the problem is extremely challenging, as external events occasionally hinder the growth of emerging economies, and such shocks are easily transmitted to the stock indexes of these countries, reflecting chaotic behavior. The endeavor of the present study is to present an integrated framework for critically evaluating the evolutionary temporal patterns of the considered financial time series, comprehending the structure of their interrelationships, and yielding one-day-ahead price forecasts.

3. Data profile and characteristics

Daily closing prices of JSX, BSE, KOSPI, and TWSE for the period January 2012 to June 2017 are collected from the 'Metastock' data repository for investigation. Fig. 1 depicts the temporal movement of the respective indexes during the considered period. The descriptive statistics of the respective financial time series are shown in Table 1. The test statistic values confirm that none of the series follows a normal distribution. Therefore, the use of a nonparametric research framework for making predictions is justified.

The stationarity of the stock indexes is examined using the ADF, ERS, PP, and ZA unit root tests. The ZA test accounts for the presence of structural breaks in the time series while examining the existence of unit roots. For traditional econometric analysis, it is essential to perform this exercise. Table 2 presents the outcomes of the unit root tests at level and at first order difference. It is revealed that the series are first order stationary. Identifying the presence of unit roots is essential for analyzing causal interactions through Granger causality tests. Since all four markets are found to be first order stationary, I(1) return series are considered for the econometric analysis assessing the direction of causation. Evidence from the statistical tests implies that the daily closing prices of the considered stocks are characterized by their nonparametric nature and nonstationary behavior.


Fig. 1. Temporal evolution.

It may be noted here that the presence of these properties justifies the deployment of advanced machine learning and deep learning algorithms for the predictive modeling exercise. However, the use of such sophisticated algorithms may not be very successful if the time series follows a Brownian motion. Therefore, it is important to test the random walk hypothesis.
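For illustration, the following minimal sketch shows how such a level versus first-difference unit-root check can be run with statsmodels; only the ADF variant is shown (PP, ZA, and ERS need other tooling, e.g., the 'arch' library), and the random-walk series here is merely a stand-in for an actual closing-price series.

```python
# Minimal sketch of the Table 2 style unit-root check; not the authors' code.
import numpy as np
from statsmodels.tsa.stattools import adfuller

prices = np.cumsum(np.random.randn(1300)) + 5000.0   # stand-in for a daily closing series

for name, series in [("level", prices), ("first difference", np.diff(prices))]:
    stat, pvalue, *_ = adfuller(series, autolag="AIC")   # lag chosen by AIC
    print(f"ADF on {name}: statistic = {stat:.4f}, p-value = {pvalue:.4f}")
```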

4. Research methods

This section enunciates the research framework deployed to accomplish the objectives. Broadly, the research methods can be segregated into three categories: nonlinear dynamics for the empirical investigation, econometric modeling for delving into interactions, and predictive analytics for carrying out the forecasting. Detailed procedures of these approaches are explained sequentially.

4.1. Nonlinear dynamics

Nonlinear dynamics tools are used to check the random walk hypothesis and gain deeper insights into the temporal evolutionary patterns of the considered financial time series. Fractal modeling and recurrence analysis are performed for the empirical investigations.

4.1.1. Fractal modeling
Fractal modeling is often used to test the efficient market hypothesis [21,25]. The fractal dimensional index (D) and the Hurst exponent (H) are calculated by using rescaled range (R/S) analysis to discover the presence of long or short memory structures in the time series.

4.1.1.1. R/S analysis and Hurst exponent. The method was originally ideated and formulated by Hurst [26] to study the levels of the river Nile for the construction of reservoirs. Thereafter, Mandelbrot and Wallis [27] proposed improvements to it. The method is briefly described below:

a. Decompose the time series \{R_N\} with N observations into A contiguous subseries R(i, d), i = 1, 2, ..., n, d = 1, 2, ..., A, where n is the length of each subseries.

b. Calculate E_d, the average of each decomposed subseries.

c. Estimate the accumulated deviation:

X(i, d) = \sum_{k=1}^{i} \{R(k, d) - E_d\}    (1)

d. Determine the range:

R_d = \max\{X(i, d)\} - \min\{X(i, d)\}    (2)

e. Determine the standard deviation:

S_d = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \{R(i, d) - E_d\}^2}    (3)

f. Determine the rescaled range over all the subseries:

(R/S)_n = \frac{1}{A} \sum_{d=1}^{A} (R_d / S_d)    (4)

The Hurst exponent and the R/S statistic have the following relationship:

K n^H = (R/S)_n, where K is a constant.    (5)


The magnitude of H is estimated through curve fitting:

\log K + H \log n = \log (R/S)_n    (6)

The values of H vary between 0 and 1. If H = 0.5, the series elements are uncorrelated and hence there exists a pure random walk. A long memory trend is detected when the value of H is significantly greater than 0.5, and a short memory trend is detected when the value of H is less than 0.5.

4.1.1.2. Fractal dimensional index. Empirical evidence suggests that apparently random-looking financial assets have some degree of predictability embedded in their temporal dynamics. D, a non-integer dimension, represents the working principle of any chaotic system [21,28]. By estimating the magnitude of D, the underlying evolutionary characteristics of the four stock indexes can be sieved. The relationship between D and H is as follows:

D = 2 - H    (7)

The values of D vary between 1 and 2. A value of D equal to 1.5 implies the existence of a pure random walk. Long and short memory trends are detected for the ranges 1 < D < 1.5 and 1.5 < D < 2, respectively. A value of D closer to 1 signifies long-memory dependence, also known as the 'Joseph Effect', while a value closer to 2 implies short-memory dependence, also known as the 'Noah Effect'.

4.1.1.3. Correlation between periods (C_N). This is a measure for quantifying the magnitude of a persistent or anti-persistent trend [10,25]. C_N is estimated as:

C_N = 2^{(2H - 1)} - 1    (8)

For an ideal random time series, the value of C_N is 0. A persistent time series is characterized by positive values of C_N, while an anti-persistent time series is characterized by negative values. If C_N = 0.85, then 85% of the dataset under investigation is influenced by its own historical information.
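The following minimal sketch, not the authors' implementation, assembles Eqs. (1)-(8) into a NumPy routine: H is estimated by the log-log fit of Eq. (6) over dyadic window sizes, and D and C_N are then derived from it.

```python
# Sketch of R/S analysis, Hurst exponent, D and C_N (Eqs. (1)-(8)).
import numpy as np

def rescaled_range(sub: np.ndarray) -> float:
    """(R/S) statistic of one subseries, Eqs. (1)-(3)."""
    dev = sub - sub.mean()              # R(k, d) - E_d
    x = np.cumsum(dev)                  # accumulated deviation, Eq. (1)
    r = x.max() - x.min()               # range, Eq. (2)
    s = sub.std()                       # standard deviation, Eq. (3)
    return r / s if s > 0 else 0.0

def hurst_exponent(series: np.ndarray, min_n: int = 8) -> float:
    """Fit log(R/S)_n = log K + H log n, Eq. (6), over dyadic window sizes."""
    N = len(series)
    ns, rs = [], []
    n = min_n
    while n <= N // 2:
        subs = [series[i:i + n] for i in range(0, N - n + 1, n)]   # A subseries of length n
        rs.append(np.mean([rescaled_range(s) for s in subs]))      # Eq. (4)
        ns.append(n)
        n *= 2
    H, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return H

prices = np.cumsum(np.random.randn(1280)) + 100.0   # stand-in for a closing-price series
H = hurst_exponent(prices)
D = 2 - H                                           # fractal dimension, Eq. (7)
C_N = 2 ** (2 * H - 1) - 1                          # correlation between periods, Eq. (8)
print(f"H = {H:.4f}, D = {D:.4f}, C_N = {C_N:.4f}")
```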

4.1.2. Recurrence plot (RP)
The RP is a graphical tool that accounts for recurrences in a higher-dimensional phase space; the idea was numerically developed by Eckmann et al. [29]. Visualization with the help of an RP alone may not lead to a clear interpretation. To overcome this obstacle, recurrence quantification analysis was proposed [30].

4.1.3. Recurrence quantification analysis (RQA)
This is a data modeling process capable of providing valuable insights beyond the capability of the RP by quantifying the true nature of dynamical systems through measures like REC, DET, TT, and LAM. The REC represents the percentage of recurrent points; a higher REC implies the existence of periodic patterns, while a smaller REC implies random behavior. The overall deterministic structure of a system is measured by DET, which estimates the proportion of recurrence points forming lines parallel to the main diagonal of the RP. The TT reflects the average length of the vertical line structures; it is an estimate of the average duration that the system stays in a state. LAM measures the proportion of recurrence points forming vertical structures; smaller LAM values indicate the chaotic nature of the dataset. A toy computation of REC and DET is sketched below.
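The sketch below is a toy NumPy illustration under simplifying assumptions (a one-dimensional phase space and a fixed threshold): it builds a recurrence matrix and computes REC and DET; TT and LAM are omitted for brevity.

```python
# Toy REC and DET computation from a recurrence matrix; illustrative only.
import numpy as np

def recurrence_matrix(x: np.ndarray, eps: float) -> np.ndarray:
    # R[i, j] = 1 when |x_i - x_j| <= eps (one-dimensional phase space)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def rqa_rec_det(R: np.ndarray, lmin: int = 2):
    n = R.shape[0]
    rec = R.sum() / n ** 2                       # recurrence rate (REC)
    diag_pts = 0
    for k in range(1, n):                        # upper off-diagonals; symmetry doubles them
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:   # sentinel flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:                  # only diagonal lines of length >= lmin
                    diag_pts += run
                run = 0
    det = 2 * diag_pts / max(R.sum() - n, 1)     # determinism (DET), main diagonal excluded
    return rec, det

x = np.sin(np.linspace(0, 20 * np.pi, 400))      # periodic toy signal: high REC and DET
R = recurrence_matrix(x, eps=0.1)
print(rqa_rec_det(R))
```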

4.2. Econometric modeling

To comprehend the nature of the association structure, conventional Pearson's correlation and cross-correlation tests are applied. To inspect the causal interrelationships for identifying predictors, the Granger causality test is used.

4.3. Predictive modeling

This subsection presents a granular-level forecasting approach for predictive modeling of the respective stock indexes in a multivariate framework. The MODWT model is used to decompose each series into linear and nonlinear components. Seven pattern recognition algorithms belonging to the machine and deep learning paradigms, namely SVR, EN, RF, ERT, Boosting, DNN, and LSTMN, are then applied to obtain component-wise forecasts. These combined models are denoted as M-SVR, M-EN, M-RF, M-ERT, M-Boosting, M-DNN, and M-LSTMN. The final forecast is obtained by adding the forecasts of the individual decomposed components. The multivariate framework uses the stock indexes that significantly impact a particular index as input variables, along with a set of technical indicators as other independent variables.

4.3.1. MODWT

This decomposition method segregates a signal into time-varying scales and can efficiently separate the nonlinearity and other random components embedded in financial data while preserving inherent features like spillovers, heteroscedasticity, and volatility clustering [19,29,31]. MODWT is a highly redundant transformation technique that has several advantages over the traditional discrete wavelet transform. It translates and dilates an original function f(t) onto a father wavelet \phi(t) and a mother wavelet \psi(t) at predefined scales. If P_i(t) is the i-th time series value at time t, then it can be written as follows:

P_i(t) = V_J^{P_i}(t) + W_J^{P_i}(t) + W_{J-1}^{P_i}(t) + \cdots + W_1^{P_i}(t),    (9)

where V_J^{P_i}(t) = \sum_k \phi_{J,k}^{P_i} \phi_{J,k}(t), W_j^{P_i}(t) = \sum_k \psi_{j,k}^{P_i} \omega_{j,k}(t), \phi_{J,k}(t) = 2^{-J/2} \phi\left(\frac{t - 2^J k}{2^J}\right), and \omega_{j,k}(t) = 2^{-j/2} \omega\left(\frac{t - 2^j k}{2^j}\right).

The inverse wavelet transform can be written as:

f(t) = \sum_{l=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \tilde{W}_{l,k} \, \psi_{l,k}(t)    (10)

where \tilde{W}_{l,k} = \int f(t) \psi_{l,k}(t) \, dt is the discrete wavelet transform of f(t).

The MODWT technique is adopted to decompose the daily closing prices of BSE, TWSE, JSX, and KOSPI. Unlike the traditional DWT, MODWT does not need a dyadic dataset, and it is invariant to circular shifts as well. Overall, MODWT performs a robust, non-orthogonal transformation while keeping the down-sampled values at the respective levels of decomposition. The empirically observed MODWT estimates are more efficient than the estimates obtained from other techniques [21]. The study uses 'haar' wavelets to obtain six levels of decomposition for the respective stock indexes. Hence, six wavelet coefficient series and one scaling coefficient series are generated.
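As an illustration of this step, the sketch below uses PyWavelets' stationary wavelet transform (pywt.swt), an undecimated transform closely related to MODWT (the two differ by a level-wise rescaling of the coefficients); it is a stand-in for the authors' exact routine, assuming an input length divisible by 2^level.

```python
# Six-level undecimated 'haar' decomposition, a hedged MODWT stand-in.
import numpy as np
import pywt

prices = np.cumsum(np.random.randn(1280)) + 5000.0   # 1280 is divisible by 2**6

coeffs = pywt.swt(prices, wavelet="haar", level=6)   # [(cA6, cD6), ..., (cA1, cD1)]
smooth = coeffs[0][0]                                # scaling (smooth) series at level 6
details = [cD for _cA, cD in coeffs]                 # six wavelet (detail) series

# Every component has the same length as the input, so a model can be fitted
# to each component and the component-wise forecasts added back, as in the paper.
reconstructed = pywt.iswt(coeffs, "haar")
print(np.allclose(reconstructed, prices))            # exact reconstruction
```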

4.3.2. Support vector regression (SVR)

SVR is a popular machine learning algorithm used for various challenging predictive modeling problems [32]. It performs regression through nonlinear pattern mining, discovering a linear separation boundary through quadratic optimization. The R package 'kernlab' is used for the implementation of the model.
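Since the paper's implementation is in R ('kernlab'), the following is only an equivalent hedged sketch in Python with scikit-learn; the feature matrix X is a hypothetical stand-in for the technical indicators and causally linked indexes of Table 9.

```python
# Hedged scikit-learn stand-in for the kernlab SVR model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                 # 12 predictors, cf. Table 9
y = X @ rng.normal(size=12) + rng.normal(scale=0.1, size=1000)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:850], y[:850])                     # 85% train / 15% test, as in Section 5.3
print(model.score(X[850:], y[850:]))            # R^2 on the held-out tail
```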

6

I. Ghosh, R.K. Jana and M.K. Sanyal / Applied Soft Computing Journal 82 (2019) 105553

4.3.3. Elastic net (EN)
This is a regression approach that dynamically mixes Lasso and Ridge regression for feature selection and regularization in predictive modeling problems [33]. The approach is built upon the principle of OLS estimation by optimizing the sum of squared residuals. In a dataset containing N samples and p predictors (x_1, ..., x_p), the response y_i in a multiple regression framework is expressed as:

y_i = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p,    (11)

and the coefficient vector \hat{\beta} = (\beta_0, \beta_1, ..., \beta_p) is determined as:

\hat{\beta} = \arg\min_{\beta \in R^{p+1}} \left[ \frac{1}{2N} \sum_{i=1}^{N} (y_i - \beta_0 - x_i^T \beta)^2 + \lambda P_\alpha(\beta) \right]    (12)

where P_\alpha(\beta) = (1 - \alpha) \frac{1}{2} \|\beta\|_{l_2}^2 + \alpha \|\beta\|_{l_1}, with \|\beta\|_{l_2}^2 = \sum_{j=1}^{p} \beta_j^2 and \|\beta\|_{l_1} = \sum_{j=1}^{p} |\beta_j|.

The expression P_\alpha(\beta) is termed the penalty of the elastic net algorithm. For \alpha = 0 the penalty reduces to the Ridge regression, while for \alpha = 1 it reduces to the Lasso. The degree of shrinkage is monitored by \lambda, which drives the feature selection and the shrinkage operation simultaneously. The R package 'glmnet' is used for simulating the model.
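A hedged scikit-learn sketch of Eqs. (11)-(12) follows (the paper itself uses the R package 'glmnet'); note that scikit-learn's l1_ratio plays the role of the mixing weight alpha, and its alpha plays the role of the shrinkage weight lambda.

```python
# Elastic net sketch: uninformative coefficients shrink toward zero.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))
beta_true = np.r_[rng.normal(size=4), np.zeros(8)]   # only 4 informative predictors
y = X @ beta_true + rng.normal(scale=0.1, size=500)

en = ElasticNet(alpha=0.05, l1_ratio=0.5)            # lambda = 0.05, alpha = 0.5 in Eq. (12)
en.fit(X, y)
print(np.round(en.coef_, 3))                         # last 8 coefficients are roughly 0
```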

4.3.4. Random forest (RF)

RF is an ensemble machine learning tool that has garnered tremendous attention for its high precision, robustness to outliers, and applicability to predictive modeling tasks [34]. It has been extensively used for arduous classification and regression tasks. The best feature for the branching operation at each node of an individual decision tree is determined from a randomly selected subset of features. The number of decision trees may vary from one hundred to one thousand depending upon the complexity of the problem. The final class label for classification, or the aggregate value for a continuous output, is determined through majority voting or arithmetic averaging, respectively. The 'rattle' GUI package of R is used for the simulation of RF; a combined sketch of RF and ERT follows Section 4.3.5.

4.3.5. Extremely randomized trees (ERT)

Like RF, ERT is an ensemble predictive analytics tool, but the thresholds for the branching operations in the base learners are selected randomly, in addition to selecting random subsets of features [35]. Thus, it takes randomness to the next level to avoid overfitting. To implement the algorithm, the R package 'extraTrees' is used.
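A combined sketch of Sections 4.3.4 and 4.3.5 follows, using scikit-learn stand-ins for the R packages used in the paper ('rattle' for RF, 'extraTrees' for ERT); the data are synthetic and purely illustrative.

```python
# RF vs. ERT: both average randomized trees; ERT also randomizes split thresholds.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 12))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)

for Model in (RandomForestRegressor, ExtraTreesRegressor):
    reg = Model(n_estimators=500, random_state=0)   # hundreds of trees, cf. Section 4.3.4
    reg.fit(X[:850], y[:850])
    print(Model.__name__, round(reg.score(X[850:], y[850:]), 4))
```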

4.3.6. Boosting
Boosting is an ensemble predictive modeling algorithm in which each calibration tuple is assigned a weight value [36]. As base learners, a series of decision trees is used for pattern recognition. After the successful completion of the learning process in one learner, the weights are updated so that the remaining learners investigate more deeply the training samples that account for a marginally higher rate of error. The final prediction is the weighted average of the outcomes of the constituent base models. In this work, the AdaBoost (Adaptive Boosting) algorithm [37] is adopted, and the 'GAMBoost' package of R is used for realizing the model. To perform regression tasks, the following steps are executed:

Step 1: Let (<x_1, y_1>, <x_2, y_2>, ..., <x_N, y_N>) be the N training samples in a set D. Initialize the weight of each sample to 1/d, where d is the cardinality of D.
Step 2: For each base learner i, perform:
  2.1: Draw bootstrapped samples to generate D_i.
  2.2: Use the training set D_i to train the model M_t.
  2.3: Compute the error of M_t, Err(M_t).
  2.4: If the computed error is greater than 0.5, repeat 2.1 to 2.3; else go to 2.5.
  2.5: Update the weights of the samples in set D_i.
  2.6: Normalize the weight values.
Step 3: Find the accuracy weight of each base learner.
Step 4: Combine the weighted outcomes of the base learners and obtain the final outcome.
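The steps above can be approximated with scikit-learn's AdaBoostRegressor, whose default base learner is a depth-3 decision tree; this is a hedged Python stand-in for the R package 'GAMBoost' used in the paper, not the authors' code.

```python
# AdaBoost regression sketch approximating the steps of Section 4.3.6.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 12))
y = np.tanh(X[:, 0]) + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=1000)

# Sample weights are re-updated after each base learner, and the final
# prediction is a weighted combination of the base learners' outputs.
boost = AdaBoostRegressor(n_estimators=200, learning_rate=0.05)
boost.fit(X[:850], y[:850])
print(round(boost.score(X[850:], y[850:]), 4))
```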

4.3.7. Deep neural network (DNN)

The recent surge in deep learning has resulted in a plethora of models. A DNN is an extension of the traditional shallow multi-layered feed forward neural network that incorporates multiple hidden layers for deep learning [38]. A series of activation functions and optimization algorithms exists for the learning process of a DNN [4,39], and it has lately been applied successfully in the predictive analysis of time series observations. In this study, a DNN with three hidden layers of one hundred nodes each is chosen, the rectified linear activation function is deployed, and stochastic gradient descent is used as the learning algorithm. The 'Tensorflow' platform and the Python programming language are utilized for executing the algorithm.
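A minimal tf.keras sketch of the stated architecture follows (three hidden layers of one hundred ReLU units, SGD, mean-squared-error loss); the input width of 12 predictors and all hyperparameters other than those stated above are assumptions.

```python
# DNN sketch matching the architecture of Section 4.3.7.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 12)).astype("float32")
y = (X @ rng.normal(size=12)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),                        # one-day-ahead forecast
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss="mse")
model.fit(X[:850], y[:850], epochs=50, batch_size=32, verbose=0)
print(model.evaluate(X[850:], y[850:], verbose=0))   # test MSE
```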

4.3.8. Long short-term memory network (LSTMN)

LSTMN is another deep learning technique, used here for forecasting the future trends of the time series in the granular framework. It is a variant of the traditional RNN that can effectively thwart the vanishing gradient problem [39]. An LSTMN comprises several memory cells to keep records of states and a series of gates (input, forget, and output gates) to control the flow of information. By regulating the information traffic and the memory structure, it can effectively model long-range dependence. For learning the architecture, the BPTT algorithm is applied. The Python programming language is used within the 'Tensorflow' framework for the practical implementation of the model.
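A minimal tf.keras sketch follows, assuming the multivariate inputs are arranged as sliding windows of 10 time steps by 12 features (both assumed values); Keras performs BPTT internally during fit().

```python
# LSTM sketch for windowed multivariate inputs, cf. Section 4.3.8.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 10, 12)).astype("float32")   # (samples, time steps, features)
y = X[:, -1, :].sum(axis=1).astype("float32")           # toy target from the last step

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 12)),
    tf.keras.layers.LSTM(64),                           # gated memory cells
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:850], y[:850], epochs=20, batch_size=32, verbose=0)
print(model.evaluate(X[850:], y[850:], verbose=0))
```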

4.4. Performance assessment

The forecasting models M-SVR, M-EN, M-RF, M-ERT, M-DNN, M-Boosting, and M-LSTMN are used for predicting future figures. To evaluate the efficiency of the respective models, the following six measures are used:

NSC = 1 - \frac{\sum_{i=1}^{N} \{Y_{act}(i) - Y_{pred}(i)\}^2}{\sum_{i=1}^{N} \{Y_{act}(i) - \bar{Y}_{act}\}^2}    (13)

IA = 1 - \frac{\sum_{i=1}^{N} (Y_{act}(i) - Y_{pred}(i))^2}{\sum_{i=1}^{N} \{|Y_{pred}(i) - \bar{Y}_{act}| + |Y_{act}(i) - \bar{Y}_{act}|\}^2}    (14)

TI = \frac{\left[\frac{1}{N} \sum_{i=1}^{N} (Y_{act}(i) - Y_{pred}(i))^2\right]^{1/2}}{\left[\frac{1}{N} \sum_{i=1}^{N} Y_{act}(i)^2\right]^{1/2} + \left[\frac{1}{N} \sum_{i=1}^{N} Y_{pred}(i)^2\right]^{1/2}}    (15)

RMSE = \left[\frac{1}{N} \sum_{i=1}^{N} (Y_{act}(i) - Y_{pred}(i))^2\right]^{1/2}    (16)

MAD = \frac{1}{N} \sum_{i=1}^{N} |Y_{act}(i) - Y_{pred}(i)|    (17)

MAPE = \frac{1}{N} \sum_{i=1}^{N} \left|\frac{Y_{act}(i) - Y_{pred}(i)}{Y_{act}(i)}\right| \times 100    (18)

where Y_{pred}(i) and Y_{act}(i) are the predicted and actual values, respectively, and \bar{Y}_{act} is the mean of the actual values. NSC is bounded above by 1; the closer its value is to 1, the better the prediction accuracy.


Table 3
Findings of fractal inspection.

Series   H        D        C_N      NoP          Effect            RWH
BSE      0.8907   1.1093   0.7189   Persistent   'Joseph Effect'   Rejected
TWSE     0.8894   1.1116   0.7156   Persistent   'Joseph Effect'   Rejected
JSX      0.8934   1.1066   0.7252   Persistent   'Joseph Effect'   Rejected
KOSPI    0.8893   1.1107   0.7154   Persistent   'Joseph Effect'   Rejected

NoP: Nature of Pattern; RWH: Random Walk Hypothesis.

Similarly, IA values closer to 1 indicate efficient predictions, while values nearer to 0 suggest poor forecasts. Unlike these two metrics, TI should be close to 0 for efficient predictive modeling. The RMSE, MAD, and MAPE should be minimal for efficient forecasting. Unlike the other measures, depending upon the range of the target variables, RMSE, MAD, and MAPE can be greater than 1 as well. To keep uniformity, the actual and predicted values are rescaled to between 0 and 1 while computing RMSE, MAD, and MAPE. Fig. 2 presents a flowchart of the proposed research framework.
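The six measures can be transcribed directly into NumPy; the following minimal sketch assumes actual and predicted arrays already rescaled to [0, 1], as described above.

```python
# Direct transcription of Eqs. (13)-(18).
import numpy as np

def metrics(y_act: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_act - y_pred
    nsc = 1 - np.sum(err ** 2) / np.sum((y_act - y_act.mean()) ** 2)          # Eq. (13)
    ia = 1 - np.sum(err ** 2) / np.sum(
        (np.abs(y_pred - y_act.mean()) + np.abs(y_act - y_act.mean())) ** 2)  # Eq. (14)
    ti = np.sqrt(np.mean(err ** 2)) / (
        np.sqrt(np.mean(y_act ** 2)) + np.sqrt(np.mean(y_pred ** 2)))         # Eq. (15)
    rmse = np.sqrt(np.mean(err ** 2))                                         # Eq. (16)
    mad = np.mean(np.abs(err))                                                # Eq. (17)
    mape = 100 * np.mean(np.abs(err / y_act))                                 # Eq. (18)
    return dict(NSC=nsc, IA=ia, TI=ti, RMSE=rmse, MAD=mad, MAPE=mape)

y_act = np.linspace(0.1, 1.0, 250)                 # toy rescaled test-set actuals
y_pred = y_act + np.random.default_rng(6).normal(scale=0.01, size=250)
print(metrics(y_act, y_pred))
```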

5. Results and discussions

This section outlines the findings and key insights obtained from the nonlinear dynamics, econometric, and predictive modeling frameworks, respectively. The outcome of each segment of the research method assists in triggering the subsequent methods in a systematic manner.

5.1. Empirical investigation through nonlinear dynamics

As stated earlier, the fractal dimensional index and Hurst exponent are estimated to test for the presence of a random walk entrenched in the temporal dynamics of the daily closing prices of the four stock indexes, and then meaningful behavioral properties of the temporal evolutionary patterns of the said indexes are analyzed through RP and RQA. Table 3 summarizes the outcomes of the fractal analysis. The H values in Table 3 are significantly higher than 0.5 and close to 0.9 for all the indexes. This implies that all four indexes are driven by a biased random walk. Therefore, the structural presence of long memory dependence is established, and the efficient market hypothesis is rejected for these indexes. Hence, the deployment of a granular wavelet-based advanced prediction model is duly justified. The higher values of C_N further suggest the autoregressive characteristics of the time series, which in turn recommends the employment of technical indicators in constructing a predictive framework for projecting the future trend, as the markets are significantly driven by their past information.

For the remaining empirical analyses, RP and RQA are used to validate the findings of fractal modeling and to gain deeper insights. Fig. 3 displays the RPs of the respective stock indexes. Except for KOSPI, thick diagonal lines can be observed in all the plots; the main diagonal of KOSPI is comparatively thinner than its counterparts. Small disjoint segments can be observed in BSE and TWSE. It is apparent from the structure of the RPs that none of the financial time series exhibits purely chaotic properties. To ascertain more about the structural behavior, it is essential to look at the RQA parameters tabulated in Table 4. It can be observed that the recurrence rates of all four financial time series are not on the higher side, indicating a lower degree of periodicity; the REC value of KOSPI is the lowest. The higher values of DET and LAM support a deterministic structure; therefore, the presence of higher order deterministic chaos can be inferred. Like the recurrence rate, the degree of determinism in KOSPI is lower than in the other three series, which justifies the usage of a sophisticated forecasting framework for estimating future figures.

Table 4
Results of RQA.

Index   REC (%)   DET (%)   LAM (%)   TT
BSE     1.5782    92.2571   95.7059   9.1077
TWSE    1.1106    87.0099   91.8132   6.8206
JSX     0.9208    88.3117   94.1126   6.8065
KOSPI   0.2460    70.4121   79.0691   2.8463

Table 5
Pearson's correlation test.

Index   BSE       TWSE      JSX       KOSPI
BSE     -
TWSE    0.8560*   -
JSX     0.7955*   0.8774*   -
KOSPI   0.5912*   0.7180*   0.6289*   -

*Significant at 1% significance level.

Moderate TT values imply that the temporal movements are not restrained to a particular state for a long duration. State changes are not highly abrupt, as the values of DET and LAM are considerably higher than those of a completely chaotic signal. Overall, the analysis suggests that the time series are not perfectly deterministic signals, but rather exhibit higher order deterministic chaos.

The findings from the recurrence analyses further corroborate the implications of fractal modeling. It should be noted that nonstationary and nonparametric behavior was detected beforehand, which justified the use of the wavelet-based granular approach for forecasting. The findings of nonlinear dynamics rejected the efficient market hypothesis for the considered markets by showing evidence of autoregressive behavior. Hence, the use of technical indicators for building the forecasting models is justified. This is an extremely important finding, as adding insignificant features may result in an overfitting problem. It also sets the platform for examining the causal nexus, because a meticulous inquiry into the interrelationships among markets dominated by a random walk would be a futile exercise. Both fractal modeling and recurrence analyses are used for understanding the key nonparametric statistical properties of the selected stock markets and for determining the need for technical indicators in the granular forecasting framework. Next, we report the outcomes of the tests of association and causal interaction among these indexes.

5.2. Outcome of association and causality tests

To study the interrelationships, Pearson's correlation coefficient, cross-correlation, and Granger causality tests are performed on the considered indexes. The results are reported in Table 5, and the CCF plots are shown in Fig. 4. The CCF plots display the association between a pair of time series at various lags through the computation of cross-correlation functions. It is apparent from Fig. 4 that the pairs BSE-TWSE and BSE-JSX exhibit significantly high positive associations, and the pairs TWSE-JSX and TWSE-KOSPI exhibit moderately high positive associations. However, traditional correlation tests often fail to extract deeper insights. To delve further into the causal interplay, Granger causality tests are performed in a pairwise manner; a sketch of such a test appears below. The test framework requires the indexes under investigation to be stationary. In our case, the four indexes are strictly non-stationary, and their first order differences are stationary. Therefore, the return series of the respective stock indexes (BSER, JSXR, TWSER, and KOSPIR) are calculated to accomplish this task.
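As an illustration, the sketch below runs one such pairwise test with statsmodels on synthetic return series in which BSER reacts to lagged KOSPIR; the lag order of 8 mirrors the one used for Table 7, and all data here are fabricated for demonstration only.

```python
# Pairwise Granger causality sketch; null: column 2 does not Granger-cause column 1.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
kospi_r = rng.normal(size=1300)                                  # stand-in KOSPIR
bse_r = 0.3 * np.r_[0.0, kospi_r[:-1]] + rng.normal(size=1300)   # BSER reacts to lagged KOSPIR

data = np.column_stack([bse_r, kospi_r])          # tests: KOSPIR 'DNGC' BSER
res = grangercausalitytests(data, maxlag=8, verbose=False)
print(res[8][0]["ssr_chi2test"])                  # (statistic, p-value, df) at lag 8
```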


Fig. 2. Proposed research framework.

It is also well known that the outcome of a causality assessment in a VAR environment is highly susceptible to the choice of lag length, a phenomenon known as structural instability. To thwart this issue, the VAR framework is utilized and the lag length is chosen according to the Akaike information criterion (AIC). Table 6 portrays the results. In Table 6, LR denotes the test statistic value of the sequential modified likelihood-ratio test at the 5% significance level, FPE the final prediction error, AIC the Akaike information criterion, SC the Schwarz information criterion, and HQ the Hannan–Quinn information criterion. The lowest value of AIC is obtained at lag order 8, so the causality analysis is carried out on this basis, and the results are presented in Table 7.

It can be observed that BSE returns are affected only by KOSPI returns, and the causal structure is unidirectional. TWSE returns are significantly influenced by both BSE and KOSPI returns. KOSPI returns significantly influence BSE returns; however, the influence is not bidirectional. The JSX returns are influenced by TWSE and KOSPI returns. KOSPI returns are found to be affected by none of the other three return series. Next, the IR is estimated to assess the impact of a shock from one asset on another. It can be noticed from Fig. 5 that, in the short run, shocks to a market's own movement tend to play a significant role in its volatility. To further justify the claim, variance decomposition is performed, and the results are presented in Table 8. The percentages of variance in the returns of all four stock indexes are largely explained by themselves in the short run.


Fig. 3. Recurrence plots.

Table 6
Results of lag selection.

Lag   logL       LR          FPE         AIC          SC           HQ
1     16557.11   113.0513    3.20e-17    -26.62981    -26.54729    -26.59877
2     16597.22   79.64191    3.08e-17    -26.66863    -26.52010    -26.61278
3     16628.41   61.71862    3.00e-17    -26.69309    -26.47854    -26.61240
4     16656.50   55.40771    2.94e-17    -26.71255    -26.43199    -26.60704
5     16668.70   23.98722    2.96e-17    -26.70643    -26.35985    -26.57610
6     16699.07   59.52533a   2.89e-17a   -26.72958a   -26.31698    -26.57442
7     16712.06   25.37865    2.91e-17    -26.72474    -26.24612    -26.54475
8     16722.94   21.18071    2.93e-17    -26.71649    -26.17186    -26.51168
9     16728.80   11.36934    2.98e-17    -26.70016    -26.08952    -26.47053
10    16737.66   17.14174    3.02e-17    -26.68867    -26.01201    -26.43421
11    16745.25   14.61423    3.06e-17    -26.67511    -25.93244    -26.39583
12    16756.77   22.14324    3.08e-17    -26.66791    -25.85922    -26.36380

a Selected lag order.

Table 7
Causality analysis.

Null hypothesis         Chi-square   Probability   Result
TWSER 'DNGC' BSER       9.7244#      0.1367        Accepted
BSER 'DNGC' TWSER       37.3059*     0.0000        Rejected
JSXR 'DNGC' BSER        4.0971#      0.6635        Accepted
BSER 'DNGC' JSXR        10.4576#     0.1067        Accepted
KOSPIR 'DNGC' BSER      71.9597*     0.0000        Rejected
BSER 'DNGC' KOSPIR      10.2708#     0.1137        Accepted
JSXR 'DNGC' TWSER       4.2344#      0.6450        Accepted
TWSER 'DNGC' JSXR       31.1601*     0.0000        Rejected
KOSPIR 'DNGC' TWSER     70.9776*     0.0000        Rejected
TWSER 'DNGC' KOSPIR     5.1313#      0.5271        Accepted
KOSPIR 'DNGC' JSXR      16.2216**    0.0126        Rejected
JSXR 'DNGC' KOSPIR      7.5716#      0.2712        Accepted

DNGC: 'does not Granger cause'. *Significant at 1% level. **Significant at 2% significance level. # Not significant.

Indexes that are found by Granger causality to have an impact on others also have a marginal impact on the variance. Hence, the overall findings of the causation analysis are validated by the outcomes of the variance decomposition. The results of the causal interactions help in comprehending the structure of the interrelationships.

5.3. Predictive modeling performance

As explained earlier, the MODWT-based hybrid predictive models are applied for making predictions. Proper selection of explanatory constructs plays a pivotal role in any predictive modeling exercise. This paper uses a few well-known technical indicators, together with the stock indexes found to have significant causal influence in the Granger causality assessment, as explanatory variables.


Technical indicators are mostly computed from historical prices and trading volume information. The estimated technical indicators assist traders in making critical decisions regarding the buying or selling of shares [40-42]. Some widely used technical indicators are MA, bias, MACD, EMA, RSI, CCI, Bollinger bands, momentum, WR, PSY, and STOC. Table 9 summarizes the dependent and independent construct settings. Initially, the MODWT technique is used for decomposing the individual series. Figs. 6-9 graphically illustrate the MODWT decomposition of BSE, TWSE, KOSPI, and JSX. For the predictive exercise, the original dataset is partitioned into 85% training and 15% test data. However, instead of randomly splitting the entire data into training and test sets, daily observations from January 2012 to June 2016 are selected as the training dataset and July 2016 to June 2017 as the test dataset.


Fig. 4. CCF plots.

This alignment of training and test datasets assists in judging the extrapolation capability of the respective predictive models. Several process parameters of the individual frameworks are varied to run twenty experimental trials for each model, and the average values of the performance indicators are computed to determine the predictive accuracy.


Fig. 5. Impulse response plots.

Ideally, hyper-parameter tuning of the predictive algorithms could be performed, instead of varying the process parameters over repeated trials, to obtain higher prediction accuracy. However, the proposed framework already resulted in predictions of good accuracy; therefore, process parameter tuning in a combinatorial optimization setup to traverse the search space has not been performed in this study. Table 10 presents the average values of the performance indicators for both the training and test datasets for all the algorithms. The obtained results show that the NSC and IA values are considerably high for all the methods on both datasets, while at the same time the TI values are considerably low. Therefore, the future values of the considered stocks can be estimated with a high degree of precision. Table 11 summarizes the performance of the respective frameworks in terms of RMSE, MAD, and MAPE.

Table 11 shows that the RMSE, MAD, and MAPE figures are almost negligible. Therefore, like the previous performance evaluation using the other three indicators, this assessment also implies accurate forecasts. This justifies the use of the predictive analytics models for the selected stock indexes. Figs. 10-13 graphically portray the performance of the respective models on selected test samples. It is important to ascertain the statistical significance of the respective predictive models. This study adopts two statistical approaches, namely a test for EPA and a test for SPA, to accomplish the task. To carry out the EPA of the forecasting models, the DM pairwise test is applied to the performance of the models on the test dataset, whereas the MCS [43] is used to execute the SPA in order to rank the respective models according to their performance. Tables 12 to 15 present the outcomes of the tests for the individual stock markets. Since the test operates in a pairwise manner and


Table 8
Results of variance decomposition.

Variance decomposition of BSER
Period   S.E.       BSER       JSXR       KOSPIR     TWSER
1        0.008982   100.0000   0.000000   0.000000   0.000000
2        0.009036   98.99794   0.082237   0.867757   0.052062
3        0.009229   95.52101   0.089382   4.339593   0.050018
4        0.009303   94.02251   0.355465   5.563872   0.058151
5        0.009335   93.56958   0.398789   5.912645   0.118989
6        0.009341   93.44576   0.426611   5.915991   0.211634
7        0.009369   92.89871   0.429150   5.912064   0.760073
8        0.009373   92.84583   0.430303   5.956518   0.767349
9        0.009375   92.80273   0.430084   5.997269   0.769920
10       0.009375   92.79953   0.430862   5.997610   0.772002

Variance decomposition of TWSER
Period   S.E.       BSER       JSXR       KOSPIR     TWSER
1        0.007875   0.205620   0.722181   1.785545   97.28665
2        0.007999   2.916552   0.742326   1.837086   94.50404
3        0.008043   2.904503   0.825656   2.436984   93.83286
4        0.008121   2.918850   0.852707   4.085395   92.14305
5        0.008237   2.841315   0.828921   6.082163   90.24760
6        0.008259   2.831366   0.866105   6.320251   89.98228
7        0.008329   3.003217   0.968815   7.434779   88.59319
8        0.008339   3.004913   0.970754   7.569805   88.45453
9        0.008348   2.999829   0.969569   7.725106   88.30550
10       0.008352   2.998472   0.969604   7.792235   88.23969

Variance decomposition of JSXR
Period   S.E.       BSER       JSXR       KOSPIR     TWSER
1        0.009630   0.028592   99.97141   0.000000   0.000000
2        0.009708   0.336855   98.46987   0.419170   0.774108
3        0.009814   0.614418   96.57287   0.709657   2.103056
4        0.009975   0.702574   95.22223   1.993675   2.081518
5        0.010013   0.697248   94.71827   2.505915   2.078566
6        0.010047   1.128872   94.08723   2.703251   2.080653
7        0.010062   1.139295   93.96809   2.738251   2.154362
8        0.010069   1.150219   93.88164   2.753659   2.214481
9        0.010071   1.158017   93.85959   2.760678   2.221711
10       0.010076   1.166081   93.80624   2.803346   2.224330

Variance decomposition of KOSPIR
Period   S.E.       BSER       JSXR       KOSPIR     TWSER
1        0.007723   0.591027   0.052602   99.35637   0.000000
2        0.007744   0.671935   0.249709   98.83037   0.247983
3        0.007754   0.917619   0.249152   98.58560   0.247628
4        0.007762   0.916224   0.448145   98.37640   0.259235
5        0.007772   0.919757   0.471848   98.32993   0.278463
6        0.007794   0.991360   0.522083   98.05634   0.430213
7        0.007817   1.300085   0.604327   97.66169   0.433896
8        0.007817   1.300001   0.609531   97.65406   0.436412
9        0.007818   1.308093   0.609341   97.64107   0.441496
10       0.007820   1.312007   0.610393   97.63421   0.443389

Table 9
Dependent and predictor variables.

Dependent   Predictors
BSE         MA (10 days), bias (20 days), EMA (10 and 20 days), MACD (9 days), upper Bollinger band (20 days), lower Bollinger band (20 days), RSI (14 days), momentum (10 days), WR (10 days), CCI (14 days), KOSPI.
TWSE        MA (10 days), bias (20 days), EMA (10 and 20 days), MACD (9 days), upper Bollinger band (20 days), lower Bollinger band (20 days), RSI (14 days), momentum (10 days), WR (10 days), CCI (14 days), BSE, KOSPI.
JSX         MA (10 days), bias (20 days), EMA (10 and 20 days), MACD (9 days), upper Bollinger band (20 days), lower Bollinger band (20 days), RSI (14 days), momentum (10 days), WR (10 days), CCI (14 days), TWSE, KOSPI.
KOSPI       MA (10 days), bias (20 days), EMA (10 and 20 days), MACD (9 days), upper Bollinger band (20 days), lower Bollinger band (20 days), RSI (14 days), momentum (10 days), WR (10 days), CCI (14 days).

Table 10
Performance assessment in terms of NSC, IA and TI.
(Columns: NSC, IA, TI on the training dataset, then NSC, IA, TI on the test dataset.)

Performance of M-SVR
BSE     0.9902   0.9968   0.0067   |   0.989    0.994    0.008
TWSE    0.992    0.9970   0.0069   |   0.990    0.994    0.009
JSX     0.991    0.9961   0.006    |   0.988    0.993    0.009
KOSPI   0.9871   0.994    0.009    |   0.983    0.991    0.012

Performance of M-EN
BSE     0.990    0.995    0.0081   |   0.988    0.993    0.010
TWSE    0.991    0.996    0.007    |   0.988    0.994    0.008
JSX     0.990    0.994    0.007    |   0.989    0.993    0.009
KOSPI   0.988    0.994    0.010    |   0.984    0.990    0.012

Performance of M-ERT
BSE     0.995    0.999    0.003    |   0.992    0.997    0.006
TWSE    0.996    0.999    0.003    |   0.993    0.998    0.006
JSX     0.995    0.999    0.004    |   0.991    0.997    0.008
KOSPI   0.992    0.998    0.006    |   0.989    0.997    0.009

Performance of M-Boosting
BSE     0.9959   0.9994   0.003    |   0.993    0.998    0.006
TWSE    0.997    0.999    0.003    |   0.994    0.998    0.005
JSX     0.995    0.999    0.003    |   0.992    0.997    0.007
KOSPI   0.993    0.999    0.006    |   0.9900   0.9979   0.0083

Performance of M-DNN
BSE     0.996    0.9993   0.003    |   0.994    0.997    0.006
TWSE    0.997    0.999    0.003    |   0.995    0.998    0.005
JSX     0.996    0.999    0.004    |   0.993    0.997    0.006
KOSPI   0.992    0.999    0.005    |   0.9895   0.9978   0.0094

Performance of M-RF
BSE     0.996    0.999    0.004    |   0.993    0.998    0.006
TWSE    0.997    0.999    0.003    |   0.994    0.997    0.006
JSX     0.995    0.998    0.003    |   0.992    0.997    0.007
KOSPI   0.992    0.999    0.006    |   0.989    0.996    0.009

Performance of M-LSTMN
BSE     0.995    0.998    0.004    |   0.992    0.996    0.006
TWSE    0.996    0.999    0.003    |   0.994    0.998    0.005
JSX     0.996    0.998    0.004    |   0.992    0.996    0.007
KOSPI   0.993    0.998    0.005    |   0.990    0.996    0.009


Table 11
Performance assessment in terms of RMSE, MAD and MAPE.
(Columns: RMSE, MAD, MAPE on the training dataset, then RMSE, MAD, MAPE on the test dataset.)

Performance of M-SVR
BSE     0.032   0.027   0.101   |   0.037   0.033   0.113
TWSE    0.029   0.025   0.114   |   0.035   0.028   0.124
JSX     0.040   0.033   0.123   |   0.047   0.040   0.132
KOSPI   0.036   0.030   0.120   |   0.041   0.034   0.127

Performance of M-EN
BSE     0.033   0.028   0.102   |   0.038   0.034   0.113
TWSE    0.030   0.026   0.115   |   0.037   0.029   0.125
JSX     0.042   0.034   0.124   |   0.047   0.041   0.132
KOSPI   0.036   0.031   0.121   |   0.042   0.035   0.128

Performance of M-ERT
BSE     0.027   0.024   0.093   |   0.031   0.025   0.106

Performance of M-Boosting
BSE     0.026   0.022   0.092   |   0.030