

A Hybrid ARIMA and Neural Network Model for Short-Term Price Forecasting in Deregulated Market
Phatchakorn Areekul, Student Member, IEEE, Tomonobu Senjyu, Senior Member, IEEE, Hirofumi Toyama, and
Atsushi Yona, Member, IEEE

Abstract—In the framework of competitive electricity markets, power producers and consumers need accurate price forecasting tools. Price forecasts embody crucial information for producers and consumers when planning bidding strategies in order to maximize their benefits and utilities, respectively. The choice of the forecasting model becomes an important factor in how to improve price forecasting accuracy. This paper provides a hybrid methodology that combines both autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) models for predicting short-term electricity prices. The method is examined using data of the Australian national electricity market, New South Wales, for the year 2006. A comparison of the forecasting performance of the proposed ARIMA, ANN, and hybrid models is presented. Empirical results indicate that the hybrid ARIMA-ANN model can improve price forecasting accuracy.

Index Terms—Artificial neural networks (ANNs), autoregressive integrated moving average (ARIMA), Australian national electricity market (NEM), electricity, price forecasting.

I. INTRODUCTION

THE deregulated power market is an auction market, and the market clearing price (MCP) is volatile. On the other hand, due to the upheaval of deregulation in electricity markets, price forecasting has become a very valuable tool. The companies that trade in electricity markets make extensive use of price prediction techniques either to bid or to hedge against volatility. When bidding in a pool system, the market participants are requested to express their bids in terms of prices and quantities. Since the bids are accepted in order of increasing price until the total demand is met, a company that is able to forecast the pool price can adjust its own price/production schedule depending on hourly pool prices and its own production costs. High-quality MCP prediction and its confidence interval estimation can help utilities and independent power producers submit effective bids with low risks.

Price forecasting has become increasingly necessary and more complex for all kinds of market participants, as evident from the various approaches that exist today, partly because ongoing market reforms create continuous changes in the dynamics of prices, and partly because electricity prices exhibit complex volatility patterns. A wide variety of models have been proposed in the last two decades, such as linear regression, time-series techniques based upon autoregressive integrated moving average (ARIMA), Kalman filtering, data mining approaches, and state-space methods [1]–[7].

One of the most important and widely used time-series models is the ARIMA model. The popularity of the ARIMA model is due to its statistical properties, as well as the well-known Box-Jenkins methodology [8] in the model building process. In addition, various exponential smoothing models can be implemented by ARIMA models [9]. However, a linear correlation structure is assumed among the time-series values, and therefore, no nonlinear patterns can be captured by the ARIMA model. The approximation of linear models to complex real-world problems is not always satisfactory.

Recently, artificial neural networks (ANNs) have been extensively studied and used in time-series forecasting. Zhang et al. [10] presented a recent review in this area. The major advantage of neural networks is their flexible nonlinear modeling capability. With ANNs, there is no need to specify a particular model form. Rather, the model is adaptively formed based on the features presented by the data. This data-driven approach is suitable for many empirical datasets where no theoretical guidance is available to suggest an appropriate data generating process.

Using a hybrid model or combining several models has become common practice to improve forecasting accuracy since the well-known M-competition [11], in which combinations of forecasts from more than one model often led to improved forecasting performance. The basic idea of model combination in forecasting is to use each model's unique features to capture different patterns in the data. Both theoretical and empirical findings suggest that combining different methods can be an effective and efficient way to improve forecasts [12]–[14]. The combination of the proposed techniques in the framework of the hybrid forecast method, and especially its application to MCP prediction, can be considered the contribution of this paper.

This paper presents an application of a hybrid approach to time-series price forecasting based on the historical prices of the Australian national electricity market (NEM), New South Wales, in the year 2006, using both ARIMA and ANN models; i.e., the paper combines an ARIMA model and a three-layered feedforward ANN trained by the Levenberg-Marquardt algorithm. A case study with real market data from the year 2006 is presented, in which the hybrid model is used to forecast the next 168 h (one week) of electricity prices.

Manuscript received May 08, 2009; revised July 05, 2009. First published December 15, 2009; current version published January 20, 2010. Paper no. TPWRS-00080-2009.
The authors are with the Faculty of Engineering, Department of Electrical and Electronics Engineering, University of the Ryukyus, Okinawa 903-0213, Japan (e-mail: k088662@eve.u-ryukyu.ac.jp; phatokinawa@hotmail.com; b985542@tec.u-ryukyu.ac.jp).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TPWRS.2009.2036488

The rest of this paper is organized as follows. In Section II, a review of the ARIMA and ANN modeling approaches to time-series forecasting is presented. The hybrid methodology is introduced in Section III. Empirical results from real market data are reported in Section IV. Section V contains the concluding remarks.

II. TIME-SERIES FORECASTING MODELS

There are several different approaches to time-series modeling. Traditional statistical models including moving average (MA), exponential smoothing, and ARIMA are linear in that predictions of the future values are constrained to be linear functions of past observations. To overcome the restriction of linear models and account for certain nonlinear patterns observed in real problems, several classes of nonlinear models have been proposed in the literature. These include the bilinear model [15], the threshold autoregressive (TAR) model [16], and the autoregressive conditional heteroscedastic (ARCH) model [17]. Although some improvement has been noticed with these nonlinear models, the gain of using them for general forecasting problems is limited [18]. More recently, ANNs have been suggested as an alternative for time-series forecasting. The main strength of ANNs is their flexible nonlinear modeling capability. In this section, we focus on the basic principles and modeling processes of the ARIMA and ANN models.

A. ARIMA Model

In an ARIMA model, the future value of a variable is assumed to be a linear function of several past observations and random errors, i.e., the underlying process that generates the time series has the form

$$y_t = \theta_0 + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \theta_2 \varepsilon_{t-2} - \cdots - \theta_q \varepsilon_{t-q} \qquad (1)$$

where $y_t$ and $\varepsilon_t$ are the actual value and random error at time period $t$, respectively; $\phi_i$ $(i = 1, 2, \ldots, p)$ and $\theta_j$ $(j = 0, 1, \ldots, q)$ are model parameters; and $p$ and $q$ are integers often referred to as orders of the model. Random errors $\varepsilon_t$ are assumed to be independently and identically distributed with a mean of zero and a constant variance of $\sigma^2$.

Equation (1) entails several important special cases of the ARMA family of models. If $q = 0$, then (1) becomes an AR model of order $p$. When $p = 0$, the model reduces to an MA model of order $q$. One extension to the ARMA class of processes, which greatly enhances their value as empirical descriptors of nonstationary time series, is the class of ARIMA models. Nonstationary time-series processes can be transformed, by differencing the series once or more, to make them stationary. The number of times $d$ that the integrated process must be differenced to make the time series stationary is said to be the order of the integrated process. In this case, the series is called an autoregressive (AR) integrated MA process of order $(p, d, q)$ and denoted by ARIMA$(p, d, q)$.

Based on the earlier work of Yule [19] and Wold [20], Box and Jenkins [8] developed a practical approach to building ARIMA models, which has had a fundamental impact on time-series analysis and forecasting applications. This methodology includes three iterative steps of model identification, parameter estimation, and diagnostic checking. The basic idea of model identification is that if a time series is generated from an ARIMA process, it should have theoretical autocorrelation properties. By matching the empirical autocorrelation patterns with the theoretical ones, it is often possible to identify one or several potential models for the given time series.

Box and Jenkins [8] proposed to use the autocorrelation function (ACF) and the partial ACF (PACF) of the sample data as the basic tools to identify the order of the best ARIMA model; data transformation is often needed to make the time series stationary. This involves selecting the most appropriate lags for the AR and MA parts, as well as determining whether the variable requires first-differencing to induce stationarity.

Once a tentative model is specified, estimation of the model parameters is straightforward. The parameters are estimated such that an overall measure of errors is minimized. This usually involves the use of a least-squares estimation process.

The last step of model building is the diagnostic checking of model adequacy. This basically checks whether the model assumptions about the errors are satisfied. Several diagnostic statistics and plots of the residuals can be used to examine the goodness of fit of the tentatively entertained model to the historical data. If the model is not adequate, a new tentative model should be identified, which is again followed by the steps of parameter estimation and model verification. Diagnostic information may help suggest alternative model(s).

This three-step model building process is typically repeated several times until a satisfactory model is finally selected. The final selected model can then be used for prediction purposes.
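For concreteness, the following minimal Python sketch walks through the three Box-Jenkins steps just described for one candidate order, using the statsmodels library. The series name, the candidate order (1, 1, 1), and the 24-h diagnostic lag are assumptions made purely for illustration; they are not prescribed by the methodology above.

```python
# A minimal sketch of the Box-Jenkins cycle: identification aids,
# parameter estimation, and diagnostic checking (assumed statsmodels usage).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf, pacf, adfuller
from statsmodels.stats.diagnostic import acorr_ljungbox

def box_jenkins_cycle(prices: pd.Series, order=(1, 1, 1)):
    """Fit one candidate ARIMA(p, d, q) model and report basic diagnostics."""
    # 1) Identification: a unit-root test and the correlograms of the
    #    differenced series suggest candidate AR and MA orders.
    diff = prices.diff().dropna()
    print("ADF p-value of first difference:", adfuller(diff)[1])
    print("ACF (lags 0-5):", acf(diff, nlags=5))
    print("PACF (lags 0-5):", pacf(diff, nlags=5))

    # 2) Estimation: coefficients are chosen so that an overall error
    #    measure is minimized (maximum likelihood inside statsmodels).
    result = ARIMA(prices, order=order).fit()

    # 3) Diagnostic checking: residuals of an adequate model should be
    #    uncorrelated (Ljung-Box p-values comfortably above 0.05).
    print(acorr_ljungbox(result.resid, lags=[24], return_df=True))
    return result
```

If the residual checks fail, a new tentative order would be tried, mirroring the iterative loop described above.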

B. ANNs Approach to Time-Series Modeling

When the linear restriction of the model form is relaxed, the number of possible nonlinear structures that can be used to describe and forecast a time series is enormous. A good nonlinear model should be "general enough to capture some of the nonlinear phenomena in the data" [18]. ANNs are one such class of models that are able to approximate various nonlinearities in the data.

ANNs are flexible computing frameworks for modeling a broad range of nonlinear problems. One significant advantage of ANN models over other classes of nonlinear models is that ANNs are universal approximators that can approximate a large class of functions with a high degree of accuracy. Their power comes from the parallel processing of the information from the data. No prior assumption of the model form is required in the model building process. Instead, the network model is largely determined by the characteristics of the data. The network architecture used here is the three-layer fully connected feedforward neural network depicted in Fig. 1; it includes an input layer, a hidden layer, and an output layer.

Fig. 1. Neural network model.

The single hidden-layer feedforward network is the most widely used model form for time-series modeling and forecasting [10]. The model is characterized by a network of three layers of simple processing units connected by acyclic links. The relationship between the output $y_t$ and the inputs $(y_{t-1}, y_{t-2}, \ldots, y_{t-p})$ has the following mathematical representation:

$$y_t = \alpha_0 + \sum_{j=1}^{q} \alpha_j \, g\!\left(\beta_{0j} + \sum_{i=1}^{p} \beta_{ij} y_{t-i}\right) + \varepsilon_t \qquad (2)$$

where $\alpha_j$ $(j = 0, 1, \ldots, q)$ and $\beta_{ij}$ $(i = 0, 1, \ldots, p;\ j = 1, 2, \ldots, q)$ are the model parameters, often called the connection weights, $p$ is the number of input nodes, and $q$ is the number of hidden nodes. The logistic function is often used as the hidden-layer transfer function, i.e.,

$$g(x) = \frac{1}{1 + \exp(-x)}. \qquad (3)$$

Hence, the ANN model of (2), in fact, performs a nonlinear functional mapping from the past observations $(y_{t-1}, y_{t-2}, \ldots, y_{t-p})$ to the future value $y_t$, i.e.,

$$y_t = f(y_{t-1}, y_{t-2}, \ldots, y_{t-p}, w) + \varepsilon_t \qquad (4)$$

where $w$ is a vector of all parameters and $f$ is a function determined by the network structure and connection weights. Thus, the neural network is equivalent to a nonlinear AR model. Note that expression (2) implies one output node in the output layer that is used for one-step-ahead forecasting. The simple network given by (2) is surprisingly powerful in that it is able to approximate arbitrary functions as the number of hidden nodes $q$ becomes sufficiently large [21]. In practice, a simple network structure that has a small number of hidden nodes often works well in out-of-sample forecasting. This may be due to the overfitting effect typically found in the neural network modeling process. An overfitted model has a good fit to the sample used for model building, but has poor generalization ability for data out of the sample. The choice of $q$ is data-dependent, and there is no systematic rule for deciding this parameter.
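As an illustration of the mapping defined by (2) and (3), the following NumPy sketch evaluates a single one-step-ahead output from p lagged observations and q logistic hidden nodes. The weight layout and the random example values are assumptions made only for this sketch.

```python
# One forward pass of the single hidden-layer network in (2)-(3).
import numpy as np

def ann_one_step(lagged: np.ndarray, alpha: np.ndarray, beta: np.ndarray) -> float:
    """One-step-ahead output for lagged = [y_{t-1}, ..., y_{t-p}].

    alpha: shape (q + 1,)   -> alpha[0] is the output bias, alpha[1:] the
                               hidden-to-output weights alpha_j.
    beta:  shape (q, p + 1) -> beta[:, 0] are the hidden biases beta_{0j},
                               beta[:, 1:] the input-to-hidden weights beta_{ij}.
    """
    g = lambda x: 1.0 / (1.0 + np.exp(-x))           # logistic transfer, eq. (3)
    hidden = g(beta[:, 0] + beta[:, 1:] @ lagged)    # q hidden-node activations
    return float(alpha[0] + alpha[1:] @ hidden)      # eq. (2) without the noise term

# Example with p = 3 lags and q = 2 hidden nodes (arbitrary illustrative weights).
rng = np.random.default_rng(0)
p, q = 3, 2
y_hat = ann_one_step(rng.normal(size=p), rng.normal(size=q + 1),
                     rng.normal(size=(q, p + 1)))
```
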
In addition to choosing an appropriate number of hidden nodes, another important task of ANN modeling of a time series is the selection of the number of lagged observations $p$, the dimension of the input vector. This is perhaps the most important parameter to be estimated in an ANN model because it plays a major role in determining the (nonlinear) autocorrelation structure of the time series. However, there is no theory that can be used to guide the selection of $p$. Hence, experiments are often conducted to select an appropriate $p$, as well as $q$.

Once a network structure $(p, q)$ is specified, the network is ready for training, a process of parameter estimation. As in ARIMA model building, the parameters are estimated such that an overall accuracy criterion such as the mean squared error is minimized. This is again done with efficient nonlinear optimization algorithms other than the basic backpropagation training algorithm. One of these is the generalized reduced gradient method (GRG2), a general-purpose nonlinear optimizer [22]; a GRG2-based training system is described in [23].

The estimated model is usually evaluated using a separate hold-out sample that is not exposed to the training process. This practice is different from that in ARIMA model building, where one sample is typically used for model identification, estimation, and evaluation. The reason lies in the fact that the general (linear) form of the ARIMA model is prespecified, and then the order of the model is estimated from the data. The standard statistical paradigm assumes that under stationary conditions, the model best fitted to the historical data is also the optimum model for forecasting. With ANNs, the (nonlinear) model form, as well as the order of the model, must be estimated from the data. It is, therefore, more likely for an ANN model to overfit the data.
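The training and hold-out evaluation just described can be sketched as follows. SciPy's Levenberg-Marquardt least-squares routine is used here as a stand-in for the nonlinear optimizers mentioned above (it is not the GRG2 system of [22] and [23]); the synthetic series, the lag order p = 3, the q = 2 hidden nodes, and the split point are illustrative assumptions.

```python
# Minimize the squared one-step-ahead errors on a training sample, then
# score the fitted network on a separate hold-out sample.
import numpy as np
from scipy.optimize import least_squares

def make_lag_matrix(y: np.ndarray, p: int):
    """Rows of p lagged values and the corresponding one-step-ahead targets."""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])[:, ::-1]
    return X, y[p:]

def fit_ann(y_train: np.ndarray, p: int = 3, q: int = 2, seed: int = 0):
    X, target = make_lag_matrix(y_train, p)
    n_weights = (q + 1) + q * (p + 1)

    def predict(w, X):
        alpha, beta = w[:q + 1], w[q + 1:].reshape(q, p + 1)
        hidden = 1.0 / (1.0 + np.exp(-(beta[:, 0] + X @ beta[:, 1:].T)))
        return alpha[0] + hidden @ alpha[1:]

    w0 = np.random.default_rng(seed).normal(scale=0.1, size=n_weights)
    res = least_squares(lambda w: predict(w, X) - target, w0, method="lm")
    return res.x, predict

# Hold-out evaluation: train on the first 250 points, score on the rest.
y = np.sin(np.arange(300) / 8.0) + np.random.default_rng(1).normal(0, 0.1, 300)
w, predict = fit_ann(y[:250], p=3, q=2)
X_test, target_test = make_lag_matrix(y[250 - 3:], 3)   # keep p warm-up lags
mse_holdout = float(np.mean((predict(w, X_test) - target_test) ** 2))
```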

There are some similarities between ARIMA and ANN models: both of them include a rich class of different models with different model orders, data transformation is often necessary to get the best results, a relatively large sample is required in order to build a successful model, iterative experimentation is common to their modeling processes, and subjective judgement is sometimes needed in implementing the model. Because of the potential overfitting effect with both models, parsimony is often a guiding principle in choosing an appropriate model for forecasting.

III. HYBRID METHODOLOGY

Both ARIMA and ANN models have achieved successes in their own linear or nonlinear domains. However, neither of them is a universal model that is suitable for all circumstances. The approximation of ARIMA models to complex nonlinear problems may not be adequate. On the other hand, using ANNs to model linear problems has yielded mixed results. For example, using simulated data, Denton [24] showed that when there are outliers or multicollinearity in the data, neural networks can significantly outperform linear regression models. Markham and Rakes [25] also found that the performance of ANNs for linear regression problems depends on the sample size and noise level. Hence, it is not wise to apply ANNs blindly to any type of data. Since it is difficult to completely know the characteristics of the data in a real problem, a hybrid methodology that has both linear and nonlinear modeling capabilities can be a good strategy for practical use. By combining different models, different aspects of the underlying patterns may be captured.

It may be reasonable to consider a time series to be composed of a linear autocorrelation structure and a nonlinear component, i.e.,

$$y_t = L_t + N_t \qquad (5)$$

where $L_t$ denotes the linear component and $N_t$ denotes the nonlinear component. These two components have to be estimated from the data. First, we let ARIMA model the linear component; then the residuals from the linear model will contain only the nonlinear relationship. Let $e_t$ denote the residual at time $t$ from the linear model; then

$$e_t = y_t - \hat{L}_t \qquad (6)$$

where $\hat{L}_t$ is the forecast value for time $t$ from the estimated relationship (1). Residuals are important in diagnosing the sufficiency of linear models. A linear model is not sufficient if there are still linear correlation structures left in the residuals. However, residual analysis is not able to detect any nonlinear patterns in the data. In fact, there are currently no general diagnostic statistics for nonlinear autocorrelation relationships. Therefore, even if a model has passed diagnostic checking, the model may still not be adequate in that nonlinear relationships have not been appropriately modeled. Any significant nonlinear pattern in the residuals indicates the limitation of the ARIMA model. By modeling the residuals using ANNs, nonlinear relationships can be discovered. With $n$ input nodes, the ANN model for the residuals will be

$$e_t = f(e_{t-1}, e_{t-2}, \ldots, e_{t-n}) + \varepsilon_t \qquad (7)$$

where $f$ is a nonlinear function determined by the neural network and $\varepsilon_t$ is the random error. If the model $f$ is not an appropriate one, the error term is not necessarily random; therefore, correct model identification is critical. Denoting the forecast from (7) as $\hat{N}_t$, the combined forecast will be

$$\hat{y}_t = \hat{L}_t + \hat{N}_t. \qquad (8)$$
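A compact sketch of the two-step scheme in (5)-(8) is given below: an ARIMA model supplies the linear forecast, an ANN fitted to lagged ARIMA residuals supplies the nonlinear correction, and the two are summed. scikit-learn's MLPRegressor with logistic hidden units stands in here for the MATLAB Levenberg-Marquardt training used later in the paper; the ARIMA order, the number of residual lags, and the 15 hidden neurons are illustrative assumptions.

```python
# Two-step hybrid forecast: linear part from ARIMA, nonlinear part from an
# ANN fitted to lagged residuals, combined as in eq. (8).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(prices: pd.Series, order=(2, 1, 1), n_lags=24, horizon=168):
    # Step 1: linear component L_hat and in-sample residuals e_t, eqs. (5)-(6).
    arima = ARIMA(prices, order=order).fit()
    linear_fc = np.asarray(arima.forecast(steps=horizon))
    resid = np.asarray(arima.resid)

    # Step 2: ANN on n lagged residuals, eq. (7).
    X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
    y = resid[n_lags:]
    ann = MLPRegressor(hidden_layer_sizes=(15,), activation="logistic",
                       solver="lbfgs", max_iter=2000, random_state=0).fit(X, y)

    # Recursive residual forecasts N_hat over the horizon.
    window = list(resid[-n_lags:])
    resid_fc = []
    for _ in range(horizon):
        e_hat = float(ann.predict(np.array(window[-n_lags:]).reshape(1, -1))[0])
        resid_fc.append(e_hat)
        window.append(e_hat)

    # Combined forecast, eq. (8): y_hat = L_hat + N_hat.
    return linear_fc + np.array(resid_fc)
```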

In summary, the proposed methodology of the hybrid system consists of two steps. In the first step, an ARIMA model is used to analyze the linear part of the problem. ARIMA model building can be summarized in four steps, which are as follows.
1) Model identification: Using graphs, statistics, the ACF, the PACF, transformations, etc., achieve stationarity and tentatively identify patterns and model components.
2) Parameter estimation: Determine the model coefficients through the method of least squares, maximum likelihood methods, and other techniques.
3) Model diagnostics: Determine if the model is valid. If valid, then use the model; otherwise, repeat the identification, estimation, and diagnostic steps.
4) Forecast verification and reasonableness: It is necessary to revisit the question of identification to see if the selected model can be improved. Several statistical techniques and confidence intervals determine the validity of forecasts and track model performance to detect out-of-control situations.

In the second step, a neural network model is developed to model the residuals from the ARIMA model. Since the ARIMA model cannot capture the nonlinear structure of the data, the residuals of the linear model will contain information about the nonlinearity.

The results from the neural network can be used as predictions of the error terms for the ARIMA model. The hybrid model exploits the unique features and strengths of the ARIMA model, as well as the ANN model, in determining different patterns. Thus, it could be advantageous to model linear and nonlinear patterns separately by using different models and then combine the forecasts to improve the overall modeling and forecasting performance.

To evaluate the accuracy of forecasting electricity prices, different criteria are used. This accuracy is computed as a function of the actual market prices that occurred. The average percentage error over the entire series is a general measure of fit useful in comparing the fits of different models. This measure adds up all of the percentage errors at each time point and divides them by the number of time points. It is sometimes abbreviated as the mean percentage error (MPE):

$$\mathrm{MPE} = \frac{1}{N} \sum_{t=1}^{N} \frac{P_t^{\mathrm{actual}} - P_t^{\mathrm{forecast}}}{P_t^{\mathrm{actual}}} \times 100\% \qquad (9)$$

We also compare the forecasts using alternative criteria: the mean absolute percentage error (MAPE criterion), the mean absolute error (MAE criterion), and the root mean square (rms) error (RMSE criterion), which are defined as follows:

$$\mathrm{MAPE} = \frac{1}{N} \sum_{t=1}^{N} \frac{\left|P_t^{\mathrm{actual}} - P_t^{\mathrm{forecast}}\right|}{P_t^{\mathrm{actual}}} \times 100\% \qquad (10)$$

$$\mathrm{MAE} = \frac{1}{N} \sum_{t=1}^{N} \left|P_t^{\mathrm{actual}} - P_t^{\mathrm{forecast}}\right| \qquad (11)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{t=1}^{N} \left(P_t^{\mathrm{actual}} - P_t^{\mathrm{forecast}}\right)^2} \qquad (12)$$

where $P_t^{\mathrm{actual}}$ is the actual value of the price at hour $t$, $P_t^{\mathrm{forecast}}$ is the forecasted value of the price at hour $t$, and $N$ is the number of hours in the forecast.
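The four criteria (9)-(12) translate directly into code. The sketch below assumes equal-length NumPy arrays of actual and forecast hourly prices.

```python
# Direct implementations of the accuracy criteria (9)-(12).
import numpy as np

def mpe(actual, forecast):
    return float(np.mean((actual - forecast) / actual) * 100.0)         # eq. (9)

def mape(actual, forecast):
    return float(np.mean(np.abs(actual - forecast) / actual) * 100.0)   # eq. (10)

def mae(actual, forecast):
    return float(np.mean(np.abs(actual - forecast)))                    # eq. (11)

def rmse(actual, forecast):
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))            # eq. (12)
```
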
IV. EMPIRICAL RESULTS

A. Datasets

The proposed hybrid ARIMA and neural network model is applied to data of the Australian NEM, New South Wales [26], for the year 2006 in this real-world case study. The price curve is given in Fig. 2.

Fig. 2. Electricity prices: January to December 2006 in the Australian NEM.

Moreover, models similar to the one reported in this paper are routinely used by the power industry in Australia for price forecasting. It should be noted that the electricity market of Australia is a duopoly with a dominant player. This results in price changes related to the strategic behavior of the dominant player, which are hard to predict.

The data have been extensively studied with a vast variety of linear and nonlinear time-series models, including ARIMA and ANN. To assess the forecasting performance of different models, each dataset is divided into two samples for training and testing. The training dataset is used exclusively for model development, and then the test sample is used to evaluate the established model.

To illustrate the behavior of the proposed technique, results comprising four weeks corresponding to the four seasons of the year 2006 are presented. In this manner, representative results for the whole year are provided.

For the sake of a fair comparison, the fourth week of each of January, May, August, and October is selected, i.e., weeks with particularly good price behavior are purposely not sought. To build the forecasting model for each one of the considered weeks, the information available includes the hourly price historical data of the four weeks previous to the first day of the week whose prices are to be predicted.
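The data split just described can be sketched as follows; the file name, column name, and example timestamp are hypothetical and would have to be adapted to the NEM data of [26].

```python
# Build the four-week training window and the one-week test window for a
# given test week of hourly prices.
import pandas as pd

HOURS_PER_WEEK = 168

def split_train_test(prices: pd.Series, first_test_hour: pd.Timestamp):
    """Return (4-week training sample, 1-week test sample) for one test week."""
    train_start = first_test_hour - pd.Timedelta(hours=4 * HOURS_PER_WEEK)
    test_end = first_test_hour + pd.Timedelta(hours=HOURS_PER_WEEK)
    train = prices.loc[train_start:first_test_hour - pd.Timedelta(hours=1)]
    test = prices.loc[first_test_hour:test_end - pd.Timedelta(hours=1)]
    return train, test

# Hypothetical usage with an hourly-indexed series for New South Wales, 2006:
# prices = pd.read_csv("nsw_prices_2006.csv", index_col=0, parse_dates=True)["price"]
# train, test = split_train_test(prices, pd.Timestamp("2006-01-22 00:00"))
```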

B. Results

In this study, the approach for analyzing and building a time-series model is to find an ARIMA model that adequately represents the data generating process for a given set of data. The test time-series data were processed by taking the first-order regular difference and the first seasonal difference in order to remove the growth trend and the seasonality characteristics. We checked the stationarity of the time series for each of the seasons, identified the ARIMA(p, d, q) model by comparing the characteristics of the ACF and PACF (correlogram testing), estimated the parameters by the ordinary least-squares (OLS) method, and checked the hypotheses of the model. The Akaike information criterion (AIC) was used to determine the best model, which then produced the final 168-h (one week) forecast in each of the seasons. Table I presents the values of RMSE, MAE, and MAPE for the price forecasting performance of the proposed ARIMA model.

TABLE I
COMPARISON OF FORECASTING RESULTS WITH ARIMA MODEL
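The ARIMA order selection described above can be sketched as follows: the series is seasonally differenced over 24 h, candidate ARIMA(p, 1, q) models are fitted, and the order with the lowest AIC is retained. The candidate grid and the use of statsmodels (whose estimator is maximum likelihood rather than the OLS estimation mentioned above) are assumptions made for illustration.

```python
# AIC-based selection over a small (p, q) grid after first seasonal (24-h)
# and first regular differencing.
import itertools
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def select_arima_by_aic(prices: pd.Series, max_p=3, max_q=3, season=24):
    deseason = prices.diff(season).dropna()          # first seasonal difference
    best = None
    for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
        if p == 0 and q == 0:
            continue
        try:
            fit = ARIMA(deseason, order=(p, 1, q)).fit()   # d = 1: regular difference
        except Exception:
            continue                                       # skip non-convergent fits
        if best is None or fit.aic < best[0]:
            best = (fit.aic, (p, 1, q), fit)
    return best   # (AIC, selected order, fitted model)
```
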
In this section, an artificial neural network (ANN) model is used to model the residuals obtained from the ARIMA forecast in each season, in order to estimate the nonlinear component. We propose ANN and hybrid (ARIMA-ANN) models for price forecasting and use MATLAB for training the ANN. The proposed ANN model is a three-layered feedforward neural network, which has the features of memory and learning, is trained by the Levenberg-Marquardt algorithm, and is used for forecasting the next 168 h (one week) of electricity prices. The hidden layer has 5, 10, 15, or 20 neurons, for finding the most suitable accuracy, and the output layer has one unit.

The same experiment was repeated to test and find the best number of neurons in the hidden layer. Table II shows the results of forecasting 168 h (one week), presenting the values of RMSE, MAE, and MAPE for the price forecasting performance of the proposed hybrid (ARIMA-ANN) model in each of the seasons.

TABLE II
COMPARISON OF FORECASTING RESULTS WITH ARIMA-ANN MODEL

From Table II, the performance with 5, 10, 15, and 20 neurons in the hidden layer is similar in value, with 15 neurons in the hidden layer giving the most suitable accuracy. Therefore, numerical results for actual and forecast prices in each of the seasons with the proposed hybrid (ARIMA-ANN) approach are shown in Figs. 3–6 for the summer, fall, winter, and spring, respectively.
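The hidden-size search summarized in Table II can be sketched as follows. Here X_train and y_train denote the lagged-residual inputs and targets from the hybrid sketch in Section III, X_test the corresponding test-week inputs, actual the observed test-week prices, and linear_fc the ARIMA forecast for that week; all of these names, and the use of MLPRegressor in place of the MATLAB Levenberg-Marquardt training, are assumptions.

```python
# Refit the residual network with 5, 10, 15, and 20 hidden neurons and keep
# the size whose hybrid forecast gives the lowest test-week MAPE.
import numpy as np
from sklearn.neural_network import MLPRegressor

def best_hidden_size(X_train, y_train, X_test, actual, linear_fc):
    scores = {}
    for size in (5, 10, 15, 20):
        ann = MLPRegressor(hidden_layer_sizes=(size,), activation="logistic",
                           solver="lbfgs", max_iter=2000, random_state=0)
        ann.fit(X_train, y_train)                       # lagged residuals -> residual
        hybrid_fc = linear_fc + ann.predict(X_test)     # eq. (8) for the test week
        scores[size] = float(np.mean(np.abs(actual - hybrid_fc) / actual) * 100.0)
    return min(scores, key=scores.get), scores          # best size and all MAPEs
```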

Fig. 3. Forecasting results of January 22–28, 2006.
Fig. 4. Forecasting results of May 21–27, 2006.
Fig. 5. Forecasting results of August 20–26, 2006.
Fig. 6. Forecasting results of October 22–28, 2006.

TABLE III
COMPARISON OF FORECASTING RESULTS WITH ARIMA, ANN, AND HYBRID MODELS

Table III summarizes the numerical results, where the comparison of the forecasting performance of the proposed ARIMA, ANN, and hybrid models is presented.

From Table III, it can be observed that the hourly MAPE, MAE, and RMSE of both the neural network and the hybrid model are better than those of the ARIMA model. The MAPE values obtained over the seasons range from 10.46% to 16.06% for the ARIMA model, from 10.03% to 15.62% for the ANN model, and from 9.98% to 15.57% for the hybrid model. The results of the hybrid (ARIMA-ANN) model show that by combining the two models together, the overall forecasting errors can be significantly reduced; in terms of MAPE, the percentage improvements of the hybrid model over the ARIMA and ANN models are 4.5% and 0.5%, respectively. Therefore, the hybrid model gives better predictions than either the ARIMA or ANN forecasts, and its overall forecasting capability is improved.
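For reference, the relative improvement quoted here follows from average MAPE values as (MAPE_reference - MAPE_hybrid) / MAPE_reference * 100; the small sketch below reproduces only the arithmetic, with placeholder numbers rather than the paper's seasonal averages.

```python
# Percentage improvement of the hybrid model's MAPE over a reference model.
def mape_improvement(mape_reference: float, mape_hybrid: float) -> float:
    return (mape_reference - mape_hybrid) / mape_reference * 100.0

# e.g. mape_improvement(13.0, 12.4) -> about 4.6 (illustrative values only)
```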

V. CONCLUSION

This paper presented an approach to the short-term price forecasting problem based on a hybrid correction method, which is a combination of ARIMA and ANN. The choice of forecasting model is an important factor in how to improve price forecasting accuracy. The linear ARIMA model and the nonlinear ANN model are used jointly, aiming to capture different forms of relationship in the time-series data. Hence, we propose a price correction method in which new price data that are suitable for training the neural network are generated by correcting the historical days' data with the help of price correction rates. For training the neural network, a backpropagation algorithm with online learning, based on the correction and forecast errors, is adopted. To verify the predictive ability of the proposed method, we performed simulations for three different cases: price forecasting by the ARIMA, ANN, and hybrid model approaches. The test results showed that the proposed forecasting method can provide a considerable improvement in price forecasting accuracy; in particular, the hybrid model gives better predictions than either the ARIMA or ANN forecasts, and its overall forecasting capability is improved.

REFERENCES

[1] A. D. Papalexopoulos and T. C. Hesterberg, "A regression-based approach to short-term load forecasting," IEEE Trans. Power Syst., vol. 5, no. 4, pp. 1535–1550, Nov. 1990.

[2] H. S. Hippert, C. E. Pedreira, and R. C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. Power Syst., vol. 16, no. 1, pp. 44–55, Feb. 2001.
[3] S. Rahman and O. Hazim, "A generalized knowledge-based short term load-forecasting technique," IEEE Trans. Power Syst., vol. 8, no. 2, pp. 508–511, May 1993.
[4] S. J. Huang and K. R. Shih, "Short-term load forecasting via ARMA model identification including non-Gaussian process considerations," IEEE Trans. Power Syst., vol. 18, no. 2, pp. 673–679, May 2003.
[5] H. Wu and C. Lu, "A data mining approach for spatial modeling in small area load forecast," IEEE Trans. Power Syst., vol. 17, no. 2, pp. 516–521, May 2003.
[6] T. Senjyu, H. Sakihara, Y. Tamaki, and K. Uezato, "Next day peak load forecasting using neural network with adaptive learning algorithm based on similarity," Elect. Mach. Power Syst., vol. 28, no. 7, pp. 613–624, 2000.
[7] D. Srinivasan, S. S. Tan, C. S. Cheng, and E. K. Chan, "Parallel neural network fuzzy expert system strategy for short-term load forecasting: System implementation and performance evaluation," IEEE Trans. Power Syst., vol. 14, no. 3, pp. 1100–1105, Aug. 1999.
[8] G. E. P. Box and G. Jenkins, Time Series Analysis, Forecasting and Control. San Francisco, CA: Holden-Day, 1970.
[9] E. D. McKenzie, "General exponential smoothing and the equivalent ARMA process," J. Forecasting, vol. 3, pp. 333–344, 1984.
[10] G. Zhang, E. B. Patuwo, and M. Y. Hu, "Forecasting with artificial neural networks: The state of the art," J. Forecasting, vol. 14, pp. 35–62, 1998.
[11] S. Makridakis, A. Anderson, R. Carbone, R. Fildes, M. Hibdon, R. Lewandowski, J. Newton, E. Parzen, and R. Winkler, "The accuracy of extrapolation (time series) methods: Results of a forecasting competition," J. Forecasting, vol. 1, pp. 111–153, 1982.
[12] R. Winkler, "Combining forecasts: A philosophical basis and some current issues," J. Forecasting, vol. 5, pp. 605–609, 1989.
[13] N. Kohzadi, M. S. Boyd, B. Kermanshahi, and I. Kaastra, "A comparison of artificial neural network and time series models for forecasting commodity prices," Neurocomputing, vol. 10, pp. 169–181, 1996.
[14] G. P. Zhang, "Time series forecasting using a hybrid ARIMA and neural network model," Neurocomputing, vol. 50, pp. 159–175, 2003.
[15] C. W. J. Granger and A. P. Anderson, An Introduction to Bilinear Time Series Models. Göttingen, Germany: Vandenhoeck and Ruprecht, 1978.
[16] H. Tong, Threshold Models in Non-Linear Time Series Analysis. New York: Springer-Verlag, 1983.
[17] R. F. Engle, "Autoregressive conditional heteroscedasticity with estimates of the variance of U.K. inflation," Econometrica, vol. 50, pp. 987–1008, 1982.
[18] J. G. De Gooijer and K. Kumar, "Some recent developments in non-linear time series modelling," J. Forecasting, vol. 8, pp. 135–156, 1992.
[19] G. U. Yule, "Why do we sometimes get nonsense-correlations between time series? A study in sampling and the nature of time series," J. R. Statist. Soc., vol. 89, pp. 1–64, 1926.
[20] H. Wold, A Study in the Analysis of Stationary Time Series. Stockholm, Sweden: Almqvist & Wiksell, 1938.
[21] K. Hornik, M. Stinchcombe, and H. White, "Using multi-layer feedforward networks for universal approximation," Neural Netw., vol. 3, pp. 551–560, 1990.
[22] M. S. Hung and J. W. Denton, "Training neural networks with the GRG2 nonlinear optimizer," Eur. J. Oper. Res., vol. 69, pp. 83–91, 1993.
[23] V. Subramanian and M. S. Hung, "A GRG2-based system for training neural networks: Design and computational experience," ORSA J. Comput., vol. 5, pp. 386–394, 1993.
[24] J. W. Denton, "How good are neural networks for causal forecasting?," J. Forecasting, vol. 14, pp. 17–20, 1995.
[25] I. S. Markham and T. R. Rakes, "The effect of sample size and variability of data on the comparative performance of artificial neural networks and regression," Comput. Oper. Res., vol. 25, pp. 251–263, 1998.
[26] Australian National Electricity Market. [Online]. Available: http://www.nemmco.com.au.

Phatchakorn Areekul (S'08) was born in Trang, Thailand, on January 15, 1973. He received the B.S. degree in electrical engineering from Rajamangala University of Technology Thanyaburi, Pathumthani, Thailand, in 1996 and the M.S. degree in electrical engineering from Prince of Songkla, Songkla, Thailand, in 2003. He is currently pursuing the Ph.D. degree in electrical and electronic engineering from the University of the Ryukyus, Okinawa, Japan.
His research interests include electric power system economics, power quality, security, and application of intelligent systems.
Mr. Phatchakorn is a student member of the Institute of Electrical Engineers of Japan and the Institute of Electronics, Information and Communication Engineers.

Tomonobu Senjyu (M'02–SM'06) received the B.S. and M.S. degrees in electrical engineering from the University of the Ryukyus, Okinawa, Japan, in 1986 and 1988, respectively, and the Ph.D. degree in electrical engineering from Nagoya University, Nagoya, Japan, in 1994.
In 1988, he joined the Department of Electrical and Electronics Engineering, University of the Ryukyus, where he is currently a Professor. His research interests include stability of ac machines, advanced control of electrical machines, power electronics, application of artificial intelligence techniques in power systems, renewable energy, and power economics.
Prof. Senjyu is a member of the Institute of Electrical Engineers of Japan.

Hirofumi Toyama was born in Okinawa, Japan, in 1984. He received the B.S. and M.S. degrees in electrical and electronics engineering from the University of the Ryukyus, Okinawa, Japan, in 2007 and 2009, respectively.
He is currently with the University of the Ryukyus. His research interests include electricity price forecasting, neural networks, and the unit commitment problem.
Mr. Toyama is a student member of the Institute of Electrical Engineers of Japan.

Atsushi Yona (M'08) was born in Okinawa, Japan, in 1982. He received the B.S. and M.S. degrees in electrical and electronics engineering from the University of the Ryukyus, Okinawa, Japan, in 2006 and 2008, respectively.
He is currently an Assistant Professor with the University of the Ryukyus. His research interests include wind generators, photovoltaics, and neural networks.
Mr. Yona is a member of the Institute of Electrical Engineers of Japan and the Institute of Electronics, Information and Communication Engineers.
