
Iranian Journal of Chemical Engineering

Vol. 7, No. 4 (Autumn), 2010, IAChE

Research note

Product Yields Prediction of Tehran Refinery Hydrocracking Unit Using Artificial Neural Networks

M. Bahmani1, Kh. Sharifi2, M. Shirvani2∗

1- Department of Chemistry, Applied Chemistry Group, Tarbiat Moalem University, Dr. Mofatteh Street, Tehran, Iran.
2- Department of Chemical Engineering, Iran University of Science and Technology, Narmak Street, Tehran, Iran.

Abstract
In this contribution Artificial Neural Network (ANN) modeling of the hydrocracking
process is presented. The input–output data for the training and simulation phases of
the network were obtained from the Tehran refinery ISOMAX unit. Different network
designs were developed and their abilities were compared. Backpropagation, Elman
and RBF networks were used for modeling and simulation of the hydrocracking unit.
The residual error (root mean squared difference), correlation coefficient and run time
were used as the criteria for judging the best network. The Backpropagation model
proved to be the best amongst the models considered. The trained networks predicted
the yields of products of the ISOMAX unit (diesel, kerosene, light naphtha and heavy
naphtha) with good accuracy. The residual error (root mean squared difference)
between the model predictions and plant data indicated that the validated model could
be reliably used to simulate the ISOMAX unit. A four-lumped kinetic model was also
developed and the kinetic parameters were optimized utilizing the plant data. The result
of the best ANN model was compared to that of the kinetic model. The root mean
square error values for the kinetic model were slightly better than those of the ANN
model, but the ANN models are more versatile and more practical tools in applications
such as fault diagnosis and pattern recognition.

Keywords: Artificial Neural Networks, Hydrocracking Process, Product Yields

1- Introduction

1.1- Hydrocracking unit
The growing demand for middle distillates and the increasing use of heavy crude oils as feedstocks have made hydrocracking one of the most important upgrading processes in petroleum refineries. During hydrocracking, high molecular weight compounds are broken down to form low molecular weight compounds. The reaction takes place over a catalyst in a hydrogen-rich atmosphere, and other reactions such as hydrodesulfurization and hydrodemetallization occur simultaneously [1].

∗ Corresponding author: bahmani@tmu.ac.ir


A simplified flow diagram for this configuration is shown in Fig. 1. The fresh feed, combined with the recycled unconverted fraction, is passed downward through the reactor in the presence of hydrogen. Because hydrogen is consumed during hydrocracking, make-up hydrogen has to be added to the recycled hydrogen. The effluent of the hydrocracking reactor is passed through high- and low-pressure separators, where the hydrogen is recovered and recycled. The liquid product, denoted in Fig. 1 as a single-stage product, is sent to the fractionator, where the final products are separated from the unconverted fraction.

Figure 1. Simplified flow diagram of hydrocracking unit

Ideally, modeling of the hydrocracking process should take into account the various reactions and mechanisms that involve all the components in the petroleum reaction mixture. However, in actual practice it is very difficult to keep track of individual components and the reactions involved. This leads to an approximate yet quite sophisticated model that cannot be directly used by the field engineer to predict the product yields at different operating conditions. Fig. 2 shows one such reaction network, consisting of 10 lumps. This reaction network requires the determination of a large number of rate constants, namely 32. Moreover, the models prepared have a number of physical parameters inherent to the process that are usually unknown and have to be determined through a parameter estimation methodology [2, 3].

Figure 2. Hydrocracking reaction network based on discrete lumping approach (10 lumps: feed, diesel, kerosene, light naphtha, heavy naphtha and C1 to C5, connected by rate constants including K1 to K4, K9 and K11)


Kinetic modeling of heavy fraction hydrocracking has been investigated based on two main approaches: lumped models [1, 4-8] and detailed molecular models [9]. The lumped models have been based on discrete and continuous lumping. In the discrete lumping approach, the individual components in the reaction mixture are divided into discrete pseudo-components (lumps) based on physical properties such as true boiling point (TBP), carbon number (CN), and/or molecular weight (MW). The success of this approach depends on the number of lumps. A large number of lumps would result in more accurate model predictions, but this increases the number of kinetic parameters that should be determined [10].

1.2- Artificial neural networks
In cases where a deterministic model cannot adequately describe a system, the use of a neural network, which is actually a 'black box', can give better results. Deriving information from their biological counterparts, neural networks are based on the collaboration of highly interconnected simple functional units that express a complicated cross-correlation between dependent and independent variables [11]. Artificial neural networks (ANNs) offer a promising alternative to modeling for a number of reasons. ANNs are able to capture the nonlinearities in system behavior very effectively. They are based on plant data that is usually available from past operation of the unit. Additionally, a neural network model of the hydrocracking unit does not need complex analysis of the feedstock, is tolerant to partial and noisy data input, and can be improved continuously as more operational data are made available [12]. Backpropagation, Elman and Radial Basis Function (RBF) networks are commonly used in chemical process modeling [13]. The objective of this work was to explore the use of Backpropagation, Elman and RBF neural network architectures in creating a model of the hydrocracking process. Our approach is based on: (1) formulation of an artificial neural network (ANN) with the input and output variable values obtained from a hydrocracking unit of an operating refinery; (2) preprocessing the data, including (a) elimination of out-of-range data, (b) normalization of the input data and (c) principal component analysis to reduce the dimension of the input vectors; (3) dividing the input data randomly into training, validation and test sets; (4) training and testing the ANN with the refinery data and selecting the best architecture and parameters for these networks.
Backpropagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the user. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities. Standard Backpropagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of


the performance function. Properly trained Backpropagation networks tend to give reasonable answers when presented with inputs that they have never seen [14, 15]. RBF networks can require more neurons than standard feedforward Backpropagation networks, but often they can be designed in a fraction of the time it takes to train standard feedforward networks. They work best when many training vectors are available [16]. Elman networks are two-layer Backpropagation networks with the addition of a feedback connection from the output of the hidden layer to its input. This feedback path allows Elman networks to learn, recognize and generate temporal patterns as well as spatial patterns. The Hopfield network is used to store one or more stable target vectors. These stable vectors can be viewed as memories that the network recalls when provided with similar vectors that act as a cue to the network memory [17].
In this study, two different backpropagation networks were initially designed and their results were compared to actual plant data. The first model (model 1) is based on six input and four output variables, and the second model (model 2) has nine input and four output variables. Finally, the best backpropagation network was compared with the RBF and Elman networks.

2- Backpropagation neural network models
2.1- Backpropagation neural network model development
In the present work, extensive plant data from the ISOMAX unit of Tehran refinery have been collected. The data used in the model development are representative of plant operation, spanning the range of conditions that may be encountered during routine operation. The data must be pre-processed in order for the ANN to learn from them effectively.
Due to the large number of data sets collected, some statistical analysis was applied to the data, and values that appeared to be scattered far away from the majority were considered outliers and thus excluded from the data set. After removing the outliers, the database consisted of 1408 data points. The data were then normalized linearly between -1 and 1, considering the minimum and maximum values; this normalization was carried out in order to avoid using data spanning different orders of magnitude.
An effective procedure for reducing the dimension of the input vectors is principal component analysis. For this purpose, using the neural network toolbox in Matlab, those components that contribute less than 2% of the overall variation in the data set were discarded.
The refined database was then randomly divided into three distinct subsets. 60% of the database was used to train the model. Training a neural network is an iterative process in which a nonlinear optimization algorithm, typically a gradient descent method, is used to obtain the optimal values of the weights and biases.
The second subset, a 20% cut from the original database, was used for validating the ANN. The error on the validation set is monitored during the


training process. The validation error normally decreases during the initial phase of training, as does the training set error. However, when the network begins to overfit the data, the error on the validation set typically begins to rise. When the validation error increases for a specified number of iterations, the training is stopped, and the weights and biases at the minimum of the validation error are returned.
The third subset was 20% of the database and was used to test the ANN. This data set was in fact used to compare the different models. In this study the physical properties and operating conditions of the reactor and distillation section of the hydrocracking unit were taken as the inputs, and the yields of the products were taken as the outputs of the networks.

2.2- Designs of the backpropagation networks
The performance of an ANN is very dependent on the number of hidden layers and the number of hidden nodes in each layer. Feedforward networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Determination of the number of hidden nodes is one of the main aspects of designing the neural network architecture. Too small a number of hidden nodes limits the ability of the ANN to model the problem and, furthermore, such a network may not train to an acceptable error. On the other hand, too many hidden nodes result in the network memorizing the data rather than learning them for generalization. The number of hidden nodes is determined by varying the number of nodes, starting with only a few hidden nodes and then adding more nodes until the Mean Squared Error (MSE) computed on the training patterns comes to a minimum. The number of hidden nodes at that point is taken as the optimum.
In this work, for model 1 each neuron in the input layer corresponds to a particular feed property or reactor operating condition of the unit, and the output layer corresponds to the product yields. Fig. 3 shows the neural network architecture for model 1, consisting of an input layer, a hidden layer, and an output layer. Subsequently, several ANN structures with one hidden layer of sigmoid neurons and one output layer of linear neurons were tested to identify the most appropriate number of hidden nodes. Fig. 4 shows the effect of the number of hidden nodes on the MSE. The best results were obtained for the network architecture of 6×36×4, composed of 6 input nodes, 36 hidden nodes, and 4 nodes in the output layer.
In model 1 the operating conditions of the distillation section of the ISOMAX unit were not utilized, whilst in model 2 the operating conditions of the distillation section, such as the temperature of the middle tray, the pressure of the column and the amount of steam used, were also included as input data.
Fig. 5 shows the neural network architecture for model 2. In Fig. 6 the effect of the number of hidden nodes on the MSE for model 2 can be seen. It can be concluded from this figure that the best results were obtained for the network architecture consisting of 9 input nodes, 43 hidden nodes, and 4 nodes in the output layer. The number of neurons in the hidden layer in this model corresponded to the lowest prediction error, as was obtained for model 1. It can be seen that model 2 contains more hidden nodes than model 1


and therefore it is a more complex network. Moreover, the prediction error in model 2 is less than in model 1, since in model 2 the effect of the operating conditions of the distillation section of the hydrocracking process is also considered. The prediction error was defined as the MSE between measured values and those computed by the ANN model.

Figure 3. Network architecture for model 1

Figure 4. Mean square error as a function of the number of neurons in the hidden layer for model 1
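The node-selection loop described above (adding hidden nodes until the training MSE reaches a minimum) can be sketched in Python. This is an illustrative numpy implementation of a one-hidden-layer tanh/linear network trained by batch gradient descent, not the authors' Matlab toolbox code; the synthetic data and candidate sizes are placeholders:

```python
import numpy as np

def train_mlp(X, Y, n_hidden, lr=0.05, epochs=500, seed=0):
    """One-hidden-layer MLP (tanh hidden, linear output) trained by batch
    gradient descent on the MSE; returns (weights, final training MSE)."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden-layer activations
        err = H @ W2 + b2 - Y               # linear output minus targets
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1.0 - H ** 2)  # error backpropagated through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return (W1, b1, W2, b2), float((err ** 2).mean())

def select_hidden_nodes(X, Y, candidates):
    """Grid search over hidden-layer sizes, keeping the lowest-MSE size."""
    scores = {n: train_mlp(X, Y, n)[1] for n in candidates}
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (200, 6))            # 6 normalized inputs
Y = np.column_stack([np.tanh(X[:, :3].sum(1)),  # 4 synthetic "yields"
                     X[:, 1] * X[:, 2],
                     np.sin(X[:, 3]),
                     0.5 * X[:, 4:].sum(1)])
best, scores = select_hidden_nodes(X, Y, [2, 8, 16])
```

In the paper, the same kind of sweep run on the refinery data gave 36 hidden nodes for model 1 and 43 for model 2.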

Figure 5. Network architecture for model 2


Figure 6. Root mean square error as a function of the number of neurons in the hidden layer for model 2
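The preprocessing pipeline of Section 2.1, which feeds the training runs summarized in Figs. 4 and 6, can be sketched in Python (the paper used the Matlab neural network toolbox; the 3-sigma outlier rule and the split mechanics below are assumptions, since the text only states that scattered values were excluded and the data were divided 60/20/20):

```python
import numpy as np

def preprocess(X, seed=0):
    """Sketch of the pipeline: outlier removal, scaling to [-1, 1],
    PCA keeping components with >= 2% of the variance, 60/20/20 split."""
    # 1) drop rows with any feature beyond 3 standard deviations (assumed rule)
    z = np.abs((X - X.mean(0)) / X.std(0))
    X = X[(z < 3.0).all(axis=1)]
    # 2) linear min-max normalization to [-1, 1]
    lo, hi = X.min(0), X.max(0)
    Xn = 2.0 * (X - lo) / (hi - lo) - 1.0
    # 3) PCA: discard components contributing < 2% of the overall variation
    Xc = Xn - Xn.mean(0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    Xp = Xc @ vecs[:, vals / vals.sum() >= 0.02]
    # 4) random 60/20/20 train/validation/test split
    idx = np.random.default_rng(seed).permutation(len(Xp))
    n_tr, n_va = int(0.6 * len(Xp)), int(0.2 * len(Xp))
    return Xp[idx[:n_tr]], Xp[idx[n_tr:n_tr + n_va]], Xp[idx[n_tr + n_va:]]

X_raw = np.random.default_rng(2).normal(size=(500, 9))  # stand-in plant data
train, valid, test = preprocess(X_raw)
```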

2.3- Discussion of the performance of the backpropagation ANN models
In order to judge the effectiveness of the neural networks obtained, their results were compared with actual measurements from the hydrocracking plant. Fig. 7 shows the actual and ANN-predicted diesel yields for model 1, and Fig. 8 shows the actual and ANN-predicted diesel yields for model 2. The continuous line represents the daily predictions of the diesel yield by the ANN, whereas the actual values are indicated by symbols. It can be seen that the ANN predictions are in good agreement with actual refinery daily measurements, except for some points scattered around the main trend of the measured values.

Figure 7. Comparison of actual and backpropagation ANN predicted diesel yield for model 1

Figure 8. Comparison of actual and backpropagation ANN predicted diesel yield for model 2
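The agreement seen in these plots is quantified by fitting a line Output = m*Target + c and computing the correlation coefficient R; a minimal sketch with made-up diesel yield values:

```python
import numpy as np

def fit_quality(target, output):
    """Least-squares line output = m*target + c plus correlation coefficient R."""
    m, c = np.polyfit(target, output, 1)
    r = float(np.corrcoef(target, output)[0, 1])
    return float(m), float(c), r

# hypothetical yields: predictions closely tracking the measurements
target = np.array([0.70, 0.72, 0.73, 0.74, 0.75, 0.76])
output = target + np.array([0.002, -0.003, 0.001, 0.004, -0.002, 0.001])
m, c, r = fit_quality(target, output)
```

A slope near 1, an intercept near 0 and R near 1 together indicate accurate predictions, which is how the fits in Figs. 9, 10 and 14 to 16 are read.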


The prediction performance of the ANN for the kerosene yields for model 1 and model 2 is depicted in Figs. 9 and 10, when applied to the test data set. There is a close correspondence between the measured and predicted values. The equations of these correlations are shown in the figures; in these equations, Output represents the predicted and Target the measured values, respectively. The correlation coefficient R is computed and shown for each line; it is used to determine the strength of the correlation between the measured and predicted values, while the slope of the fitted line shows how much of the variation in the predicted values is accounted for by the model. Obviously, a slope close to one indicates high model prediction accuracy; ideally the slope and the Y-axis intercept of the fit should be 1 and 0, respectively. It can be concluded from these figures that a good fit was observed for the data set, with better predictions for model 2.

Figure 9. Correlation between measured and predicted values for model 1 (Kerosene: R=0.90953)

Figure 10. Correlation between measured and predicted values for model 2 (Kerosene: R=0.93032)

Due to the large number of data sets (typically 280 data points) used in the testing phase of the ANN, a number of outliers can be seen spread around the mean value. These outliers may be justified by reasoning that the actual plant data could inherently contain some errors.

3- Elman and RBF neural network models
3.1- Elman and RBF neural network model development
Similar to model 2 of the Backpropagation network, the Elman networks were designed with three layers consisting of an input layer, a hidden layer, and an output layer. The best architecture for these networks was achieved by varying the number of nodes in the hidden layer. A simplified network architecture of the Elman networks is presented in Fig. 11. In the RBF network, however, the best results are obtained by varying the value of the SPREAD parameter. SPREAD determines the width of the area in the input space to which each neuron responds. SPREAD


should be large enough that the neurons respond strongly to overlapping regions of the input space.

3.2- Comparison of Backpropagation, Elman and RBF networks
To evaluate the efficiency and abilities of the Backpropagation and Elman networks, the variation of MSE with the number of hidden neurons for these networks is shown in Fig. 12 and, additionally, Table 1 shows the variation of run time with the number of hidden neurons for these two networks. The MSE values for the backpropagation network are slightly lower than for the Elman network, but it can be seen from Table 1 that for the same number of hidden nodes the backpropagation network converges in seconds, whereas the Elman network takes 30 minutes to converge.
For the RBF network, the variations of MSE and run time with increasing SPREAD are shown in Table 2. Fig. 13 shows that the backpropagation network has lower MSE values. The lowest error for model 2 of the Backpropagation network was obtained for a network with 9 input nodes, 43 nodes in the hidden layer and 4 nodes in the output layer; this result was also obtained for the Elman network. The lowest error for the RBF network was achieved when the SPREAD parameter equals 45. The run time for the RBF network is less than that of the other two networks, but its error is higher.

Figure 11. Network architecture for Elman networks

Figure 12. Comparison of MSE of model 2 of the backpropagation and the Elman network
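The role of the SPREAD parameter can be illustrated with a small numpy sketch of an exact-interpolation RBF network, with one Gaussian neuron centered on each training vector, in the style of Matlab's newrbe; the 0.8326 factor mirrors the toolbox convention that a neuron's response falls to 0.5 at a distance equal to SPREAD (an assumption about the implementation the authors used):

```python
import numpy as np

def rbf_design(X, Y, spread):
    """Exact-interpolation RBF network: one Gaussian neuron per training
    vector; 'spread' sets the width of every neuron's response."""
    def phi(A, B):
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return np.exp(-(0.8326 * d / spread) ** 2)  # response 0.5 at d=spread
    W = np.linalg.solve(phi(X, X), Y)               # linear output-layer weights
    return lambda Xq: phi(Xq, X) @ W

# tiny 1-D demonstration: the network reproduces its training targets
X = np.linspace(-1.0, 1.0, 8)[:, None]
Y = np.sin(3.0 * X)
predict = rbf_design(X, Y, spread=0.5)
train_err = float(np.abs(predict(X) - Y).max())
```

A larger spread makes neighboring neurons overlap more; note that this construction is what gives the RBF network as many hidden neurons as training vectors, the drawback discussed below.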


Figure 13. Root mean square error as a function of the number of hidden neurons for Backpropagation and RBF networks

Table 1. Run time variation with the number of hidden nodes for Backpropagation and Elman networks

Table 2. MSE and run time variation with SPREAD for RBF networks

Overall, amongst these networks the backpropagation network proved to be the best network for modeling the hydrocracking unit. The Elman network is a recurrent network that carries a history of past states; it takes more CPU time and, furthermore, its error is higher than that of the backpropagation network. The drawback of the RBF network is that it produces a network with as many hidden neurons as there are input vectors; for this reason, RBF does not return an acceptable solution in cases where many input vectors are needed to properly define a network, as is typically the case.
The correlations between measured and predicted diesel yield are shown in Figs. 14-16 for the Backpropagation, Elman and RBF networks, respectively. The equations of these correlations are shown in the figures; in these equations, Output represents the predicted and Target the measured values


respectively. The correlation coefficient, R, for each line is computed and shown; it is used to determine the strength of the relationship between the measured and predicted values. There is a close correspondence between the measured and predicted values for each of the three networks, but the best correlation coefficient was obtained for the Backpropagation network.

Figure 14. Correlation between measured and predicted diesel yield for the Backpropagation network

Figure 15. Correlation between measured and predicted diesel yield for the Elman network

Figure 16. Correlation between measured and predicted diesel yield for the RBF network

4- Comparison of the ANN model with classical hydrocracking reaction models
A reaction network involving four lumps, as shown in Fig. 17, was utilized to model the plant data.

Figure 17. Reaction network for the hydrocracking 4-lumped kinetic model

A mass balance involving first order kinetics was used to model the hydrocracking process as follows:

dy_i / d(1/LHSV) = r_i = K_i y_i
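A minimal sketch of this lumped model follows, assuming (from Fig. 17 and Table 3) that the feed cracks in parallel to diesel, gasoline and gases; the rate constants below are illustrative values, not the fitted parameters of Table 3:

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius(k0, e, t):
    """Rate constant K = K0 * exp(-E / (R*T))."""
    return k0 * np.exp(-e / (R_GAS * t))

def four_lump_yields(k, lhsv, steps=2000):
    """Integrate dy_i/d(1/LHSV) = r_i = K_i*y_i by explicit Euler, with the
    feed cracking in parallel to diesel, gasoline and gases (first order)."""
    y = np.array([1.0, 0.0, 0.0, 0.0])   # feed, diesel, gasoline, gases
    dt = (1.0 / lhsv) / steps            # step in tau = 1/LHSV
    for _ in range(steps):
        rates = np.array([-sum(k) * y[0],        # feed consumed by all paths
                          k[0] * y[0], k[1] * y[0], k[2] * y[0]])
        y = y + dt * rates
    return y

y = four_lump_yields([0.5, 0.3, 0.1], lhsv=1.0)  # illustrative K values, 1/hr
```

With first-order parallel paths, the feed decays as exp(-(K1+K2+K3)/LHSV) and the product split is proportional to the individual rate constants.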


where y_i is the weight fraction of the lumps, r_i is the reaction rate for each lump and LHSV is the Liquid Hourly Space Velocity. The reactions were assumed to be first order, as recommended by Quader [4], and the rate constants were assumed to follow an Arrhenius-type dependency on temperature, K = K0 exp(-E/RT), in which K0 is the frequency factor and E is the activation energy. Minimization of an objective function based on the sum of squared errors between the experimental and calculated product compositions was applied to find the best set of kinetic parameters. This objective function was solved using the least-squares criterion with a nonlinear regression procedure based on Marquardt's algorithm, available in MATLAB. The optimized kinetic parameters are shown in Table 3.

Table 3. Optimized kinetic parameter values for the four-lumped kinetic model

                                              K0 (1/hr)    E (kJ/mol)
K1 (rate constant for production of diesel)   1.3618e10    19664.38
K2 (rate constant for production of gasoline) 3.068e11     52822.036
K3 (rate constant for production of gases)    1.225e11     20277.88

Figs. 18 and 19 show the parity plots for the kinetic model developed and, as can be seen, there is good agreement between the model and the plant data. The MSE values for the kinetic model are shown in Table 4. Table 5 shows the MSE values for the ANN models developed in this study. Apart from the gas yields, the kinetic model produces better MSE values compared to the ANN models. Although this clearly shows the accuracy of the kinetic model in predicting the yields, this finding has no direct impact on the usefulness of ANN models. ANNs are widely accepted as practical tools for simulation in circumstances in which a complex interconnection of many input variables strongly correlates with many output variables. The best ANN model developed in this study, i.e. model 2, relates various input parameters as shown in Fig. 4. The effect of various operational parameters, such as fractionator pressure and temperature, steam flowrate, H2 purity, recycle rate and reactor temperature and pressure, on the product yields can be easily studied by means of model 2, whereas modeling such effects by classical chemical engineering techniques would require rigorously modeling the whole hydrocracking plant. Additionally, it is widely accepted that pattern recognition and fault diagnosis studies in chemical engineering can be easily carried out by means of ANN-derived models, whereas similar studies utilizing classical chemical engineering techniques would pose a formidable task.

Figure 18. Parity plot for the gasoline yield of the 4-lumped kinetic model


Figure 19. Parity plot for the diesel yield of the 4-lumped kinetic model

Table 4. Mean squared error values obtained for the 4-lumped model developed

Product     MSE
Gases       0.0252
Gasoline    0.0086
Diesel      0.0091

Table 5. Mean squared error values obtained for the ANN models developed in this study

ANN model                  MSE
Backpropagation model 1    0.035
Backpropagation model 2    0.027
Elman                      0.03
RBF                        0.054

5- Conclusions
This work describes a methodology for developing artificial neural networks to predict actual industrial outputs. The methodology was used to develop and validate neural networks, which were then used to predict the yields of hydrocracking unit products. Backpropagation, Elman and RBF networks were used for this purpose. Initially, two different backpropagation networks were studied. The neural network models were trained using data from the Tehran refinery ISOMAX unit. The model predictions were in excellent agreement with industrial refinery data. The error in model 2 was less than in model 1, but the number of neurons in the hidden layer for model 2 was greater than in model 1. The accuracy of the predictions of the trained neural networks against the refinery data was determined by the root mean squared error and the coefficient of correlation. The Backpropagation, Elman and RBF networks were tested by comparing their MSE, run time and correlation coefficient between measured and predicted values. Among these networks, the Backpropagation network has the lowest MSE and the best correlation coefficient, and the RBF network has the minimum run time. Overall, the Backpropagation network is the best model to predict the complex behavior of the multivariate hydrocracking unit.
Finally, a four-lumped kinetic model was developed and its kinetic parameters were optimized utilizing the plant data. Although the kinetic model predicted the product yields better than the ANN model, the ANN models relate input variables from different parts of the ISOMAX plant to the product yields and are therefore more versatile than the kinetic model.

References
[1] Ancheyta, J., Sánchez, S. and Rodríguez, M. A., "Kinetic modeling of hydrocracking of heavy oil fractions: A review", Catalysis Today, 109, 76–92 (2005).
[2] Balasubramanian, P. and Pushpavanam, S., "Model discrimination in hydrocracking of vacuum gas oil using discrete lumped kinetics", Fuel, 87, 1660–1672 (2008).


[3] Chavarría-Hernandez, J.C., Ramírez, J. and Baltanás, M. A., "Single-event-lumped-parameter hybrid (SELPH) model for non-ideal hydrocracking of n-octane", Catalysis Today, 130, 455–461 (2008).
[4] Quader, S. A. and Hill, G. R., "Hydrocracking of gas oils", Ind. Eng. Chem. Process Des. Dev., 8 (1), 98 (1969).
[5] Laxminarasimhan, C.S., Verma, R.P. and Ramachandran, P.A., "Continuous lumping model for simulation of hydrocracking", AIChE J., 43 (9), 2645-2653 (1996).
[6] Callejas, M. A. and Martinez, M. T., "Hydrocracking of a Maya residue: kinetics and product yield distributions", Ind. Eng. Chem. Res., 38, 3285 (1999).
[7] Mohanty, S., Saraf, D. N. and Kunzru, D., "Modeling of a hydrocracking reactor", Fuel Process. Technol., 29, 1 (1991).
[8] Baltanas, M. A. and Froment, G. F., "Computer generation of reaction networks and calculation of product distributions in the hydroisomerization and hydrocracking of paraffins on Pt-containing bifunctional catalysts", Comput. Chem. Eng., 9, 77 (1985).
[9] Liguras, D. K. and Allen, D. T., "Structural models for catalytic cracking: 1. Model compound reactions", Ind. Eng. Chem. Res., 28, 655 (1989).
[10] Bhutani, N., Ray, A. K. and Rangaiah, G. P., "Modeling, simulation, and multi-objective optimization of an industrial hydrocracking unit", Ind. Eng. Chem. Res., 45, 1354-1372 (2006).
[11] Bellos, G.D., Kallinikos, L.E., Gounaris, C.E. and Papayannakos, N.G., "Modelling of the performance of industrial HDS reactors using a hybrid neural network approach", Chemical Engineering and Processing, 44, 505-515 (2005).
[12] Bhagat, P., "An introduction to neural nets", Chem. Eng. Prog., 55-61 (1990).
[13] Rallo, R., Ferré-Giné, J., Arenas, A. and Giralt, F., "Neural virtual sensor for the inferential prediction of product quality from process variables", Comput. Chem. Eng., 26, 1735-1754 (2002).
[14] Rumelhart, D. E., Hinton, G. E. and Williams, R. J., "Learning representations by back-propagating errors", Nature, 323, 533-536 (1986).
[15] Hagiwara, K. and Kuno, K., "Regularization learning and early stopping in linear networks", Proc. International Joint Conference on Neural Networks (IJCNN 2000), 511-516 (2000).
[16] Chen, S., Cowan, C.F.N. and Grant, P.M., "Orthogonal least squares learning algorithm for radial basis function networks", IEEE Transactions on Neural Networks, 2 (2), 302-309 (1991).
[17] Elman, J. L., "Finding structure in time", Cognitive Science, 14, 179-211 (1990).
