School of Electrical and Computer Engineering, University of Campinas, Campinas, SP, Brazil.
observe, the results for synthetic seismic data are promising and encourage the extension of this study.

2. PROBLEM STATEMENT

In the case of zero offset, the seismic trace can be modeled as:

$t(k) = r_p(k) \ast w(k) \ast m(k) + \eta(k)$,   (1)

where $m(k)$ denotes the impulse response of the multiple generating system, $w(k)$ is the wavelet and $\eta(k)$ represents an additive noise.

In order to recover the periodicity of the multiple events, the $\tau$-$p$ (also known as linear Radon) transform is applied to the set of collected traces.

3. ESN AND ELM

Consider the neural network architecture depicted in Figure 1, which represents the basic structure of echo state networks and extreme learning machines.

[Fig. 1: the input $u(n)$ reaches the hidden layer (dynamical reservoir), with state $x(n)$, through the input weights $W_i$; the reservoir has recurrent weights $W_r$, and the output $y(n)$ is produced by the output weights $W$.]

The network output is formed by linear combinations of the hidden-layer states, where $\mathbf{W} \in \mathbb{R}^{L \times N}$ gives the coefficients of such linear combinations and $L$ is the number of outputs.

Suppose that all the parameters of the network have been predefined, except the output weight matrix $\mathbf{W}$. Having access to a training set $\{u(n), d(n)\}$, $n = 1, \ldots, T$, where $d(n)$ represents the reference (target) signal for the network output, the training process is reduced to the problem of adapting the coefficients in $\mathbf{W}$, which can be formulated as a linear regression task, so that a closed-form solution in the least-squares sense can be obtained with the aid of the Moore-Penrose pseudo-inverse.

The possibility of keeping part of the neural structure untrained and fixed simplifies the adaptation process of the network and avoids the use of iterative gradient-based algorithms. This attractive idea forms the essence of both ELMs [5] and ESNs [6].

In the case of ELMs, the input weights ($\mathbf{W}_i$) can be randomly created according to any continuous probability density function, since the neural network preserves the universal approximation capability, as shown in [5, 8]. Additionally, a broad class of nonlinear functions can be employed as the activation function of the hidden neurons [9].

The presence of feedback connections in ESNs provides dynamical properties to the network processing, which can be particularly useful when the input signals present temporal dependency, as occurs in the context of adaptive filtering and time series prediction [6, 7]. However, it also brings an additional concern regarding the stability and dynamic behavior of the model. Hence, a fundamental result known as the echo state property (ESP) is explored for an adequate, yet simple, definition of the dynamical reservoir parameters [6, 10]. The ESP ensures that the effect of the initial state ($x(0)$) fades away as the reservoir is driven by the input sequence.
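To make the training procedure concrete, the following minimal sketch (not the authors' implementation; the tanh activation, the uniform initialization, the reservoir size N and the spectral radius rho = 0.9 are assumptions) fixes random weights and solves for the readout $\mathbf{W}$ in closed form with the Moore-Penrose pseudo-inverse; setting recurrent=False removes the feedback connections and yields the feedforward, ELM-like case:

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir_states(u, Wi, Wr):
    """Run the reservoir recursion x(n) = tanh(Wi u(n) + Wr x(n-1)),
    starting from x(0) = 0, and collect the states row by row."""
    T = u.shape[0]
    X = np.zeros((T, Wi.shape[0]))
    x = np.zeros(Wi.shape[0])
    for n in range(T):
        x = np.tanh(Wi @ u[n] + Wr @ x)
        X[n] = x
    return X

def train_network(u, d, N=100, rho=0.9, recurrent=True):
    """Draw fixed random weights and adapt only the readout W by
    least squares; u is (T, K), d is the (T,) target signal."""
    K = u.shape[1]
    Wi = rng.uniform(-0.5, 0.5, size=(N, K))
    Wr = rng.uniform(-0.5, 0.5, size=(N, N)) if recurrent else np.zeros((N, N))
    if recurrent:
        # Common ESP-motivated recipe: rescale Wr so that its
        # spectral radius equals rho < 1.
        Wr *= rho / np.max(np.abs(np.linalg.eigvals(Wr)))
    X = reservoir_states(u, Wi, Wr)
    W = np.linalg.pinv(X) @ d   # closed-form least-squares readout
    return Wi, Wr, W
```

The pseudo-inverse step replaces iterative gradient-based adaptation entirely, which is the practical appeal noted above; in the ELM case the state matrix reduces to a memoryless nonlinear projection of the inputs and the same least-squares step applies.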
[Figure: seismic trace with primary and multiple events as a function of tau (s), and the corresponding autocorrelation.]

1. $t_n(k) = \dfrac{t(k) - \bar{t}}{\sigma_t}$,

2. $\bar{t}_n(k) = \dfrac{t_n(k)}{t_{\max}^{n}}$,

where $\bar{t}$ and $\sigma_t$ denote the mean and the standard deviation of the trace, $t_n(k)$ is the normalized trace $n$ and $t_{\max}^{n}$ is its maximum absolute value.
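As a minimal sketch of these two normalization steps (the function name is ours, and we assume $t_{\max}^{n}$ is the maximum absolute amplitude of the standardized trace):

```python
import numpy as np

def normalize_trace(t):
    """Two-step trace normalization described above: standardize to
    zero mean and unit variance, then divide by the maximum absolute
    amplitude so the trace lies in [-1, 1]."""
    tn = (t - t.mean()) / t.std()   # step 1: zero mean, unit variance
    return tn / np.abs(tn).max()    # step 2: scale by max absolute value
```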
Two key parameters of this approach are the prediction lag ($L$) and the number of input samples used by the deconvolution filter ($K$). With respect to the prediction lag, the distance between the event at zero time in the autocorrelation function and the first parallel event serves as an indication of the period of the multiple events and, hence, can be used as an initial guess for $L$ [2]. The number of inputs, on the other hand, is usually selected taking into account the length of the wavelet and the width of the first parallel peak in the autocorrelation. However, when the trace [...]
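Following the guideline above, a hedged sketch of the initial guess for $L$: locate the first significant autocorrelation peak away from zero lag (the "first parallel event") and take its position as the lag. The threshold, the minimum lag, and the helper name are assumptions, not values from the paper:

```python
import numpy as np

def initial_prediction_lag(trace, min_lag=10, thresh=0.3):
    """Estimate the multiple period: distance (in samples) between the
    zero-lag peak of the autocorrelation and the first significant
    parallel peak, used as an initial guess for L."""
    r = np.correlate(trace, trace, mode='full')[len(trace) - 1:]
    r = r / r[0]                          # normalize so that r[0] = 1
    peaks = [k for k in range(min_lag, len(r) - 1)
             if r[k] > thresh and r[k] >= r[k - 1] and r[k] >= r[k + 1]]
    return peaks[0] if peaks else None    # None if no clear parallel event
```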
[...] trace, i.e., the prediction error at the output of the networks, considering an average of $N_E = 100$ independent experiments.
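The averaging over independent experiments can be sketched with the helpers from the earlier snippet (hypothetical names; averaging the error signal itself is our reading of the protocol):

```python
def average_prediction_error(u, d, NE=100):
    """Average the prediction error over NE independent experiments,
    re-drawing the fixed random weights in each run."""
    acc = np.zeros_like(d, dtype=float)
    for _ in range(NE):
        Wi, Wr, W = train_network(u, d)   # fresh random draw per run
        acc += d - reservoir_states(u, Wi, Wr) @ W
    return acc / NE
```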
[Plots: original trace and the prediction errors of the FIR filter and the ESN as a function of tau (s), with the corresponding autocorrelations as a function of lag; standard deviation curves for ELM and ESN.]
Fig. 7. Standard deviation as a function of [...] considering $N_E = 100$ independent experiments with ELM and ESN.

Fig. 8. Prediction error and the corresponding autocorrelation function for the ELM.
6. REFERENCES