Karel Schmidt
Abstract
1 Introduction 9
2 Medical Background 10
2.1 Parkinson’s disease . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.1 Parkinson’s disease . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Parkinson’s disease treatment . . . . . . . . . . . . . . 10
2.2 Deep brain stimulation . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Neurosurgery . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.2 Navigation . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.3 Microrecording . . . . . . . . . . . . . . . . . . . . . . 13
3 Methods 14
3.1 State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Spike sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.3 Fractal analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3.1 Time delay embedding . . . . . . . . . . . . . . . . . . 17
3.3.2 Estimate the time delay . . . . . . . . . . . . . . . . . 17
3.3.3 Estimate embedding dimension . . . . . . . . . . . . . 18
3.3.4 Correlation dimension . . . . . . . . . . . . . . . . . . 21
3.3.5 Multifractal spectrum of generalized dimensions . . . . 22
3.3.6 Spectrum of scaling indices . . . . . . . . . . . . . . . . 24
4 Algorithms 27
4.1 Embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.1.1 Estimate the time delay . . . . . . . . . . . . . . . . . 27
4.1.2 Estimate embedding dimension . . . . . . . . . . . . . 28
4.2 Fractal dimensions . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2.1 Average Pointwise Mass Algorithms . . . . . . . . . . . 30
4.2.2 k-Nearest-Neighbor (Fixed-Mass) Algorithms . . . . . . 32
4.3 Specific implementations . . . . . . . . . . . . . . . . . . . . . 33
4.3.1 k-Nearest-Neighbor . . . . . . . . . . . . . . . . . . . . 33
4.3.2 Generalized correlation sum . . . . . . . . . . . . . . . 33
4.3.3 Generalized dimensions . . . . . . . . . . . . . . . . . . 34
4.3.4 Maximal and minimal generalized dimensions . . . . . 36
4.4 Spectrum of scaling indices . . . . . . . . . . . . . . . . . . . . 38
5 Results 39
5.1 Gold standard data . . . . . . . . . . . . . . . . . . . . . . . . 39
5.1.1 Types of data used . . . . . . . . . . . . . . . . . . . . 39
5.1.2 Embedding the gold standard signals . . . . . . . . . . 41
5.1.3 Generalized dimensions using k-nearest neighbor algo-
rithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.1.4 Calculating the generalized dimensions . . . . . . . . . 43
5.2 Measured data . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2.1 Embedding the measured data . . . . . . . . . . . . . . 50
5.2.2 Generalized correlation sums from measured data . . . 53
5.2.3 Statistical analysis . . . . . . . . . . . . . . . . . . . . 57
5.2.4 Generalized dimensions from measured data . . . . . . 57
5.2.5 Analyzing the spike train . . . . . . . . . . . . . . . . . 61
6 Conclusion 63
6.1 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Appendices 67
A State of the Art 69
A.1 Fractal analysis of spinal dorsal horn neuron discharges by
means of sequential fractal dimension D . . . . . . . . . . . . 69
A.2 Multifractal statistics and underlying kinetics of neuron spike
time-series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
A.3 Fractal patterns in auditory nerve-spike trains . . . . . . . . . 70
A.4 Fractal character of the neural spike train in the visual system
of the cat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
A.5 Characterization of non-linear dynamics in rat cortical neu-
ronal networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
A.6 Bursting, spiking, chaos, fractals and universality in biological
rhythms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
A.7 Multiplicative multifractal modeling and discrimination of hu-
man neuronal activity . . . . . . . . . . . . . . . . . . . . . . 73
B Source codes 74
B.1 embed.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
B.2 findembedparam.m . . . . . . . . . . . . . . . . . . . . . . . . 75
B.3 knnGolden.m . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
B.4 multifrac.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
List of Figures
5.18 Spike train detection. . . . . . . . . . . . . . . . . . . . . . . . 57
5.19 Example, signal 18, patient P. Red line is the extraction thresh-
old. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.20 Example, the spike train extracted using the QSort algorithm
from the signal 18, patient P. . . . . . . . . . . . . . . . . . . 58
5.21 Generalized correlation sums and Dq ’s for signals 6, 9, 13, 26
for patient P. . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.22 Plain signal, surrogate data and amplitude adjusted surrogate
data (top to bottom) for signal 13, patient P. . . . . . . . . . 60
5.23 Generalized correlation sums and Dq ’s for surrogate data from
signal 13, patient P. . . . . . . . . . . . . . . . . . . . . . . . . 60
5.24 Generalized correlation sums and Dq ’s for amplitude adjusted
surrogate data from signal 13, patient P. . . . . . . . . . . . . 61
5.25 Cao’s method functions E1 and E2 for a spike train extracted
from our signal. . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Glossary
CT computed tomography.
PD Parkinson’s disease.
Th thalamus.
Nomenclature
d embedding dimension
D2 correlation dimension
µ population mean
tj jump time, time interval between successive embedded vectors
w Theiler window
Chapter 1
Introduction
In this work we will analyze the microrecording signals from deep structures
of the human brain. The signals were recorded from patients with Parkinson's
disease during the stereotactic surgery performed for implantation of the
deep brain stimulation electrodes.
We will analyze both the raw data and the detected spike trains by means
of a multifractal analysis. Different numerical measures of chaos will be
discussed; this work focuses on the generalized dimensions and the spectrum
of scaling indices.
Several algorithms for calculating these measures of chaos will be dis-
cussed. The more promising algorithms will be chosen and implemented, and
the practical implementation will be thoroughly described.
The implementation will be tested on artificially generated data with
known properties. Later it will be applied to real-world signals. The results,
and the factors affecting them, will be discussed.
We will evaluate the suitability of the multifractal analysis for the mi-
crorecording signals.
Chapter 2
Medical Background
combined with carbidopa. Carbidopa delays the conversion of levodopa into
dopamine until it reaches the brain. Nerve cells can use levodopa to make
dopamine and replenish the brain’s dwindling supply. Although levodopa
helps at least three-quarters of parkinsonian cases, not all symptoms respond
equally to the drug. Bradykinesia and rigidity respond best, while tremor
may be only marginally reduced. Problems with balance and other symptoms
may not be alleviated at all. Anticholinergics may help control tremor and
rigidity. Other drugs, such as bromocriptine, pramipexole, and ropinirole,
mimic the role of dopamine in the brain, causing the neurons to react as
they would to dopamine. An antiviral drug, amantadine, also appears to
reduce symptoms.
Treating PD with surgery was once a common practice. But after the
discovery of levodopa, surgery was restricted to only a few cases. Studies in
the past few decades have led to great improvements in surgical techniques,
and surgery is again being used in people with advanced PD for whom drug
therapy is no longer sufficient. [20]
2.2.1 Neurosurgery
The stimulation electrodes are implanted by stereotactic surgery: a neurosurgeon
who specializes in central nervous system function uses a stereotactic
head frame and imaging techniques such as magnetic resonance imaging
(MRI) or computed tomography (CT) scanning to map the brain and localize
the target within the brain.
The patient has the stereotactic head frame attached to the head prior to
the operation. Local anesthesia is used when attaching the frame. The frame
provides a three-dimensional coordinate system for the brain mapping. Using
MRI, a three-dimensional image of the brain is acquired. The calibrations
on the head frame are merged with the brain image to form a computerized
map of the brain. This map becomes the blueprint for planning and mea-
suring the trajectories of the electrodes into the deep brain structures. The
entire head frame structure is then bolted to the operating table to maintain
the head in a fixed position throughout the operation. Two burr holes are
made in the top of the skull. A strong topical anesthetic is used to numb the
skin while these holes are drilled. Since there are no pain receptors in the brain,
there is no need for deeper anesthetic. In addition, the patient must remain
awake in order to report any sensory changes during the surgery and later to
test the effect of the stimulation. [3]
2.2.2 Navigation
Targeting STN is problematic because of its variable location and relatively
small size (20–30 mm³). A combination of anatomic imaging with a stereo-
tactic frame, atlas coordinates, and intraoperative neurophysiology is cur-
rently considered the most reliable approach for STN targeting.
Anatomic targeting employing various imaging modalities, such as ven-
triculography, CT and MRI, and intraoperative electrophysiologic testing are
combined with information from time-tested autopsy-based atlases. These
atlases provide the coordinates of the STN and other structures relative to the
established anatomic landmarks, primarily the commissures and ventricles.
In case of thalamic localization, where borders of nuclei are variable and
essentially indeterminable with current anatomic imaging, this atlas-based
approach remains the gold standard.
The STN tends to change its borders, size, and shape significantly with
age, whereas the distance between the anterior and posterior commissures,
used as the main references for all atlas-based stereotactic procedures for
STN targeting, is almost constant during life. Therefore the combination of
MRI with intraoperative MER mapping of STN borders has proven to be
the most reliable technique for the best DBS targeting of STN. [24]
2.2.3 Microrecording
As an additional means to ensure the accuracy of the surgical probe, electrical
brain mapping is performed during surgery. This technique uses micro-electrodes
that can record electrical activity from individual brain cells within
deep brain regions. The micro-electrodes are much smaller and more delicate
than the electrodes that provide the deep brain stimulation. They are used
to identify cells within the thalamus, globus pallidus, subthalamic nucleus
and adjacent brain structures, and help steer the main probe towards the
desired surgical target.
These electrodes are inserted towards the target location estimated based
on the MRI. The signal that is being recorded is visually and acoustically
analyzed by the surgeon. The analysis is based on known characteristics of
the signals from different brain structures and on the surgeon's experience. If
the STN is not registered on the first pass of the microelectrode, the micro-
electrode is inserted parallel to the previous pass, shifted anteriorly, posteriorly,
medially, or laterally, covering a circle of approximately 4 mm diameter.
Chapter 3
Methods
Keywords
• chaos, fractal[s]
Search queries
Dialog. Dtb: INSPEC. Query:
– All years (1994–2008)
– Entire text: fractal? AND spik? AND neuron?
17 records found. Relevant: A.1, A.2, A.3, A.4, A.6, A.7
Dialog. Dtb: INSPEC. Query:
– All years (1994–2008)
– Entire text: chao? AND neur? AND time-series AND cell
13 records found. Relevant: A.5.
INSPEC Results
Long-duration power-law correlation was found in a spike train in the visual
system of the cat A.4. Low-dimensional deterministic chaos in extra-cellular
recordings of mature rat cortical cells was reported by constructing the return
map and calculating the Lyapunov exponent A.5. Long-term correlations,
self-similarity of neuronal firing rates, power-law behavior of the Fano-factor
time curve and change in the firing pattern were analyzed A.3. Probabilistic
and multifractal properties of probability distributions of interspike intervals
were analyzed in a more recent study A.2. With bifurcation diagrams, it
was shown how spiking can be transformed to bursting via a complex type
of dynamic structure when the key parameter in the model is varied A.6. In
the above-mentioned works, long sequences of interspike intervals comprising
some thousands of spikes were analyzed; in this study, however, there are only
10-second-long records with some hundreds of spikes, and therefore these
methods cannot be used here.
Neural firing pattern in two brain areas, the globus pallidus externa (GPe)
and the globus pallidus interna (GPi), was observed and characterized by
multifractals. The generalized dimension spectrum Dq effectively differenti-
ates the two brain areas A.7. That study also used microrecording data from
deep brain stimulation surgery, and our work is inspired by it.
detection than thresholding the raw signal. If p(t) is locally larger than five
times the standard deviation of p(t) (or another factor, referred to below as
the extraction threshold), a candidate spike is detected. For each threshold
crossing, a sample of 2.5 ms is extracted from the filtered signal.
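The detection step just described can be condensed into a few lines. The following is only an illustrative Python stand-in (the actual pipeline in this work is Matlab/C++ based): it uses a single global standard deviation rather than a local estimate, and the function name, sampling rate argument, and refractory handling are our own simplifications.

```python
import math

def detect_spikes(signal, fs, factor=5.0, window_ms=2.5):
    """Detect candidate spikes where |signal - mean| exceeds factor * std.

    Returns the sample indices at which extraction windows start; after a
    detection, one full extraction window is skipped to avoid re-detecting
    the same spike."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in signal) / n)
    threshold = factor * std
    win = int(fs * window_ms / 1000.0)  # samples per extraction window
    spikes, i = [], 0
    while i < n:
        if abs(signal[i] - mean) > threshold:
            spikes.append(i)
            i += win  # skip the extracted 2.5 ms window
        else:
            i += 1
    return spikes
```

On a synthetic trace with two large deflections, the function returns the two window-start indices.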
The estimation of the number of neurons present, as well as the assign-
ment of each spike to a neuron, is based on a distance metric between two
spikes. Based on this distance, a threshold is used to decide (i) how many
neurons are present and (ii) whether each spike belongs uniquely to one neuron or
to noise, if unsortable. A crucial element of this approach is the threshold,
which is calculated from the noise properties of the signal and is equal to
the squared average standard deviation of the signal, calculated with a slid-
ing window. The threshold is thus not a parameter as it is automatically
defined by the noise properties of the recording channel and is equal to (in
a theoretical sense) the minimal signal-to-noise ratio required to be able to
distinguish two neurons.
Each newly detected spike is sorted as soon as it is detected. The raw
waveform of a newly detected, as yet unsorted spike is used to calculate
the distance to all already known mean waveforms (clusters). The spike
is assigned to the existing cluster to which it has minimal distance if the
distance is smaller than a threshold value. If the minimal distance is larger
than the threshold, a new cluster is automatically created. Every time a
spike is assigned to a cluster, the mean waveform of that cluster is updated
by taking the mean of the last C spikes that were assigned to this cluster. This
causes the mean waveforms of each cluster to change as well, which might
result in two clusters which have mean waveforms whose distance is less than
the threshold. In this case, the two clusters become indistinguishable and
they are thus merged. The spikes assigned to both clusters will be assigned
to the newly created cluster. [21]
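The online assignment step described above can be sketched as follows. This is an illustrative Python simplification, not the cited implementation: the Euclidean distance, the fixed threshold argument, and the `history` window stand in for the noise-derived threshold and the "last C spikes" of the text, and the cluster-merging step is omitted for brevity.

```python
def assign_spike(waveform, clusters, threshold, history=50):
    """Assign one spike to the nearest mean waveform or open a new cluster.

    `clusters` is a list of dicts {'mean': [...], 'members': [...]} and is
    updated in place; the mean is recomputed from the last `history` spikes
    assigned to the cluster. Returns the index of the chosen cluster."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best, best_d = None, float("inf")
    for c in clusters:
        d = dist(waveform, c["mean"])
        if d < best_d:
            best, best_d = c, d
    if best is None or best_d > threshold:
        # no sufficiently close mean waveform: create a new cluster
        clusters.append({"mean": list(waveform), "members": [list(waveform)]})
        return len(clusters) - 1
    best["members"] = (best["members"] + [list(waveform)])[-history:]
    m = len(best["members"])
    best["mean"] = [sum(w[k] for w in best["members"]) / m
                    for k in range(len(waveform))]
    return clusters.index(best)
```

A spike closer than the threshold updates the running mean; a distant spike opens a new cluster, mirroring the automatic cluster creation in the text.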
trajectory in state space. As the number of iterations goes to infinity, the
trajectory approaches a set of points that is called an attractor. The dimension
of this attractor is measured. When the dimension is a non-integer value, the
attractor is called a strange attractor and the dimension a fractal dimension. [29]
A short summary of our fractal analysis follows.
Figure 3.1: Auto mutual information of the Lorenz attractor, τs = 15.
The time delay between sampled values that form a vector τs can also
be estimated using the auto mutual information. The mutual information
I is the amount of information that is shared between two data sets. The
auto mutual information takes time delayed copies of one data set as the
second one. The first minimum of the auto mutual information is said to be
a preferred value for attractor reconstruction from time series, where one is
interested in independent coordinates, see Fig. 3.1.
I(X, Y) = \sum_{X, Y} P(X, Y) \log \frac{P(X, Y)}{P(X)\, P(Y)},    (3.1)
where P (X) is the probability of measuring a data value X, P (X, Y ) is
the joint probability of measuring X and Y at the same time. A low value
of the mutual information shows that there is little common information
between the data sets. A normalized value of 1 shows that the data sets are equal.
[8]
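A minimal histogram-based estimate of the auto mutual information of Eq. (3.1), together with the first-minimum rule for choosing τs, can be sketched as follows. This Python sketch is only an illustration; the computation in this work uses the amutual function of TSTOOL, and the equal-width binning (16 bins) is an arbitrary choice of ours.

```python
import math
from collections import Counter

def auto_mutual_information(x, max_lag, bins=16):
    """I(x_t; x_{t+tau}) for tau = 0..max_lag via a 2-D plug-in histogram."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins or 1.0          # guard against constant signals
    b = [min(int((v - lo) / width), bins - 1) for v in x]
    ami = []
    for tau in range(max_lag + 1):
        pairs = list(zip(b[:len(b) - tau], b[tau:]))
        n = len(pairs)
        pxy = Counter(pairs)
        px = Counter(p[0] for p in pairs)
        py = Counter(p[1] for p in pairs)
        # sum over occupied cells of P(X,Y) * log(P(X,Y) / (P(X) P(Y)))
        ami.append(sum(c / n * math.log(c * n / (px[u] * py[v]))
                       for (u, v), c in pxy.items()))
    return ami

def first_minimum(ami):
    """Lag of the first local minimum: last lag before the AMI rises."""
    for tau in range(1, len(ami)):
        if ami[tau] >= ami[tau - 1]:
            return tau - 1
    return len(ami) - 1
```

At lag zero the estimate reduces to the entropy of the binned signal, which is why the curve always starts at its maximum.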
a(i, d) = \frac{\| y_i(d+1) - y_{n(i,d)}(d+1) \|}{\| y_i(d) - y_{n(i,d)}(d) \|}, \quad i = 1, 2, \ldots, N - d\tau_s,    (3.2)
Before we perform numerical tests, it is necessary to define another quan-
tity which is useful to distinguish deterministic signals from stochastic sig-
nals. Let
E^{*}(d) = \frac{1}{N - d\tau_s} \sum_{i=1}^{N - d\tau_s} \left| x_{i+d\tau_s} - x_{n(i,d)+d\tau_s} \right|    (3.5)
where the meaning of n(i, d) is the same as above, i.e., it is the integer
such that yn(i,d) (d) is the nearest neighbor of yi (d). We define
Figure 3.2: Cao’s method functions E1(d) and E2(d) for the Henon map.
For time series data from a random set of numbers, E1(d), in principle,
will never attain a saturation value as d increases. But in practical compu-
tations, it is difficult to resolve whether the E1(d) is slowly increasing or has
stopped changing if d is sufficiently large. In fact, since available observed
data samples are limited, it may happen that the E1(d) stops changing at
some d although the time series is random. To solve this problem, we can
consider the quantity E2(d). For random data, since the future values are
independent of the past values, E2(d) will be equal to 1 for any d in this case.
However, for deterministic data, E2(d) is certainly related to d, as a result,
it cannot be a constant for all d; in other words, there must exist some d’s
such that E2(d) ≠ 1.
Cao recommends calculating both E1(d) and E2(d) for determining the
minimum embedding dimension of a scalar time series, and to distinguish
deterministic data from random data [5].
C(d, r) ∝ r^{D_2}
Since one does not know the correlation-dimension before doing this com-
putation, one checks for convergence of the estimated values of D2 in d. The
relevant caveats and misconceptions are reviewed, for example, in [29]. The
most prominent precaution is to exclude temporally correlated points from
the pair counting by the so called Theiler window w [29]. In order to become
a consistent estimator of the correlation integral from which the dimension
is derived the correlation sum should cover a random sample of points drawn
independently according to the invariant measure on the attractor. Succes-
sive elements of a time series are not usually independent. In particular, for
highly sampled flow data subsequent delay vectors are highly correlated.
D_2 = \lim_{r \to 0} \frac{\log C(r)}{\log r}    (3.8)
Figure 3.3: Henon map with the data points partitioned in a grid of boxes.
Figure is from [4].
D_q = \frac{1}{q - 1} \lim_{r \to 0} \frac{\log \sum_i P_i^q}{\log r}.    (3.9)
P_i^{q-1}, one can associate bulk with the generalized average probability per
box, \langle P_i^{q-1} \rangle^{1/(q-1)}, and identify Dq as a scaling of bulk with size. (\langle \cdot \rangle denotes
the expectation value of a function; for a function with a uniformly distributed
parameter, like the parameter i in this case, it denotes the arithmetic mean.)
For q = 2 the generalized average is the ordinary arithmetic average, and for
q = 3 it is a root mean square. It is not hard to show that the limit q → 1
leads to a geometric average. Finally, it is noted that q = 0 corresponds to
the Hausdorff or capacity dimension defined above.
For a uniform fractal, that is a fractal with all Pi equal, one obtains
a generalized dimension Dq that does not vary with q. For a nonuniform
fractal, however, the variation of Dq with q quantifies the nonuniformity. For
instance,
D_{\infty} = \lim_{r \to 0} \frac{\log (\max_i P_i)}{\log r}.    (3.10)
D_{-\infty} = \lim_{r \to 0} \frac{\log (\min_i P_i)}{\log r}.    (3.11)
It is clear from Eq. (3.9) that Dq decreases with increasing q. From this
fact and the above equations, it is clear that the maximum dimension D−∞
is associated with the least-dense points on the fractal and the minimum
dimension D∞ corresponds to the most-dense points. This should not be
surprising: The densest set possible is a point, which has dimension zero.
The notion of generalized dimension first arose out of a need to under-
stand why various algorithms gave different answers for dimension. A fur-
ther motivation came from the need to characterize more fully fractals with
nonuniform measure. These sets are sometimes called multifractals and are
characterized by an a priori measure that differs from the Hausdorff measure.
The point is that, rather than measuring just one dimension, one can
compute the full spectrum of generalized dimensions from D−∞ to D∞. [29]
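The fixed-box-size estimate of Dq in Eq. (3.9) can be sketched directly. The following Python illustration (function names are ours; the thesis implementation is Matlab/C++ based) partitions the points into a grid of boxes of side r, forms the sum of P_i^q, and takes the least-squares slope over a set of radii; it applies to q ≠ 1, since the prefactor 1/(q−1) is singular at q = 1.

```python
import math

def box_probabilities(points, r):
    """Occupation probabilities P_i of the grid boxes of side r."""
    counts = {}
    for p in points:
        key = tuple(int(math.floor(c / r)) for c in p)
        counts[key] = counts.get(key, 0) + 1
    n = len(points)
    return [c / n for c in counts.values()]

def generalized_dimension(points, q, radii):
    """D_q (q != 1) as the least-squares slope of
    log(sum_i P_i^q)/(q-1) versus log r, cf. Eq. (3.9)."""
    xs, ys = [], []
    for r in radii:
        s = sum(p ** q for p in box_probabilities(points, r))
        xs.append(math.log(r))
        ys.append(math.log(s) / (q - 1))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

For a uniformly filled square, all P_i are equal, so Dq should come out near 2 independently of q, matching the remark above about uniform fractals.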
and

q = \frac{\partial f}{\partial \alpha}, \qquad \tau = \alpha q - f.    (3.13)
An example of these functions can be seen in Fig. 3.4. In case either τ (q)
or f (α) is not differentiable, a more robust formulation is given by
Figure 3.4: Generalized dimension Dq as a function of q, f (q) and α(q) for a
typical multifractal. Figure is from [29].
ith element of the cover. Then, the number n(α, r) of cover elements with
scaling index between α and α + ∆α scales as n(α, r) ∼ r^{-f(α)} ∆α.
The curve f (α) in Fig. 3.5 is always convex upward, and the peak of the
curve occurs at q = 0. At this point f is equal to the fractal dimension D0 .
Also, the f (α) curve is tangent to the line f = α, and the point of tangency
occurs at q = 1. In general, the left-hand branch corresponds to q > 0 and
the right-hand branch to q < 0.
To see that this interpretation leads to Eqs. (3.12)-(3.15), consider the
fixed-box-size sum \sum_i P_i^q. The number of terms in this sum for which P_i = r^{\alpha}
is given by n(α, r). Thus

\sum_i P_i^q = \int n(\alpha, r) r^{q\alpha} \, d\alpha \sim \int r^{-f(\alpha)} r^{q\alpha} \, d\alpha \sim r^{\theta},    (3.16)
Figure 3.5: f (α) curve and significant values. Figure is from [29].
S_α = {X ∈ A; D_p(X) = α}    (3.17)
is the set of all points in A for which the pointwise dimension is α. The
Hausdorff dimension of the set Sα is given by f (α).
The reader who is hoping for a dramatic picture of this fractal set Sα will
probably be disappointed. As Sakar [22] and others have pointed out, Sα is
not necessarily a closed set and may even be dense in the original fractal A.
Thus, although the set Sα may have a lower Hausdorff dimension than A, it
is possible for the box-counting dimension to be the same. And a picture of
Sα would look just like the picture of the original set A.
The f (α) formalism provides a tool for testing the notion of universality,
which states that a wide variety of dynamical systems should behave in a
similar way and should leave the same characteristic signatures. Indeed,
several researchers, for example [11], have found physical systems whose f (α)
curves precisely matched the f (α) associated with a theoretical model of a
circle map undergoing a transition from quasi-periodicity to chaos. [29]
Chapter 4
Algorithms
4.1 Embedding
In this section a simple algorithm for the time delay embedding of a
reconstructed phase space is described. After the initial check that the input signal
is long enough for the required embedding, a simple loop runs over the
embedding dimensions and writes the values from the input signal with the time
delay τs and step of the jump time tj into the output, see source code 1.
for i=1:dim
    start = (i-1)*delay;                     % offset of the i-th coordinate
    d(:,i) = cin(start+1:shift:start+len);   % delayed copy of the input signal
end
3.3.2 and defined by Eq. (3.1). To calculate the auto mutual information
function itself we use the algorithm from the TSTOOL package for Matlab [19].
To use this function the input signal has to be converted to rank values in
the interval [0, 1), see source code 2. Then the amutual mex function from
TSTOOL is used, syntax in source code 3.
[y,i] = sort(ts);       % i is the permutation that sorts the signal
N = length(y);
y(i) = (0:N-1)/N;       % replace each sample by its rank, scaled into [0, 1)
The preferred value for the time delay τs is said to be the first minimum
of the auto mutual information function a. As we know from observations,
the auto mutual information function is always decreasing from τs = 0 to the
first minimum. Under this assumption we can find the minimum by searching
for the first zero crossing of the first difference of the function, see source code 4.
da = diff(a);                        % first difference of the AMI function
tau = find(da >= 0, 1, 'first') - 1; % last lag before the AMI starts to rise
Source code 4: Searching for the first minimum of the auto mutual information
function
At the end the auto mutual information function is plotted so that the
user can check the results, an example is in Fig. 3.1.
To calculate the function E(d) from Eq. (3.3) and the function E∗(d)
from Eq. (3.5) we use the cao mex function from the TSTOOL package for
Matlab [19], syntax in source code 5.
E1 = E(2:end) ./ E(1:end-1);         % E1(d) = E(d+1)/E(d)
E2 = Estar(2:end) ./ Estar(1:end-1); % E2(d) = E*(d+1)/E*(d)
The next part tries to estimate the embedding dimension as the point
where the function E1(d) stops changing, as illustrated in the previous chapter
in Fig. 3.2. This point is found as the minimum of the second difference of
the function E1(d), see source code 7.
ddE1 = diff(diff(E1));    % second difference of E1(d)
[dummy dim] = min(ddE1);  % index of the minimum gives the dimension estimate
Since there are more factors affecting the choice of the embedding dimension,
the functions E1(d) and E2(d) are also plotted, so that the user can make his
own evaluation of the results, as suggested by Cao [5].
The most widely used way to compute dimension is the correlation
algorithm, which estimates dimension based on the statistics of pairwise dis-
tances. The correlation algorithm is in the class of fixed-size algorithms
because it is based on the scaling of mass with size for fixed-size balls (or
grids). An alternative approach uses fixed-mass balls, usually by looking
at the statistics of distances to kth nearest neighbors. Both fixed-size and
fixed-mass algorithms can be applied to estimation of generalized dimension
Dq .
Correlation dimension
The most natural such averaging strategy was introduced by Grassberger
and Procaccia [10]. Here, a direct arithmetic average of the pointwise mass
function gives what Grassberger and Procaccia call the correlation integral:
where Θ is the Heaviside step function: Θ(x) is zero for x < 0 and one
for x ≥ 0. The importance of excluding i = j has been overlooked by some
authors, although Grassberger has stressed this point [9]. In fact, the case is
made in [29] for excluding all values of i for which |i − j| < W with W > 1.
From this, the correlation dimension D2 is defined:
D_2 = \lim_{r \to 0} \frac{\log C(r)}{\log r}, as in Eq. (3.8).
It is now straightforward to approximate C(r) with a finite data set:
C(N, r) = \frac{1}{N(N-1)} \sum_{i \neq j} \Theta(r - \| y_i - y_j \|), similar to Eq. (3.7).    (4.3)
In words,

C(N, r) = (number of distances less than r) / (number of distances altogether).
Thus the correlation algorithm provides an estimate of dimension based
purely on the statistics of pairwise distances. Not only is this a particularly
elegant formulation but it has the substantial advantage that the function
C(r) is approximated even for r as small as the minimum interpoint distance.
For N points, C(N, r) has a dynamic range of O(N²). Logarithmically
speaking, this range is twice that available to n(N, r) in the box-counting
method. It is also twice the range available in an estimate of the point-
wise mass function Bx (r) for a single point X. This greater range is the
one advantage that the correlation integral has over the average pointwise
dimension.
Generalized dimensions
A more general average than the direct arithmetic average used in the
previous paragraphs of this section 4.2.1 is given by

C_q(r) = \langle B_y(r)^{q-1} \rangle,    (4.4)
Note that q = 2 gives the direct arithmetic average that defines the
correlation dimension, and that the q = 1 average is associated with
the information dimension.
From a finite set of points, the generalized correlation sum C_q(N, r) can be
approximated by [16]

C_q(N, r) = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{1}{N-1} \sum_{j=1, j \neq i}^{N} \Theta(r - \| y_i - y_j \|) \right]^{q-1}    (4.6)
The scaling \langle r_k \rangle \sim k^{1/D} defines the dimension. (Actually, what was
computed in [28] was \langle r_k^2 \rangle, since for Euclidean distances this equation is
computationally more efficient, and the scaling was taken to be \langle r_k^2 \rangle \sim k^{2/D}.)
The author of [1] considers moments of the average distance to the kth
nearest neighbor and recommends keeping k fixed and computing a dimension
function from the scaling of average moments of r_k with the total number of
points N:
This dimension function D(γ) is related to the generalized dimension by
the implicit formulas
[dims, moments]=gendimest(dists,gammas,kmin_low,kmin_high,kmax)
computation of By(r) as is. The part of the code that is responsible for raising
the By(r)'s to the power of q − 1 is in source code 9.
The outermost for loop runs over all reference points. The original part,
where the neighboring points within the distance r are searched, is left out.
Each of the points found is added to the sum for the corresponding distance,
here sums[ni]++. The number of found neighbors is divided by the number of
searched pairs, and that gives the By(r). Each By(r) is raised to the power
of q − 1 using the C++ function pow. That is the implementation of the
inner part of the arithmetic average in Eq. (4.4).
To finish the computation of Eq. (4.4), just the arithmetic average has
to be evaluated. The summation was done in source code 9. The division
by the number of all reference points, for which the distances to neighbors
were searched, is done in source code 10.
if (total_pairs > 0) {
    double sum = 0;
    for (long bin=0; bin < bins; bin++) {
        sum = corrsums[bin];
        corrsums[bin] = (sum / R);   /* average over the R reference points */
    }
}
correlation sums and the respective distances r for which the generalized
correlation sums were calculated. The distances are logarithmically spaced
in order to get equal spacing in the log-log plot. The limit in Eq.
(4.5) can be represented as the scaling of log Cq(r) with log r. In a practical
realization there is a need for user feedback, since the log-log plot is not
linear everywhere. A linear scaling region has to be found manually [29] [15],
see source code 11.
The slope of log Cq(r) versus log r is the Dq of Eqs. (3.9) and (4.5). It is
fitted in the scaling region using the least squares method. The function lscov
is a standard Matlab function. The standard deviation of this fit is also
calculated to assess the accuracy of the method. The process is shown in
source code 12.
figure; plot(logR, logC);          % show the log-log plot to the user
a = input('scaling region from:');
b = input('scaling region to:');
ScalingRegion = find((logR > a) & (logR < b));
Source code 11: Requesting the user to select the scaling region
Source code 12: Fitting the slope using the least squares method
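The fit just described can be sketched as follows. This Python illustration (our own stand-in for the lscov-based Matlab code) returns both the least-squares slope over the selected scaling region and the standard error of that slope, which is the accuracy measure mentioned above.

```python
import math

def fit_slope(logr, logc):
    """Least-squares slope of logc versus logr, together with the
    standard error of the slope estimated from the fit residuals."""
    n = len(logr)
    mx, my = sum(logr) / n, sum(logc) / n
    sxx = sum((x - mx) ** 2 for x in logr)
    slope = sum((x - mx) * (y - my) for x, y in zip(logr, logc)) / sxx
    intercept = my - slope * mx
    sse = sum((y - (slope * x + intercept)) ** 2
              for x, y in zip(logr, logc))
    stderr = math.sqrt(sse / (n - 2) / sxx)   # standard error of the slope
    return slope, stderr
```

On perfectly collinear data the standard error vanishes; residual scatter around the fitted line makes it grow.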
4.3.4 Maximal and minimal generalized dimensions
To estimate the D∞ and D−∞ we implemented Eqs. (3.10) and (3.11).
The practical implementation is a small modification of source codes 9
and 10.
for (long n=0; n < R; n++) {
  ...
  sums[ni]++;                /* count the neighbors found in this distance bin */
  }
}
if (pairs > 1) {
  double suml = 0;
  double sumk = 0;
  for (long bin=0; bin < bins; bin++) {
    suml += sums[bin];       /* cumulative neighbor count up to this bin */
    sums[bin] = 0;
    if (suml > 0) {
      sumk = suml / pairs;   /* pointwise mass By(r) */
      if (sumk > corrsums[bin]) {
        corrsums[bin] = sumk; /* keep the maximal By(r) found so far */
      }
    }
  }
}
}
The differing part of the function is shown in source code 13. The outermost
for loop runs over all reference points. The original part of the TSTOOL mex
function, where the neighboring points within the distance r are searched,
is left out. Each of the points found is added to the sum for the corresponding
distance, here sums[ni]++. The number of found neighbors is divided by the
number of searched pairs, and that gives the By(r). Each By(r) is compared
to the value already stored in corrsum[bin]; if the new value is bigger, it is
stored in corrsum[bin]. This way we end up with the maximal By(r) stored
in corrsum[bin], and that is the output of this function.
The D∞ is the scaling of the previous function with r. In a practical
realization there is a need for user feedback, since the log-log plot is not linear
everywhere. A linear scaling region has to be found manually [29] [15], see
source code 11 in previous section 4.3.3.
The slope of log max By(r) versus log r is the D∞ of Eq. (3.10). It is fitted
in the scaling region using the least squares method. The standard deviation
of this fit is also calculated to assess the accuracy of the method. The
process is shown in source code 12 in the previous section 4.3.3.
Source code 14 shows the differing part of the algorithm for finding the minimal By (r). The only difference is that the new value By (r), stored in sumk, is compared with the smallest value so far in corrsums[bin]. We are looking for a smaller value; if one is found, it is stored in corrsums[bin] and sent to the output.
The least squares slope fitting is the same as in the case of D∞ .
4.4 Spectrum of scaling indices
Our implementation also calculates the spectrum of scaling indices f (α). The implementation of the Legendre transformation of τ (q) (3.12) is shown in source code 15.
alfa=diff(tau)’;
f=(-(qzero-1):(q-qzero-1)).*alfa-tau(1:numofqs-1)’;
Source code 15: Legendre transformation of τ (q)
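The two Matlab lines can be mirrored in Python. The sketch below is illustrative only (legendre_spectrum is our name): it approximates α = dτ /dq by a forward finite difference and evaluates f (α) = qα − τ (q), with an explicit q axis instead of the index arithmetic of the Matlab code.

```python
import numpy as np

def legendre_spectrum(q, tau):
    """Legendre transform of tau(q): alpha = d tau/dq (forward
    differences), f(alpha) = q*alpha - tau(q).  As in source code 15,
    alpha and f have one sample fewer than q."""
    q = np.asarray(q, dtype=float)
    tau = np.asarray(tau, dtype=float)
    alpha = np.diff(tau) / np.diff(q)      # numerical derivative of tau
    f = q[:-1] * alpha - tau[:-1]          # f(alpha) = q*alpha - tau(q)
    return alpha, f
```

As a sanity check, a monofractal with τ (q) = D(q − 1) collapses to the single point α = f = D for every q.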
Chapter 5
Results
dx/dt = σ(y − x)    (5.2a)
dy/dt = ρx − y − xz    (5.2b)
dz/dt = xy − βz    (5.2c)
σ, ρ and β are parameters; we used the values σ = 10, ρ = 28, β = 8/3.
For the calculations we again took the x-component.
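For reference, the integration can be sketched as follows. This is an assumption on our part: the thesis uses its own lorentz routine (appendix B.3); here a plain fixed-step RK4 integrator stands in for it.

```python
def lorenz_x(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Integrate the Lorenz system (5.2) with a fixed-step RK4 scheme
    and return the x-component, as used for the reconstruction."""
    def deriv(s):
        x, y, z = s
        return (sigma * (y - x), rho * x - y - x * z, x * y - beta * z)

    s = (1.0, 1.0, 1.0)                    # arbitrary initial condition
    xs = []
    for _ in range(n):
        k1 = deriv(s)
        k2 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt * (a + 2 * b + 2 * c + d) / 6.0
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        xs.append(s[0])
    return xs
```

With these parameter values the x-component stays bounded on the butterfly attractor and oscillates between the two lobes.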
Figure 5.1: Attractor of the Henon map reconstructed from its x-component.
5.1.2 Embedding the gold standard signals
Figure 5.3: Auto mutual information of the x-component of the Henon map.
We calculated the auto mutual information using the amutual mex function from the TSTOOL package. The results for the Lorenz attractor were already shown in chapter 3 in Fig. 3.1, giving τs = 15. The results are in agreement with the literature [5]. For the Henon map the results shown in Fig. 5.3 do not show a distinct minimum and we used the time delay τs = 1; this is also in agreement with the literature [5].
To find the embedding dimension we used Cao’s method [5]. The results are d = 2 for the x-component of the Henon map and d = 3 for the x-component of the Lorenz attractor. The results for the Henon map are shown in Fig. 3.2 and for the Lorenz attractor in Fig. 5.4. These figures look exactly the same as in Cao’s article [5].
Figure 5.4: Cao’s method functions E1(d) and E2(d) for the Lorenz attractor.
For the Henon map the Dq curve was decreasing only for q ∈ <1, 10>; outside of this interval the results contradicted the theory that Dq is a monotonically decreasing function. Nevertheless, the correlation dimension was estimated to be D2 = 1.207, which is close to the theoretical value D2 = 1.25 ± 0.02 [10].
For the Lorenz attractor the Dq curve was decreasing only for q ∈ <−3, 8>; outside of this interval the results contradicted the theory that Dq is a monotonically decreasing function. Nevertheless, the correlation dimension was estimated to be D2 = 2.005, which is close to the theoretical value D2 = 2.05 ± 0.01 [10].
Because of the limits of this algorithm we will further focus on the gen-
eralized correlation sum algorithm.
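The generalized correlation sum itself can be sketched compactly. The following brute-force Python version is illustrative only (the thesis implementation works on a nearest-neighbor tree, and q = 1 needs the separate information-dimension limit): it averages the pointwise mass Bi (r) raised to the power q − 1, omitting zero-mass terms for q ≤ 0 as the thesis code does.

```python
import numpy as np

def generalized_corrsum(points, radii, q):
    """Generalized correlation sum C_q(r) = < B_i(r)^(q-1) >_i, where
    B_i(r) is the fraction of other points within distance r of point i.
    Zero-mass terms are omitted for q <= 0, as in the thesis code.
    Brute force; q = 1 would need the information-dimension limit."""
    points = np.asarray(points, dtype=float)
    if points.ndim == 1:
        points = points[:, None]
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    cq = []
    for r in radii:
        b = (d <= r).sum(axis=1) / (n - 1)   # pointwise mass B_i(r)
        if q <= 0:
            b = b[b > 0]                     # omit zero-measure terms
        cq.append(float(np.mean(b ** (q - 1))) if b.size else 0.0)
    return np.array(cq)
```

For q = 2 this reduces to the plain correlation sum; on uniformly distributed points on a line its log-log slope is close to 1, as expected for D2 of a one-dimensional set.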
Figure 5.6: Scaling of the generalized correlation sum of the Henon map.
To estimate the generalized dimension we calculated the generalized correlation sums for various q’s. We have chosen the interval q ∈ <−10, 10> because all the tendencies can be seen in this interval.
To estimate D∞ we calculated the scaling of the maximal measure Pi , see Eq. (3.10), and for D−∞ the scaling of the minimal measure Pi , see Eq. (3.11).
In Fig. 5.6 the scaling for various q’s is plotted. Significant q’s are labeled. The linear scaling region is highlighted with a red line for q = −∞ and q = ∞. It is important to note that we have plotted log Cq (r)/(q − 1) versus log r.
The difference between the slopes for different q’s would be bigger if we plotted just log Cq (r) versus log r. Furthermore, in such a plot the slopes for q ≤ 0 would be negative, since the measure always satisfies 0 < Pi < 1, and when it is raised to a negative exponent, the smaller the value the bigger the result. We prefer to compare the graphs directly in one plot and therefore we have chosen to plot log Cq (r)/(q − 1) versus log r.
The graph for q = 2 is linear in almost the whole interval in which it was calculated. We expected that, since for q = 2 our algorithm works almost exactly the same way as the original correlation sum algorithm from the TSTOOL package [19].
With increasing q the slope of the graph decreases, as expected from the theory in section 3.3.5. The shape of the graph becomes more like the shape of the graph of the maximal measure Pi ; more precisely, of its logarithm log Pi , which in this figure is labeled q = ∞.
The stair step on the graph q = ∞ for small log r < −4.5 is the effect of the discretization introduced while generating the Henon map with a finite number of points. Similar behaviour of the correlation sum can be found in [29] as an example of the effect of discretization.
The small intrinsic oscillation on the graph q = ∞ can be caused by the lacunarity of the attractor. Lacunarity is a measure of how the fractal fills space: if the fractal is dense the lacunarity is small, and the lacunarity increases with coarseness [29].
The graph q = 0 is first decreasing and later increasing. This is due to the fact mentioned at the end of section 4.2.1. For log r < −3 there is a significant number of points that have the measure Pi equal to zero. These points contribute to the sum with a zero, but theoretically they should contribute with something like ∞, which practically cannot be realized. At log r ≈ −3 the biggest amount of points have Pi = 1/N , where N is the total number of points. These points contribute to the sum with (1/N )^−1 = N and the correlation sum C0 (r) is at its maximum at log r ≈ −3. In our figure it is the minimum of the graph q = 0, because we plotted log Cq (r)/(q − 1) versus log r.
As q → −∞ the shape of the graphs gets similar to the graph q = −∞. The graph q = −∞ is equal to 1/N for log r < −2.35; this means that up to this scale the least dense region of the attractor has only one relevant neighboring point within the distance r.
For q smaller than approximately −4 there are two regions with oscillations. This can again be caused by the lacunarity of the attractor of the Henon map.
approximately as accurate as the k-nearest neighbor algorithm evaluated in
the previous section 5.1.3.
Lorenz attractor
We also analyzed the x-component of the Lorenz attractor embedded with
dimension 3 and with time delay τs = 15.
Figure 5.9: Scaling of the generalized correlation sum of the Lorenz attractor.
For q < 0 we can see a heavy fluctuation for 0.5 < log r < 1. We do not think this can be attributed to the points with Pi = 0, since for r this large there should not be any such points any more. Theiler [29] mentions an anomalous shoulder that can be caused by auto-correlation in the time series, but the shape does not correspond with the shoulder illustrated by Theiler. The authors of [6] have made a thorough multifractal analysis of the Lorenz attractor, but they have not reported abnormalities like this. This abnormality is therefore without an explanation. Nevertheless, the rest of the graph can be used for the estimation of the slope and we used it.
from [10].
The spectrum of scaling indices f (α) for the Lorenz attractor constructed from τ (q) = Dq (q − 1) using the Legendre transformation is shown in Fig. 5.11.
We have plotted f (α) calculated only from Dq ’s for q > −2. We have again limited the number of q’s from which the Legendre transformation was calculated, because of the limited interval of the correct shape of the τ curve.
The graph shows only the left part of the curve, which represents q > 0. This part of the f (α) curve corresponds to the theory and heads toward the point α = D∞ , f (α) = 0, as in Fig. 3.5.
Also the peak of the f (α) curve has the value of D0 , as expected from theory.
Figure 5.12: Raw data of a signal from STN.
Figure 5.13: Operation protocol from DBS implanting operation.
Table 5.1: Codes of the signals used for the analysis for reference to our
database.
(a) signal 17, patient P (b) signal 13, patient P
that Cao’s method is not able to determine the embedding dimension in this case. In fact the results shown in [5] for white noise look very similar. That means that for the embedded signal 17, patient P, with the time delay τs = 4, there is probably no chaotic behaviour.
As Cao did in his work, we tried the time delay τs = 1. The results are in Fig. 5.15b. The E1 and E2 functions are correlated and the E1 function attains a saturation value at d = 4. We applied Cao’s method to more randomly chosen signals and the results for all of them were very similar to Fig. 5.15b.
After this analysis of embedding we have chosen the following embedding parameters: τs = 1 and d = 4. We will embed all the signals in the following analysis with these parameters.
(a) τs = 4 (b) τs = 1
Figure 5.15: Cao’s method functions E1 and E2 for the measured data signal
17, patient P.
log Cq (r)/(q − 1) < 3.7. According to [29] this behaviour is typical for a non-chaotic attractor. Also, the graph becomes saturated long before r reaches the value equal to the size of the attractor. According to [15] this is typical for a correlation sum of white noise. The graph q = 2, i.e. the plain correlation sum, has aspects that do not support chaotic behaviour of the measured signal 17, patient P.
The graph q = ∞ has for small r a shape similar to the same graph for the Henon map in Fig. 5.6. For large r it attains saturation before the size of the attractor is reached, similarly to the graph q = 2. With increasing q the slope of the graphs decreases and the shape of the graphs becomes more like that of the graph q = ∞.
The graph q = 0 has two different slopes in its region and it was difficult to find an appropriate scaling region. The graph q = −∞ has a very long flat part for r < 0.7. This means that the attractor is scattered much more than the attractors of the artificial signals. With decreasing q the shape becomes very irregular. We will discuss the properties of the generalized correlation sum for q ≤ 0 after we look at the results for signal 13, patient P.
We calculated the generalized correlation sums of signal 13, patient P, for the interval q ∈ <−6, 6>. To estimate D∞ we calculated the scaling of the maximal measure Pi , see Eq. (3.10), and for D−∞ the scaling of the minimal measure Pi , see Eq. (3.11). We used N = 5000 random reference points from the reconstructed attractor to speed up the computation.
In Fig. 5.17 the scaling for all q’s is plotted. Significant q’s are labeled. The linear scaling regions are again highlighted with a red line for all q’s.
Figure 5.16: Scaling of the generalized correlation sum of signal 17, patient P.
The graph q = 2 for signal 13, patient P, has similar properties as the same graph for signal 17, patient P. It seems to come from a non-chaotic attractor or a noisy signal. Also the graph q = ∞ and all the graphs for q > 0 for signal 13, patient P, look similar to these graphs for signal 17, patient P.
Graphs for q ≤ 0 are very irregular and finding a linear scaling region is
almost impossible. We will now focus on the graph q = −∞ and discuss the
possible source of the irregularity.
As we mentioned earlier in section 4.2.1, the main problem for q ≤ 0 are the points where Pi = 0; this is the term in the generalized correlation sum that is raised to the power of q − 1, see Eq. (4.5), appearing as By (r) in Eq. (4.4). These zeros can figure in the sum for very large r’s. Our implementation omits these terms, since the calculation could not be made at all with zeros raised to the power of q − 1 for q ≤ 0.
As long as the attractor is regular and dense in the phase space, e.g. with low lacunarity as defined in section 5.1.4, there are no more points with the term Pi equal to zero already for relatively small r. In such a case this problem causes only small irregularities, and only for r close to the minimal interpoint distance.
Figure 5.17: Scaling of the generalized correlation sum of signal 13, patient
P.
However, in this case the attractor is probably scattered a lot, since the original signal is noisy. Thus there are points with Pi = 0 for r much larger than the minimal interpoint distance. The graph q = −∞ is calculated according to Eq. (3.11) as the scaling of the minimal measure Pi with r. When we follow this graph q = −∞ from the smallest r, it has a constant value of the minimal Pi = 1/N . For log r ≈ 0.5 the smallest Pi found in the attractor starts to increase. But there are some points in this attractor which have their nearest neighbor point at a distance log r ≈ 1.2; it may even be a single point. While computing Cq (r) for q = −∞ only the smallest Pi is counted into Cq (r). For all log r < 1.2 these points with Pi = 0 were omitted, but when one of these points gets the value Pi > 0 for log r ≈ 1.2, it becomes the smallest Pi and the graph falls down to this value.
This graph is a perfect example of this problem and it is a proof that the correlation algorithm is inaccurate for q ≤ 0. Therefore we will focus on the analysis of the generalized dimension for q > 0.
5.2.3 Statistical analysis
In table 5.2 we can see features extracted from ISI spike trains by conventional statistical methods. For a description of the procedure of spike detection see section 3.2. The extraction threshold used was 3.3 and the minimal number of spikes to form a neuron during the clustering procedure was 100. The procedure of spike detection and spike sorting can be seen in Fig. 5.18. In Fig. 5.20 there is an example of an extracted spike train. From table 5.2 it can be seen that the parameters differ for different parts of the human brain.
Figure 5.19: Example, signal 18, patient P. The red line is the extraction threshold.
Figure 5.20: Example, the spike train extracted using the QSort algorithm from the signal 18, patient P.
Table 5.2: Features extracted from ISI spike trains by conventional statistical
methods. Features collected from 4 spike train data sets: number of spikes
detected, mean ISI interval and its standard deviation, coefficient of vari-
ance, percentage of ISI shorter than 3 ms, mean frequency and its standard
deviation, occurrence of a burst.
Dataset             6       9       13      26
part of brain       Th      n/a     STN     SNr
# of spikes         73      145     193     281
ISI (ms)            123.4   68.4    51.4    34.8
σ ISI (ms)          200.9   56.2    66.0    32.6
CV                  1.6     0.8     1.3     1.0
< 3 ms (%)          0       0       3.1     20.0
f̄ (Hz)             8.1     14.6    19.5    28.7
σ f (Hz)            5.0     17.8    15.1    30.7
burst (1 = true)    1       0       0       0
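The features in Table 5.2 can be computed from a spike train as follows (Python sketch; whether f̄ is the mean of the instantaneous frequencies or the reciprocal of the mean ISI is our assumption, and burst detection is omitted):

```python
import numpy as np

def isi_features(spike_times_ms):
    """Conventional statistical features of an ISI spike train, matching
    the columns of Table 5.2: mean ISI and its SD, coefficient of
    variation, percentage of ISIs shorter than 3 ms, mean instantaneous
    frequency and its SD."""
    t = np.asarray(spike_times_ms, dtype=float)
    isi = np.diff(t)                       # inter-spike intervals in ms
    freq = 1000.0 / isi                    # instantaneous frequency in Hz
    return {
        "n_spikes": len(t),
        "mean_isi_ms": float(isi.mean()),
        "std_isi_ms": float(isi.std(ddof=1)),
        "cv": float(isi.std(ddof=1) / isi.mean()),  # coeff. of variation
        "pct_below_3ms": float(100.0 * (isi < 3.0).mean()),
        "mean_freq_hz": float(freq.mean()),
        "std_freq_hz": float(freq.std(ddof=1)),
    }
```

On a toy train with spike times 0, 10, 20, 30, 32 ms the ISIs are 10, 10, 10, 2 ms, giving mean ISI 8 ms, CV 0.5 and 25 % of intervals below 3 ms.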
(a) scaling, signal 6 (b) Dq ’s, signal 6
Figure 5.22: Plain signal, surrogate data and amplitude adjusted surrogate
data (top to bottom) for signal 13, patient P.
Figure 5.23: Generalized correlation sums and Dq ’s for surrogate data from
signal 13, patient P.
(a) scaling (b) Dq ’s
The results for surrogate data for signal 13, patient P, are in Fig. 5.23 and the results for amplitude adjusted surrogate data are in Fig. 5.24. We can see that the surrogate data and amplitude adjusted surrogate data have the same characteristics as the measured signal in figures 5.21f and 5.21e. We used the test with surrogate and amplitude adjusted surrogate data also for signals 6, 9 and 26; for the results see Appendix C, figures C.5 and C.6.
The fact that the results of the fractal analysis of surrogate data are not distinguishable from the results of the real signal is another sign that this analysis is not able to describe the complexity of our microrecording signals of 10 seconds in duration.
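The two surrogate types can be generated as follows (Python sketch; the thesis does not list its surrogate code, so these are the standard Theiler-style procedures and the function names are ours). A phase-randomized surrogate keeps the power spectrum of the signal; the amplitude adjusted variant additionally keeps its amplitude distribution.

```python
import numpy as np

def ft_surrogate(x, rng):
    """Phase-randomized surrogate: keeps the power spectrum of x,
    destroys any nonlinear structure."""
    x = np.asarray(x, dtype=float)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                        # keep the DC component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

def aaft_surrogate(x, rng):
    """Amplitude adjusted surrogate: phase-randomizes a rank-matched
    Gaussian copy, then restores the amplitude distribution of x."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))
    g = np.sort(rng.standard_normal(len(x)))[ranks]  # Gaussian, ranks of x
    s = ft_surrogate(g, rng)
    return np.sort(x)[np.argsort(np.argsort(s))]     # x values, ranks of s
```

By construction the FT surrogate has the same spectral magnitudes as the original, and the AAFT surrogate has exactly the same set of sample values.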
Figure 5.25: Cao’s method functions E1 and E2 for a spike train extracted
from our signal.
Chapter 6
Conclusion
From that we concluded that low dimensional chaos cannot describe the dynamical system of neurons in the deep structures of the human brain.
We tried to analyze the detected spike trains for the used data of 10 seconds duration. The spike train was rather too short for any analysis. We were not able to determine the right parameters for the state space reconstruction using time delay embedding. We cannot make any conclusion from that, since the analyzed spike train was too short.
Bibliography
[3] F. Blair (ed.), Deep brain stimulation for Parkinson’s disease, Parkinson’s Disease Foundation, 2007.
[5] L. Cao, Practical method for determining the minimum embedding di-
mension of a scalar time series, Physica D 110 (1997), no. 1-2, 43–50.
[8] A.M. Fraser and H.L. Swinney, Independent coordinates for strange at-
tractors from mutual information, Phys. Rev. A 33 (1986), no. 2, 1134–
1140.
[11] E.G. Gwinn and R.M. Westervelt, Scaling structure of attractors at the transition from quasiperiodicity to chaos in electronic transport in Ge, Phys. Rev. Lett. 59 (1987), no. 2, 157–160.
[12] T.C. Halsey, M.H. Jensen, L.P. Kadanoff, I. Procaccia, and B.I.
Shraiman, Fractal measures and their singularities: The characteriza-
tion of strange sets, Physical Review A 33 (1986), no. 2, 1141–1151.
[15] R. C. Hilborn, Chaos and nonlinear dynamics, second ed., ch. 9 Quan-
tifying Chaos, pp. 319–374, Oxford University Press, 2000.
[16] R.C. Hilborn, Chaos and nonlinear dynamics, second ed., ch. 10 Many
Dimensions and Multifractals, pp. 375–430, Oxford University Press,
2000.
[18] M.L. Kringelbach, N. Jenkinson, S.L. Owen, and T.Z. Aziz, Translational principles of deep brain stimulation, Nature Reviews Neuroscience 8 (2007), no. 8, 623–635.
[23] T. Sauer and J.A. Yorke, How many delay coordinates do you need?, International Journal of Bifurcation and Chaos 3 (1993), no. 3, 737–744.
[24] K.V. Slavin, K.R. Thulborn, C. Wess, and H. Nersesyan, Direct visualization of the human subthalamic nucleus with 3T MR imaging, American Journal of Neuroradiology 27 (2006), no. 1, 80–84.
[26] M.C. Teich, C. Heneghan, S.B. Lowen, T. Ozaki, and E. Kaplan, Fractal
character of the neural spike train in the visual system of the cat, Journal
of the Optical Society of America A (Optics, Image Science and Vision)
14 (1997), no. 3, 529–46.
[27] M.C. Teich and S.B. Lowen, Fractal patterns in auditory nerve-spike
trains, IEEE Engineering in Medicine and Biology Magazine 13 (1994),
no. 2, 197–202.
Appendices
Appendix A
State of Art
Abstract
Describes a new method for converting a typical point process, such as a
train of neuronal action potentials (spikes), into a planar curve which is
then processed by means of a fast algorithm to calculate and display the
fractal dimension D values of each of a sequence of blocks having an equal
and preselectable number of interspike intervals, hence the term sequential
fractal dimension D (SFD). This method is fast, does not require special
computing facilities, and provides a continuous, high temporal resolution
display of the neuronal discharge complexity along the course of spontaneous
activity or event relating changes. The method affords insight into short
duration changes in neuronal behaviour in a way independent of its discharge
rate. SFD analysis of spike trains from spinal dorsal horn neurons suggests
that the neuronal response to a given stimulus can be expressed as changes
in the discharge pattern complexity, thus revealing a novel sensory coding
strategy. (28 References)
A.2 Multifractal statistics and underlying ki-
netics of neuron spike time-series
Bershadskii, A. and Dremencov, E. and Fukayama, D. and Yadid, G., Multi-
fractal statistics and underlying kinetics of neuron spike time-series, Physics
Letters A 289, (2001), no. 6, 337-42.
Abstract
Probabilistic and multifractal properties of spiking time-series obtained in
vivo from singular neurons belonging to red nuclei from a rat’s brain are
analyzed. Lognormal and −1 power-law probability distributions of inter-
spike intervals are observed for healthy and for genetically depressive rats,
respectively. A simple thermodynamic model is elaborated to interpret the
obtained results. Investigations of long-range interspike correlations (both
probabilistic and multifractal) give indications that the genetically defined
depression is related to individual neuron kinetic problems rather than to
brain system disorder. (33 References)
Abstract
The authors discuss the following topics: traditional renewal point process
models; long-term correlations; self-similarity of neuronal firing rates; power-
law behavior of the Fano-factor time curve; change in the firing pattern
induced by the presence of a stimulus; neural information processing with
fractal nerve spikes; biophysical origin of the fractal behavior; fractal point-
process model; short-term correlations. (33 References)
A.4 Fractal character of the neural spike train
in the visual system of the cat
Teich, M.C. and Heneghan, C. and Lowen, S.B. and Ozaki, T. and Kaplan,
E., Fractal character of the neural spike train in the visual system of the
cat, Journal of the Optical Society of America A (Optics, Image Science and
Vision) 14, (1997), no. 3, 529-46.
Abstract
The authors used a variety of statistical measures to identify the point pro-
cess that describes the maintained discharge of retinal ganglion cells (RGC’s)
and neurons in the lateral geniculate nucleus (LGN) of the cat. These mea-
sures are based on both interevent intervals and event counts and include the
interevent-interval histogram, rescaled range analysis, the event-number his-
togram, the Fano factor, the Allan factor, and the periodogram. In addition,
the authors applied these measures to surrogate versions of the data, gener-
ated by random shuffling of the order of interevent intervals. The counting
statistics reveal 1/f-type fluctuations in the data (long-duration power-law
correlation), which are not present in the shuffled data. Estimates of the frac-
tal exponents measured for RGC- and their target LGN-spike trains are sim-
ilar in value, indicating that the fractal behavior either is transmitted from
one cell to the other or has a common origin. The gamma-r renewal process
model, often used in the analysis of visual-neuron interevent intervals, de-
scribes certain short-term features of the RGC and LGN data reasonably well
but fails to account for the long-duration correlation. The authors present a
new model for visual-system nerve-spike firings: a gamma-r renewal process
whose mean is modulated by fractal binomial noise. This fractal, doubly
stochastic point process characterizes the statistical behavior of both RGC
and LGN data sets remarkably well. (84 References)
Abstract
In this paper we report on the evidence for low-dimensional determinis-
tic chaos in extra-cellular recordings of mature rat cortical cells in culture
medium. In general, the available data sets are relatively short and are
heavily contaminated by noise making conclusive interpretations of chaotic
behaviour difficult to ascertain. In this study we describe two analysis tech-
niques used to detect chaotic behaviour, namely return map construction and
Lyapunov exponent calculation. Results show that both methods indicate
the presence of chaos which is consistent with other recent studies that have
also suggested chaotic behaviour in cultured cell networks. In addition, the
use of these two independent analysis techniques provides more rigorous ev-
idence for the existence of chaotic behaviour in cultured cell networks since
previous studies have relied solely on the method of return maps. (8 Refer-
ences)
Abstract
Biological systems offer many interesting examples of oscillations, chaos, and
bifurcations. Oscillations in biology arise because most cellular processes con-
tain feedbacks that are appropriate for generating rhythms. These rhythms
are essential for regulating cellular function. In this tutorial review, we treat
two interesting nonlinear dynamic processes in biology that give rise to burst-
ing, spiking, chaos, and fractals: endogenous electrical activity of excitable
cells and Ca2+ releases from the Ca2+ stores in nonexcitable cells induced
by hormones and neurotransmitters. We will first show that each of these
complex processes can be described by a simple, yet elegant, mathematical
model. We then show how to utilize bifurcation analyses to gain a deeper
insight into the mechanisms involved in the neuronal and cellular oscillations.
With the bifurcating diagrams, we explain how spiking can be transformed
to bursting via a complex type of dynamic structure when the key param-
eter in the model varies. Understanding how this parameter would affect
the bifurcation structure is important in predicting and controlling abnormal biological rhythms. Although we describe two very different dynamic
processes in biological rhythms, we will show that there is universality in
their bifurcation structures. (84 References)
Abstract
Understanding neuronal firing patterns is one of the most important problems
in theoretical neuroscience. It is also very important for clinical neurosurgery.
In this Letter, we introduce a computational procedure to examine whether
neuronal firing recordings could be characterized by cascade multiplicative
multifractals. By analyzing raw recording data as well as generated spike
train data from 3 patients collected in two brain areas, the globus pallidus
externa (GPe) and the globus pallidus interna (GPi), we show that the neural
firings are consistent with a multifractal process over certain time scale range
(t1 ,t2 ), where t1 is argued to be not smaller than the mean inter-spike-interval
of neuronal firings, while t2 may be related to the time that neuronal signals
propagate in the major neural branching structures pertinent to GPi and
GPe. The generalized dimension spectrum Dq effectively differentiates the
two brain areas, both intra- and inter-patient. For distinguishing between
GPe and GPi, it is further shown that the cascade model is more effective
than the methods recently examined by Schiff et al. as well as the Fano factor
analysis. Therefore, the methodology may be useful in developing computer
aided tools to help clinicians perform precision neurosurgery in the operating
room. [All rights reserved Elsevier]. (37 References)
Appendix B
Source codes
B.1 embed.m
function cout=embed(cin, dim, delay, shift, windowtype)
N = length(cin);
M = floor((N-1-(dim-1)*delay)/shift)+1;
if M < 1
error(’time series too short for chosen embedding parameters’)
end
d = zeros(M, dim);
len = (M-1)*shift+1;
for i=1:dim
start = (i-1)*delay;
d(:,i) = cin(start+1:shift:start+len);
end
if ~strcmp(windowtype, ’Rect’)
d = d .* repmat(window(dim, windowtype)’, M, 1);
end
cout=d;
B.2 findembedparam.m
function [tau,dim,E1,E2]= findembedparam(ts,tau)
maxtau=40;
partitions=128;
[y,i] = sort(ts);
N=length(y);
y(i) = (0:N-1)/N;
a = amutual(y, maxtau, partitions);
da=diff(a);
if nargin<2 || isempty(tau)==1
tau=find(da>=0,1,’first’)-1;
end
figure; plot(1:maxtau,a(2:end),’k’,tau,a(tau+1),’kx’);
title(’Auto mutual information function’);
maxdim=9;
Nref=5000;
NNR=3;
eData = embed(ts, maxdim+2, tau,1,’Rect’);
L = length(eData);
ref = randref(2, L-2, Nref);
[E, Estar] = cao(eData, ref, NNR);
E1=E(2:end)./E(1:end-1);
E2=Estar(2:end)./Estar(1:end-1);
ddE1=diff(diff(E1));
[dummy dim]=min(ddE1);
dim=dim+1;
figure; plot(E1,’kx-’); hold on; plot(E2,’ko:’);
title(’Cao’’s method results’);
B.3 knnGolden.m
%% create simulated time series
N=10000;
Hen=henon(N);
Lor=lorentz(N,0,10,28,8/3);
%% Embedding
eHen = embed(Hen, 2, 1 ,1,’Rect’);
eLor = embed(Lor, 3, 15,1,’Rect’);
B.4 multifrac.m
function [D, alfa2, f2]=multifrac(data ,dim,delay)
%% Embedding
edata=embed(data, dim, delay,1,’Rect’);
atria=nn_prepare(edata);
%% Calculate Dq
numofqs=13; % number of q’s
qzero=ceil(numofqs/2);
tau=zeros(numofqs,1);
stdx=zeros(numofqs,1);
mse=zeros(numofqs,1);
stdD=zeros(numofqs,1);
D=ones(numofqs,1);
n=5000;
[N,dim] = size(edata);
ref = sort(randsample(N, n));
for q=8:numofqs
if q~=(qzero+1)
[logC1 logR1]=...
gencorrsum(atria, edata,ref,1,50,(q-qzero),32,2);
logC=log10(logC1(logC1~=0))/(q-qzero-1);
logR=log10(logR1(logC1~=0));
lR=min(logR); hR=max(logR); lC=min(logC)+1;
figure; plot(logR,logC); axis([lR hR lC 0]);
a=input(’scaling region from:’);
b=input(’scaling region to:’);
ScalingRegion=find((logR>a)&(logR<b));
[limtau, limstdx, limmse]= ...
lscov([logR(ScalingRegion),...
ones(length(ScalingRegion),1)], logC(ScalingRegion));
D(q)=limtau(1); stdx(q)=limstdx(1); mse(q)=limmse(1);
stdD(q)=sqrt(mse(q));
tau(q)=D(q)*(q-qzero-1);
hold on;
plot(logR(ScalingRegion),logR(ScalingRegion)*tau(q),’r’);
end
end
D(qzero+1)=mean([D(qzero) D(qzero+2)]);
%% Calculate Dinf
[logC1 logR1]=maxpcorr(atria, edata,ref,1,50,1,32,2);
logCx=log10(logC1(logC1~=0));
logRx=log10(logR1(logC1~=0));
figure; plot(logRx,logCx); axis([-6 1 -5 0]);
a=input(’scaling region from:’);
b=input(’scaling region to:’);
ScalingRegion=find((logRx>a)&(logRx<b));
[limtau, limstdx, limmse]=...
lscov([logRx(ScalingRegion),...
ones(length(ScalingRegion),1)],logCx(ScalingRegion));
Dinf=limtau(1); stdx=limstdx(1); mse=limmse(1);
hold on; plot((a:0.1:b),(a:0.1:b)*Dinf,’r’);
stdDinf=sqrt(mse);
%% Calculate D-inf
[logC1 logR1]=minpcorr(atria, edata,ref,1,50,1,32,2);
logCn=log10(logC1(logC1~=0));
logRn=log10(logR1(logC1~=0));
figure; plot(logRn,logCn); axis([-6 1 -5 0]);
a=input(’scaling region from:’);
b=input(’scaling region to:’);
ScalingRegion=find((logRn>a)&(logRn<b));
[limtau, limstdx, limmse]=...
lscov([logRn(ScalingRegion),...
ones(length(ScalingRegion),1)],logCn(ScalingRegion));
D_inf=limtau(1); stdx=limstdx(1); mse=limmse(1);
hold on; plot((a:0.1:b),(a:0.1:b)*D_inf+limtau(2)+0.2,’r’);
stdD_inf=sqrt(mse);
%% Legendre transformation (source code 15)
alfa=diff(tau)’;
f=(-(qzero-1):(q-qzero-1)).*alfa-tau(1:numofqs-1)’;
%% Plot results
range=(-(qzero-1):(q-qzero));
figure(’Name’, ’tau’)
plot(range,tau);
ylabel(’tau’); xlabel(’q’)
figure(’Name’, ’Dq’)
plot(range,D, range,(D+stdD), range,(D-stdD));
ylabel(’Dq’); xlabel(’q’)
figure(’Name’, ’f(alfa)’)
plot(alfa(2:end-1),f(2:end-1),’x’);hold on;
plot(alfa2,f2,’r’)
plot(alfa2([1,end]),[0 0], ’or’);
xlabel(’alfa’); ylabel(’f’); hold on;
axis([min(alfa2)-0.1 max(alfa2)+0.1 -0.05 max(f2)+0.1]);
plot([min(alfa2)-0.1 max(alfa2)+0.1],[0 0],’k’);
text(Dinf+0.01,0.04,[’D _{\infty}=’ num2str(Dinf,4)])
text(D_inf-0.15,0.04,[’D _{-\infty}=’ num2str(D_inf,4)])
Appendix C
Generalized dimensions analysis
Figure C.1: Generalized correlation sums and Dq ’s for signals 21 and 27 from
patient P.
(a) scaling, signal 5 patient P (b) Dq ’s, signal 5 patient P
Figure C.2: Generalized correlation sums and Dq ’s for signals 11, 15, 17, 28
from patient P.
(a) scaling, signal 7 patient J (b) Dq ’s, signal 7 patient J
(a) scaling, signal 5 patient K (b) Dq ’s, signal 5 patient K
Figure C.4: Generalized correlation sums and Dq ’s for signals 5, 10, 17, 27
from patient K.
(a) scaling, signal 6 (b) Dq ’s, signal 6
Figure C.5: Generalized correlation sums and Dq ’s for surrogate data from
different signals.
(a) scaling, signal 6 (b) Dq ’s, signal 6