a Department of Environmental Biology and Fisheries Science, National Taiwan Ocean University, 2 Pei-Ning Road, Keelung 20224, Taiwan
b Society of Streams, R.O.C., 1F., No. 15, Alley 70, Lane 12, Sec. 3, Bade Rd., Taipei City 10558, Taiwan
c Department of Computer Science and Information Engineering, National Cheng Kung University, No. 1, Ta-Hsueh Road, Tainan, Taiwan, ROC
Article info

Article history: 28 November 2010

Keywords: Computational geometry

Abstract

Both sample entropy and approximate entropy are measurements of complexity. The two methods have received a great deal of attention in the last few years and have been successfully verified and applied to biomedical applications and many others. However, the algorithms proposed in the literature require O(N²) execution time, which is not fast enough for online applications and for applications with long data sets. To accelerate computation, the authors of the present paper have developed a new algorithm that reduces the computational time to O(N^{3/2}) using O(N) storage. As biomedical data are often measured with integer-type data, the computation time can be further reduced to O(N) using O(N) storage. The execution times of the experimental results with ECG, EEG, RR, and DNA signals show a significant improvement of more than 100 times when compared with the conventional O(N²) method for N = 80,000 (N = length of the signal). Furthermore, an adaptive version of the new algorithm has been developed to speed up the computation for short data lengths. Experimental results show an improvement of more than 10 times when compared with the conventional method for N > 4000.

© 2010 Elsevier Ireland Ltd. All rights reserved.
1. Introduction

Both approximate entropy ($A_E$) [1] and sample entropy ($S_E$) [2] are measurements of a system's complexity, which are important for the analysis of biomedical signals [3,4] and signals in other fields [5,6]. Pincus introduced approximate entropy, a set of measures of system complexity closely related to entropy, which is easily applied to biomedical signals and others. However, $A_E$ has two deficiencies. First, $A_E$ strongly depends on the record length and is uniformly lower than expected for short records. Second, $A_E$ lacks relative consistency. Sample entropy was introduced by Richman et al., which requires
* Corresponding author. Tel.: +886 987 153 779; fax: +886 2 8236 6969.
E-mail addresses: D95310001@mail.ntou.edu.tw, schpeter99@gmail.com (Y.-H. Pan).
0169-2607/$ – see front matter © 2010 Elsevier Ireland Ltd. All rights reserved.
doi:10.1016/j.cmpb.2010.12.003
Computer Methods and Programs in Biomedicine 104 (2011) 382–396
2. Review of the computation of sample entropy and approximate entropy

2.1. Review of sample entropy, approximate entropy, and multiscale entropy
Consider a time series of length N: X = {X₁ ... Xᵢ ... X_N}. A pattern length m (the length of the sequences to be compared) is selected, and, for each i, a vector (template) of size m is defined:

$$X_m(i) = \{X_{i+k},\ 0 \le k \le m-1\}, \qquad 1 \le i \le N-m+1 \tag{1}$$

Let $d[X_m(i), X_m(j)]$ be the maximum (Chebyshev) distance between two templates, and define

$$\Theta(i,j,m,r) = \begin{cases} 1, & d[X_m(i), X_m(j)] \le r \\ 0, & \text{otherwise} \end{cases} \tag{2}$$

Furthermore, $n_i^m$ is defined as

$$n_i^m = \sum_{j=1}^{N-m+1} \Theta(i,j,m,r), \qquad C_i^m(r) = \frac{n_i^m}{N-m+1} \tag{3}$$

Then, $\Phi^m(r) = \frac{1}{N-m+1}\sum_{i=1}^{N-m+1} \log C_i^m(r)$, which is the average of the natural logarithms of $C_i^m(r)$.

Subsequently, the approximate entropy [1] is calculated as follows:

$$A_E(m,r,N) = \Phi^m(r) - \Phi^{m+1}(r) \tag{4}$$
$$S_E(\text{scale}, m, r, N) = -\ln\frac{n_n}{n_d} = -\ln\frac{\sum_{i=1}^{N-m} n_i^{m+1}}{\sum_{i=1}^{N-m} n_i^m} \tag{5}$$

where

$$n_i^m = \sum_{j=i+1}^{N-m} \Theta(i,j,m,r) \tag{6}$$
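As a reference point for the complexity discussion that follows, the O(N²) brute-force evaluation of Eqs. (3)–(6) can be sketched directly. This is a minimal illustration, not the paper's optimized code; the Chebyshev template distance is assumed:

```python
import math

def sample_entropy(x, m, r):
    """Brute-force S_E (Eqs. (5)-(6)): O(N^2) template comparisons with
    j > i, so self-matches are excluded."""
    n = len(x)
    n_d = n_n = 0  # length-m matches (denominator) and length-(m+1) matches
    for i in range(n - m):
        for j in range(i + 1, n - m):
            # Chebyshev distance between the length-m templates at i and j
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                n_d += 1
                if abs(x[i + m] - x[j + m]) <= r:  # extend match to m + 1
                    n_n += 1
    return -math.log(n_n / n_d)

def approximate_entropy(x, m, r):
    """Brute-force A_E (Eqs. (3)-(4)): j runs from the first template, so
    self-matches are included."""
    def phi(mm):
        cnt = len(x) - mm + 1
        total = 0.0
        for i in range(cnt):
            matches = sum(
                1 for j in range(cnt)
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r
            )
            total += math.log(matches / cnt)
        return total / cnt
    return phi(m) - phi(m + 1)
```

For a strictly periodic series such as {1, 2, 1, 2, ...}, every length-m match extends to length m + 1, so $S_E = 0$, while $A_E$ stays slightly above zero because self-matches are included.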
$$X_j^{(\tau)} = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} X_i, \qquad 1 \le j \le \frac{N}{\tau} \tag{7}$$
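The coarse-graining of Eq. (7) is a plain block average; a small sketch (the helper name is ours):

```python
def coarse_grain(x, tau):
    """Eq. (7): average consecutive non-overlapping blocks of length tau;
    a trailing partial block is discarded."""
    return [
        sum(x[(j - 1) * tau : j * tau]) / tau
        for j in range(1, len(x) // tau + 1)
    ]
```

Scale τ = 1 returns the original series; larger scales shorten the series to ⌊N/τ⌋ points, so the entropy computation per scale operates on progressively shorter data.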
[Fig. 1: The first 10,000 points of an ECG signal (amplitude versus sample index).]

2.2. kd tree algorithm

Each template is mapped to a point in d-dimensional space:

$$x_i = X_i, \qquad y_i = X_{i+1}, \qquad z_i = X_{i+2}, \ \ldots \tag{8}$$
Then, $n_i^m$ is equivalent to the number of points inside the bounding box $W_i$:

$$W_i = [(x_{LB})_i : (x_{UB})_i] \times [(y_{LB})_i : (y_{UB})_i] \times [(z_{LB})_i : (z_{UB})_i] \times \cdots \tag{9a}$$

where the subscripts LB and UB stand for the lower and upper bounds of the box, and

$$(x_{LB})_i = x_i - r, \quad (y_{LB})_i = y_i - r, \quad (z_{LB})_i = z_i - r, \ \ldots$$
$$(x_{UB})_i = x_i + r, \quad (y_{UB})_i = y_i + r, \quad (z_{UB})_i = z_i + r, \ \ldots \tag{9b}$$

Given an orthogonal range (bounding box) in d-dimensional space, the number of points in each box is queried; this is called an orthogonal range-counting problem in the field of computational geometry. Hence, for each point $P_i$ and its associated bounding box $W_i$, the computation of $n_i^m$ ($n_i^{m+1}$) is equivalent to an $m$-dimensional (($m+1$)-dimensional) orthogonal range-counting problem. Once $n_i^m$ and $n_i^{m+1}$ are computed, $n_n$, $n_d$, and $S_E$ can be calculated directly from Eq. (5). Comparing the dimension of $n_i^{m+1}$ with that of $n_i^m$, the time complexity is dominated by the first term; hence, the time complexity of computing $S_E$ is determined by $n_i^{m+1}$. Fig. 1 shows the first 10,000 points of an ECG (electrocardiographic) signal. For a given point $P_i$ and distance $r$, Fig. 2 demonstrates the computation of $n_i^1$ from a geometric point of view. In this geometric view, the algorithm proposed in [3] can be interpreted as follows. For each point $P_i$, calculate its bounding box $W_i$. Traversing each point $P_j$ with index $j > i$ (Eq. (6)), $n_i^m$ equals the number of points in $W_i$. This algorithm requires a double loop over $i$ and $j$; hence, it is an $O(N^2)$ algorithm and can be interpreted as a brute-force algorithm, because it does not preclude any impossible queries (matches).

The kd tree [14,15] can be applied to the orthogonal range-counting problem [13], and it has been shown to be an effective algorithm for computing $S_E$. The fundamental concept is to store the point set $\{P\}$ in a specially designed data structure so that, for a given box, the query is faster. The kd tree, proposed by Bentley in 1975, is a binary tree in which each node $v$ is associated with a rectangle $B_v$. If $B_v$ contains only a single point, $v$ is a leaf node.

[Fig. 2: Demonstration of the geometric view for an ECG signal for m = 1. Computing $n_i^1$ is equivalent to a two-dimensional search, and computing $n_i^0$ is equivalent to a one-dimensional search.]

Table 1 – Complexity of the kd tree for the orthogonal range-counting problem.
Construction time: O(N log N)
One query: O(log N) for d = 1; O(N^{1−(1/d)}) for d > 1
Storage: O(N)
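The counting query of Table 1 can be illustrated with a deliberately small kd tree sketch (median splits on alternating axes). This is a generic textbook construction, not the authors' implementation:

```python
def build_kd(points, depth=0):
    """Build a 2-d kd tree: split on alternating axes at the median point."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kd(points[:mid], depth + 1),
        "right": build_kd(points[mid + 1:], depth + 1),
    }

def count_in_box(node, lo, hi):
    """Orthogonal range counting: points p with lo[k] <= p[k] <= hi[k]."""
    if node is None:
        return 0
    p, axis = node["point"], node["axis"]
    cnt = 1 if all(lo[k] <= p[k] <= hi[k] for k in range(2)) else 0
    if lo[axis] <= p[axis]:   # left subtree (coords <= split) may intersect box
        cnt += count_in_box(node["left"], lo, hi)
    if p[axis] <= hi[axis]:   # right subtree (coords >= split) may intersect box
        cnt += count_in_box(node["right"], lo, hi)
    return cnt
```

The pruning in `count_in_box` is what turns the brute-force scan into the sublinear per-query costs listed in Table 1: whole subtrees whose splitting coordinate lies outside the box are never visited.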
3.

3.1.
$$(W_x)_i = [(x_{LB})_i : (x_{UB})_i] \tag{10a}$$

and

$$(W_{hd})_i = [(y_{LB})_i : (y_{UB})_i] \times [(z_{LB})_i : (z_{UB})_i] \times \cdots \tag{10b}$$

(11)

where iLB(i) and iUB(i) represent the lower and upper indexes of the points inside $(W_x)_i$.

Step 3: Build the (d − 1)-dimensional kd tree. The tree initially contains the points in $(W_x)_{i=1}$, and only the higher-dimension coordinates (y, z, ...) are stored in the tree's nodes.

Step 4: Beginning with i = 1, $n_i^m$ is equal to the number of points inside $(W_{hd})_i$ that are already in $(W_x)_i$, as illustrated in Fig. 2. The (d − 1)-dimensional kd tree search (over y, z, ...) is applied to obtain $n_i^m$.

Step 5: Slide (move) from point i to i + 1 and query the number of points in $(W_{hd})_{i+1}$ intersecting $(W_x)_{i+1}$. First, report the points in $(W_x)_{i+1}$ using the points in $(W_x)_i$ as a clue. Because the points within $(W_x)_i$ and $(W_x)_{i+1}$ may differ, old points (points in $(W_x)_i$ but not in $(W_x)_{i+1}$; that is, points with indexes iLB(i − 1) ≤ j ≤ iLB(i) − 1) have to be removed from the tree, and new points (points in $(W_x)_{i+1}$ but not in $(W_x)_i$) have to be inserted into the tree:

$$x_j \ \ldots\ (x_{LB})_i \tag{12a}$$

and

$$\ldots \tag{12b}$$
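The index arrays iLB(i) and iUB(i) can be maintained with two monotone pointers over the x-sorted points, which is what makes the Step 5 slide cheap. A sketch under the assumption that the x values are already sorted (the function name is ours):

```python
def slab_bounds(xs, r):
    """For x-sorted values xs, return arrays iLB, iUB such that
    iLB[i]..iUB[i] are exactly the indexes j with
    xs[i] - r <= xs[j] <= xs[i] + r, i.e. membership of the slab (Wx)_i.
    Both pointers only move forward, so the whole pass is O(N)."""
    n = len(xs)
    iLB, iUB = [], []
    lo, hi = 0, 0
    for i in range(n):
        while xs[lo] < xs[i] - r:              # drop points that left the slab
            lo += 1
        while hi < n - 1 and xs[hi + 1] <= xs[i] + r:
            hi += 1                            # admit points entering the slab
        iLB.append(lo)
        iUB.append(hi)
    return iLB, iUB
```

Sliding from i to i + 1, the points to delete from the tree are exactly the indexes iLB[i] .. iLB[i+1] − 1, and the points to insert are iUB[i] + 1 .. iUB[i+1], matching the deletions and insertions described in Step 5.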
3.2.

$$T(N) = \begin{cases} O(N \log N), & \text{for } m = 1 \\ O(N \cdot N^{1-(1/m)}), & \text{for } m > 1\ (d \ge 2) \end{cases} \tag{13}$$
3.3. Integer-type data

For integer-type data, as seen in digitalized biomedical signals, the time complexity of the SKD algorithm can be further reduced to O(B^{m−1} N), where B is the data resolution.

From Table 1, it can be noted that the time complexity of one query by the kd tree algorithm is $T_q(N) = O(N^{1-(1/d)})$, where the subscript q stands for one query. For the SKD algorithm, d = m, as discussed in Section 3.2. Thus,

$$T_q(N) = O(N^{1-(1/d)}) = O(N^{1-(1/m)}) \quad \text{for } m \ge 2 \tag{14}$$

$$T_q(N) = O(N_{diff}\,\ldots) \tag{15}$$
3.4.
Algorithm 1. [Pseudocode listing (lines 1–29); the listing itself did not survive extraction, but its steps are explained line by line below.]
data as discussed in Section 3.2, the data compression technique is applied in this algorithm.

Input: max: the maximum scale, m: the pattern length, r: the distance for accepting similarity, and the time series {X} defined in Section 2.1.
Output: S_E for scale = 1 : max.
We explain the details of the implementation of Algorithm 1 for computing the MSE in natural language.

Line 1: Normalize r by the standard deviation of the original time series in Eq. (1).
Line 3: Calculate the coarse-grained time series from the original time series by Eq. (7).
Line 4: For each (X)_i, transform the template vector (X_i)_m into a d-dimensional space point set using Eq. (8).
Line 7: Input the point array; output the indexed sorted point array {p}. Points are stored in a structure containing their (x, y, z, ...) coordinates. The quicksort algorithm is applied to sort the point set by the x coordinate.
Line 8: Input {p} and r; output the iLB and iUB arrays defined in Eq. (11). The details are discussed in Step 2 in Section 3.1.
Line 9: compress_data(): Input the sorted point array {p}; output the new compressed point array. If several points have the same coordinates, only the first one is stored in the database. The counter in the tree's node described in Step 5 in Section 3.1 is used to record the number of points in a compressed point.
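The effect of compress_data() can be mimicked in a few lines: duplicate integer points collapse into one stored point plus a multiplicity counter. This is a sketch of the idea only; the paper's structure stores the counter in the tree node rather than in a separate list:

```python
from collections import Counter

def compress_points(points):
    """Collapse duplicate points into (point, count) pairs sorted by
    coordinates, so a range count adds counts instead of single points."""
    return sorted(Counter(points).items())
```

For integer-valued signals with B distinct levels, the compressed set can never exceed B^d points regardless of N, which is the source of the O(N) overall complexity claimed in Section 3.3.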
To operate the kd tree, the following tree functions are needed:

(1) build_kd_tree(): Input the (d − 1)-dimensional points; output the kd tree structure.
(2) delete_old_points(): This function deletes points from the tree. Input the points to be deleted. The tree nodes' counters corresponding to these points are decremented, and the points are detached from the leaf nodes, as discussed in Step 5 in Section 3.1.
(3) insert_new_points(): Similar to delete_old_points().
(4) kd_search(): Input the bounding box (W_hd)_i and the tree; output n_i^m.
(5) release_tree_memory(): This function releases the tree's memory once the tree operations are finished. Input the tree; the memory is cleaned in this function.

The kd tree is a popular algorithm; free computer programs can be found in the public domain.
Line 10: Build the kd tree structure.
Lines 11–13: For i = 1, insert the point set into the tree.
Line 15: The deletion operation.
Line 16: Input the point array, m, and r; output the bounding box (W_hd)_i defined in Eq. (10b).
Line 17: Compute n_i^m and n_i^{m+1} for each i by the (d − 1)-dimensional kd tree algorithm, as described in Step 4 in Section 3.1.
Lines 18–22: Compute n_n (n_d) by accumulating the contribution from n_i^{m+1} (n_i^m) for each i.
Lines 23–25: The insertion operation.
Line 28: Compute S_E from n_n and n_d using Eq. (5).
Line 29: Release the memory after the computation of S_E is finished.

From Eq. (6), the index j starts from i + 1 in computing S_E. However, from Eq. (3), the index j starts from 1 in computing A_E. Then, line 15 must be modified into the following three lines:

for (j = iLB[i-1] : iLB[i] - 1)
    deleteOldPoints(pointArray[j]);
end do
3.5.

3.5.1.

3.5.2.

3.5.3.

(16)
4. Experiments

4.1. Experiment 1

This experiment tested the performance of the SKD algorithm in computing S_E (scale = 1, m = 2, r = 0.15 SD). Fig. 4 shows the T–N plot for the RR series; Fig. 4a shows the T–N plots for the brute force, kd tree, and SKD algorithms. It can be observed that both the kd tree and the SKD algorithms are much faster than the brute force algorithm. However, the comparison of the SKD algorithm with the kd tree algorithm cannot be clearly observed from Fig. 4a; thus, they are re-plotted in Fig. 4b. For N = 80,000, T_bruteforce = 55 s, T_kd = 2.1 s, and T_SKD = 0.45 s. In other words,
Fig. 4 (a) Execution times versus N for RR intervals and SE (scale = 1, m = 2, r = 0.15 SD). The black squares and line represent
the execution times of the brute force algorithm, the blue crossed lines represent the corresponding values for the kd tree
algorithm, and the red crossed line represents the corresponding values for the SKD algorithm. (b) Execution times versus N
for RR intervals and SE (scale = 1, m = 2, r = 0.15 SD). The blue circles and line represent the corresponding values for the kd
tree algorithm, and the red crossed line represents the corresponding values for the SKD algorithm.
the speed-up ratios over the brute force algorithm are 122 for the SKD algorithm and 26 for the kd tree, which shows that the SKD algorithm is significantly faster than both the kd tree algorithm and the brute force algorithm. Furthermore, Fig. 4b shows that the SKD is approximately an O(N) algorithm, as the SKD curve is approximately a straight line.
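The quoted speed-up factors follow directly from the measured times; a two-line check:

```python
# Measured execution times at N = 80,000 for the RR series (from the text)
t_brute, t_kd, t_skd = 55.0, 2.1, 0.45   # seconds

speedup_skd = t_brute / t_skd   # brute force vs. SKD, about 122
speedup_kd = t_brute / t_kd     # brute force vs. kd tree, about 26
```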
Figs. 5–7 show the T–N plots for the ECG, EEG, and 8-bit random signals. Similar results are obtained as for the RR series. In these three cases, the SKD is found to outperform the kd tree algorithm by 2.5–5 times and the brute force algorithm by more than 100 times for N = 80,000. Note that for N = 80,000, T_SKD for the 8-bit random signal is about 3 times slower than for the others. For the random signal, x and y are not correlated; thus, the points are evenly distributed in the Poincaré plot (see Fig. 2). Therefore, N_diff for the random signal is much larger than that of a biological signal, thereby taking longer execution times.

Fig. 8a shows the T–N plot for different r. It shows that T_SKD increases slightly with r: as $n_i^m$ increases with r, the depth of the kd tree also increases, and therefore T_SKD increases with r.
4.2. Experiment 2

This experiment tests the performance of the SKD in computing A_E (scale = 1, m = 2, r = 0.2 SD). Fig. 9a and b shows the T–N plots for the RR series and the EEG signal, respectively.
Fig. 5 (a) Execution times versus N for the ECG signal and SE (scale = 1, m = 2, r = 0.15 SD). The black circles and line
represent the execution times of the brute force algorithm, the blue crossed lines represent the corresponding values for the
kd tree algorithm, and the red crossed line represents the corresponding values for the SKD algorithm. (b) Execution times
versus N for the ECG signal and SE (scale = 1, m = 2, r = 0.15 SD). The blue circles and line represent the execution times for
the kd tree algorithm and the red crossed line represents the corresponding values for the SKD algorithm.
[Table fragment: SKD execution times of 2.31, 2.66, and 2.86 s for parameter values 1, 2, and 3; the row-label header did not survive extraction.]
Fig. 6 (a) Execution times versus N for the EEG signal and SE (scale = 1, m = 2, r = 0.15 SD). The black squares and line
represent the execution times of the brute force algorithm, the blue crossed lines represent the corresponding values for the
kd tree algorithm, and the red crossed line represents the corresponding values for the SKD algorithm. (b) Execution times
versus N for the EEG signal and SE (scale = 1, m = 2, r = 0.15 SD). The blue circles and line represent the execution times for
the kd tree algorithm and the red crossed line represents the corresponding values for the SKD algorithm.
4.3. Experiment 3

The adaptive SKD algorithm is faster than the SKD algorithm for N < 20,000 and faster than the brute force algorithm for N approximately greater than 400. Furthermore, the adaptive SKD algorithm is faster than the kd tree algorithm for N < 8 × 10⁵. The result is surprising, because the adaptive SKD algorithm was designed only for handling short data lengths. On comparing T_bruteforce and T_adp_skd, the adaptive SKD is consistently 10–15 times faster than the brute force algorithm for N ≥ 4000. The O(N²) property of the adaptive SKD can be verified as follows: the execution time increases approximately by 4 times when N is doubled, as can be seen from Tables 3 and 4.
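The doubling check can be made explicit: fitting T(N) ∝ N^p to two measurements gives p = log(T₂/T₁)/log(N₂/N₁), so "time quadruples when N doubles" means p ≈ 2. A sketch using value pairs read from Table 4, at the larger N where the asymptotic behaviour shows:

```python
import math

def empirical_order(n1, t1, n2, t2):
    """Estimate the exponent p in T(N) ~ c * N^p from two measurements."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# Adaptive SKD times from Table 4: T(1e5) = 9.01 s, T(2e5) = 34.2 s
p_adp = empirical_order(1e5, 9.01, 2e5, 34.2)
# Brute force times from Table 4: T(1000) = 0.015 s, T(2000) = 0.059 s
p_brute = empirical_order(1000, 0.015, 2000, 0.059)
```

Both estimates land close to 2, consistent with the O(N²) behaviour claimed for the adaptive SKD and for the brute force algorithm.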
4.4. Experiment 4

To test the SKD algorithm and its adaptive version in handling time-varying MSE, 16-bit EOG data collected overnight are partitioned into non-overlapping windows to analyze the
Fig. 7 (a) Execution times versus N for the 8-bit uniform distributed random signal and SE (scale = 1, m = 2, r = 0.15 SD). The
black squares and line represent execution times of the brute force algorithm, the blue circle lines represent the
corresponding values for the kd tree algorithm, and the red crossed line represents the corresponding values for the SKD
algorithm. (b) Execution times versus N for the 8-bit uniform distributed signal and SE (scale = 1, m = 2, r = 0.15 SD). The blue
circles and line represent the execution times for the kd tree algorithm and the red crossed line represents the
corresponding values for the SKD algorithm.
relation between the MSE and the sleep stages. Each window contains 7680 (30 s × 256 Hz) data points, and the MSE procedure is repeated for each window. The MSE analysis of the first 30,000 s of one subject's EOG is shown in Fig. 10.
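The window bookkeeping is simple arithmetic; a quick sketch of the numbers used in this experiment:

```python
SAMPLING_RATE_HZ = 256
WINDOW_SECONDS = 30

window_len = SAMPLING_RATE_HZ * WINDOW_SECONDS   # 30-s window at 256 Hz
total_samples = 7_365_376                        # the 8-h EOG recording
num_windows = total_samples // window_len        # complete windows analyzed
```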
Table 3 – Execution times versus N for EEG for m = 2, r = 0.15 SD, scale = 1 for the brute force, kd tree, SKD, and adaptive SKD algorithms.

Number of points | Brute force (s) | kd tree (s) | SKD (s)
300     | 8.5 × 10⁻⁴ | 2.3 × 10⁻³ | 1.8 × 10⁻³
500     | 2.5 × 10⁻³ | 4.2 × 10⁻³ | 1.8 × 10⁻³
1000    | 9.1 × 10⁻³ | 6.5 × 10⁻³ | 4.0 × 10⁻³
2000    | 3.6 × 10⁻² | 1.4 × 10⁻² | 7.3 × 10⁻³
4000    | 0.14       | 0.031      | 0.018
8000    | 0.57       | 0.066      | 0.035
20,000  | 3.63       | 0.18       | 0.077

[The adaptive SKD column did not survive extraction; the first data column is attributed to the brute force algorithm based on its O(N²) growth.]
Fig. 8 – (a) Execution times versus N for the EEG signal for different r using the SKD algorithm in computing SE (scale = 1, m = 2). (b) Execution times versus N for the RR series for different m using the SKD algorithm in computing SE (scale = 1, r = 0.15). (c) Execution times versus N for the random signal for different numbers of bits using the SKD algorithm in computing SE (scale = 1, m = 2, r = 0.15).
Fig. 9 (a) Execution times of computing AE (scale = 1, m = 2, r = 0.2 SD) versus N for the RR intervals. The black squares and
line represent the execution times of the brute force algorithm, and the red crossed line represents the corresponding
values for the SKD algorithm. (b) Execution times versus N for the EEG intervals and AE (scale = 1, m = 2, r = 0.2 SD). The black
circle and line represent the execution times of the brute force algorithm, and the red crossed line represents the
corresponding values for the SKD algorithm. (c) Execution time versus N for the 1-bit random signal for AE (scale = 1, m = 2).
The red crossed line represents the corresponding values for the SKD algorithm.
Table 4 – Execution times versus N for 1/f noise for m = 2, r = 0.15 SD, scale = 1:20 for the brute force, kd tree, SKD, and adaptive SKD algorithms.

Number of points | Brute force (s) | kd tree (s) | SKD (s) | Adaptive SKD (s)
300      | 0.0019 | 0.014 | 0.012 | 0.0021
500      | 0.0075 | 0.017 | 0.010 | 0.00521
1000     | 0.015  | 0.032 | 0.02  | 0.0066
2000     | 0.059  | 0.059 | 0.039 | 0.0093
4000     | 0.29   | 0.13  | 0.08  | 0.026
8000     | 0.921  | 0.29  | 0.16  | 0.07
20,000   | 5.39   | 1.01  | 0.46  | 0.42
50,000   | 32.4   | 4.18  | 1.34  | 2.14
10⁵      | 128.4  | 15.16 | 3.15  | 9.01
2 × 10⁵  | 501.0  | 51.3  | 7.35  | 34.2
4 × 10⁵  | 2063   | 176.7 | 18.5  | 146
8 × 10⁵  | 8716   | 655.0 | 52.5  | 540
16 × 10⁵ | –      | –     | 153   | –
32 × 10⁵ | –      | –     | 411   | –

[Column alignment reconstructed from the flattened extraction: the SKD column contains 14 entries while the others contain 12, so the two largest N values are attributed to SKD only.]
The execution times of the brute force method, the SKD, and the adaptive SKD algorithms to analyze the 8-h EOG data (7,365,376 samples) were 320, 40.0, and 24.5 s, respectively. With such a level of performance, this experiment demonstrates that the SKD algorithm can be integrated into a chip in a consumer product for online diagnosis.

Fig. 10 – The relation between the MSE and the sleep stages. (a) The MSE values using EOG at different scales of one subject; (b) results after averaging the 20 MSE values in each epoch; (c) manual sleep staging reviewed by the expert.
4.5. Summary
The experiments verify that the SKD algorithm and its adaptive version are robust for all parameters, including different biomedical signals, data length (N), data resolution (B), pattern length (m), distance (r), and scales.

The SKD algorithm is O(N^{3/2}) for m = 2 and real-type data, and is O(N) for integer-type data such as digitalized biological signals. From all the experiments, it is observed that the SKD algorithm is faster than the kd tree algorithm, the bucket-assisted algorithm, and the brute force algorithm in the literature by a factor of 2–6, more than 20, and more than 100, respectively, for N = 80,000. For small N, the SKD is observed to be inferior to the brute force algorithm. For this reason, an adaptive SKD algorithm has been developed; it is more than 10 times faster than the brute force algorithm for r = 0.15 and N ≥ 2000.
5. Conclusions

Conflict of interest

None.
Acknowledgment

This work was funded in part by the Industrial Development Bureau, Ministry of Economic Affairs, Taiwan (ROC).

References