
Coronary Artery Tracking Challenge

June 26, 2008

Introduction
The purpose of this challenge is to evaluate and compare coronary artery central
lumen line extraction methods. Developers of coronary CTA processing software,
and companies selling products in this field, are invited to apply their methods
to the data provided in this challenge. Developers of generic methods for
extracting elongated tubular structures from 3D images, or of methods developed
for other imaging modalities and anatomies, are also very welcome to join this
competition and tailor their methods to this specific application.

Our definition of a central lumen line


We define the central lumen line (for convenience referred to as centerline) of
a coronary artery in a CTA scan as a curve (i.e. a series of points, linearly
interpolated) that passes in each cross-section through the center of gravity of
the lumen. The start point of a centerline is defined in the aorta, and the end
point is the most distal point where the artery is still distinguishable from the
background. The centerline is smoothly interpolated if the artery is partly
indistinguishable from the background, e.g. in case of a total occlusion or
imaging artifacts.

Challenges
Methods and algorithms will be divided into three categories for evaluation:
automatic tracking methods, methods with minimal user interaction, and
interactive tracking methods. Please note that the organizers reserve the right
to combine challenges if one of the challenges does not receive enough submissions.

Challenge 1: automatic tracking


Automatic tracking methods should find the centerlines of coronary arteries
without user interaction. In order to evaluate the result of automatic coronary
artery tracking, two points will be provided per vessel to extract the coronary
artery of interest:
• Point A: a point inside the distal part of the vessel; this unambiguously
defines the vessel to be tracked

• Point B: a point approximately 3 cm (measured along the centerline) distal
to the start point of the centerline

Point A should be used for selecting the appropriate centerline. If the automatic
tracking result does not contain centerlines near point A, point B can be used
to select the appropriate centerline. The participants must report in their paper
how many times point A or point B was used; they may choose to report this per
vessel or to summarize it per dataset. Points A and B are only meant for selecting
the right centerline, and may not be used as input for the automatic tracking
method.

Challenge 2: tracking with minimal user interaction


Tracking methods with minimal user interaction are allowed to use one point
per vessel as input for the algorithm. This can be one of the following
points:

• Point A or B, as defined above

• Point S: the start point of the centerline

• Point E: the end point of the centerline


• Point U: any point manually defined by the participant

Points A, B, S and E will be provided with the data. The participants should
clearly describe which point was used by their method. Furthermore, in case
the method obtains a vessel tree from the initial point, a second point may
be used after the centerline determination to select the appropriate centerline.
This point can be either point A or B, as defined in challenge 1, and the
participants have to report how many times point A or point B was used.

Challenge 3: interactive tracking


All methods that require more than one point per vessel as input are part of
challenge 3. Methods could use e.g. both points S and E from challenge 2, or a
series of manually clicked positions. The participants must clearly describe in
their paper the type and amount of user-interaction needed (e.g. the number of
points clicked).

Challenges before and during the workshop


Each of the three challenges is organized before and during the workshop. A
Coronary Artery Tracking 2008 (CAT08) participant has to participate in one
of the pre-workshop challenges.
The testing during the workshop is not obligatory for participation, because
we understand that not all participants can run their algorithms easily outside
their laboratory in a short amount of time. Please note that if you wish to
participate in one of the challenges during the workshop, you must have sub-
mitted a paper with results on the pre-workshop data. More details about the
challenge during the workshop can be found at the end of this document.

Data
Coronary CTA data for this challenge was acquired in the Erasmus Medical
Center Rotterdam, The Netherlands. 32 datasets were randomly selected from a
series of patients that underwent coronary CTA. Twenty datasets were acquired
on a Siemens Somatom Sensation 64 and twelve datasets on a Siemens Somatom
Definition CT scanner. Diastolic reconstructions were used, with reconstruction
intervals varying from 250ms to 400ms before the R-peak. Three datasets were
reconstructed using a B46f kernel, all others were reconstructed using a B30f
kernel.

Training and testing datasets


The 32 scans were divided into two groups of 8 datasets and one group of
16 datasets. The first group of 8 datasets can be used for training, and will
be provided together with the reference standard created from the observer
centerlines. The group of 16 datasets (Testing 1) will be used for testing before
the workshop (Submission deadline: June 23, 2008), and the second group of 8
datasets (Testing 2) will be used for testing during the workshop. To ensure
representative training and testing sets, each dataset was visually assessed on
image quality and presence of calcium by a 4th-year radiology resident. Image
quality was scored as poor, adequate, or good, based on the noise level, presence
of streaking artifacts, irregular heart rate artifacts, and other artifacts. Presence
of calcium was scored as low, moderate, or severe. Based on these scores the
data was distributed equally over the three groups. The patients and scanning
parameters were assessed to be representative of clinical practice. Image quality
and calcium scores for the training and test sets are listed in Table 1.

Table 1: Image quality and presence of calcium in the training and test sets
(currently 28 patients scored, 4 others evenly distributed).

                     Image quality             Presence of calcium
            Total   Good  Adequate  Poor      Low  Moderate  Severe
Training        8      3         3     1        3         3       1
Testing 1      16      7         5     2        6         6       2
Testing 2       8      4         1     2        2         4       1

Reference standard
Three observers annotated points along the center of the lumen of four coronary
arteries, namely the RCA, LAD, LCX and one large side branch of the main
coronary arteries, in all 32 datasets, yielding 32 × 4 = 128 annotated centerlines.
The observers were instructed to use our definition of a centerline. The observers
also specified the radius of the lumen at least every 5 mm, where the radius was
chosen such that the enclosed area of the annotated circle matched the area of
the lumen. The radius was specified after the central lumen line was annotated.
After annotation the centerlines were sampled equidistantly using a sampling
distance of 0.03 mm enabling accurate comparison between centerlines. The
radii were linearly interpolated to obtain a radius estimation of the coronaries
at every point along the resampled centerlines.
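
As an illustration, equidistant resampling with linearly interpolated positions
and radii could be implemented along the following lines. This is a sketch in
Python, not the organizers' code, and it assumes for simplicity that a radius
value is available at every annotated point.

import numpy as np

def resample_centerline(points, radii, step=0.03):
    """Resample a polyline (N x 3, in mm) at a fixed arc-length step (mm),
    linearly interpolating both the positions and the annotated radii."""
    points = np.asarray(points, dtype=float)
    radii = np.asarray(radii, dtype=float)
    # Cumulative arc length along the linearly interpolated centerline.
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg_len)))
    new_arc = np.arange(0.0, arc[-1], step)
    # Interpolate x, y, z and the radius as functions of arc length.
    new_pts = np.stack([np.interp(new_arc, arc, points[:, d]) for d in range(3)],
                       axis=1)
    new_radii = np.interp(new_arc, arc, radii)
    return new_pts, new_radii

# Example on a short synthetic centerline (positions and radii in mm).
pts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 1.0, 0.0]]
rads = [1.5, 1.4, 1.2]
resampled_pts, resampled_radii = resample_centerline(pts, rads)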

Combining the manual annotations


The reference standard is created with an iterative algorithm. The centerlines
are averaged while taking into account the possibly varying accuracy of the
observers. The algorithm jointly estimates the reference standard and the ac-
curacy of the observers. Details about this novel algorithm will be presented
during the workshop and the MICCAI 2008 conference [1].

Error inspection
After creating a first weighted average, the observer centerlines were compared
with this average centerline. This comparison was used to create curved-planar-
reformatted images displaying the annotated centerline color-coded with the
distance to the reference standard, and vice versa (see Figure 1). The observers
used these images to detect and subsequently correct any possible annotation
errors.

Figure 1: An example of the color-coded curved-planar-reformatted images used
to detect possible annotation errors.
The corrected centerlines were afterwards used to create the reference standard,
using the iterative algorithm. Note that the centerlines before correction are
used to calculate the inter-observer variability.
The start and end points of the reference standard are defined as the first
points at which the centerlines of two observers lie within the radius of the
reference standard when traversing the centerline from start to end or from end
to start, respectively.

Evaluation
In the evaluation we distinguish between tracking capability and tracking accuracy.
Three overlap measures are used to assess the ability to track centerlines,
and three distance measures are used to determine the accuracy of centerline
tracking.
Each of these measures is related to the inter-observer variability. The
scores for each measure range from 100 to 0: 100 points means that the result
of the method is perfect, 50 points means that the performance of the method
is similar to the inter-observer variability, and 0 points means a complete failure.

Figure 2: Every point before the first intersection of a method centerline and
a disc that is positioned at the start of the reference standard centerline is not
taken into account during evaluation.

Clipping the proximal part


For a variety of applications the central lumen line should start somewhere in the
aorta, but the exact location is generally not of critical importance. Therefore
points before the start point of the reference standard are not taken into account
during evaluation. This is done by clipping the beginning of a centerline with
a disc that is positioned at the start of the reference standard centerline. The
radius of the disc is twice the annotated vessel radius, and the disc normal is
the tangential direction of the beginning of the reference standard centerline
(Figure 2). Every point before the first intersection of a centerline and this disc
is not taken into account during evaluation.
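
For illustration, the clipping could be implemented as follows. This is a sketch,
not the official evaluation code; the function name and the handling of a
centerline that never intersects the disc are assumptions.

import numpy as np

def clip_proximal(centerline, disc_center, disc_normal, disc_radius):
    """Discard every point of the (piecewise linear) centerline before its first
    intersection with a disc of the given center, normal, and radius."""
    c = np.asarray(centerline, dtype=float)
    o = np.asarray(disc_center, dtype=float)
    n = np.asarray(disc_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Signed distance of every centerline point to the plane of the disc.
    d = (c - o) @ n
    for i in range(len(c) - 1):
        if d[i] * d[i + 1] <= 0:                    # segment crosses the plane
            t = d[i] / (d[i] - d[i + 1] + 1e-12)
            x = c[i] + t * (c[i + 1] - c[i])        # intersection with the plane
            if np.linalg.norm(x - o) <= disc_radius:
                # Keep the intersection point and everything distal to it.
                return np.vstack([x, c[i + 1:]])
    return c   # no intersection found; in this sketch nothing is clipped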

Correspondence between centerlines


The evaluation measures are based on a point-to-point correspondence between
the reference standard and the evaluated centerline. This correspondence is
determined by finding the minimum of the sum of the Euclidean lengths of all
point-point connections that are connecting the two centerlines over all valid
correspondences. A valid correspondence for centerline I, consisting of an ordered
set of points pi (0 ≤ i < n, p0 is the most proximal point of the centerline),
and centerline II, consisting of an ordered set of points qj (0 ≤ j < m, q0 is
the most proximal point of the centerline), is defined as the ordered set of
connections C = {c0, . . . , cn+m−1}, where ck is a tuple [pa, qb] that represents
a connection from pa to qb and satisfies the following conditions:

• The first connection connects the start points: c0 = [p0, q0].

• The last connection connects the end points: cn+m−1 = [pn−1, qm−1].

• If connection ck = [pa, qb], then connection ck+1 equals either [pa+1, qb] or
[pa, qb+1].

These conditions guarantee that each point of centerline I is connected to at
least one point of centerline II and vice versa.

Figure 3: Correspondence between two centerlines, and the average centerline
determined via this correspondence.
A Dijkstra graph search algorithm on a matrix with connection lengths is
used to determine the minimal Euclidean length correspondence. When de-
termining correspondence between an observer or reference standard centerline
and a centerline from a participant, both centerlines are resampled equidistantly
with the same sampling distance (Figure 3).
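
A sketch of computing this minimal-length correspondence is given below. The
evaluation uses a Dijkstra search on the matrix of connection lengths; because
the only allowed steps are (a+1, b) and (a, b+1), a simple dynamic program over
the same matrix finds the same optimum and is used here for brevity. The names
and the use of numpy are illustrative.

import numpy as np

def correspondence(cl1, cl2):
    """Return the ordered list of index pairs (a, b) connecting the points of cl1
    and cl2 such that the summed Euclidean length of all connections is minimal."""
    p = np.asarray(cl1, dtype=float)
    q = np.asarray(cl2, dtype=float)
    n, m = len(p), len(q)
    cost = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)  # connection lengths
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for a in range(n):
        for b in range(m):
            if a == 0 and b == 0:
                continue
            best = min(acc[a - 1, b] if a > 0 else np.inf,
                       acc[a, b - 1] if b > 0 else np.inf)
            acc[a, b] = best + cost[a, b]
    # Backtrack from the connection of the end points to that of the start points.
    a, b, path = n - 1, m - 1, []
    while (a, b) != (0, 0):
        path.append((a, b))
        if a > 0 and (b == 0 or acc[a - 1, b] <= acc[a, b - 1]):
            a -= 1
        else:
            b -= 1
    path.append((0, 0))
    return path[::-1]   # n + m - 1 connections, ordered from proximal to distal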

Definition of true positive, false positive, and false negative


Quality measures for submitted centerlines are based on a labeling of points on
the centerline as true positive, false negative, or false positive. This labeling, in
turn, is based on the correspondence between the reference standard centerline
and a submitted centerline. A point of the reference standard is marked as:
• True positive (TPR), if the distance to at least one of the connected points
on the submitted centerline is less than the local radius.
• False negative (FN) otherwise.
A point on the submitted centerline is marked:
• True positive (TPM) if there is at least one connected point on the refer-
ence standard at a distance less than the radius defined at that reference
standard point.
• False positive (FP) otherwise.

Figure 4: An illustration of the different terms used in the overlap measure OV.
The measure OV represents the ability to track the complete vessel annotated
by the human observers.

Overlap measures
Overlap (OV)
The first overlap measure, OV, represents the ability to track the complete vessel
annotated by the human observers. It is defined as (see also Figure 4):

\[
  \text{OV} = \frac{\text{TPM} + \text{TPR}}{\text{TPM} + \text{TPR} + \text{FN} + \text{FP}}.
\]
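
The following sketch shows how the labeling and the OV measure could be computed
from such a correspondence (for example, the one returned by correspondence()
above). It is illustrative only, not the official evaluation software, and it
assumes the proximal clipping has already been applied.

import numpy as np

def overlap_OV(ref_pts, ref_radii, method_pts, pairs):
    """pairs: ordered list of (reference_index, method_index) connections."""
    ref_pts = np.asarray(ref_pts, dtype=float)
    method_pts = np.asarray(method_pts, dtype=float)
    ref_radii = np.asarray(ref_radii, dtype=float)
    tpr = set()   # reference points with at least one connection within the radius
    tpm = set()   # method points with at least one connection within the radius
    for a, b in pairs:
        if np.linalg.norm(ref_pts[a] - method_pts[b]) < ref_radii[a]:
            tpr.add(a)
            tpm.add(b)
    TPR, TPM = len(tpr), len(tpm)
    FN = len(ref_pts) - TPR        # reference points never matched within the radius
    FP = len(method_pts) - TPM     # method points never matched within the radius
    return (TPM + TPR) / (TPM + TPR + FN + FP)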

Overlap until first error (OF)


The second overlap measure, OF, is the ratio of the number of true positive
points on the reference standard before the first error (TPRfe ) and the total
number of reference points (TPRfe + FNfe ). It is defined as (see also Figure 6):

\[
  \text{OF} = \frac{\text{TPR}_{fe}}{\text{TPR}_{fe} + \text{FN}_{fe}} \qquad (1)
\]
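
A minimal sketch of the OF computation, assuming the reference points have
already been labeled as true positive or false negative (for instance with the
labeling sketched above):

def overlap_OF(ref_is_tp):
    """ref_is_tp: one boolean per reference point, ordered from the start to the
    end of the reference standard (True = true positive, False = false negative)."""
    tpr_fe = 0
    for is_tp in ref_is_tp:
        if not is_tp:        # first error encountered
            break
        tpr_fe += 1
    return tpr_fe / len(ref_is_tp)   # TPR_fe / (TPR_fe + FN_fe)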

Figure 5: An illustration of the different terms used in the overlap measure OF.
The measure OF represents the ability to track vessels without making errors.

Figure 6: Overlap measure OF, for overlap until first error.

Figure 7: An illustration of the different terms used in the overlap measure OT.
The measure OT represents the ability to track vessels with diameter ≥ 1.5 mm.

Overlap with > 1.5 mm vessel (OT)


This third overlap measure, OT, gives an indication of how well the method is
able to track vessels that have a diameter of 1.5 mm or larger. Vessels with a
diameter of 1.5 mm or larger are assumed to be clinically relevant [2, 3], and
thus this measure determines the overlap in this clinically relevant range. The
point closest to the end of the reference standard with a radius larger than or
equal to 0.75 mm is determined. Only points on the reference standard between
the start of the reference standard and this point are taken into account, and
only points on the (semi-)automatic centerline connected to these reference
points are used when defining the true positives (TPMt and TPRt), false
negatives (FNt), and false positives (FPt).
An exception is made if the complete reference standard has a diameter of
more than 1.5 mm. In that case (semi-)automatic centerline points 'after' the
reference standard are not taken into account. This is implemented by excluding
centerline points that are connected to the last point of the reference standard,
except for the first centerline point connected to that last reference point,
which is taken into account.
The OT measure is formalized as follows (see also Figure 7):

\[
  \text{OT} = \frac{\text{TPM}_t + \text{TPR}_t}{\text{TPM}_t + \text{TPR}_t + \text{FN}_t + \text{FP}_t} \qquad (2)
\]

Inter-observer overlap variability


The inter-observer overlap variability was calculated by comparing the uncor-
rected paths with the reference standard. The three overlap measures were
calculated for each uncorrected path and the true positives, false positives and
false negatives for each observer were combined into inter-observer variability
per centerline as follows:

Figure 8: Figure (a) shows an example of how overlap measures are transformed
into scores. Figure (b) shows this transformation for the accuracy.

\[
  \text{OV}_{io} = \frac{\sum_i \text{TPR}(i) + \sum_i \text{TPM}(i)}
                        {\sum_i \text{TPR}(i) + \sum_i \text{TPM}(i) + \sum_i \text{FP}(i) + \sum_i \text{FN}(i)}
\]
\[
  \text{OF}_{io} = \frac{\sum_i \text{TPR}_{fe}(i)}{\sum_i \text{TPR}_{fe}(i) + \sum_i \text{FN}_{fe}(i)}
\]
\[
  \text{OT}_{io} = \frac{\sum_i \text{TPR}_t(i) + \sum_i \text{TPM}_t(i)}
                        {\sum_i \text{TPR}_t(i) + \sum_i \text{TPM}_t(i) + \sum_i \text{FP}_t(i) + \sum_i \text{FN}_t(i)},
\]

where i ∈ {0, 1, 2} indexes the three observers.

Overlap score
The performance of the method is scored with a measure related to the inter-
observer variability. For methods that perform better than the observers the
OV, OF, and OT measures are converted to scores by linearly interpolating be-
tween 100 and 50 points, respectively corresponding to an overlap of 1.0 and an
overlap similar to the inter-observer variability. If the method performs worse
than the inter-observer variability the score is obtained by linearly interpolat-
ing between 50 and 0 points, respectively corresponding to the inter-observer
variability and an overlap of 0.0.
\[
  \text{Score}_O =
  \begin{cases}
    50 \cdot O_m / O_{io} & \text{if } O_m \le O_{io} \\[4pt]
    50 + 50 \cdot \dfrac{O_m - O_{io}}{1 - O_{io}} & \text{if } O_m > O_{io},
  \end{cases}
  \qquad (3)
\]

where O_m and O_io denote the OV, OF, or OT performance of the method and
of the observers, respectively. An example of this conversion is shown in
Figure 8(a).
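
As an illustration, Equation (3) can be transcribed directly into a small helper;
the function name is ours, and the sketch assumes the inter-observer overlap lies
strictly between 0 and 1.

def overlap_score(o_method, o_interobserver):
    """Convert an OV, OF, or OT value into a score between 0 and 100 points."""
    if o_method <= o_interobserver:
        return 50.0 * o_method / o_interobserver
    return 50.0 + 50.0 * (o_method - o_interobserver) / (1.0 - o_interobserver)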

Accuracy measures
Average distance (AD)
The first accuracy measure is the average distance between the reference stan-
dard and the automatic centerline. The average distance is defined as the
summed distance of all the connections between the two equidistantly sampled
centerlines, divided by the number of connections.

Average distance inside vessel (AI)


The second accuracy measure represents the accuracy of tracking, provided that
the tracking results are inside the vessel. The measure is calculated in the same
way as AD, except that connections that have a length larger than the annotated
radius are excluded.

Average distance to the clinically relevant part of a vessel (AT)

This measure represents how well the method can track vessels that are clinically
relevant, i.e. vessels with a diameter larger than 1.5 mm. The difference with the
AD measure is that only the distances and scores for connections that connect
TPMt, TPRt, FNt, or FPt points are averaged.

Inter-observer accuracy variability


The inter-observer accuracy variability at every point of the reference standard
is defined as the expected error that an observer locally makes while annotating
the centerline. It is determined at each point as the root mean squared difference
(RMSD) of the uncorrected annotated centerline and the reference standard:
\[
  A_{io}(x) = \sqrt{\frac{1}{n} \sum_i \bigl( d(p(x), p_i) \bigr)^2},
\]

where n = 3 (three observers), and d(p(x), pi ) is the average distance from


point p(x) on the reference standard to the connected points on the centerline
annotated by observer i.

Accuracy score
The tracking accuracy of the method is related per connection to the observer
performance. A connection is worth 100 points if the distance to the reference
standard is 0 mm, and 50 points if the distance is equal to the inter-observer
variability at that point. Methods that perform worse than the inter-observer
variability are awarded, per connection, 50 points times the ratio of the
inter-observer variability to the method accuracy.
\[
  \text{Score}_A(x) =
  \begin{cases}
    100 - 50 \cdot A_m(x) / A_{io}(x) & \text{if } A_m(x) \le A_{io}(x) \\[4pt]
    50 \cdot A_{io}(x) / A_m(x) & \text{if } A_m(x) > A_{io}(x),
  \end{cases}
  \qquad (4)
\]

where A_m(x) and A_io(x) denote the distance from the method centerline to the
reference centerline and the inter-observer accuracy variability at point x,
respectively. An example of this conversion is shown in Figure 8(b).
The average score over all connections yields the AD score for the centerline,
the average over all connections that connect TPR and TPM points yields the
AI score, and the AT score is defined as the average score over all connections
that connect a point in the clinically relevant section.
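
For illustration, Equation (4) and the averaging into per-centerline scores could
look as follows. The function names are ours, and the sketch assumes a non-zero
inter-observer variability at every connection.

def accuracy_score(a_method, a_interobserver):
    """Score for one connection: a_method is the connection length (mm) and
    a_interobserver the local inter-observer accuracy variability (mm)."""
    if a_method <= a_interobserver:
        return 100.0 - 50.0 * a_method / a_interobserver
    return 50.0 * a_interobserver / a_method

def average_score(method_distances, io_variabilities):
    """Average the per-connection scores over a (subset of) connections, e.g. all
    connections for AD or only the clinically relevant connections for AT."""
    scores = [accuracy_score(am, aio)
              for am, aio in zip(method_distances, io_variabilities)]
    return sum(scores) / len(scores)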

Comparing the methods


The resulting scores for all the methods are ranked per vessel and the average
rank of the six measures for all the 64 or 32 vessels (for respectively testing 1
or testing 2) defines the quality of each method. The method with the lowest
average rank wins the challenge.

Technical details
Directory structure
The training data and testing data are stored in archives with directories for
each dataset. The directories uniquely describe the datasets. The training
datasets are numbered ’00’ to ’07’ and are stored in the directories ’dataset00’
to ’dataset07’. The testing 1 set is stored in the directories ’dataset08’ to
’dataset23’. The datasets that will be used for testing during the workshop
are stored in directories named ’dataset24’ to ’dataset31’.
Each directory datasetXX contains an image file, named imageXX.mhd and
imageXX.raw, and four directories for the vessels, named vessel0, vessel1,
vessel2, and vessel3. These directories contain the reference standard and
points A, B, S, and E for each vessel.

Image data format


All image data is stored in Meta format containing an ASCII readable header
and a separate raw image data file. This format is ITK compatible. Full doc-
umentation is available at http://www.itk.org/Wiki/MetaIO. An application
that can read the data is SNAP (http://www.itksnap.org/). If you want to
write your own code to read the data, note that in the header file you can find
the dimensions of each file. In the raw file the values for each voxel are stored
consecutively with index running first over x, then y, then z. The pixel type is
unsigned short. A gray value (GV) of 0 corresponds to -1024 Hounsfield units
(HU) and a GV of 1024 corresponds to 0 HU (i.e. HU(x) = GV(x) − 1024).
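
As an example, a dataset can be read without ITK using only the description
above: parse DimSize and ElementSpacing from the ASCII header, read the unsigned
shorts from the raw file (x running fastest, then y, then z), and convert gray
values to Hounsfield units. The file names, the simple key = value header parsing,
and the native little-endian byte order are assumptions of this sketch.

import numpy as np

def read_mhd_header(path):
    """Parse the ASCII .mhd header into a dictionary of key/value strings."""
    header = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.partition("=")
            header[key.strip()] = value.strip()
    return header

header = read_mhd_header("dataset00/image00.mhd")
dim_x, dim_y, dim_z = (int(v) for v in header["DimSize"].split())
spacing = [float(v) for v in header["ElementSpacing"].split()]

# The raw file name could also be taken from the 'ElementDataFile' header entry.
raw = np.fromfile("dataset00/image00.raw", dtype=np.uint16)
volume = raw.reshape((dim_z, dim_y, dim_x))    # x runs fastest, z slowest

hounsfield = volume.astype(np.int16) - 1024    # HU(x) = GV(x) - 1024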

Reference standard files


The files named ’reference.txt’ contain the reference standard paths for each
vessel. The files contain the x-, y-, and z-coordinates (in world coordinates) of
each path point, the radius at that point (ri) and, in the case of the averaged
reference standard, the inter-observer variability at that position (ioi). Every point is
on a different line in the file starting with the most proximal point and ending
with the most distal point of the vessel. The voxel coordinate of each point can
be calculated by dividing the world coordinate by the voxel size of the image.
The voxel size can be found in the ’ElementSpacing’ line of the .mhd file. A
typical ’reference.txt’ file looks like this:

x0 y0 z0 r0 io0
x1 y1 z1 r1 io1
x2 y2 z2 r2 io2
x... y... z... r... io...
xn yn zn rn ion

with n the number of points of the path.
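
A minimal sketch of parsing such a file and converting the world coordinates to
voxel coordinates; the file path and the ElementSpacing values below are
placeholders.

import numpy as np

def read_reference(path):
    data = np.loadtxt(path)               # one point per line: x y z r io
    return data[:, 0:3], data[:, 3], data[:, 4]

points_world, radii, interobserver = read_reference("dataset00/vessel0/reference.txt")

# ElementSpacing taken from the corresponding .mhd header (placeholder values).
spacing = np.array([0.36, 0.36, 0.4])
points_voxel = points_world / spacing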

Point files
The files 'pointA.txt', 'pointB.txt', 'pointS.txt', and 'pointE.txt' contain the
A, B, S, and E points, respectively, for each vessel. Each file contains three
values, corresponding to the x-, y-, and z-coordinate of the respective point.

Submitting results
A participant should create an archive similar to the directory structure of
the training and testing data. It should contain a directory for each dataset.
These directories should be named 'dataset08' to 'dataset23' when submitting
results on the testing 1 data; results on the testing 2 data should be stored in
directories 'dataset24' to 'dataset31'. Each dataset directory should contain
four subdirectories, named vessel0 to vessel3, each with a file called 'result.txt'.
This file should contain the extracted centerline, one point per line, ordered
from proximal to distal. Each point should be described by three values
corresponding to its x-, y-, and z-coordinate. These points should be given in
world coordinates, like the input point and reference files.
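
For illustration, writing such a file can be as simple as the following sketch;
the directory name and the point values are placeholders.

import os
import numpy as np

def write_result(path, centerline_world):
    """Write one centerline point per line (x y z, world coordinates, proximal
    to distal) to a 'result.txt' file, creating the directories if needed."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    np.savetxt(path, np.asarray(centerline_world), fmt="%.3f")

write_result("submission/dataset08/vessel0/result.txt",
             [[12.3, 45.6, 78.9], [12.5, 45.2, 79.4]])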

Downloading the results


Shortly after uploading extracted centerlines the participant will be able to
download three result tables in LaTeX format. These three tables should be
included in the paper describing the method that is submitted for the workshop
(submission deadline: July 7, 2008).
Examples of these tables (with random numbers) are shown in Tables 2, 3,
and 4. Because the performance ranking of the participants' methods will be
published at the workshop, the tables that participants can download before the
workshop will not include ranks.

Updating the result tables
The organizers will provide the participants, on the day of the workshop, with
tables that include ranks, and from that moment on participants can also download
the tables with ranks from the website. Participants should update their
document with the new tables.

Submitting results on the training data


A participant can submit results on the training data. This can be done if the
participant wants to publish the performance on the training set in their paper
or to test the submission system. The submitted archive should in that case
contain directories named ’dataset00’ to ’dataset07’.

The workshop papers


All workshop papers will appear in a special issue of the Insight Journal. Ad-
vantages are that papers can easily be accessed and referenced. Participants
have to submit their paper (before July 7) via the Insight Journal submission
system. Using the Insight Journal for publishing the papers does not mean
participants have to publish their code and the organizers will also not use the
open review process of the Insight Journal. Furthermore, participants do not
have to transfer their copyrights, so they can submit their paper without any
problems to another workshop, conference, or journal. The papers have a page
limit of 8 pages, have to be in the Insight Journal style, and do not have to be
anonymized.

The comparative journal paper


The organizers will write a journal paper about the challenge, the website, and
the workshop together with all the participants (that submitted a paper with
results on the pre-workshop testing data). The paper will be submitted to the
Medical Image Analysis journal.

Evaluation software
A C++ implementation of the evaluation measures is provided by the organizers.

Challenge during the workshop


The eight unseen datasets for testing 2 will be released at the very start of
the workshop. The testing 2 data will be formatted exactly the same as the
testing 1 data. Participants will have approximately 4 hours for extracting the
32 centerlines from the eight datasets.

Table 2: Average overlap per dataset.
Dataset OV OF OT Avg.
nr. % score rank % score rank % score rank rank
8 81.3 81.9 31 62.9 83.9 20 83.9 77.0 41 30.7
9 70.8 65.2 45 90.5 71.7 09 78.2 84.8 17 23.7
10 81.1 83.9 18 69.8 84.3 47 75.6 76.4 10 25.0
11 90.8 79.0 12 75.4 77.8 09 78.2 85.6 08 09.7
12 74.0 90.8 35 86.2 82.5 21 89.3 89.9 06 20.7
13 75.7 78.3 44 88.2 74.5 28 81.7 80.6 34 35.3
14 73.4 78.2 43 77.6 87.3 17 78.9 94.3 46 35.3
15 91.2 72.7 31 87.6 81.1 25 92.8 71.6 14 23.3
16 81.6 87.1 33 75.4 79.7 26 78.6 76.0 32 30.3
17 82.9 76.7 26 79.3 69.9 50 86.9 81.1 42 39.3
18 87.1 77.7 44 83.4 87.6 35 96.0 80.8 16 31.7
19 72.2 76.3 20 84.5 71.2 13 80.6 74.2 48 27.0
20 83.6 69.3 30 77.1 77.1 45 76.4 87.1 29 34.7
21 83.5 81.9 18 84.3 70.6 14 72.1 70.1 19 17.0
22 87.2 80.7 22 79.9 80.1 21 91.6 74.6 34 25.7
23 90.7 69.8 49 76.8 89.9 39 82.8 85.9 09 32.3
Avg. 81.7 78.1 31.3 79.9 79.3 26.2 82.7 80.6 25.3 27.6

Table 3: Average accuracy per dataset.


Dataset AD AI AT Avg.
nr. mm score rank mm score rank mm score rank rank
8 1.36 77.3 32 0.79 75.7 01 1.46 77.6 10 14.3
9 1.27 77.9 32 1.23 89.7 04 0.74 83.0 13 16.3
10 0.49 77.9 32 0.94 85.3 03 1.27 83.4 19 18.0
11 1.07 92.4 32 1.13 81.6 09 1.37 72.5 17 19.3
12 1.12 76.8 30 1.08 73.1 10 0.71 84.6 12 17.3
13 1.05 78.2 25 0.94 75.9 04 0.86 78.2 18 15.7
14 0.86 83.0 09 1.11 82.7 01 1.11 73.4 12 07.3
15 1.54 77.0 27 1.10 87.7 03 1.06 84.2 14 14.7
16 1.14 78.8 27 1.43 76.8 03 1.49 80.5 14 14.7
17 1.14 88.2 21 1.24 69.8 08 0.47 75.8 17 15.3
18 1.13 82.0 20 1.32 80.4 03 0.76 88.1 16 13.0
19 1.44 71.0 11 0.90 84.1 03 1.10 85.0 09 07.7
20 1.41 86.4 28 1.66 81.3 06 1.23 76.8 19 17.7
21 1.44 68.5 32 1.51 82.7 10 1.11 81.7 16 19.3
22 1.35 76.6 11 1.02 81.5 09 1.24 70.3 14 11.3
23 0.97 83.0 12 1.18 81.9 02 0.89 80.0 14 09.3
Avg. 1.17 79.7 23.8 1.16 80.6 04.9 1.05 79.7 14.6 14.4

Table 4: Results summary.


Measure % / mm score rank
min. max. avg. min. max. avg. min. max. avg.
OV 60.0% 98.2% 81.7% 60.0 99.7 78.1 07 50 26.6
OF 60.4% 99.9% 79.9% 60.0 99.2 79.3 06 50 29.2
OT 60.7% 99.8% 82.7% 60.1 99.5 80.6 03 50 24.2
AD 0.30 mm 2.00 mm 1.17 mm 61.6 100.0 79.7 05 35 20.4
AI 0.31 mm 1.99 mm 1.16 mm 60.6 99.9 80.6 00 10 04.4
AT 0.32 mm 2.00 mm 1.05 mm 60.3 99.5 79.7 08 20 13.6
Total 00 50 19.8

References
[1] T. van Walsum, M. Schaap, C. Metz, A. van der Giessen, and W. Niessen,
    "Averaging center lines: Mean shift on paths," in Medical Image Computing
    and Computer-Assisted Intervention - MICCAI 2008, 2008.

[2] S. Leschka, H. Alkadhi, A. Plass, L. Desbiolles, J. Grünenfelder, B. Marincek,
    and S. Wildermuth, "Accuracy of MSCT coronary angiography with 64-slice
    technology: first experience," Eur Heart J, vol. 26, pp. 1482-1487, Aug 2005.

[3] D. Ropers, J. Rixe, K. Anders, A. Küttner, U. Baum, W. Bautz, W. G. Daniel,
    and S. Achenbach, "Usefulness of multidetector row spiral computed tomography
    with 64 × 0.6-mm collimation and 330-ms rotation for the noninvasive detection
    of significant coronary artery stenoses," Am J Cardiol, vol. 97, pp. 343-348,
    Feb 2006.

