
Accepted Article

PROFESSOR JAE-HONG LEE (Orcid ID : 0000-0002-2375-0141)

Article type : Original Manuscript

Diagnosis of cystic lesions using panoramic and CBCT images based on deep learning neural
network

Jae-Hong Lee*, Do-Hyung Kim, Seong-Nyum Jeong

Department of Periodontology, Daejeon Dental Hospital, Institute of Wonkwang Dental Research, Wonkwang
University College of Dentistry, Daejeon, Korea

Running Title: Deep learning-based cyst detection

*Correspondence:

Jae-Hong Lee, PhD, Department of Periodontology, Daejeon Dental Hospital, Wonkwang University College of
Dentistry, 77, Dunsan-ro, Seo-gu, Daejeon 35233, Korea

E-mail: ljaehong@gmail.com, Tel.: +82-42-3661114, Fax: +82-42-3661115

This article has been accepted for publication and undergone full peer review but has not been through the
copyediting, typesetting, pagination and proofreading process, which may lead to differences between this
version and the Version of Record. Please cite this article as doi: 10.1111/ODI.13223

This article is protected by copyright. All rights reserved


Abstract

Objectives: The aim of the current study was to evaluate the detection and diagnosis of three types of odontogenic
cystic lesions (OCLs)—odontogenic keratocysts, dentigerous cysts, and periapical cysts—using dental panoramic
radiography and cone beam computed tomographic (CBCT) images based on a deep convolutional neural network
(CNN).

Methods: The GoogLeNet Inception-v3 architecture was used to enhance the overall performance of the detection and
diagnosis of OCLs based on transfer learning. Diagnostic indices (area under the ROC curve [AUC], sensitivity,
specificity, and confusion matrix with and without normalization) were calculated and compared between pretrained
models using panoramic and CBCT images.

Results: The pretrained model using CBCT images showed good diagnostic performance (AUC = 0.914, sensitivity = 96.1%, specificity = 77.1%), significantly better than that of the model using panoramic images (AUC = 0.847, sensitivity = 88.2%, specificity = 77.0%) (P = 0.014).

Conclusions: This study demonstrated that the three types of OCLs in panoramic and CBCT image datasets can be effectively detected and diagnosed using a deep CNN architecture. In particular, we found that the deep CNN architecture trained with CBCT images achieved higher diagnostic performance than that trained with panoramic images.

Keywords: cysts; deep learning; odontogenic cysts; supervised machine learning



1 INTRODUCTION
Deep learning, which is a subset of artificial intelligence technologies, is undergoing rapid development and has
garnered substantial public attention in recent years (Hricak, 2018). Among deep learning models, the deep
convolutional neural network (CNN) architecture is arguably the most widely studied as it provides superior
performance for detection, classification and quantification, and segmentation of image data owing to the
development of self-learning algorithms and improvements in computing power (Soffer et al., 2019). Therefore, the
deep CNN architecture shows great potential and useful properties for improving analysis of various medical imaging
datasets such as plain radiographs or three-dimensional imaging modalities (Suzuki, 2017; J. G. Lee et al., 2017).
Several deep CNN-based systems have already been approved by the US Food and Drug Administration and are being applied in clinical practice (Gulshan et al., 2016; Kim et al., 2017).

Odontogenic cystic lesions (OCLs) are pathological, epithelium-lined cavities containing fluid, semi-fluid, or solid contents (Binnie, 1999). Dentigerous and periapical cysts, which account for most OCLs, are benign and noninvasive,
whereas the odontogenic keratocyst (OKC) is highly likely to recur and exhibit locally aggressive behavior and has
the potential to undergo malignant transformation (Stoelinga, 2012; Kaczmarzyk et al., 2012). Therefore, early
detection and diagnosis of OCLs are expected to reduce morbidity and mortality through long-term follow-up and
early intervention.

In the dental field, the usefulness of deep learning has been assessed for the detection, classification, and segmentation of anatomical and pathological features such as orthodontic landmarks, dental caries, periodontal disease, and osteoporosis, but these applications are still at very preliminary stages (J. H. Lee et al., 2018a, 2018b; J. S. Lee et al., 2018; C. Lee et al., 2019). In particular, to the best of our knowledge, no published studies have directly compared dental panoramic radiography and cone beam computed tomographic (CBCT) images for the identification and classification of major OCLs based on deep CNN architectures. Therefore, the present study was conducted to evaluate and compare panoramic and CBCT images for the detection and diagnosis of three major types of OCLs based on a deep CNN architecture trained with supervised learning.

2 METHODS

2.1 Datasets

This study was conducted at the Department of Periodontology, Daejeon Dental Hospital, Wonkwang University and
approved by the Institutional Review Board of Daejeon Dental Hospital, Wonkwang University (approval no.
W1908/002-001). Based on histopathological examinations by a board-certified oral pathologist at Daejeon Dental
Hospital, Wonkwang University, a dental panoramic and CBCT image dataset (INFINITT PACS, Infinitt, Seoul,
Korea) containing diagnoses of three OCL types (OKC, dentigerous cyst, and periapical cyst) was acquired between
January 2014 and December 2018 (Figure 1). Even histologically confirmed cases were excluded if severe distortion, artificial noise, blur, or poor image quality made the radiographs difficult to interpret. Among the axial CBCT slices, only the two to four slices per case that clearly showed the characteristics of each cyst, such as regional patterns and outline borders, were included in the dataset. All confirmed panoramic and CBCT images were deidentified, and regions of interest (ROIs) were cropped and resized to 299 × 299 pixels while maintaining their original aspect ratio. Brightness and contrast were then normalized using global contrast normalization and zero-phase component analysis (ZCA) whitening, as previously proposed (Goodfellow et al., 2013).
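The two normalization steps described above can be sketched in plain numpy. This is a minimal illustration on a toy batch of flattened images; the function names and `eps` values are ours, not the study's actual preprocessing code.

```python
import numpy as np

def global_contrast_normalize(X, s=1.0, eps=1e-8):
    """Scale each image (one row) to zero mean and roughly unit contrast."""
    X = X - X.mean(axis=1, keepdims=True)            # per-image mean subtraction
    norms = np.sqrt((X ** 2).mean(axis=1, keepdims=True))
    return s * X / np.maximum(norms, eps)            # guard against flat images

def zca_whiten(X, eps=1e-2):
    """Decorrelate pixels while keeping the result close to image space."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / Xc.shape[0]                    # pixel covariance
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T    # ZCA whitening matrix
    return Xc @ W

# Toy batch: 8 "images" of 16 pixels each (real inputs would be 299*299).
rng = np.random.default_rng(0)
images = rng.normal(size=(8, 16))
out = zca_whiten(global_contrast_normalize(images))
```

In practice the whitening matrix would be estimated on the training set only and reused for validation and test images.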

2.2 Preprocessing and image augmentation

The dataset consisted of 2,126 images, including 1,140 (53.6%) panoramic and 986 (46.4%) CBCT images. There
were 260 (12.2%) panoramic and 188 (8.8%) CBCT images classified as OKCs, 463 (21.8%) panoramic and 396
(18.6%) CBCT images classified as dentigerous cysts, and 417 (19.6%) panoramic and 402 (18.9%) CBCT images
classified as periapical cysts. We randomly split the dataset: 80% (panoramic, n = 912; CBCT, n = 789) was used for training (panoramic, n = 684; CBCT, n = 592) and validation (panoramic, n = 228; CBCT, n = 197), and 20% (panoramic, n = 228; CBCT, n = 197) was held out for testing. We used the training set for deep learning. As a technical and strategic method to avoid overfitting, the training and validation datasets were randomly augmented 100-fold (panoramic, n = 68,400; CBCT, n = 59,200) using horizontal and vertical flipping, rotation (up to 20°), width and height shifting (up to 0.2 of the image size), shearing (up to 0.5), and zooming (in the range of 0.8–1.2) (Shin et al., 2016).
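As an illustration, the flip and shift draws above can be sketched dependency-free in numpy (rotation, shear, and zoom would normally come from an image library such as Keras' ImageDataGenerator). The function name and sampling details here are our assumptions, following the Keras convention that a shift range of 0.2 means up to 20% of the image size.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """One random augmentation draw: flips plus width/height shifts.

    Rotation (up to 20 degrees), shear, and zoom would be added with an image
    library in a real pipeline; shifts here wrap around via np.roll purely to
    keep the sketch self-contained.
    """
    if rng.random() < 0.5:                       # horizontal flip
        img = img[:, ::-1]
    if rng.random() < 0.5:                       # vertical flip
        img = img[::-1, :]
    h, w = img.shape
    dy = int(rng.uniform(-0.2, 0.2) * h)         # height shift, up to 20%
    dx = int(rng.uniform(-0.2, 0.2) * w)         # width shift, up to 20%
    return np.roll(img, (dy, dx), axis=(0, 1))

# 100-fold augmentation of a toy 299 x 299 "radiograph"
img = rng.normal(size=(299, 299))
augmented = [augment(img) for _ in range(100)]
```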

2.3 Architecture of the deep convolutional neural network

In this study, we exploited a pretrained deep CNN architecture derived from the GoogLeNet Inception v3 model,
which was developed by the Google research team and is known for excellent performance in image detection and
classification (Figure 2) (Chollet, 2017). This architecture consists of three convolutional layers, nine inception
modules including various scales of convolution classifiers, two fully connected layers, and softmax functions
(Szegedy et al., 2016). Transfer learning from this pretrained model was used to compensate for the small number of cases and to enhance the overall detection and diagnosis performance for OCLs (Szegedy et al., 2016). We
optimized the weights by adjusting the hyperparameters including the learning rate (in the range of 0.0001–0.1), batch
size (in the range of 16–64), and dropout rate (in the range of 0.25–0.75) and by using batch normalization (Abadi et
al., 2016). A brief description of the terms related to deep learning is provided in Appendix 1.

2.4 Statistical analysis

Chi-squared tests were used to compare categorical data (sex, age group, and location) in the three OCL groups.
Diagnostic indices (area under the receiver operating characteristic curve [AUC], sensitivity, specificity, and confusion matrix with and without normalization) were calculated and compared between the deep CNN architectures using panoramic and CBCT images, based on the Keras framework in Python (Python 3.7.2, Python Software Foundation, Wilmington, DE, USA). P values < 0.05 were considered statistically significant, and 95% confidence intervals (CIs) were also calculated.
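The per-class sensitivity and specificity and the normalized confusion matrix can be computed from raw prediction counts as follows; the 3 × 3 counts below are toy values for illustration, not the study's results.

```python
import numpy as np

def per_class_metrics(cm):
    """Sensitivity and specificity for each class of a confusion matrix.

    cm[i, j] = number of cases with true class i predicted as class j
    (one-vs-rest counts for each class).
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)   # sensitivity, specificity

def normalize_rows(cm):
    """Row-normalized confusion matrix (each true class sums to 1)."""
    cm = np.asarray(cm, dtype=float)
    return cm / cm.sum(axis=1, keepdims=True)

# Toy 3-class matrix (OKC, dentigerous, periapical); counts are illustrative.
cm = np.array([[18,  3,  1],
               [ 2, 38,  2],
               [ 1,  2, 42]])
sens, spec = per_class_metrics(cm)
```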

3 RESULTS

3.1 Baseline characteristics

The baseline characteristics of the patients diagnosed with OKCs, dentigerous cysts, and periapical cysts are presented in Table 1. A total of 247 patients were enrolled in this study, comprising 167 (67.6%) males and 80 (32.4%) females. In terms of age, the number of individuals younger than 20 years was the smallest (n = 14; 5.7%), and the number of those aged 40–59 years was the highest (n = 98; 39.7%). The OCLs were most frequently located in the mandibular molar
region (n = 133; 53.8%), and the least frequently in the maxillary premolar region (n = 4; 1.6%).

3.2 Diagnosis of oral cystic lesions

The pretrained deep CNN architecture using CBCT images provided better diagnostic accuracy and a significantly greater AUC than the model using panoramic images (AUC difference = 0.067, 95% CI 0.013–0.122, P = 0.014) (Table 2 and Figure 3). When using panoramic images, the AUC was 0.847 (95% CI 0.760–
0.911), the sensitivity was 88.2%, and the specificity was 77.0%. For CBCT images, the AUC was 0.914 (95% CI
0.841–0.961), the sensitivity was 96.1%, and the specificity was 77.1%.

Figure 4 presents the confusion matrix, with and without normalization, showing the diagnostic results of the OCLs.
When using panoramic images, the total diagnostic accuracy was 84.6%; the diagnostic accuracy was highest for
periapical cysts (87.0%) and lowest for OKCs (81.8%). For CBCT images, the total diagnostic accuracy was 91.4%;
the diagnostic accuracy was highest for periapical cysts (93.7%), and lowest for OKCs (87.2%).

4 DISCUSSION

In the last few years, several deep learning models have been proposed and have achieved significant success in various fields. In particular, supervised learning with deep CNN architectures has produced results that match or surpass human experts in several types of medical imaging (Gulshan et al., 2016; Esteva et al., 2017). In this paper, we have shown that a deep CNN architecture can be used to detect and diagnose major OCLs, including OKCs, dentigerous cysts, and periapical cysts, in panoramic and CBCT images, and have confirmed the possibility of its use in clinical practice within computer-aided diagnostic (CAD) systems.



Identifying mixed radiopaque and radiolucent lesions with irregular and blurred edges in radiological images is very important for correctly detecting and classifying OCLs. Radiopaque–radiolucent texture features within an ROI are also crucial characteristics for detecting and diagnosing OCLs (Nurtanio et al., 2013). In previous studies, boundaries of OCLs were segmented manually by medical and dental professionals, but recent studies have shown that automatic or semiautomatic edge detection techniques can segment OCLs in digital radiological images more accurately and efficiently. Active contour models, which are computer-generated contours used to detect object boundaries, have achieved an average OCL segmentation accuracy of 99.67% on panoramic images based on ROC analysis (Nurtanio et al., 2011).

When the crown or root of an impacted or nonvital tooth is associated with a solitary and unilocular cyst, it is difficult
to clearly distinguish among the three major types of OCLs based only on radiological findings (Scholl et al., 1999;
Borghesi et al., 2018). Nevertheless, our results achieved good and predictable accuracy (total accuracy = 87.8%). Although more panoramic images than CBCT images were used for training, we found that the CBCT image dataset–based deep CNN architecture showed higher diagnostic accuracy, sensitivity, and specificity than the architecture trained on panoramic images (panoramic images = 84.7%, 95% CI 76.0–91.1; CBCT images = 91.4%, 95% CI 84.1–96.1). This
result indicates that CBCT images have important advantages such as higher detail and fewer artifacts at the
anatomical boundaries of the ROI and background than conventional panoramic images (Scarfe et al., 2006).

OKCs account for approximately 20% of OCLs and occur, and frequently recur, mainly in the posterior mandible of males (MacDonald-Jankowski, 2011). Approximately 75% of dentigerous cysts occur in the mandible, mostly in young and middle-aged adults, whereas periapical cysts mainly occur in middle-aged to older adults (Dunfee et al., 2006). Recent
studies have reported better diagnostic performance and prognostic stratification when applying conventional risk and
sociodemographic factors to the deep learning algorithm (Muhammad et al., 2017; Yala et al., 2019). Therefore,
hybrid deep learning models using both conventional risk and sociodemographic factors and radiological images are
expected to show improved diagnostic accuracy compared with conventional deep CNN architectures.

In this study, we used a pretrained deep CNN architecture derived from the GoogLeNet Inception v3 model and fine-
tuned with various training parameters including learning rate, batch size, dropout rate, and number of fully connected
layers, which were carefully determined during the preliminary trials. Despite the wide variability of the shape, size,
irregular border, and density and viscosity of the cystic contents of OCLs in radiological images, the pretrained deep
CNN architecture showed efficient edge detection and texture feature extraction through convolutional and dense
layers with hierarchical structure representations (Wang, 2016).

Despite its potential application to CAD systems in clinical practice, we identified several limitations in our approach. First, the absolute size of the training dataset was small. Owing to the data-driven nature of deep learning algorithms, performance generally increases with the size of the training dataset; therefore, a large, high-quality dataset is very important for extracting the essential local features, avoiding overfitting, and creating a good detection and diagnosis model. To mitigate overfitting during training, we adopted data augmentation and transfer learning with fine-tuning. Nevertheless, the small dataset remains a major limitation of this study.

Second, the dataset did not include ameloblastoma, one of the major diseases that should be differentiated from OKC. From a radiological perspective, ameloblastoma exhibits more bone expansion than OKC and often shows multilocular (soap-bubble) lesions with well-demarcated borders. Further studies are required in which an ameloblastoma image dataset is included in the major OCL dataset.

5 CONCLUSION

The results show that the three types of OCLs in our radiological image dataset can be effectively detected and diagnosed with the presented deep CNN architecture. However, the accuracy of OCL diagnosis based on radiological assessment alone remains lower than that based on histological examination, and accurate diagnosis from radiological images alone is still challenging. We hope that the current study provides insight for future large dataset–based deep learning research.

ACKNOWLEDGEMENTS

This work was supported by Wonkwang University in 2020.

CONFLICTS OF INTEREST

None to declare.

AUTHOR CONTRIBUTIONS

Jae-Hong Lee conducted a literature review, organized the data and drafted the manuscript. Do-Hyung Kim was
responsible for data interpretation and edited the manuscript. Seong-Nyum Jeong critically reviewed the manuscript.



REFERENCES

Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., . . . Zheng, X. (2016). TensorFlow:
Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv e-print: arXiv:1603.04467.
Binnie, W. H. (1999). Periodontal cysts and epulides. Periodontology 2000 21: 16-32.
Borghesi, A., Nardi, C., Giannitto, C., Tironi, A., Maroldi, R., Di Bartolomeo, F., & Preda, L. (2018).
Odontogenic keratocyst: imaging features of a benign lesion with an aggressive behaviour. Insights Into
Imaging 9(5): 883-897. https://doi.org/10.1007/s13244-018-0644-z
Chollet, F. (2017). Keras. Available: https://github.com/fchollet/keras.
Dunfee, B. L., Sakai, O., Pistey, R., & Gohel, A. (2006). Radiologic and pathologic characteristics of benign and
malignant lesions of the mandible. Radiographics 26(6): 1751-1768. https://doi.org/10.1148/rg.266055189
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-
level classification of skin cancer with deep neural networks. Nature 542(7639): 115-118.
https://doi.org/10.1038/nature21056
Goodfellow, I. J., Warde-Farley, D., Mirza, M., Courville, A., & Bengio, Y. (2013). Maxout networks. arXiv e-
print arXiv:1302.4389.
Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., . . . Webster, D. R. (2016).
Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal
Fundus Photographs. JAMA 316(22): 2402-2410. https://doi.org/10.1001/jama.2016.17216
Hricak, H. (2018). 2016 New Horizons Lecture: Beyond Imaging-Radiology of Tomorrow. Radiology 286(3): 764-
775. https://doi.org/10.1148/radiol.2017171503
Kaczmarzyk, T., Mojsa, I., & Stypulkowska, J. (2012). A systematic review of the recurrence rate for keratocystic
odontogenic tumour in relation to treatment modalities. International Journal of Oral and Maxillofacial
Surgery 41(6): 756-767. https://doi.org/10.1016/j.ijom.2012.02.008
Kim, J. R., Shim, W. H., Yoon, H. M., Hong, S. H., Lee, J. S., Cho, Y. A., & Kim, S. (2017). Computerized
Bone Age Estimation Using Deep Learning Based Program: Evaluation of the Accuracy and Efficiency. AJR.
American Journal of Roentgenology 209(6): 1374-1380. https://doi.org/10.2214/AJR.17.18224
Lee, C., Tanikawa, C., Lim, J.-Y., & Yamashiro, T. (2019). Deep Learning based Cephalometric Landmark
Identification using Landmark-dependent Multi-scale Patches. arXiv e-print arXiv:1906.02961.
Lee, J. G., Jun, S., Cho, Y. W., Lee, H., Kim, G. B., Seo, J. B., & Kim, N. (2017). Deep Learning in Medical
Imaging: General Overview. Korean Journal of Radiology 18(4): 570-584.
https://doi.org/10.3348/kjr.2017.18.4.570



Lee, J. H., Kim, D. H., Jeong, S. N., & Choi, S. H. (2018a). Detection and diagnosis of dental caries using a deep
learning-based convolutional neural network algorithm. Journal of Dentistry 77: 106-111.
https://doi.org/10.1016/j.jdent.2018.07.015

Lee, J. H., Kim, D. H., Jeong, S. N., & Choi, S. H. (2018b). Diagnosis and prediction of periodontally
compromised teeth using a deep learning-based convolutional neural network algorithm. Journal of
Periodontal & Implant Science 48(2): 114-123. https://doi.org/10.5051/jpis.2018.48.2.114
Lee, J. S., Adhikari, S., Liu, L., Jeong, H. G., Kim, H., & Yoon, S. J. (2018). Osteoporosis detection in
panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system:
a preliminary study. Dento Maxillo Facial Radiology: 20170344. https://doi.org/10.1259/dmfr.20170344
MacDonald-Jankowski, D. S. (2011). Keratocystic odontogenic tumour: systematic review. Dento Maxillo Facial
Radiology 40(1): 1-23. https://doi.org/10.1259/dmfr/29949053
Muhammad, H., Fuchs, T. J., De Cuir, N., De Moraes, C. G., Blumberg, D. M., Liebmann, J. M., . . . Hood,
D. C. (2017). Hybrid Deep Learning on Single Wide-field Optical Coherence tomography Scans Accurately
Classifies Glaucoma Suspects. Journal of Glaucoma 26(12): 1086-1094.
https://doi.org/10.1097/IJG.0000000000000765
Nurtanio, I., Astuti, E. R., Purnama, I. K. E., Mochamad Hariadi, & Purnomo, M. H. (2013). Classifying Cyst
and Tumor Lesion Using Support Vector Machine Based on Dental Panoramic Images Texture Features.
IAENG International Journal of Computer Science 40(4): 04.
Nurtanio, I., Purnama, I. K. E., Hariadi, M., & Purnomo, M. H. (2011). Cyst and Tumor Lesion Segmentation on
Dental Panoramic Images using Active Contour Models. IPTEK The Journal of Technology and Science
22(3): 152-158.
Scarfe, W. C., Farman, A. G., & Sukovic, P. (2006). Clinical applications of cone-beam computed tomography in
dental practice. Journal (Canadian Dental Association) 72(1): 75-80.
Scholl, R. J., Kellett, H. M., Neumann, D. P., & Lurie, A. G. (1999). Cysts and cystic lesions of the mandible:
clinical and radiologic-histopathologic review. Radiographics 19(5): 1107-1124.
https://doi.org/10.1148/radiographics.19.5.g99se021107
Shin, H. C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., . . . Summers, R. M. (2016). Deep
Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics
and Transfer Learning. IEEE Transactions on Medical Imaging 35(5): 1285-1298.
https://doi.org/10.1109/TMI.2016.2528162
Soffer, S., Ben-Cohen, A., Shimon, O., Amitai, M. M., Greenspan, H., & Klang, E. (2019). Convolutional
Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 290(3): 590-606.
https://doi.org/10.1148/radiol.2018180547
Stoelinga, P. J. (2012). Kaczmarzyc et al.: A systematic review of the recurrence rate for keratocystic odontogenic
tumour in relation to treatment modalities. International Journal of Oral and Maxillofacial Surgery 41(12):
1585-1586; author reply 1586-1587. https://doi.org/10.1016/j.ijom.2012.08.003
Suzuki, K. (2017). Overview of deep learning in medical imaging. Radiological Physics and Technology 10(3): 257-
273. https://doi.org/10.1007/s12194-017-0406-5



Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the Inception Architecture for
Computer Vision. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 2818-2826.
Wang, R. (2016). Edge Detection Using Convolutional Neural Network. In L. Cheng, Q. Liu, & A. Ronzhin (Eds.),
Advances in Neural Networks – ISNN 2016: 13th International Symposium on Neural Networks, ISNN 2016,
St. Petersburg, Russia, July 6-8, 2016, Proceedings (pp. 12-20). Cham: Springer International Publishing.
Yala, A., Lehman, C., Schuster, T., Portnoi, T., & Barzilay, R. (2019). A Deep Learning Mammography-based
Model for Improved Breast Cancer Risk Prediction. Radiology 292(1): 60-66.
https://doi.org/10.1148/radiol.2019182716

TABLES



Table 1. Baseline characteristics of the patients diagnosed with odontogenic keratocysts, dentigerous cysts, and
periapical cysts

Variables            Odontogenic keratocyst   Dentigerous cyst   Periapical cyst   P-value
                     (n = 34)                 (n = 104)          (n = 109)
Sex
  Male               21 (61.8%)               78 (75.0%)         68 (62.4%)        0.106
  Female             13 (38.2%)               26 (25.0%)         41 (37.6%)
Age group (years)
  <20                7 (20.6%)                6 (5.8%)           1 (0.9%)          <0.001
  20–39              16 (47.1%)               27 (26.0%)         49 (45.0%)
  40–59              8 (23.5%)                51 (49.0%)         39 (35.8%)
  ≥60                3 (8.8%)                 20 (19.2%)         20 (18.3%)
Location
  Mx. anterior       2 (5.9%)                 7 (6.7%)           67 (61.5%)        <0.001
  Mx. premolar       0 (0.0%)                 0 (0.0%)           4 (3.7%)
  Mx. molar          4 (11.8%)                2 (1.9%)           5 (4.6%)
  Mn. anterior       4 (11.8%)                1 (1.0%)           6 (5.5%)
  Mn. premolar       0 (0.0%)                 3 (2.9%)           9 (8.3%)
  Mn. molar          24 (70.6%)               91 (87.5%)         18 (16.5%)

Mx, maxilla; Mn, mandible

P-values were calculated using the chi-squared test. Italics denote statistical significance (P < 0.05).

Table 2. AUC, sensitivity, and specificity for the detection and diagnosis of oral cystic lesions using dental panoramic
radiography and cone beam computed tomographic images based on a deep learning neural network

                    AUC     SE      95% CI         Sensitivity (%)   Specificity (%)
Panoramic images    0.847   0.040   0.760–0.911    88.2              77.0
CBCT images         0.914   0.028   0.841–0.961    96.1              77.1

AUC, area under the curve; SE, standard error; CI, confidence interval

FIGURE LEGEND

Figure 1. Dental panoramic radiography and cone beam computed tomographic (CBCT) image dataset, which
included diagnoses of three oral cystic lesion types—keratocysts, dentigerous cysts, and periapical cysts—based on
histopathological examinations.

Figure 2. Simplified overall scheme of pretrained deep CNN architecture derived from the GoogLeNet Inception v3
model.



Figure 3. Pairwise comparison of receiver operating characteristic curves between the pretrained deep CNN
architectures using panoramic and CBCT images. The architectures achieved an AUC of 0.847 (95% CI 0.760–0.911)
with panoramic images and an AUC of 0.914 (95% CI 0.841–0.961) with CBCT images; the difference was
statistically significant (P = 0.014).

Figure 4. Confusion matrix with and without normalization, showing the diagnostic results of oral cystic lesions. (A,
C) without normalization, (B, D) with normalization.
