Objectives/Hypothesis
To develop a deep-learning-based automatic diagnosis system that distinguishes nasopharyngeal carcinoma (NPC) from noncancerous lesions (inflammation and hyperplasia) using both white light imaging (WLI) and narrow-band imaging (NBI) nasopharyngoscopy images.
Study Design
Retrospective study.
Methods
A total of 4,783 nasopharyngoscopy images (2,898 WLI and 1,885 NBI) from 671 patients were collected, and a novel deep convolutional neural network (DCNN) framework, termed the Siamese deep convolutional neural network (S-DCNN), was developed to utilize WLI and NBI images simultaneously and thereby improve classification performance. To verify the effectiveness of combining these two imaging modalities for prediction, we compared the proposed S-DCNN with two baseline models, DCNN-1 (WLI images only) and DCNN-2 (NBI images only).
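The abstract does not specify the S-DCNN's backbone or fusion strategy, but the described design, two parallel branches (one per modality) combined into a single NPC-versus-noncancer prediction, can be sketched as below. The ResNet-50 backbones, feature concatenation, and two-class output are illustrative assumptions, not the authors' reported implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseDCNN(nn.Module):
    """Two-branch sketch: one branch encodes the WLI image, the other the NBI
    image; pooled features are concatenated and classified jointly."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Hypothetical choice: ResNet-50 backbones (the abstract does not name
        # the backbone); branch weights could be shared or kept separate.
        self.wli_branch = models.resnet50(weights=None)
        self.nbi_branch = models.resnet50(weights=None)
        feat_dim = self.wli_branch.fc.in_features
        self.wli_branch.fc = nn.Identity()   # keep the pooled feature vector
        self.nbi_branch.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, wli: torch.Tensor, nbi: torch.Tensor) -> torch.Tensor:
        f_wli = self.wli_branch(wli)          # (B, feat_dim)
        f_nbi = self.nbi_branch(nbi)          # (B, feat_dim)
        fused = torch.cat([f_wli, f_nbi], dim=1)
        return self.classifier(fused)         # NPC vs. noncancer logits

# Example forward pass with dummy paired WLI/NBI images
model = SiameseDCNN()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
```

In such a design, the fusion step is what lets complementary WLI and NBI cues of the same patient contribute to one decision, which is the property the baselines DCNN-1 and DCNN-2 lack.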
Results
In threefold cross-validation, the overall accuracy and area under the curve (AUC) of the three DCNNs (S-DCNN, DCNN-1, and DCNN-2) were 94.9% (95% confidence interval [CI] 93.3%–96.5%) and 0.986 (95% CI 0.982–0.992), 87.0% (95% CI 84.2%–89.7%) and 0.930 (95% CI 0.906–0.961), and 92.8% (95% CI 90.4%–95.3%) and 0.971 (95% CI 0.953–0.992), respectively. The accuracy of the S-DCNN was significantly higher than that of DCNN-1 (P < .001) and DCNN-2 (P = .008).
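For context, a minimal sketch of a threefold cross-validated evaluation of accuracy and AUC is shown below. The stratified split, the 0.5 decision threshold, and the scikit-learn-style estimator interface are assumptions; the abstract does not describe how the folds, confidence intervals, or P-values were computed.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score

def cross_validate(model_factory, X, y, n_splits=3, seed=0):
    """Stratified k-fold evaluation returning mean accuracy and mean AUC
    across folds (the abstract reports pooled estimates with 95% CIs)."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accs, aucs = [], []
    for train_idx, test_idx in skf.split(X, y):
        model = model_factory()                      # fresh model per fold
        model.fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]
        accs.append(accuracy_score(y[test_idx], prob > 0.5))
        aucs.append(roc_auc_score(y[test_idx], prob))
    return np.mean(accs), np.mean(aucs)

# Example usage with any classifier exposing fit/predict_proba:
# from sklearn.linear_model import LogisticRegression
# acc, auc = cross_validate(LogisticRegression, X, y)
```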
Conclusion
Using deep-learning technology to automatically diagnose NPC under nasopharyngoscopy can provide a valuable reference for NPC screening. Superior performance can be obtained by simultaneously utilizing the multimodal features of the NBI and WLI images of the same patient.
Level of Evidence
3. Laryngoscope, 2021.