Deep Learning Models for Dental Conditions Classification Using Intraoral Images
DOI: http://dx.doi.org/10.62527/joiv.8.3.1914
Abstract
This paper presents the digitalization of dental medical records to support dentists in the patient examination process. Currently, a dentist fills out the evaluation form manually, drawing and labeling the condition of each of the patient's teeth based on observation, so completing even a single examination takes a long time. AI-based digitalization technology is a promising way to improve time efficiency. To address the problem, we built and compared several classification models that recognize human dental conditions to help dentists analyze patients' teeth. We apply YOLOv5, MobileNetV2, and IONet (our proposed CNN model) as deep learning models to recognize five common human dental conditions: normal, filling, caries, gangrene radix, and impaction. We also tested YOLOv5, an object detection model, on the classification task and compared it with the dedicated classification models. We used a dataset of 3,708 intraoral dental images generated by various augmentation methods from 1,767 original images, collected and annotated with the help of dentists. The dataset is divided into three parts: 90% of the total is used for training and validation, split again into 80% training data and 20% validation data, while the remaining 10% is held out as test data for comparing classification performance. Based on our experiments, YOLOv5, as an object detection model, classifies human dental conditions better than the classification models: it achieves 82% test accuracy, whereas MobileNetV2 and IONet reach only 80% and 70%, respectively. Although the test accuracies of YOLOv5 and MobileNetV2 differ only slightly, YOLOv5 classifies dental objects faster, which is notable given that it is an object detection model. Challenges remain with the deep learning techniques used in this research, but they can be addressed in further development, for example with a more complex model and a larger, more varied, and balanced dataset.
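The 90/10 then 80/20 split described above can be reproduced with a short script. Below is a minimal sketch, assuming the 3,708 augmented images are indexed as (path, label) pairs; the file paths and label assignment are dummy placeholders, and the stratified splitting is our assumption rather than a detail stated in the paper.

```python
# Minimal sketch of the dataset split (assumptions noted above).
from sklearn.model_selection import train_test_split

CLASSES = ["normal", "filling", "caries", "gangrene_radix", "impaction"]
# Placeholder index of the 3,708 augmented images (paths/labels are dummies).
samples = [(f"dataset/img_{i:04d}.jpg", CLASSES[i % 5]) for i in range(3708)]
labels = [label for _, label in samples]

# 90% training+validation, 10% held out for the final model comparison.
trainval, test = train_test_split(
    samples, test_size=0.10, stratify=labels, random_state=42)

# The 90% portion is split again: 80% training, 20% validation.
train, val = train_test_split(
    trainval, test_size=0.20,
    stratify=[label for _, label in trainval], random_state=42)

print(len(train), len(val), len(test))  # about 2669 / 668 / 371 images
```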
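For the classification baselines, the standard transfer-learning recipe gives a sense of the setup. The sketch below adapts an ImageNet-pretrained MobileNetV2 to the five dental classes using torchvision; it mirrors the usual recipe, not necessarily the paper's exact configuration, and IONet's architecture is not reproduced here.

```python
# Minimal sketch: MobileNetV2 with a 5-way head for the dental conditions
# (normal, filling, caries, gangrene radix, impaction). Standard torchvision
# transfer-learning recipe; hyperparameters are not taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
# Replace the 1000-way ImageNet classifier head with a 5-way head.
model.classifier[1] = nn.Linear(model.last_channel, 5)

# Sanity check on a dummy batch of 224x224 RGB crops.
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```

YOLOv5, by contrast, is trained through the Ultralytics repository's training script (for example, `python train.py --img 640 --data dental.yaml --weights yolov5s.pt`, where `dental.yaml` is a hypothetical dataset config), which is what lets it localize and classify each tooth in one pass.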